## Web scraping with `requests` and `BeautifulSoup`

### Commented code for a simple web scraper

The first library we need is Requests. It handles the HTTP request: it accesses the page and "copies" its source code into our script.

```
import requests
```

With the library loaded, let's build the URL of the site we want. In this example, we will scrape the agenda of the President of the Republic for March 19, 2021.

```
# I usually split the URL into pieces to make it easier to understand
domain_url = "https://www.gov.br/"
path_url = "planalto/pt-br/acompanhe-o-planalto/agenda-do-presidente-da-republica/"
query_url = "2021-03-19"
url = domain_url + path_url + query_url
print(url)
```

Now that we have the URL, let's have Requests fetch its content and store the result in the variable `site`.

```
site = requests.get(url)
# Documentation: https://requests.readthedocs.io/en/master/api/#requests.get
```

With the source code captured, we can inspect the result through `status_code` (whether the connection to the server succeeded), `headers` (the headers used in the request), `content` (the content as bytes), `text` (the content as unicode, i.e. "readable"), and other attributes. For more, see the [`requests.Response` documentation](https://requests.readthedocs.io/en/master/api/#requests.Response).

```
print(site.status_code)
# Documentation: https://requests.readthedocs.io/en/master/api/#requests.Response.status_code

print(site.headers)
# Documentation: https://requests.readthedocs.io/en/master/api/#requests.Response.headers

print(site.content)
# Documentation: https://requests.readthedocs.io/en/master/api/#requests.Response.content

print(site.text)
# Documentation: https://requests.readthedocs.io/en/master/api/#requests.Response.text
```

Now that the source code is stored in the variable `site`, we use BeautifulSoup to parse it, that is, to understand its structure. We will give the library the alias `bs`.
```
from bs4 import BeautifulSoup as bs
```

With the library loaded, let's parse `site.content` and store the result in the variable `content`.

```
content = bs(site.content, "html.parser")
# Documentation: https://www.crummy.com/software/BeautifulSoup/bs4/doc/#making-the-soup

print(content)
```

Visually nothing changed. BeautifulSoup's work, however, was internal: it read the structure of the HTML and built the element tree. Now we can search for tags, classes, and so on. This can be done with:

- `find()` ([documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find)), for a single element or the first occurrence; or
- `find_all()` ([documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all)), for an element that appears more than once, and its descendants.

(There are other ways to locate elements by position, such as `find_parent()` and `find_next_sibling()`. They are described in the [documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-the-tree).)

As arguments we can pass the HTML tag and, optionally, a CSS class or id. Let's see how to find `<ul>` (an unordered list). First, with just `find_all("ul")`...

```
print(content.find_all("ul"))
```

...then with a class, `find("ul", class_="submenu")`...

```
print(content.find("ul", class_="submenu"))
```

...and, finally, with an id, `find("ul", id="menu-barra-temp")`.

```
print(content.find("ul", id="menu-barra-temp"))
```

With this logic we can find the elements we want. Let's test it: the Presidency's agenda page lists Jair Bolsonaro's appointments. Inspecting the code:

![image](https://gitlab.com/rodolfo-viana/eventos/-/raw/main/20210327_gdgfoz_webscrapingcompython/img/04.png)

Note that all the appointments sit inside `<ul class="list-compromissos">`; they are _children_ of that element.
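Before running this on the live page, the same `find()`/`find_all()` logic can be sanity-checked on a small inline HTML snippet (the snippet below is invented for illustration; only the tag and class names mirror the agenda page):

```python
from bs4 import BeautifulSoup as bs

# A toy HTML snippet invented for illustration
html = """
<ul class="list-compromissos">
  <li class="item-compromisso-wrapper"><h4 class="compromisso-titulo">Meeting A</h4></li>
  <li class="item-compromisso-wrapper"><h4 class="compromisso-titulo">Meeting B</h4></li>
</ul>
"""

content = bs(html, "html.parser")

# find() returns the first matching element; find_all() returns every match
lista = content.find("ul", class_="list-compromissos")
itens = lista.find_all("li", class_="item-compromisso-wrapper")
print([i.find("h4", class_="compromisso-titulo").text for i in itens])
```

This is exactly the pattern the rest of the tutorial applies to the real page.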
Let's grab that tag with its specific class and store it in the variable `lista`:

```
lista = content.find("ul", class_="list-compromissos")
print(lista)
```

Now that we have `lista`, we see that each appointment sits inside `<li class="item-compromisso-wrapper">`. We should therefore grab all (`find_all()`) of the `li` tags with class `item-compromisso-wrapper` inside `lista`. We'll call these items `itens`.

```
itens = lista.find_all("li", class_="item-compromisso-wrapper")
print(itens)
```

The data we want lives in these tags:

- start time: `<time class="compromisso-inicio">`
- end time: `<time class="compromisso-fim">`
- appointment: `<h4 class="compromisso-titulo">`
- location: `<div class="compromisso-local">`

Since `itens` is a list, we can iterate over its elements, grab (`find()`) the text contained in those tags and classes, and store everything in a dictionary.

```
for i in itens:
    inicio = i.find("time", class_="compromisso-inicio")
    fim = i.find("time", class_="compromisso-fim")
    compromisso = i.find("h4", class_="compromisso-titulo")
    local = i.find("div", class_="compromisso-local")
    dicionario = dict(
        inicio=inicio,
        fim=fim,
        compromisso=compromisso,
        local=local
    )
    print(dicionario)
```

Note that the results still contain HTML. To strip it, we can access the `text` attribute after `find()`. Like this:

```
for i in itens:
    inicio = i.find("time", class_="compromisso-inicio").text
    fim = i.find("time", class_="compromisso-fim").text
    compromisso = i.find("h4", class_="compromisso-titulo").text
    local = i.find("div", class_="compromisso-local").text
    dicionario = dict(
        inicio=inicio,
        fim=fim,
        compromisso=compromisso,
        local=local
    )
    print(dicionario)
```

Done. We have scraped the data from the president's agenda. We can now put the dictionaries in a list and save it as `csv`, convert it to `json`, work with Pandas, and so on.
```
import csv

# Create an empty list
lista_final = list()

for i in itens:
    inicio = i.find("time", class_="compromisso-inicio").text
    fim = i.find("time", class_="compromisso-fim").text
    compromisso = i.find("h4", class_="compromisso-titulo").text
    local = i.find("div", class_="compromisso-local").text
    dicionario = dict(
        inicio=inicio,
        fim=fim,
        compromisso=compromisso,
        local=local
    )
    # Append each dictionary to the list that started out empty
    lista_final.append(dicionario)

# Create a file...
with open('agenda_pres.csv', 'w') as file:
    # ...write the column names...
    writer = csv.DictWriter(file, fieldnames=['inicio', 'fim', 'compromisso', 'local'])
    writer.writeheader()
    # ...and the rows from the list
    writer.writerows(lista_final)
```

It is worth noting that the logic used here, on the page for March 19, 2021, yielded two appointments. It could have been 2 million and the code would be the same.
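The text above also mentions converting the result to `json`. A minimal sketch with the standard library could look like this (the file name `agenda_pres.json` and the sample rows are assumptions for illustration, not real scraped data):

```python
import json

# A list of dictionaries shaped like the one built above;
# these rows are hypothetical placeholders, not real agenda entries
lista_final = [
    {"inicio": "09h30", "fim": "10h30", "compromisso": "Reunião", "local": "Palácio do Planalto"},
    {"inicio": "11h00", "fim": "12h00", "compromisso": "Despacho", "local": "Palácio do Planalto"},
]

# ensure_ascii=False keeps accented characters readable in the output file
with open("agenda_pres.json", "w", encoding="utf-8") as f:
    json.dump(lista_final, f, ensure_ascii=False, indent=2)
```

The resulting file can be loaded back with `json.load()` or read directly into Pandas with `pd.read_json()`.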
# Custom Estimator with Keras

**Learning Objectives**
- Learn how to create a custom estimator using tf.keras

## Introduction

Up until now we've been limited in our model architectures to premade estimators. But what if we want more control over the model? We can use the popular Keras API to create a custom model. Keras is a high-level API to build and train deep learning models. It is user-friendly and modular, and it makes writing custom building blocks of TensorFlow code much easier. Once we've built a Keras model, we then convert it to an estimator using `tf.keras.estimator.model_to_estimator()`. This gives us access to all the flexibility of Keras for creating deep learning models, along with the production readiness of the estimator framework!

```
# Ensure that we have TensorFlow 1.12 installed.
!pip3 freeze | grep tensorflow==1.12.0 || pip3 install tensorflow==1.12.0

import tensorflow as tf
import numpy as np
import shutil

print(tf.__version__)
```

## Train and Evaluate input functions

For the most part, we can use the same train and evaluation input functions that we had in previous labs. Note the function `create_feature_keras_input` below. We will use it to create the first layer of the model. It is called in turn by both `train_input_fn` and `eval_input_fn`.

```
CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat","dropofflon","dropofflat"]
CSV_DEFAULTS = [[0.0],[1],[0],[-74.0],[40.0],[-74.0],[40.7]]

def read_dataset(csv_path):
    def parse_row(row):
        # Decode the CSV row into a list of TF tensors
        fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS)

        # Pack the result into a dictionary
        features = dict(zip(CSV_COLUMN_NAMES, fields))

        # NEW: Add engineered features
        features = add_engineered_features(features)

        # Separate the label from the features
        label = features.pop("fare_amount") # remove label from features and store

        return features, label

    # Create a dataset containing the text lines.
    dataset = tf.data.Dataset.list_files(file_pattern = csv_path) # (i.e. data_file_*.csv)
    dataset = dataset.flat_map(map_func = lambda filename: tf.data.TextLineDataset(filenames = filename).skip(count = 1))

    # Parse each CSV row into correct (features, label) format for the Estimator API
    dataset = dataset.map(map_func = parse_row)

    return dataset

def create_feature_keras_input(features, label):
    features = tf.feature_column.input_layer(features = features, feature_columns = create_feature_columns())
    return features, label

def train_input_fn(csv_path, batch_size = 128):
    #1. Convert CSV into tf.data.Dataset with (features, label) format
    dataset = read_dataset(csv_path)

    #2. Shuffle, repeat, and batch the examples
    dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)

    #3. Create a single feature tensor for input to the Keras model
    dataset = dataset.map(map_func = create_feature_keras_input)

    return dataset

def eval_input_fn(csv_path, batch_size = 128):
    #1. Convert CSV into tf.data.Dataset with (features, label) format
    dataset = read_dataset(csv_path)

    #2. Batch the examples
    dataset = dataset.batch(batch_size = batch_size)

    #3. Create a single feature tensor for input to the Keras model
    dataset = dataset.map(map_func = create_feature_keras_input)

    return dataset
```

## Feature Engineering

We'll use the same engineered features that we had in previous labs.
```
def add_engineered_features(features):
    features["latdiff"] = features["pickuplat"] - features["dropofflat"] # North/South
    features["londiff"] = features["pickuplon"] - features["dropofflon"] # East/West
    features["euclidean_dist"] = tf.sqrt(x = features["latdiff"]**2 + features["londiff"]**2)

    return features

def create_feature_columns():
    # One-hot encode dayofweek and hourofday
    fc_dayofweek = tf.feature_column.categorical_column_with_identity(key = "dayofweek", num_buckets = 8)
    fc_hourofday = tf.feature_column.categorical_column_with_identity(key = "hourofday", num_buckets = 24)

    # Cross features to get combinations of day and hour
    fc_day_hr = tf.feature_column.crossed_column(keys = [fc_dayofweek, fc_hourofday], hash_bucket_size = 24 * 7)

    # Bucketize latitudes and longitudes
    NBUCKETS = 16
    latbuckets = np.linspace(38.0, 42.0, NBUCKETS).tolist()
    lonbuckets = np.linspace(-76.0, -72.0, NBUCKETS).tolist()
    fc_bucketized_plon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplon"), boundaries = lonbuckets)
    fc_bucketized_plat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplat"), boundaries = latbuckets)
    fc_bucketized_dlon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflon"), boundaries = lonbuckets)
    fc_bucketized_dlat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflat"), boundaries = latbuckets)

    feature_columns = [
        #1. Engineered using the tf.feature_column module
        tf.feature_column.indicator_column(categorical_column = fc_day_hr), # 168 columns
        fc_bucketized_plat, # 16 + 1 = 17 columns
        fc_bucketized_plon, # 16 + 1 = 17 columns
        fc_bucketized_dlat, # 16 + 1 = 17 columns
        fc_bucketized_dlon, # 16 + 1 = 17 columns
        #2. Engineered in input functions
        tf.feature_column.numeric_column(key = "latdiff"), # 1 column
        tf.feature_column.numeric_column(key = "londiff"), # 1 column
        tf.feature_column.numeric_column(key = "euclidean_dist") # 1 column
    ]

    return feature_columns
```

Calculate the number of feature columns that will be input to our Keras model:

```
num_feature_columns = 168 + (16 + 1) * 4 + 3
print("num_feature_columns = {}".format(num_feature_columns))
```

## Build Custom Keras Model

Now we can begin building our Keras model. Have a look at [the guide here](https://www.tensorflow.org/guide/keras) for more explanation.

#### **Exercise 1**

Complete the code in the cell below to add a sequence of dense layers using Keras's `Sequential` API. Create a model that consists of six layers with `relu` as the activation function. Have a look at [the documentation for tf.keras.layers.Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) to see which arguments to provide.

```
def create_keras_model():
    model = tf.keras.Sequential()
    # TODO: Your code goes here
    # TODO: Your code goes here
    # TODO: Your code goes here
    # TODO: Your code goes here
    # TODO: Your code goes here
    # TODO: Your code goes here

    def rmse(y_true, y_pred): # Root Mean Squared Error
        return tf.sqrt(x = tf.reduce_mean(input_tensor = tf.square(x = y_pred - y_true)))

    model.compile(optimizer = tf.train.AdamOptimizer(),
                  loss = "mean_squared_error",
                  metrics = [rmse])

    return model
```

## Serving input function

Once we've constructed our model in Keras, we next create the serving input function. This is also similar to what we have done in previous labs.
Note that we use our `create_feature_keras_input` function again so that we perform our feature engineering during inference.

```
# Create serving input function
def serving_input_fn():
    feature_placeholders = {
        "dayofweek": tf.placeholder(dtype = tf.int32, shape = [None]),
        "hourofday": tf.placeholder(dtype = tf.int32, shape = [None]),
        "pickuplon": tf.placeholder(dtype = tf.float32, shape = [None]),
        "pickuplat": tf.placeholder(dtype = tf.float32, shape = [None]),
        "dropofflon": tf.placeholder(dtype = tf.float32, shape = [None]),
        "dropofflat": tf.placeholder(dtype = tf.float32, shape = [None]),
    }

    features = {key: tensor for key, tensor in feature_placeholders.items()}

    # Perform our feature engineering during inference as well
    features, _ = create_feature_keras_input(add_engineered_features(features), None)

    return tf.estimator.export.ServingInputReceiver(features = {"dense_input": features}, receiver_tensors = feature_placeholders)
```

## Train and Evaluate

To train our model, we can use `train_and_evaluate` as we have before. Note that we use `tf.keras.estimator.model_to_estimator` to create our estimator. It takes as arguments the compiled Keras model, the OUTDIR, and optionally a `tf.estimator.RunConfig`. Have a look at [the documentation for tf.keras.estimator.model_to_estimator](https://www.tensorflow.org/api_docs/python/tf/keras/estimator/model_to_estimator) to make sure you understand how the arguments are used.

### **Exercise 2**

Complete the code below to create an estimator out of the Keras model we built above.
```
def train_and_evaluate(output_dir):
    estimator = tf.keras.estimator.model_to_estimator(
        # TODO: Your code goes here
    )

    train_spec = tf.estimator.TrainSpec(
        input_fn = lambda: train_input_fn(csv_path = "./taxi-train.csv"),
        max_steps = 500)

    exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)

    eval_spec = tf.estimator.EvalSpec(
        input_fn = lambda: eval_input_fn(csv_path = "./taxi-valid.csv"),
        steps = None,
        start_delay_secs = 10, # wait at least N seconds before first evaluation (default 120)
        throttle_secs = 10,    # wait at least N seconds before each subsequent evaluation (default 600)
        exporters = exporter)  # export SavedModel once at the end of training

    tf.logging.set_verbosity(tf.logging.INFO) # so loss is printed during training

    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```

```
%%time
OUTDIR = "taxi_trained"
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate(OUTDIR)
```

Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
```
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```

# 1. Overview

In this notebook, we provide a tutorial on how to take the standard DLRM model trained in the HugeCTR_DLRM_Training notebook and deploy the saved model to Triton Inference Server. We then collect an inference benchmark with the Triton performance analyzer tool.

1. [Overview](#1)
2. [Generate the DLRM Deployment Configuration](#2)
3. [Load Models on Triton Server](#3)
4. [Prepare Inference Input Data](#4)
5. [Inference Benchmark with the Triton Performance Tool](#5)

# 2. Generate the DLRM Deployment Configuration

## 2.1 Generate related model folders

```
# Define some data folders to store the model-related files

# Standard Libraries
import os
from time import time
import re
import shutil
import glob
import warnings

BASE_DIR = "/dlrm_infer"
model_folder = os.path.join(BASE_DIR, "model")
dlrm_model_repo = os.path.join(model_folder, "dlrm")
dlrm_version = os.path.join(dlrm_model_repo, "1")

if os.path.isdir(model_folder):
    shutil.rmtree(model_folder)
os.makedirs(model_folder)

if os.path.isdir(dlrm_model_repo):
    shutil.rmtree(dlrm_model_repo)
os.makedirs(dlrm_model_repo)

if os.path.isdir(dlrm_version):
    shutil.rmtree(dlrm_version)
os.makedirs(dlrm_version)
```

## 2.2 Copy DLRM model files to the model repository

```
!cp -r /dlrm_train/dlrm0_sparse_20000.model $dlrm_version/
!cp /dlrm_train/dlrm_dense_20000.model $dlrm_version/
!cp /dlrm_train/dlrm.json $dlrm_version/
!ls -l $dlrm_version
```

## 2.3 Generate the Triton configuration for deploying DLRM

```
%%writefile $dlrm_model_repo/config.pbtxt
name: "dlrm"
backend: "hugectr"
max_batch_size: 1
input [
  { name: "DES" data_type: TYPE_FP32 dims: [ -1 ] },
  { name: "CATCOLUMN" data_type: TYPE_INT64 dims: [ -1 ] },
  { name: "ROWINDEX" data_type: TYPE_INT32 dims: [ -1 ] }
]
output [
  { name: "OUTPUT0" data_type: TYPE_FP32 dims: [ -1 ] }
]
instance_group [
  { count: 1 kind: KIND_GPU gpus: [2] }
]
parameters [
  { key: "config" value: { string_value: "/dlrm_infer/model/dlrm/1/dlrm.json" } },
  { key: "gpucache" value: { string_value: "true" } },
  { key: "hit_rate_threshold" value: { string_value: "0.8" } },
  { key: "gpucacheper" value: { string_value: "0.5" } },
  { key: "label_dim" value: { string_value: "1" } },
  { key: "slots" value: { string_value: "26" } },
  { key: "cat_feature_num" value: { string_value: "26" } },
  { key: "des_feature_num" value: { string_value: "13" } },
  { key: "max_nnz" value: { string_value: "2" } },
  { key: "embedding_vector_size" value: { string_value: "128" } },
  { key: "embeddingkey_long_type" value: { string_value: "true" } }
]
```

## 2.4 Generate the HugeCTR backend parameter server configuration for deploying DLRM

```
%%writefile $model_folder/ps.json
{
    "supportlonglong": true,
    "models": [
        {
            "model": "dlrm",
            "sparse_files": ["/dlrm_infer/model/dlrm/1/dlrm0_sparse_20000.model"],
            "dense_file": "/dlrm_infer/model/dlrm/1/dlrm_dense_20000.model",
            "network_file": "/dlrm_infer/model/dlrm/1/dlrm.json"
        }
    ]
}
```

```
!ls -l $dlrm_version
!ls -l $dlrm_model_repo
```

# 3. Deploy DLRM on Triton Server

In this tutorial, we deploy DLRM to a single V100 (32GB). At this stage, you should have already launched the Triton Inference Server container with the following command:

```
docker run --gpus=all -it -v /dlrm_infer/:/dlrm_infer -v /dlrm_train/:/dlrm_train --net=host nvcr.io/nvidia/merlin/merlin-inference:0.7 /bin/bash
```

After you enter the container, you can launch the Triton server with the command below:

```
tritonserver --model-repository=/dlrm_infer/model/ --load-model=dlrm --model-control-mode=explicit --backend-directory=/usr/local/hugectr/backends --backend-config=hugectr,ps=/dlrm_infer/model/ps.json
```

Note:
- The model repository path is /dlrm_infer/model/.
- The path of the DLRM model network JSON file is /dlrm_infer/model/dlrm/1/dlrm.json.
- The path of the parameter server configuration file is /dlrm_infer/model/ps.json.

# 4. Prepare Inference Input Data

## 4.1 Read validation data

```
!ls -l /dlrm_train/dlrm/val

import pandas as pd
df = pd.read_parquet('/dlrm_train/dlrm/val/0.83ab760d4f4b4505a397e9b90247eb4a.parquet', engine='pyarrow')
df.head(2)

df.head(200000).to_csv('infer_test.txt', sep='\t', index=False, header=True)
```

## 4.2 Follow the Triton requirements to generate input data in JSON format

```
%%writefile ./criteo2predict.py
import argparse
import sys
import numpy as np
import pandas as pd
import json
import pickle

def parse_config(src_config):
    try:
        with open(src_config, 'r') as data_json:
            j_data = json.load(data_json)
            dense_dim = j_data["dense"]
            categorical_dim = j_data["categorical"]
            slot_size = j_data["slot_size"]
            assert(categorical_dim == np.sum(slot_size))
            return dense_dim, categorical_dim, slot_size
    except:
        print("Invalid data configuration file!")

def convert(src_csv, src_config, dst, batch_size, segmentation):
    dense_dim, categorical_dim, slot_size = parse_config(src_config)
    slot_size_array = [4976199, 3289052, 282487, 138210, 11, 2203, 8901, 67, 4, 948,
                       15, 25419, 5577159, 1385790, 4348882, 178673, 10023, 88, 34,
                       14705, 7112, 19283, 4, 6391, 1282, 60]
    offset = np.insert(np.cumsum(slot_size_array), 0, 0)[:-1]
    total_columns = 1 + dense_dim + categorical_dim
    df = pd.read_csv(src_csv, sep='\t', nrows=batch_size)
    cols = df.columns
    slot_num = len(slot_size)
    row_ptrs = [0 for _ in range(batch_size*slot_num + 1)]
    for i in range(1, len(row_ptrs)):
        row_ptrs[i] = row_ptrs[i-1] + slot_size[(i-1)%slot_num]
    label_df = pd.DataFrame(df['label'].values.reshape(1, batch_size))
    dense_df = pd.DataFrame(df[['I'+str(i+1) for i in range(dense_dim)]].values.reshape(1, batch_size*dense_dim))
    embedding_columns_df = pd.DataFrame(df[['C'+str(i+1) for i in range(categorical_dim)]].values.reshape(1, batch_size*categorical_dim))
    row_ptrs_df = pd.DataFrame(np.array(row_ptrs).reshape(1, batch_size*slot_num + 1))
    with open(dst, 'w') as dst_txt:
        dst_txt.write("{\n\"data\":[\n{\n")
        dst_txt.write("\"DES\":")
        dst_txt.write(','.join('%s' %id for id in dense_df.values.tolist()))
        dst_txt.write(",\n\"CATCOLUMN\":")
        dst_txt.write(','.join('%s' %id for id in (embedding_columns_df.values.reshape(-1,26)+offset).reshape(1,-1).tolist()))
        dst_txt.write(",\n\"ROWINDEX\":")
        dst_txt.write(','.join('%s' %id for id in row_ptrs_df.values.tolist()))
        dst_txt.write("\n}\n]\n}")

if __name__ == '__main__':
    arg_parser = argparse.ArgumentParser(description='Convert Preprocessed Criteo Data to Inference Format')
    arg_parser.add_argument('--src_csv_path', type=str, required=True)
    arg_parser.add_argument('--src_config_path', type=str, required=True)
    arg_parser.add_argument('--dst_path', type=str, required=True)
    arg_parser.add_argument('--batch_size', type=int, default=128)
    arg_parser.add_argument('--segmentation', type=str, default=' ')
    args = arg_parser.parse_args()
    src_csv_path = args.src_csv_path
    segmentation = args.segmentation
    src_config_path = args.src_config_path
    dst_path = args.dst_path
    batch_size = args.batch_size
    convert(src_csv_path, src_config_path, dst_path, batch_size, segmentation)
```

## 4.3 Define the inference input data format

```
%%writefile ./dlrm_input_format.json
{
    "dense": 13,
    "categorical": 26,
    "slot_size": [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]
}
```

## 4.4 Generate the input JSON data with batch size = 1

```
batchsize=1
!python3 criteo2predict.py --src_csv_path=./infer_test.txt --src_config_path=dlrm_input_format.json --dst_path ./$batchsize".json" --batch_size=$batchsize --segmentation=','
```

## 4.5 Get the Triton server status to check that DLRM was deployed successfully in Step 3

```
!curl -v localhost:8000/v2/health/ready
```

# 5. Get the Inference Benchmark with the Triton Performance Tool

## 5.1 Get the inference performance for batchsize=1

```
!perf_analyzer -m dlrm -u localhost:8000 --input-data 1.json --shape CATCOLUMN:26 --shape DES:13 --shape ROWINDEX:27
```

## 5.2 Get the inference performance for batchsize=131072

### 5.2.1 Modify max_batch_size from 1 to 131072 in $dlrm_model_repo/config.pbtxt

```
%%writefile $dlrm_model_repo/config.pbtxt
name: "dlrm"
backend: "hugectr"
max_batch_size: 131072
input [
  { name: "DES" data_type: TYPE_FP32 dims: [ -1 ] },
  { name: "CATCOLUMN" data_type: TYPE_INT64 dims: [ -1 ] },
  { name: "ROWINDEX" data_type: TYPE_INT32 dims: [ -1 ] }
]
output [
  { name: "OUTPUT0" data_type: TYPE_FP32 dims: [ -1 ] }
]
instance_group [
  { count: 1 kind: KIND_GPU gpus: [2] }
]
parameters [
  { key: "config" value: { string_value: "/dlrm_infer/model/dlrm/1/dlrm.json" } },
  { key: "gpucache" value: { string_value: "true" } },
  { key: "hit_rate_threshold" value: { string_value: "0.8" } },
  { key: "gpucacheper" value: { string_value: "0.5" } },
  { key: "label_dim" value: { string_value: "1" } },
  { key: "slots" value: { string_value: "26" } },
  { key: "cat_feature_num" value: { string_value: "26" } },
  { key: "des_feature_num" value: { string_value: "13" } },
  { key: "max_nnz" value: { string_value: "2" } },
  { key: "embedding_vector_size" value: { string_value: "128" } },
  { key: "embeddingkey_long_type" value: { string_value: "true" } }
]
```

### 5.2.2 Relaunch the Triton server to reload DLRM as described in Step 3

### 5.2.3 Generate the input JSON file with batchsize=131072

```
batchsize=131072
!python3 criteo2predict.py --src_csv_path=./infer_test.txt --src_config_path=dlrm_input_format.json --dst_path ./$batchsize".json" --batch_size=$batchsize --segmentation=','

!perf_analyzer -m dlrm -u localhost:8000 --input-data 131072.json --shape CATCOLUMN:3407872 --shape DES:1703936 --shape ROWINDEX:3407873
```

If you want more inference results with different batch sizes, repeat Step 5.2 with a new batch size.
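The tensor shapes passed to `perf_analyzer` above follow directly from the batch size: CATCOLUMN holds 26 categorical keys per sample, DES holds 13 dense features per sample, and ROWINDEX holds `batch_size * 26 + 1` CSR-style row offsets. A small sketch (plain Python/NumPy, mirroring the logic in `criteo2predict.py`; the helper name `triton_shapes` is invented here) recomputes them:

```python
import numpy as np

def triton_shapes(batch_size, slots=26, dense=13):
    # One categorical key per slot and one dense value per feature, flattened
    catcolumn = batch_size * slots
    des = batch_size * dense
    # CSR-style row pointers: one offset per slot per sample, plus the final end offset
    rowindex = batch_size * slots + 1
    return catcolumn, des, rowindex

# The same cumulative-offset trick used in criteo2predict.py to make the
# embedding keys of each slot globally unique across the 26 slots
slot_size_array = [4976199, 3289052, 282487, 138210, 11, 2203, 8901, 67, 4, 948,
                   15, 25419, 5577159, 1385790, 4348882, 178673, 10023, 88, 34,
                   14705, 7112, 19283, 4, 6391, 1282, 60]
offset = np.insert(np.cumsum(slot_size_array), 0, 0)[:-1]

print(triton_shapes(1))       # shapes used in section 5.1
print(triton_shapes(131072))  # shapes used in section 5.2
```

This makes it easy to derive the `--shape` arguments for any new batch size tried in Step 5.2.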
<a href="https://colab.research.google.com/github/TeachingTextMining/TextClassification/blob/main/01-SA-Pipeline/01-SA-Pipeline-Reviews.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Entrenamiento y ejecución de un pipeline de clasificación textual La clasificación de textos consiste en, dado un texto, asignarle una entre varias categorías. Algunos ejemplos de esta tarea son: - dado un tweet, categorizar su connotación como positiva, negativa o neutra. - dado un post de Facebook, clasificarlo como portador de un lenguaje ofensivo o no. En la actividad exploraremos cómo crear un pipeline y entrenarlo para clasificar reviews de [IMDB](https://www.imdb.com/) sobre películas en las categorías \[$positive$, $negative$\] Puede encontrar más información sobre este problema en [Kaggle](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews) y en [Large Movie Review Datase](http://ai.stanford.edu/~amaas/data/sentiment/). **Instrucciones:** - siga las indicaciones y comentarios en cada apartado. **Después de esta actividad nos habremos familiarizado con:** - algunos tipos de características ampliamente utilizadas en la clasificación de textos. - cómo construir un pipeline para la clasificación de textos utilizando [scikit-learn](https://scikit-learn.org/stable/). - utilizar este pipeline para clasificar nuevos textos. **Requerimientos** - python 3.6 - 3.8 - pandas - plotly <a name="sec:setup"></a> ### Instalación de librerías e importación de dependencias. Para comenzar, es preciso instalar e incluir las librerías necesarias. En este caso, el entorno de Colab incluye las necesarias. Ejecute la siguiente casilla prestando atención a las explicaciones dadas en los comentarios. 
```
# reset environment
%reset -f

# for building charts and exploratory data analysis
import plotly.graph_objects as go
import plotly.figure_factory as ff
import plotly.express as px

# for loading data and basic pre-processing
import pandas as pd
from collections import Counter

# for text pre-processing and feature extraction
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from nltk.stem.snowball import EnglishStemmer

# classification algorithms
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# for building pipelines
from sklearn.pipeline import Pipeline

# for evaluating the models
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, roc_curve, auc
from sklearn.utils.multiclass import unique_labels

# for saving the model
import pickle

print('Done!')
```

#### Defining functions and variables needed for data pre-processing

Before defining the pipeline, we will define some useful variables, such as the stop word list, along with functions to load the data, train the model, etc.

```
# stop word list. This list could also be read from a file using the read_corpus function
stop_words=['i','me','my','myself','we','our','ours','ourselves','you','your','yours','yourself','yourselves',
            'he','him','his','himself','she','her','hers','herself','it','its','itself','they','them','their',
            'theirs','themselves','what','which','who','whom','this','that','these','those','am','is','are',
            'was','were','be','been','being','have','has','had','having','do','does','did','doing','a','an',
            'the','and','but','if','or','because','as','until','while','of','at','by','for','with','about',
            'against','between','into','through','during','before','after','above','below','to','from','up',
            'down','in','out','on','off','over','under','again','further','then','once','here','there','when',
            'where','why','how','all','any','both','each','few','more','most','other','some','such','no','nor',
            'not','only','own','same','so','than','too','very','s','t','can','will','just','don','should','now',
            'ever']

# helper function used by CountVectorizer to process sentences
def english_stemmer(sentence):
    stemmer = EnglishStemmer()
    analyzer = CountVectorizer(binary=False, analyzer='word', stop_words=stop_words, ngram_range=(1, 1)).build_analyzer()
    return (stemmer.stem(word) for word in analyzer(sentence))

# saves a trained pipeline
def save_model(model, modelName = "pickle_model.pkl"):
    pkl_filename = modelName
    with open(pkl_filename, 'wb') as file:
        pickle.dump(model, file)

# loads a previously trained and saved pipeline
def load_model(rutaModelo = "pickle_model.pkl"):
    # Load from file
    with open(rutaModelo, 'rb') as file:
        pickle_model = pickle.load(file)
    return pickle_model

# helper function for making predictions with the model
def predict_model(model, data, pref='m'):
    """
    data: list of the texts to predict
    pref: identifier for the columns (labels_[pref], scores_[pref]_[class 1], etc.)
    """
    res = {}
    scores = None
    labels = model.predict(data)
    if hasattr(model, 'predict_proba'):
        scores = model.predict_proba(data)
        # pack the scores into a dictionary containing labels, class 1 scores, class 2 scores, ...
        # The class name is normalized to lowercase
        res = {f'scores_{pref}_{cls.lower()}':score for cls, score in zip(model.classes_, [col for col in scores.T])}

    # add the data related to the prediction
    res[f'labels_{pref}'] = labels

    # convert to a dataframe, ordering the columns: first the label, then the per-class scores, with the classes sorted alphabetically
    res = pd.DataFrame(res, columns=sorted(list(res.keys())))

    return res

# helper function that evaluates the results of a classification
def evaluate_model(y_true, y_pred, y_score=None, pos_label='positive'):
    print('==== Classification summary ==== ')
    print(classification_report(y_true, y_pred))
    print('Accuracy -> {:.2%}\n'.format(accuracy_score(y_true, y_pred)))

    # plot the confusion matrix
    display_labels = sorted(unique_labels(y_true, y_pred), reverse=True)
    cm = confusion_matrix(y_true, y_pred, labels=display_labels)
    z = cm[::-1]
    x = display_labels
    y = x[::-1].copy()
    z_text = [[str(y) for y in x] for x in z]

    fig_cm = ff.create_annotated_heatmap(z, x=x, y=y, annotation_text=z_text, colorscale='Viridis')
    fig_cm.update_layout(
        height=400,
        width=400,
        showlegend=True,
        margin={'t':150, 'l':0},
        title={'text' : 'Confusion Matrix', 'x':0.5, 'xanchor': 'center'},
        xaxis = {'title_text':'True Label', 'tickangle':45, 'side':'top'},
        yaxis = {'title_text':'Predicted Label', 'tickmode':'linear'},
    )
    fig_cm.show()

    # ROC curve (defined for binary classification)
    fig_roc = None
    if y_score is not None:
        fpr, tpr, thresholds = roc_curve(y_true, y_score, pos_label=pos_label)
        fig_roc = px.area(
            x=fpr, y=tpr,
            title={'text' : f'ROC Curve (AUC={auc(fpr, tpr):.4f})', 'x':0.5, 'xanchor': 'center'},
            labels=dict(x='False Positive Rate', y='True Positive Rate'),
            width=400, height=400
        )
        fig_roc.add_shape(type='line', line=dict(dash='dash'), x0=0, x1=1, y0=0, y1=1)
        fig_roc.update_yaxes(scaleanchor="x", scaleratio=1)
        fig_roc.update_xaxes(constrain='domain')
        fig_roc.show()

print('Done!')
```

### Loading the data and exploratory analysis

Before training the pipeline, we need to load the data. There are several options, among them:

- mount our Google Drive partition and read a file from it.
- read the data from a file in a local folder.
- read the data directly from a URL.

Run the following cell, paying attention to the additional instructions in the comments.

```
# uncomment the following 3 lines to read data from Google Drive, assuming a file named review.csv located inside a folder named 'Datos' in your Google Drive
#from google.colab import drive
#drive.mount('/content/drive')
#path = '/content/drive/MyDrive/Datos/ejemplo_review_train.csv'

# uncomment the following line to read the data from a local file, for example, assuming it is inside a directory named sample_data
#path = './sample_data/ejemplo_review_train.csv'

# uncomment the following line to read data from a URL
path = 'https://github.com/TeachingTextMining/TextClassification/raw/main/01-SA-Pipeline/sample_data/ejemplo_review_train.csv'

# read the data
data = pd.read_csv(path, sep=',')

print('Done!')
```

Once the data is loaded, run the following cell to build a chart showing the class distribution in the corpus.
```
text_col = 'Phrase'      # dataframe column holding the text (depends on the data format)
class_col = 'Sentiment'  # dataframe column holding the class (depends on the data format)

# compute some statistics about the data
categories = sorted(data[class_col].unique(), reverse=False)
hist = Counter(data[class_col])
dist = {item[0]: round(item[1]/len(data[class_col]), 3)
        for item in sorted(hist.items(), key=lambda x: x[0])}

print(f'Total instances -> {data.shape[0]}')
print(f'Class distribution -> {dist}')
print(f'Categories -> {categories}')
print(f'Sample review -> {data[text_col][0]}')
print(f'Category of the review -> {data[class_col][0]}')

fig = go.Figure(layout=go.Layout(height=400, width=600))
fig.add_trace(go.Bar(x=categories, y=[hist[cat] for cat in categories]))
fig.show()

print('Done!')
```

Finally, run the following cell to create the training and validation sets that will be used to train and validate the models.

```
# build the training (90%) and validation (10%) sets
seed = 0  # fix random_state for reproducibility
train, val = train_test_split(data, test_size=.1, stratify=data[class_col], random_state=seed)

print('Done!')
```

### Building a pipeline for text classification

To build the pipeline we will use sklearn's Pipeline class. It allows chaining the different steps, for example, feature-extraction algorithms and a classifier. For instance, to obtain a pipeline consisting of CountVectorizer, followed by TfidfTransformer and a Support Vector Machine as the classifier, you would use this statement:

~~~
Pipeline([
    ('dataVect', CountVectorizer(analyzer=english_stemmer)),
    ('tfidf', TfidfTransformer(smooth_idf=True, use_idf=True)),
    ('classifier', SVC(probability=True))
])
~~~

For more flexibility when trying several classifiers, the pipeline can be built without a classifier, adding it later. This is the approach we will follow in this activity. Run the following cell to define a function that builds a pipeline with the characteristics described above.

```
def preprocess_pipeline():
    return Pipeline([
        ('dataVect', CountVectorizer(analyzer=english_stemmer)),
        ('tfidf', TfidfTransformer(smooth_idf=True, use_idf=True)),
    ])

print('Done!')
```

### Training the model

Run the following cell, which brings together all the functions defined so far to build the pipeline, train it, and save it for later use.

```
# create the pipeline (including only the pre-processing steps)
model = preprocess_pipeline()

# create the classifier and add it to the model. You can try different classifiers
# classifier = MultinomialNB()
# classifier = DecisionTreeClassifier()
classifier = SVC(probability=True)
model.steps.append(('classifier', classifier))

# build the training (90%) and validation (10%) sets
seed = 0  # fix random_state for reproducibility
train, val = train_test_split(data, test_size=.1, stratify=data[class_col], random_state=seed)

# train the model
model.fit(train[text_col], train[class_col])

# save the model
save_model(model)

print('Done!')
```

### Evaluating the model

Once the model is trained, we can evaluate its performance on the training and validation sets. Run the following cell to evaluate the model on the training set.

```
# predict and evaluate the model on the training set
print('==== Training-set evaluation ====')

data = train
true_labels = data[class_col]

# the field names depend on the pref passed to predict_model and on the classes.
# See the comments in the function definition
m_pred = predict_model(model, data[text_col].to_list(), pref='m')

evaluate_model(true_labels, m_pred['labels_m'], m_pred['scores_m_positive'], 'positive')

print('Done!')
```

Run the following cell to evaluate the model on the validation set.
Compare the results.

```
# predict and evaluate the model on the validation set
print('==== Validation-set evaluation ====')

data = val
true_labels = data[class_col]

# the field names depend on the pref passed to predict_model and on the classes.
# See the comments in the function definition
m_pred = predict_model(model, data[text_col].to_list(), pref='m')

evaluate_model(true_labels, m_pred['labels_m'], m_pred['scores_m_positive'], 'positive')

print('Done!')
```

## Predicting new data

Once the model is trained, we can evaluate its performance on data not used during training, or use it to predict new instances. In either case, take care to perform the pre-processing steps required for the data at hand. In this example, we will use the test split prepared at the beginning.

**Note that**:

- the previously trained and saved model will be loaded, applying the relevant settings.
- if we have a saved model, we can run this part of the notebook directly. However, it will at least be necessary to first run the section [Installing the libraries...](#sec:setup)

### Instantiating the pre-trained model

To predict new instances we must load the previously trained model. This depends on the format in which the model was exported, but in general two elements are required: the model structure and the weights. Run the following cell to load the model.

```
# load the trained pipeline
model = load_model()

print('Done!')
```

### Predicting new data

With the model loaded, we can use it to analyze new data. Run the following cells to: (a) categorize a sample text; (b) load new data, categorize it, and show some statistics about the corpus.

```
# example text to classify, in the format [text 1, text 2, ..., text n]
text = ['Brian De Palma\'s undeniable virtuosity can\'t really camouflage the fact that his plot here is a thinly disguised "Psycho" carbon copy, but he does provide a genuinely terrifying climax. His "Blow Out", made the next year, was an improvement.']

# predict the new data.
# the field names depend on the pref passed to predict_model and on the classes.
# See the comments in the function definition
m_pred = predict_model(model, text, pref='m')
pred_labels = m_pred['labels_m'].values[0]
pred_proba = m_pred['scores_m_positive'].values[0]

print(f'The category of the review is -> {pred_labels}')
print(f'The score assigned to the positive class is -> {pred_proba:.2f}')

print('Done!')
```

We can also predict new data loaded from a file. Run the following cell, uncommenting the instructions needed for your case.

```
# uncomment the following 3 lines to read the data from Google Drive, assuming a file
# named review.csv located inside a folder named 'Datos' in your Google Drive
#from google.colab import drive
#drive.mount('/content/drive')
#path = '/content/drive/MyDrive/Datos/ejemplo_review_train.csv'

# uncomment the following line to read the data from a local file, for example,
# assuming it sits inside a directory named sample_data
#path = './sample_data/ejemplo_review_train.csv'

# uncomment the following line to read the data from a URL
path = 'https://github.com/TeachingTextMining/TextClassification/raw/main/01-SA-Pipeline/sample_data/ejemplo_review_test.csv'

# read the data
new_data = pd.read_csv(path, sep=',')

print('Done!')
```

Run the following cell to predict the data and show some statistics about the analysis.

```
# predict the test data
text_col = 'Phrase'
m_pred = predict_model(model, new_data[text_col].to_list(), pref='m')
pred_labels = m_pred['labels_m']

# compute some statistics about the predictions on the test set
categories = sorted(pred_labels.unique(), reverse=False)
hist = Counter(pred_labels.values)

fig = go.Figure(layout=go.Layout(height=400, width=600))
fig.add_trace(go.Bar(x=categories, y=[hist[cat] for cat in categories]))
fig.show()

print('Done!')
```
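The class-distribution statistics computed in the cell above can be illustrated without pandas or plotly. In this minimal sketch (assumed, standard-library-only; the labels below are hypothetical stand-ins for `m_pred['labels_m']`), a `Counter` gives the per-class histogram and a comprehension turns it into relative frequencies:

```python
from collections import Counter

# hypothetical predicted labels, standing in for m_pred['labels_m']
pred_labels = ['positive', 'negative', 'positive', 'positive', 'negative']

hist = Counter(pred_labels)                       # absolute counts per class
categories = sorted(hist)                         # class names, alphabetical
dist = {c: round(hist[c] / len(pred_labels), 3)   # relative frequencies
        for c in categories}

print(categories)  # ['negative', 'positive']
print(dist)        # {'negative': 0.4, 'positive': 0.6}
```

The `categories`/`hist` pair is exactly what the notebook feeds to `go.Bar` to draw the distribution plot.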
# Get average size of predicted instances

```
import numpy as np
import os
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab

# this can be modified for better representation
pylab.rcParams['figure.figsize'] = 10, 7

ROOT = '/home/maxsen/DEEPL/data/training_data/test/'
#ROOT = '/data/proj/smFISH/Students/Max_Senftleben/files/data/20190422_AMEX_transfer_nuclei/npy/'

# path to images
list_of_images = [ROOT + i for i in os.listdir(ROOT)]

#size_of_pixel = 0.11*ureg.micrometer * 0.11*ureg.micrometer
size_of_pixel = 0.11 * 0.11

def get_pixel_area(numpy_array, size_of_pixel):
    height, width, dim = numpy_array.shape
    dsplits = np.dsplit(numpy_array, dim)
    '''
    # used to show the masks
    plt.imshow(np.dstack((dsplits[0], dsplits[0], dsplits[0])))
    plt.show()
    for i in dsplits:
        plt.imshow(np.dstack((i*100, i*100, i*100)))
        plt.show()
    '''
    dsplits = dsplits[1:]
    db = {1: [], 2: [], 3: []}
    counts = [np.unique(one_array, return_counts=True) for one_array in dsplits]
    counts = [list(i) for i in counts]
    for i in counts:
        index = i[0][1]
        count = i[1][1] * size_of_pixel
        db[index].append(count)
    return db

def plot_nuclei_size(list_of_sizes, num_bins, save_path):
    fig, ax = plt.subplots()

    mu = np.mean(list_of_sizes)    # mean of distribution
    sigma = np.std(list_of_sizes)  # standard deviation of distribution
    print('Mean size: ', mu)
    print(sum(list_of_sizes)/len(list_of_sizes))

    n, bins, patches = ax.hist(x=list_of_sizes, bins='auto', color='#0504aa',
                               alpha=0.7, rwidth=0.9)
    print(len(bins))

    # normal-distribution overlay, scaled to the histogram area
    # (computed directly; matplotlib.mlab.normpdf was removed in recent matplotlib)
    y = ((1 / (np.sqrt(2 * np.pi) * sigma)) *
         np.exp(-0.5 * (1 / sigma * (bins - mu))**2)) * sum(n * np.diff(bins))
    ax.plot(bins, y, '--', color='r')

    ax.set_xlabel('Size in \u03BCm\u00b2')
    ax.set_ylabel('Frequency')
    ax.text(250, 400, "Mean = 138.86 \u03BCm\u00b2")
    ax.grid(axis='y', alpha=0.75)

    # Tweak spacing to prevent clipping of ylabel
    fig.tight_layout()
    plt.xlim(0, 400)
    maxfreq = np.max(n)
    plt.ylim(0, 800)
    plt.savefig(save_path, dpi=100)
    plt.show()

def iterate_through_images(folder, size_of_pixel):
    all_nuclei_sizes = []
    for img in folder:  # iterate over the list passed in (previously read a global)
        numpy_array = np.load(img)
        counts = get_pixel_area(numpy_array, size_of_pixel)
        #print(img)
        if 1 in counts:
            all_nuclei_sizes.extend(counts[1])
    return all_nuclei_sizes

all_sizes = iterate_through_images(list_of_images, size_of_pixel)
print(len(all_sizes))

between_110_170 = [i for i in all_sizes if i > 99 and i < 181]
print(len(between_110_170)/len(all_sizes))

# save path for histogram
save_path = '/home/maxsen/DEEPL/test_histo.png'

# I hardcoded the metrics
plot_nuclei_size(all_sizes, len(all_sizes), save_path)
```
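The pixel-to-area conversion at the heart of `get_pixel_area` is simple arithmetic: count the pixels carrying each nonzero mask label and multiply by the area of one pixel (0.11 µm × 0.11 µm in the notebook). A minimal, NumPy-free sketch of that step (hypothetical helper, operating on a flattened mask):

```python
from collections import Counter

PIXEL_AREA = 0.11 * 0.11  # area of one pixel in µm² (0.11 µm pixel pitch)

def mask_areas(mask_pixels):
    """Map each nonzero label in a flattened mask to its area in µm²."""
    counts = Counter(p for p in mask_pixels if p != 0)
    return {label: n * PIXEL_AREA for label, n in counts.items()}

# 3x3 toy mask, flattened: label 1 covers 4 pixels, label 2 covers 2
flat = [0, 1, 1,
        0, 1, 1,
        2, 2, 0]
areas = mask_areas(flat)
print(areas)
```

A label covering 4 pixels therefore gets an area of 4 × 0.0121 ≈ 0.0484 µm², matching the `count = i[1][1] * size_of_pixel` line in the notebook.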
Jeff's initial writeup -- 2019_01_21

## Comparing molecule loading using RDKit and OpenEye

It's really important that both RDKitToolkitWrapper and OpenEyeToolkitWrapper load the same file to equivalent OFFMol objects. If the ToolkitWrappers don't create the same OFFMol from a single file/external molecule representation, it's possible that the OpenForceField toolkit will assign the same molecule different parameters depending on whether OE or RDK is running on the backend(1). This notebook is designed to help us see which molecule formats can be reliably loaded using both toolkits.

The first test that is performed is loading from 3D. We have OpenEye load `MiniDrugBank_tripos.mol2`. RDKit cannot load mol2, but can load SDF, so I've used openbabel(2) to convert `MiniDrugBank_tripos.mol2` to SDF (`MiniDrugBank.sdf`), and then we have RDKit load that.

### How do we compare whether two OFFMols are equivalent?

This is, surprisingly, an unresolved question. "Equivalent" for our purposes means "would have the same SMIRKS-based parameters applied, for any valid SMIRKS". Since we don't have an `offxml` file containing parameters for all valid SMIRKS (because there are infinitely many valid SMIRKS, and because we'd also need an infinitely large molecule test set to contain representatives of all those substructures), I'd like to find some sort of test other than SMIRKS matching that will answer this question.

This notebook uses three comparison methods:

1. **Graph isomorphism**: Convert the molecules to NetworkX graphs where the nodes are atoms (described by element, is_aromatic, stereochemistry) and the edges are bonds (described by order, is_aromatic, stereochemistry). Then test to see if the graphs are isomorphic.
2. **SMILES comparison using OpenEye**: Convert each OFFMol to an OEMol, and then write them to canonical, isomeric, explicit-hydrogen SMILES(3). Test for string equality.
3. **SMILES comparison using RDKit**: Convert each OFFMol to an RDMol, and then write them to canonical, isomeric, explicit-hydrogen SMILES. Test for string equality.

_As a note for future efforts, it's possible that, since each SMILES is also a valid SMIRKS, we could test for equality between OFFMol_1 and OFFMol_2 by seeing if OFFMol_1's SMILES is found in a SMIRKS substructure search of OFFMol_2, and vice versa._

Each of these has some weaknesses.

* Graph isomorphism may be too strict -- e.g. a toolkit may mess with the kekule structure around aromatic systems, selecting different resonance forms of bond networks. This would result in a chemically equivalent structure that fails an isomorphism test involving bond orders.
* SMILES mismatches are hard to debug. Are the original OFFMols really meaningfully different? Or is our `to_rdkit` or `to_openeye` function not implemented correctly? Or is the mismatch due to some strange toolkit quirk or quiet sanitization that goes on?

### What if RDK and OE get different representations when loading from 3D?

This isn't the end of the world. We'd just remove SDF from RDKit's `ALLOWED_READ_FORMATS` list(4). RDKit's main input method would then just be SMILES. But actually we should ensure that...

### At a minimum, RDKit and OpenEye should be able to get the same OFFMol from the same SMILES

My initial thought here is that we could have both toolkits load molecules from a SMILES database. "I converted `MiniDrugBank_tripos.mol2` to SDF so RDKit could read it, why not convert it to SMILES too?". **But conversion from 3D to SMILES requires either OpenEye or RDKit**, which relies on their 3D molecule interpretation being correct... which is what we're testing here in the first place! So, I couldn't think of a clear resolution to this problem.

For background, Shuzhe Wang did some really good foundational work on toolkit molecule perception equivalence earlier in this project.
Shuzhe's successful test of force field parameter application equivalence (5) used OE to load a test database, convert the molecules to SMILES, and then had RDKit load those SMILES. It then tested whether the OE-loaded molecule and the RDKit-loaded molecule got identical force field parameters assigned. But this didn't test whether RDKit can load from 3D, just from SMILES. And not just any SMILES -- SMILES that came from OpenEye (including any sneaky sanitization that it could have done when loading from 3D). And since we didn't have a toolkit-independent molecule representation then (no OFFMols), he tested whether both molecules got the same force field parameters applied, which suffers from the "our current SMIRKS may not be fine-grained enough to catch all meaningful molecule differences" problem.

Shuzhe's approach does highlight one way to get a test database of SMILES -- load a 3D database using OpenEyeToolkitWrapper, then write all the resulting OFFMols to SMILES, and then have RDKitToolkitWrapper read all the SMILES and create OFFMols, then test whether both sets of OFFMols are identical. And, in case OE's molecule sanitization is making us end up with unusually clean SMILES, we can do the same test in reverse -- that is, use RDKitToolkitWrapper to load a 3D SDF, write the resulting OFFMols to SMILES, and then have OpenEyeToolkitWrapper read the SMILES, and compare the resulting OFFMols.

## What this notebook does

#### This notebook creates four OFFMol test sets:

1. **`oe_mols_from_3d`**: The results of calling `OFFMol.from_file('MiniDrugBank_tripos.mol2', toolkit_registry=an_oe_toolkit_wrapper)`.
2. **`rdk_mols_from_3d`**: The results of calling `OFFMol.from_file('MiniDrugBank.sdf', toolkit_registry=a_rdk_toolkit_wrapper)`.
3. **`rdk_mols_from_smiles_from_oe_mols_from_3d`**: The result of taking `oe_mols_from_3d`, converting them all to SMILES using OpenEyeToolkitWrapper, then reading the SMILES into OFFMols using RDKitToolkitWrapper.
4. **`oe_mols_from_smiles_from_rdk_mols_from_3d`**: The result of taking `rdk_mols_from_3d`, converting them all to SMILES using RDKitToolkitWrapper, then reading the SMILES into OFFMols using OpenEyeToolkitWrapper.

#### This notebook then compares the OFFMol test sets using three methods (same as above):

1. Graph isomorphism
2. `OFFMol.to_smiles()` using OpenEyeToolkitWrapper
3. `OFFMol.to_smiles()` using RDKitToolkitWrapper

## Results

When the notebook is run using the current codebase, the important outputs are as follows:

```
In comparing oe_mols_from_3d to rdk_mols_from_3d, I found 284 graph matches, 293 OE SMILES matches (26 comparisons errored), and 293 RDKit SMILES matches (7 comparisons errored) out of 346 molecules
```

This is the core test that we wanted to do (loading the same molecules from 3D), and these are pretty good results. We pass the strictest test (graph isomorphism) for 284 out of 346 molecules. We see that the SMILES tests are a little more forgiving, both giving 293/346 successes.

```
In comparing oe_mols_from_3d to rdk_mols_from_smiles_from_oe_mols_from_3d, I found 293 graph matches, 293 OE SMILES matches (26 comparisons errored), and 332 RDKit SMILES matches (0 comparisons errored) out of 365 molecules
```

```
In comparing oe_mols_from_smiles_from_rdk_mols_from_3d to rdk_mols_from_3d, I found 310 graph matches, 219 OE SMILES matches (0 comparisons errored), and 304 RDKit SMILES matches (7 comparisons errored) out of 319 molecules
```

## Conclusions

Based on this test, we can expect identical parameterization between the RDKit and OpenEye ToolkitWrappers at least 82% (284/346) of the time. The real number is likely to be higher, as the SMILES comparisons are more realistic measures of identity. Also, this limit assumes infinitely specific SMIRKS.

Passing molecules **FROM** RDKit **TO** OE is troublesome largely because RDKit won't recognize stereo around trivalent nitrogens, whereas OE will. See the discussion about "how much stereochemistry info do we want specified" here: https://github.com/openforcefield/openforcefield/issues/146

## Further work

There is low-hanging fruit that will improve reliability. To continue debugging differences between OE and RDK, consider setting the "verbose" argument to `compare_mols_using_smiles` to True in the final cell of this workbook. Or, look at the molecule loading errors during dataset construction and add logic to catch those cases.

#### Footnotes

(1) Some molecule perception differences will be covered up by the fact that our SMIRKSes are somewhat coarse-grained currently (one SMIRKS can cover a lot of chemical space). But this won't always be the case, as we may add finer-grained SMIRKS later. So, taking care of toolkit perception differences now can keep us from getting "surprised" in the future.

(2) `conda list` --> `openbabel 1!2.4.0 py37_2 omnia`

(3) `OFFMol.to_smiles()` actually operates by converting the OFFMol to an OEMol or RDMol (depending on which toolkit is available), and then uses the respective toolkit to create a SMILES. You can control which toolkit is used by instantiating an `openforcefield.utils.toolkits.OpenEyeToolkitWrapper` or `RDKitToolkitWrapper`, and passing it as the `toolkit_registry` argument to `to_smiles` (e.g. `OFFMol.to_smiles(toolkit_registry=my_toolkit_wrapper)`).

(4) If they really wanted to load from SDF using RDKit, a determined OFF toolkit user could just make an RDMol manually from SDF (`Chem.MolSupplierFromFile()` or something like that), and then run the molecules through `RDKitToolkitWrapper.from_rdkit()` to get an OFFMol. However, this would put the burden of correct chemical perception _on them_.
(5) https://github.com/openforcefield/openforcefield/blob/swang/examples/forcefield_modification/RDKitComparison.ipynb, search for `molecules =`

```
from openff.toolkit.utils.toolkits import RDKitToolkitWrapper, OpenEyeToolkitWrapper, UndefinedStereochemistryError
from openff.toolkit.utils import get_data_file_path
import difflib

OETKW = OpenEyeToolkitWrapper()
RDKTKW = RDKitToolkitWrapper()
```

### Load molecules from SDF using RDKit

```
from openff.toolkit.topology.molecule import Molecule
from rdkit import Chem

rdk_mols_from_3d = Molecule.from_file(get_data_file_path('molecules/MiniDrugBank.sdf'),
                                      toolkit_registry=RDKTKW,
                                      file_format='sdf',
                                      allow_undefined_stereo=True)

# Known loading problems (numbers mean X: "DrugBank_X"):
# Pre-condition violations ("Stereo atoms should be specified...") for 3787 and 2684 are due to R-C=C=C-R motifs
# Sanitization errors:
#   Nitro : 2799, 5415, 472, 794, 3739, 1570, 1802, 4865, 2465
#   C#N : 3046, 3655, 1594, 6947, 2467
#   C-N(=O)(C) : 6353
#   Complicated aromatic situation with nitrogen?: 1659, 1661(?), 7049
#   Unknown: 4346
```

### Load 3D molecules from Mol2 using OpenEye

```
# NBVAL_SKIP
from openff.toolkit.topology import Molecule

#molecules = Molecule.from_file(get_data_file_path('molecules/MiniDrugBank.sdf'),
oe_mols_from_3d = Molecule.from_file(get_data_file_path('molecules/MiniDrugBank_tripos.mol2'),
                                     toolkit_registry=OETKW,
                                     file_format='mol2',
                                     allow_undefined_stereo=False)
```

### Convert each OE-derived OFFMol to SMILES, and then use RDKitToolkitWrapper to read the SMILES.
```
# NBVAL_SKIP
rdk_mols_from_smiles_from_oe_mols_from_3d = []
for oe_mol in oe_mols_from_3d:
    if oe_mol is None:
        continue
    smiles = oe_mol.to_smiles(toolkit_registry=OETKW)
    try:
        new_mol = Molecule.from_smiles(smiles,
                                       toolkit_registry=RDKTKW,
                                       hydrogens_are_explicit=True)
        new_mol.name = oe_mol.name
        rdk_mols_from_smiles_from_oe_mols_from_3d.append(new_mol)
    except Exception as e:
        print(smiles)
        print(e)
        print()
```

### Convert each RDKit-derived OFFMol to SMILES, and then use OpenEyeToolkitWrapper to read the SMILES.

```
oe_mols_from_smiles_from_rdk_mols_from_3d = []
for rdk_mol in rdk_mols_from_3d:
    try:
        smiles = rdk_mol.to_smiles(toolkit_registry=RDKTKW)
        new_mol = Molecule.from_smiles(smiles,
                                       toolkit_registry=OETKW,
                                       hydrogens_are_explicit=True)
        new_mol.name = rdk_mol.name
        oe_mols_from_smiles_from_rdk_mols_from_3d.append(new_mol)
    except UndefinedStereochemistryError as e:
        print(smiles)
        print(e)
        print()
```

### Define a few helper functions for molecule comparison

```
def print_smi_difference(mol_name, smi_1, smi_2, smi_1_label='OE: ', smi_2_label='RDK:'):
    print(mol_name)
    print(smi_1_label, smi_1)
    print(smi_2_label, smi_2)
    differences = list(difflib.ndiff(smi_1, smi_2))
    msg = ''
    for i, s in enumerate(differences):
        if s[0] == ' ':
            continue
        elif s[0] == '-':
            msg += u'Delete "{}" from position {}\n'.format(s[-1], i)
        elif s[0] == '+':
            msg += u'Add "{}" to position {}\n'.format(s[-1], i)
    # Sometimes the diffs get really big. Skip printing in those cases
    if msg.count('\n') < 5:
        print(msg)
    print()

def compare_mols_using_smiles(mol_1, mol_2, toolkit_wrapper, mol_1_label, mol_2_label, verbose=False):
    mol_1_smi = mol_1.to_smiles(toolkit_registry=toolkit_wrapper)
    mol_2_smi = mol_2.to_smiles(toolkit_registry=toolkit_wrapper)
    if mol_1_smi == mol_2_smi:
        return True
    else:
        if verbose:
            print_smi_difference(mol_1.name, mol_1_smi, mol_2_smi,
                                 smi_1_label=mol_1_label, smi_2_label=mol_2_label)
        return False

def compare_mols_using_nx(mol_1, mol_2):
    return mol_1 == mol_2
```

### And perform the actual comparisons

```
# NBVAL_SKIP
mol_sets_to_compare = ((('oe_mols_from_3d', oe_mols_from_3d),
                        ('rdk_mols_from_3d', rdk_mols_from_3d)),
                       #(('oe_mols_from_smiles', oe_mols_from_smiles),
                       # ('rdk_mols_from_smiles', rdk_mols_from_smiles)),
                       (('oe_mols_from_3d', oe_mols_from_3d),
                        ('rdk_mols_from_smiles_from_oe_mols_from_3d', rdk_mols_from_smiles_from_oe_mols_from_3d)),
                       (('oe_mols_from_smiles_from_rdk_mols_from_3d', oe_mols_from_smiles_from_rdk_mols_from_3d),
                        ('rdk_mols_from_3d', rdk_mols_from_3d)),
                       )

for (set_1_name, mol_set_1), (set_2_name, mol_set_2) in mol_sets_to_compare:
    set_1_name_to_mol = {mol.name: mol for mol in mol_set_1 if not(mol is None)}
    set_2_name_to_mol = {mol.name: mol for mol in mol_set_2 if not(mol is None)}
    names_in_common = set(set_1_name_to_mol.keys()) & set(set_2_name_to_mol.keys())
    print()
    print()
    print()
    print('There are {} molecules in the {} set'.format(len(mol_set_1), set_1_name))
    print('There are {} molecules in the {} set'.format(len(mol_set_2), set_2_name))
    print('These sets have {} molecules in common'.format(len(names_in_common)))
    graph_matches = 0
    rdk_smiles_matches = 0
    oe_smiles_matches = 0
    errored_graph_comparisons = 0
    errored_rdk_smiles_comparisons = 0
    errored_oe_smiles_comparisons = 0
    for name in names_in_common:
        set_1_mol = set_1_name_to_mol[name]
        set_2_mol = set_2_name_to_mol[name]
        nx_match = compare_mols_using_nx(set_1_mol, set_2_mol)
        if nx_match:
            graph_matches += 1
        try:
            rdk_smi_match = compare_mols_using_smiles(set_1_mol, set_2_mol, RDKTKW,
                                                      'OE--(RDKTKW)-->SMILES: ',
                                                      'RDK--(RDKTKW)-->SMILES:',
                                                      verbose=False)
            if rdk_smi_match:
                rdk_smiles_matches += 1
        except Exception as e:
            errored_rdk_smiles_comparisons += 1
            print(e)
        try:
            oe_smi_match = compare_mols_using_smiles(set_1_mol, set_2_mol, OETKW,
                                                     'OE--(OETKW)-->SMILES: ',
                                                     'RDK--(OETKW)-->SMILES:',
                                                     verbose=False)
            if oe_smi_match:
                oe_smiles_matches += 1
        except:
            errored_oe_smiles_comparisons += 1
    print("In comparing {} to {}, I found {} graph matches, {} OE SMILES matches ({} comparisons errored), and "
          "{} RDKit SMILES matches ({} comparisons errored) out of {} molecules".format(set_1_name, set_2_name,
                                                                                        graph_matches,
                                                                                        oe_smiles_matches,
                                                                                        errored_oe_smiles_comparisons,
                                                                                        rdk_smiles_matches,
                                                                                        errored_rdk_smiles_comparisons,
                                                                                        len(names_in_common)))
```
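The graph-isomorphism criterion above is delegated to `compare_mols_using_nx` (which defers to OFFMol equality). As a toolkit-free illustration of what that criterion means, here is a brute-force check for tiny labeled graphs: nodes carry an atom label (e.g. element), edges carry a bond label (e.g. order), and two graphs match if some relabeling of the atoms maps one onto the other. This is a hypothetical sketch, not the toolkit's implementation, and it is only practical for a handful of atoms:

```python
from itertools import permutations

def labeled_isomorphic(nodes1, edges1, nodes2, edges2):
    """Brute-force labeled-graph isomorphism for tiny graphs.

    nodes: list of hashable atom labels, e.g. element symbols
    edges: dict mapping frozenset({i, j}) -> bond label, e.g. bond order
    """
    if len(nodes1) != len(nodes2) or len(edges1) != len(edges2):
        return False
    for perm in permutations(range(len(nodes1))):
        # the candidate mapping must preserve every atom label...
        if any(nodes1[i] != nodes2[perm[i]] for i in range(len(nodes1))):
            continue
        # ...and carry every bond (with its label) onto a matching bond
        remapped = {frozenset(perm[i] for i in e): lbl for e, lbl in edges1.items()}
        if remapped == edges2:
            return True
    return False

# water written with two different atom orderings: isomorphic
water_a = (['O', 'H', 'H'], {frozenset({0, 1}): 1, frozenset({0, 2}): 1})
water_b = (['H', 'O', 'H'], {frozenset({0, 1}): 1, frozenset({1, 2}): 1})
print(labeled_isomorphic(*water_a, *water_b))  # True
```

NetworkX's `is_isomorphic` with `categorical_node_match`/`categorical_edge_match` does the same job efficiently on real molecules, which is what the writeup's method 1 refers to.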
```
import os
os.environ["OMP_NUM_THREADS"] = "4"

import numpy as np
import matplotlib.pyplot as plt
import h5py
import time
from mpl_toolkits import mplot3d
import matplotlib as mpl
from matplotlib.gridspec import GridSpec
import scipy.stats as stats

mpl.rc('text', usetex=True)
mpl.rc('font', family='serif')

import tensorflow as tf  # code is implemented in TF 2.x
tf.config.set_visible_devices([], 'GPU')
tf.config.threading.set_inter_op_parallelism_threads(4)
tf.config.threading.set_intra_op_parallelism_threads(2)
tf.keras.backend.set_floatx('float64')
from tensorflow.python.ops import math_ops

# model-free ESN
%run ESN_bias.ipynb
# model to compute reservoir state derivative
%run ESN_bias_drdt.ipynb
```

## Data

We load the data and treat it for the different cases:

- (i) given [x,z], reconstruct [y]
- (ii) given [x,y], reconstruct [z]
- (iii) given [x], reconstruct [y,z]

```
## Load data obtained using scipy.odeint
hf = h5py.File('./data/Lorenz_odeint.h5','r')
u_exact = np.array(hf.get('q'))
t = np.array(hf.get('t'))
hf.close()

t_lyap = 0.906**(-1)  # Lyapunov time
dt = t[1]-t[0]        # timestep

# norm is the maximum variation, used for the Normalized Root Mean Square Error
m = u_exact.min(axis=0)
M = u_exact.max(axis=0)
norm = M-m
print(norm)

## Parameters for the split of the training/validation ----------------------------
batches = 1      # number of batches for training
begin = 50       # begin of training (wash-out at the start to remove the transient of the reservoir)
end = 10050      # has to be equal to or smaller than end - this can be used for a "validation" of the ESN
horizon = 10000  # size of the test set

cut1 = 1000      # to get rid of the initial transient in the simulation
cut2 = cut1+end+horizon+1
u_exact = u_exact[np.arange(cut1,cut2),:]  # we keep only the necessary data

### Treatment of inputs -----------------------------------------------------------
Nx = u_exact.shape[1]
num_inputs1 = Nx - 1  # for (i)-(ii)
num_inputs3 = Nx - 2  # for (iii)

wave = u_exact[:end+1,:].astype("float64")

idx1 = [0,2]  # for (i)
idx_1 = [0,2,1]
idx2 = [0,1]  # for (ii)
idx_2 = [0,1,2]
idx3 = [0]    # for (iii)
idx_3 = [0,1,2]

rnn_inputs1 = wave[:-1,idx1].reshape(1, end, num_inputs1)
rnn_inputs2 = wave[:-1,idx2].reshape(1, end, num_inputs1)
rnn_inputs3 = wave[:-1,idx3].reshape(1, end, num_inputs3)

### Treatment of output ------------------------------------------------------------
rnn_target1 = wave[1:,idx1]
rnn_target2 = wave[1:,idx2]
rnn_target3 = wave[1:,idx3]

### Time derivative ------------------------------------------------------------
def Lorenz_RHS(yy):
    """Computes RHS of Lorenz equations"""
    # Lorenz parameters
    sigma, rho, beta = [10.0, 28.0, 8./3.]
    # Lorenz equations
    x, y, z = yy.T
    x1 = sigma*(y-x)
    y1 = rho*x - y - np.multiply(x,z)
    z1 = np.multiply(x,y) - beta*z
    return np.stack([x1,y1,z1], axis=1)

# same treatment of the inputs
u_der = Lorenz_RHS(u_exact)  # RHS of the equations, namely the system's true time derivative
wave_der = u_der[:end+1,:].astype("float64")

rnn_inputs_der1 = wave_der[:-1,idx1].reshape(1, end, num_inputs1)
rnn_inputs_der2 = wave_der[:-1,idx2].reshape(1, end, num_inputs1)
rnn_inputs_der3 = wave_der[:-1,idx3].reshape(1, end, num_inputs3)
```

## ESN Generation

We create ESNs of different sizes and with different input matrices, as the case with only one input (iii) has a different input matrix from cases (i)-(ii).

```
### ESN hyperparameters for all the different sizes of the networks
num_units1 = 100   # neurons
num_units2 = 200
num_units5 = 500
num_units7 = 750
num_units10 = 1000

decay = 1.0         # for leaky-ESN
rho_spectral = 0.9  # spectral radius of Wecho
sigma_in = 0.1      # scaling of input
connectivity = 20
sparseness1 = 1. - connectivity/(num_units1 - 1.)   # sparseness
sparseness2 = 1. - connectivity/(num_units2 - 1.)
sparseness5 = 1. - connectivity/(num_units5 - 1.)
sparseness7 = 1. - connectivity/(num_units7 - 1.)
sparseness10 = 1. - connectivity/(num_units10 - 1.)
lmb = 1e-6  # Tikhonov factor
activation = lambda x: tf.keras.activations.tanh(x)  # the activation function of the ESN
b_in = 10   # input bias

# random numbers
random_seed = 1
rng = np.random.RandomState(random_seed)

# Initialize the ESN cells of different sizes.
# For each size, we add cellx3 to create the cell for case (iii):
# the cells in (iii) differ from (i)-(ii) because the size of the input is different.
# Each cell is built from a generator freshly seeded with random_seed.
def make_cell(num_units, num_inputs, sparseness):
    return EchoStateRNNCell(num_units=num_units,
                            num_inputs=num_inputs,
                            activation=activation,
                            decay=decay,            # decay (leakage) rate
                            rho=rho_spectral,       # spectral radius of echo matrix
                            sigma_in=sigma_in,      # scaling of input matrix
                            b_in=b_in,              # input bias
                            sparseness=sparseness,  # sparsity of the echo matrix
                            rng=np.random.RandomState(random_seed))

cell1   = make_cell(num_units1,  num_inputs1, sparseness1)
cell13  = make_cell(num_units1,  num_inputs3, sparseness1)
cell2   = make_cell(num_units2,  num_inputs1, sparseness2)
cell23  = make_cell(num_units2,  num_inputs3, sparseness2)
cell5   = make_cell(num_units5,  num_inputs1, sparseness5)
cell53  = make_cell(num_units5,  num_inputs3, sparseness5)
cell7   = make_cell(num_units7,  num_inputs1, sparseness7)
cell73  = make_cell(num_units7,  num_inputs3, sparseness7)
cell10  = make_cell(num_units10, num_inputs1, sparseness10)
cell103 = make_cell(num_units10, num_inputs3, sparseness10)

# creating the different ESNs from the cells
def make_ESN(cell):
    return tf.keras.layers.RNN(cell=cell, dtype=tf.float64,
                               return_sequences=True, return_state=True)

ESN1,  ESN13  = make_ESN(cell1),  make_ESN(cell13)
ESN2,  ESN23  = make_ESN(cell2),  make_ESN(cell23)
ESN5,  ESN53  = make_ESN(cell5),  make_ESN(cell53)
ESN7,  ESN73  = make_ESN(cell7),  make_ESN(cell73)
ESN10, ESN103 = make_ESN(cell10), make_ESN(cell103)

def outputs(inputs, init_state, ESN):
    """ Compute the reservoir states in open-loop for a time series of the inputs """
    outputs, last_state = ESN(inputs=inputs, initial_state=init_state)
    # we concatenate a bias and the inputs to the reservoir state
    outputs = tf.concat([outputs,
                         tf.ones([1,outputs.shape[1],1], dtype=tf.float64),
                         inputs], axis=2)
    return outputs, last_state

# Run all the networks through the training set
rnn_init_state = tf.constant(np.zeros([batches, num_units1]))
stored_outputs1, final_state1 = outputs(rnn_inputs1, rnn_init_state, ESN1)
stored_outputs1 = stored_outputs1[0].numpy()
stored_outputs13, final_state13 = outputs(rnn_inputs3, rnn_init_state, ESN13)
stored_outputs13 = stored_outputs13[0].numpy()
stored_outputs12, final_state12 = outputs(rnn_inputs2, rnn_init_state, ESN1)
stored_outputs12 = stored_outputs12[0].numpy()

rnn_init_state = tf.constant(np.zeros([batches, num_units2]))
stored_outputs2, final_state2 = outputs(rnn_inputs1, rnn_init_state, ESN2)
stored_outputs2 = stored_outputs2[0].numpy()
stored_outputs23, final_state23 = outputs(rnn_inputs3, rnn_init_state, ESN23)
stored_outputs23 = stored_outputs23[0].numpy()
stored_outputs22, final_state22 = outputs(rnn_inputs2, rnn_init_state, ESN2)
stored_outputs22 = stored_outputs22[0].numpy()

rnn_init_state = tf.constant(np.zeros([batches, num_units5]))
stored_outputs5, final_state5 = outputs(rnn_inputs1, rnn_init_state, ESN5)
stored_outputs5 = stored_outputs5[0].numpy()
stored_outputs53, final_state53 = outputs(rnn_inputs3, rnn_init_state, ESN53)
stored_outputs53 = stored_outputs53[0].numpy()
stored_outputs52, final_state52 = outputs(rnn_inputs2, rnn_init_state, ESN5)
stored_outputs52 = stored_outputs52[0].numpy()

rnn_init_state = tf.constant(np.zeros([batches, num_units7]))
stored_outputs7, final_state7 = outputs(rnn_inputs1, rnn_init_state, ESN7)
stored_outputs7 = stored_outputs7[0].numpy()
stored_outputs73, final_state73 = outputs(rnn_inputs3, rnn_init_state, ESN73)
stored_outputs73 = stored_outputs73[0].numpy()
stored_outputs72, final_state72 = outputs(rnn_inputs2, rnn_init_state, ESN7)
stored_outputs72 = stored_outputs72[0].numpy()

rnn_init_state = tf.constant(np.zeros([batches, num_units10]))
stored_outputs10, final_state10 = outputs(rnn_inputs1, rnn_init_state, ESN10)
stored_outputs10 = stored_outputs10[0].numpy()
stored_outputs103, final_state103 = outputs(rnn_inputs3, rnn_init_state, ESN103)
stored_outputs103 = stored_outputs103[0].numpy()
stored_outputs102, final_state102 = outputs(rnn_inputs2, rnn_init_state, ESN10)
stored_outputs102 = stored_outputs102[0].numpy()
```

## Load Weights and Losses from Training

The indices in the .h5 files indicate which observed state was used, e.g., [0,2] means [x,z]. Different files have a different number of neurons and/or different observed states.
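As a rough, hedged illustration of this convention (the actual reading is done in Load_data.ipynb below, so the file-name pattern, dataset key, and matrix shape used here are assumptions for the sketch, not the notebook's real ones), a minimal h5py round trip for one output matrix could look like this:

```
import os
import tempfile

import h5py
import numpy as np

# Hypothetical example of the naming idea described above: the observed
# indices (here [0, 2], i.e. [x, z]) and the reservoir size appear in the name.
idx, num_units = [0, 2], 100
fname = os.path.join(tempfile.gettempdir(),
                     'Lorenz_W_' + str(idx) + '_' + str(num_units) + '.h5')

# placeholder output matrix acting on [reservoir state, bias, inputs]
W_out = np.random.randn(num_units + len(idx) + 1, 1)
with h5py.File(fname, 'w') as hf:
    hf.create_dataset('W_out', data=W_out)   # dataset key is an assumption

with h5py.File(fname, 'r') as hf:
    W_loaded = np.array(hf.get('W_out'))     # read back as a NumPy array

print(np.allclose(W_out, W_loaded))  # True: the round trip preserves the matrix
```

The same `File(...)` / `get(...)` pattern, with the real names, is what Load_data.ipynb applies to each trained network.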
The actions to read the .h5 files have been moved to Load_data.ipynb.

```
%run Load_data.ipynb
```

## Plot reconstruction

We run the 1000-neuron networks in the test set, then plot the reconstruction in the training and test sets for the 1000-neuron networks.

```
# running the networks in the test set of 100 LTs, from end to end+horizon
new_input1 = u_exact[end+1:end+horizon,idx1]
new_input1 = new_input1.reshape(1,new_input1.shape[0],new_input1.shape[1])
Yt_1000y = np.matmul(outputs(new_input1,final_state10, ESN10)[0][0], W_y_1000)[:,-1]

new_input2 = u_exact[end+1:end+horizon,idx2]
new_input2 = new_input2.reshape(1,new_input2.shape[0],new_input2.shape[1])
Yt_1000z = np.matmul(outputs(new_input2,final_state102,ESN10)[0][0], W_z_1000)[:,-1]

new_input3 = u_exact[end+1:end+horizon,idx3]
new_input3 = new_input3.reshape(1,new_input3.shape[0],new_input3.shape[1])
Yt_1000yz = np.matmul(outputs(new_input3,final_state103, ESN103)[0][0], W_yz_1000)

# we plot here the reconstruction in an interval of the training set, panels (a)-(d),
# and the PDFs for the reconstruction in the entirety of the training (b)-(e) and test (c)-(f) sets
plt.rcParams["figure.figsize"] = (15,5)
plt.rcParams["font.size"] = 30
axx = plt.figure()
gs = GridSpec(2, 3, width_ratios=[3.5, 1, 1])

ax = axx.add_subplot(gs[3])
ax.set_ylim(-18,53)
ax.set_ylabel('$\phi_3$', fontsize=25)
ax.set_yticks([0,25,50])
truth = wave[1+begin:1+end,idx_2[-1]]
truth1 = u_exact[end+1:end+horizon,idx_2[-1]]
a_100z = np.matmul(stored_outputs103[begin:end],W_yz_1000)[:,-1]
a_1000z = np.matmul(stored_outputs102[begin:end],W_z_1000)[:,-1]  # for histograms in training
i1, i2 = [1000,2000]  # time-series interval to plot
ax.plot(np.arange(i1,i2)*0.01,truth[i1:i2], 'k', label='$\\textrm{True}$', linewidth=7.)
ax.plot(np.arange(i1,i2)*0.01,a_1000z[i1:i2], 'darkgreen', label='(ii) $\mathbf{x}=[\phi_1,\phi_2]$', linewidth=4)
ax.plot(np.arange(i1,i2)*0.01,a_100z[i1:i2], 'cornflowerblue', label='(iii) $\mathbf{x}=[\phi_1]$', linewidth=2)
ax.set_xlabel('Time [LT]', fontsize=25)
ax.tick_params(axis='both', labelsize=25)
ax.annotate('(d)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right',
            xycoords='axes fraction', textcoords='offset points', fontsize=25)
ax.legend(fontsize=25, loc='lower center', bbox_to_anchor=(0.5,-0.1), frameon=False,
          handletextpad=0.4, labelspacing=.2, ncol=3, columnspacing=1., handlelength=1.)

ax = axx.add_subplot(gs[4])
ax.set_ylim(-18,53)
ax.set_xlim(-0.001,0.065)
ax.set_xticks([0,0.025,0.05])
ax.set_xticklabels(['0.', '0.025', '0.05'])
ax.tick_params(axis='both', labelsize=25)
ax.tick_params(
    axis='y',        # changes apply to the y-axis
    which='both',    # both major and minor ticks are affected
    left=False,      # ticks along the left edge are off
    right=False,     # ticks along the right edge are off
    labelleft=False) # labels along the left edge are off
# PDFs for the true data, (ii) and (iii), for variable z in the training set
ax.set_xlabel('PDF', fontsize=25)
density = stats.gaussian_kde(truth)
n, x, _ = plt.hist(truth, bins=100, color='white', histtype=u'step', density=True, orientation='horizontal')
density_100z = stats.gaussian_kde(a_100z)
n, x_100z, _ = plt.hist(a_100z, bins=100, color='white', histtype=u'step', density=True, orientation='horizontal')
density_1000z = stats.gaussian_kde(a_1000z)
n, x_1000z, _ = plt.hist(a_1000z, bins=100, color='white', histtype=u'step', density=True, orientation='horizontal')
ax.plot(density(x), x, 'k', linewidth=7., label='True')
ax.plot(density_1000z(x_1000z), x_1000z, 'darkgreen', linewidth=4., label='$N_r=1000$',)
ax.plot(density_100z(x_100z), x_100z, 'cornflowerblue', linewidth=2, label='$N_r=100$',)
ax.annotate('(e)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right',
            xycoords='axes fraction', textcoords='offset points', fontsize=25)

ax = axx.add_subplot(gs[5])
ax.set_ylim(-18,53)
ax.set_xlim(-0.001,0.065)
ax.set_xticks([0,0.025,0.05])
ax.set_xticklabels(['0.', '0.025', '0.05'])
ax.tick_params(axis='both', labelsize=25)
ax.tick_params(
    axis='y',        # changes apply to the y-axis
    which='both',    # both major and minor ticks are affected
    left=False,      # ticks along the left edge are off
    right=False,     # ticks along the right edge are off
    labelleft=False) # labels along the left edge are off
ax.set_xlabel('PDF', fontsize=25)
# PDFs for the true data, (ii) and (iii), for variable z in the test set
density = stats.gaussian_kde(truth1)
n, x, _ = plt.hist(truth1, bins=100, color='white', histtype=u'step', density=True, orientation='horizontal')
density_100z = stats.gaussian_kde(Yt_1000yz[:,-1][np.argsort(Yt_1000yz[:,-1])][2:])
n, x_100z, _ = plt.hist(Yt_1000yz[:,-1][np.argsort(Yt_1000yz[:,-1])][2:], bins=100, color='white', histtype=u'step', density=True, orientation='horizontal')
density_1000z = stats.gaussian_kde(Yt_1000z)
n, x_1000z, _ = plt.hist(Yt_1000z, bins=100, color='white', histtype=u'step', density=True, orientation='horizontal')
ax.plot(density(x), x, 'k', linewidth=7., label='True')
ax.plot(density_1000z(x_1000z), x_1000z, 'darkgreen', linewidth=4., label='$N_r=1000$',)
ax.plot(density_100z(x_100z), x_100z, 'cornflowerblue', linewidth=2., label='$N_r=100$',)
ax.annotate('(f)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right',
            xycoords='axes fraction', textcoords='offset points', fontsize=25)

ax = axx.add_subplot(gs[0])
ax.tick_params(
    axis='x',          # changes apply to the x-axis
    which='both',      # both major and minor ticks are affected
    bottom=False,      # ticks along the bottom edge are off
    top=False,         # ticks along the top edge are off
    labelbottom=False)
a_100 = np.matmul(stored_outputs103[begin:end],W_yz_1000)[:,-2]
a_1000 = np.matmul(stored_outputs10[begin:end],W_y_1000)[:,-1]
ax.set_ylim(-45,29)
ax.set_ylabel('$\phi_2$', labelpad=-10, fontsize=25)
truth = wave[1+begin:1+end,idx_1[-1]]
truth1 = u_exact[end+1:end+horizon,idx_1[-1]]
ax.plot(np.arange(i1,i2)*0.01,truth[i1:i2], 'k', label='True', linewidth=7.)
ax.plot(np.arange(i1,i2)*0.01,a_1000[i1:i2], 'blueviolet', label='(i) $\mathbf{x}=[\phi_1,\phi_3]$', linewidth=4.)
ax.plot(np.arange(i1,i2)*0.01,a_100[i1:i2], 'cornflowerblue', label='(iii) $\mathbf{x}=[\phi_1]$', linewidth=2.)
ax.legend(fontsize=25, loc='lower center', bbox_to_anchor=(0.5,-0.1), frameon=False,
          handletextpad=0.4, labelspacing=.2, ncol=3, columnspacing=1., handlelength=1.)
ax.tick_params(axis='both', labelsize=25)
ax.annotate('(a)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right',
            xycoords='axes fraction', textcoords='offset points', fontsize=25)

ax = axx.add_subplot(gs[1])
ax.set_xlim(-0.001,0.065)
ax.tick_params(
    axis='x',          # changes apply to the x-axis
    which='both',      # both major and minor ticks are affected
    bottom=False,      # ticks along the bottom edge are off
    top=False,         # ticks along the top edge are off
    labelbottom=False)
ax.tick_params(
    axis='y',        # changes apply to the y-axis
    which='both',    # both major and minor ticks are affected
    left=False,      # ticks along the left edge are off
    right=False,     # ticks along the right edge are off
    labelleft=False) # labels along the left edge are off
ax.set_ylim(-45,29)
ax.tick_params(axis='both', labelsize=25)
density = stats.gaussian_kde(truth)
n, x, _ = plt.hist(truth, bins=100, color='white', histtype=u'step', density=True, orientation='horizontal')
density_100 = stats.gaussian_kde(a_100)
n, x_100, _ = plt.hist(a_100, bins=100, color='white', histtype=u'step', density=True, orientation='horizontal')
density_1000 = stats.gaussian_kde(a_1000)
n, x_1000, _ = plt.hist(a_1000, bins=100, color='white', histtype=u'step', density=True, orientation='horizontal')
ax.plot(density(x), x, 'k', linewidth=7., label='True')
ax.plot(density_1000(x_1000), x_1000, 'blueviolet', linewidth=4, label='$N_r=1000$',)
ax.plot(density_100(x_100), x_100, 'cornflowerblue', linewidth=2, label='$N_r=100$',)
ax.annotate('(b)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right',
            xycoords='axes fraction', textcoords='offset points', fontsize=25)

ax = axx.add_subplot(gs[2])
ax.set_ylim(-45,29)
ax.set_xlim(-0.001,0.065)
ax.set_xticks([0,0.025,0.05])
ax.tick_params(axis='both', labelsize=25)
ax.tick_params(
    axis='x',          # changes apply to the x-axis
    which='both',      # both major and minor ticks are affected
    bottom=False,      # ticks along the bottom edge are off
    top=False,         # ticks along the top edge are off
    labelbottom=False)
ax.tick_params(
    axis='y',        # changes apply to the y-axis
    which='both',    # both major and minor ticks are affected
    left=False,      # ticks along the left edge are off
    right=False,     # ticks along the right edge are off
    labelleft=False) # labels along the left edge are off
# plt.ylim(0,0.065)
density = stats.gaussian_kde(truth1)
n, x, _ = plt.hist(truth1, bins=100, color='white', histtype=u'step', density=True, orientation='horizontal')
density_100z = stats.gaussian_kde(Yt_1000yz[:,-2])
n, x_100z, _ = plt.hist(Yt_1000yz[:,-2], bins=100, color='white', histtype=u'step', density=True, orientation='horizontal')
density_1000z = stats.gaussian_kde(Yt_1000y)
n, x_1000z, _ = plt.hist(Yt_1000y, bins=100, color='white', histtype=u'step', density=True, orientation='horizontal')
ax.plot(density(x), x, 'k', linewidth=7., label='True')
ax.plot(density_1000z(x_1000z), x_1000z, 'blueviolet', linewidth=4., label='$N_r=1000$',)
ax.plot(density_100z(x_100z), x_100z, 'cornflowerblue', linewidth=2., label='$N_r=100$',)
ax.annotate('(c)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right',
            xycoords='axes fraction', textcoords='offset points', fontsize=25)

axx.tight_layout(pad=0.1, h_pad=0.2, w_pad=0.01)
plt.show()
```

## Error in training and test set

```
# losses and true losses for each case in the training set; F indicates forward Euler,
# a is the physics loss and b the true loss
Loss_Mse = tf.keras.losses.MeanSquaredError()

# (i)
a_100, b_100 = l_y_100[np.nonzero(l_y_100)], tl_y_100[np.nonzero(tl_y_100)]
a_200, b_200 = l_y_200[np.nonzero(l_y_200)], tl_y_200[np.nonzero(tl_y_200)]
a_500, b_500 = l_y_500[np.nonzero(l_y_500)], tl_y_500[np.nonzero(tl_y_500)]
a_750, b_750 = l_y_750[np.nonzero(l_y_750)], tl_y_750[np.nonzero(tl_y_750)]
a_1000, b_1000 = l_y_1000[np.nonzero(l_y_1000)], tl_y_1000[np.nonzero(tl_y_1000)]
a_100F, b_100F = l_y_100F[np.nonzero(l_y_100F)], tl_y_100F[np.nonzero(tl_y_100F)]
a_200F, b_200F = l_y_200F[np.nonzero(l_y_200F)], tl_y_200F[np.nonzero(tl_y_200F)]
a_500F, b_500F = l_y_500F[np.nonzero(l_y_500F)], tl_y_500F[np.nonzero(tl_y_500F)]
a_750F, b_750F = l_y_750F[np.nonzero(l_y_750F)], tl_y_750F[np.nonzero(tl_y_750F)]
a_1000F, b_1000F = l_y_1000F[np.nonzero(l_y_1000F)], tl_y_1000F[np.nonzero(tl_y_1000F)]

# (ii)
a_100z, b_100z = l_z_100[np.nonzero(l_z_100)], tl_z_100[np.nonzero(tl_z_100)]
a_200z, b_200z = l_z_200[np.nonzero(l_z_200)], tl_z_200[np.nonzero(tl_z_200)]
a_500z, b_500z = l_z_500[np.nonzero(l_z_500)], tl_z_500[np.nonzero(tl_z_500)]
a_750z, b_750z = l_z_750[np.nonzero(l_z_750)], tl_z_750[np.nonzero(tl_z_750)]
a_1000z, b_1000z = l_z_1000[np.nonzero(l_z_1000)], tl_z_1000[np.nonzero(tl_z_1000)]
a_100zF, b_100zF = l_z_100F[np.nonzero(l_z_100F)], tl_z_100F[np.nonzero(tl_z_100F)]
a_200zF, b_200zF = l_z_200F[np.nonzero(l_z_200F)], tl_z_200F[np.nonzero(tl_z_200F)]
a_500zF, b_500zF = l_z_500F[np.nonzero(l_z_500F)], tl_z_500F[np.nonzero(tl_z_500F)]
a_750zF, b_750zF = l_z_750F[np.nonzero(l_z_750F)], tl_z_750F[np.nonzero(tl_z_750F)]
a_1000zF, b_1000zF = l_z_1000F[np.nonzero(l_z_1000F)], tl_z_1000F[np.nonzero(tl_z_1000F)]

# (iii)
yz_100 = np.matmul(stored_outputs13[begin:end],W_yz_100)
yz_200 = np.matmul(stored_outputs23[begin:end],W_yz_200)
yz_500 = np.matmul(stored_outputs53[begin:end],W_yz_500)
yz_750 = np.matmul(stored_outputs73[begin:end],W_yz_750)
yz_1000 = np.matmul(stored_outputs103[begin:end],W_yz_1000)
yz_100F = np.matmul(stored_outputs13[begin:end],W_yz_100F)
yz_200F = np.matmul(stored_outputs23[begin:end],W_yz_200F)
yz_500F = np.matmul(stored_outputs53[begin:end],W_yz_500F)
yz_750F = np.matmul(stored_outputs73[begin:end],W_yz_750F)
yz_1000F = np.matmul(stored_outputs103[begin:end],W_yz_1000F)

b_100_y = Loss_Mse(yz_100[:,0],u_exact[begin+1:end+1,1]).numpy()
b_200_y = Loss_Mse(yz_200[:,0],u_exact[begin+1:end+1,1]).numpy()
b_500_y = Loss_Mse(yz_500[:,0],u_exact[begin+1:end+1,1]).numpy()
b_750_y = Loss_Mse(yz_750[:,0],u_exact[begin+1:end+1,1]).numpy()
b_1000_y = Loss_Mse(yz_1000[:,0],u_exact[begin+1:end+1,1]).numpy()
b_100_yF = Loss_Mse(yz_100F[:,0],u_exact[begin+1:end+1,1]).numpy()
b_200_yF = Loss_Mse(yz_200F[:,0],u_exact[begin+1:end+1,1]).numpy()
b_500_yF = Loss_Mse(yz_500F[:,0],u_exact[begin+1:end+1,1]).numpy()
b_750_yF = Loss_Mse(yz_750F[:,0],u_exact[begin+1:end+1,1]).numpy()
b_1000_yF = Loss_Mse(yz_1000F[:,0],u_exact[begin+1:end+1,1]).numpy()
b_100_z = Loss_Mse(yz_100[:,1],u_exact[begin+1:end+1,2]).numpy()
b_200_z = Loss_Mse(yz_200[:,1],u_exact[begin+1:end+1,2]).numpy()
b_500_z = Loss_Mse(yz_500[:,1],u_exact[begin+1:end+1,2]).numpy()
b_750_z = Loss_Mse(yz_750[:,1],u_exact[begin+1:end+1,2]).numpy()
b_1000_z = Loss_Mse(yz_1000[:,1],u_exact[begin+1:end+1,2]).numpy()
b_100_zF = Loss_Mse(yz_100F[:,1],u_exact[begin+1:end+1,2]).numpy()
b_200_zF = Loss_Mse(yz_200F[:,1],u_exact[begin+1:end+1,2]).numpy()
b_500_zF = Loss_Mse(yz_500F[:,1],u_exact[begin+1:end+1,2]).numpy()
b_750_zF = Loss_Mse(yz_750F[:,1],u_exact[begin+1:end+1,2]).numpy()
b_1000_zF = Loss_Mse(yz_1000F[:,1],u_exact[begin+1:end+1,2]).numpy()

# open-loop in the test set (which lasts 100 Lyapunov times after the training set)
# to compute the error in the test set for the different sizes of the reservoir and AD vs FE
# (i)
new_input1 = u_exact[end+1:end+horizon,idx1]
new_input1 = new_input1.reshape(1,new_input1.shape[0],new_input1.shape[1])
out_100 = outputs(new_input1,final_state1, ESN1)[0][0,:,:]
out_200 = outputs(new_input1,final_state2, ESN2)[0][0,:,:]
out_500 = outputs(new_input1,final_state5, ESN5)[0][0,:,:]
out_750 = outputs(new_input1,final_state7, ESN7)[0][0,:,:]
out_1000 = outputs(new_input1,final_state10, ESN10)[0][0,:,:]
Ytest_100 = np.matmul(out_100, W_y_100)
Ytest_200 = np.matmul(out_200, W_y_200)
Ytest_500 = np.matmul(out_500, W_y_500)
Ytest_750 = np.matmul(out_750, W_y_750)
Ytest_1000 = np.matmul(out_1000, W_y_1000)
Ytest_100F = np.matmul(out_100, W_y_100F)
Ytest_200F = np.matmul(out_200, W_y_200F)
Ytest_500F = np.matmul(out_500, W_y_500F)
Ytest_750F = np.matmul(out_750, W_y_750F)
Ytest_1000F = np.matmul(out_1000, W_y_1000F)

# (ii)
new_input2 = u_exact[end+1:end+horizon,idx2]
new_input2 = new_input2.reshape(1,new_input2.shape[0],new_input2.shape[1])
out_100 = outputs(new_input2,final_state12, ESN1)[0][0,:,:]
out_200 = outputs(new_input2,final_state22, ESN2)[0][0,:,:]
out_500 = outputs(new_input2,final_state52, ESN5)[0][0,:,:]
out_750 = outputs(new_input2,final_state72, ESN7)[0][0,:,:]
out_1000 = outputs(new_input2,final_state102, ESN10)[0][0,:,:]
Ytest_100z = np.matmul(out_100, W_z_100)
Ytest_200z = np.matmul(out_200, W_z_200)
Ytest_500z = np.matmul(out_500, W_z_500)
Ytest_750z = np.matmul(out_750, W_z_750)
Ytest_1000z = np.matmul(out_1000, W_z_1000)
Ytest_100zF = np.matmul(out_100, W_z_100F)
Ytest_200zF = np.matmul(out_200, W_z_200F)
Ytest_500zF = np.matmul(out_500, W_z_500F)
Ytest_750zF = np.matmul(out_750, W_z_750F)
Ytest_1000zF = np.matmul(out_1000, W_z_1000F)

# (iii)
new_input3 = u_exact[end+1:end+horizon,idx3]
new_input3 = new_input3.reshape(1,new_input3.shape[0],new_input3.shape[1])
out_100 = outputs(new_input3,final_state13, ESN13)[0][0,:,:]
out_200 = outputs(new_input3,final_state23, ESN23)[0][0,:,:]
out_500 = outputs(new_input3,final_state53, ESN53)[0][0,:,:]
out_750 = outputs(new_input3,final_state73, ESN73)[0][0,:,:]
out_1000 = outputs(new_input3,final_state103, ESN103)[0][0,:,:]
Ytest_100yz = np.matmul(out_100, W_yz_100)
Ytest_200yz = np.matmul(out_200, W_yz_200)
Ytest_500yz = np.matmul(out_500, W_yz_500)
Ytest_750yz = np.matmul(out_750, W_yz_750)
Ytest_1000yz = np.matmul(out_1000, W_yz_1000) Ytest_100yzF = np.matmul(out_100, W_yz_100F) Ytest_200yzF = np.matmul(out_200, W_yz_200F) Ytest_500yzF = np.matmul(out_500, W_yz_500F) Ytest_750yzF = np.matmul(out_750, W_yz_750F) Ytest_1000yzF = np.matmul(out_1000, W_yz_1000F) ``` ### NRMSE Plots ``` #Normalized MSE in test set automatic differentiation (square root to get NRMSE later) b_1 = np.array([Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_100[:,-1]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_200[:,-1]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_500[:,-1]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_750[:,-1]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_1000[:,-1]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_100yz[:,-2]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_200yz[:,-2]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_500yz[:,-2]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_750yz[:,-2]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_1000yz[:,-2]).numpy()/norm[idx_1[-1]]**2, # Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_100z[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_200z[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_500z[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_750z[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_1000z[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_100yz[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], 
Ytest_200yz[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_500yz[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_750yz[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_1000yz[:,-1]).numpy()/norm[idx_2[-1]]**2]) #Normalized MSE in test set forward euler b_1F = np.array([Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_100F[:,-1]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_200F[:,-1]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_500F[:,-1]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_750F[:,-1]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_1000F[:,-1]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_100yzF[:,-2]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_200yzF[:,-2]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_500yzF[:,-2]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_750yzF[:,-2]).numpy()/norm[idx_1[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_1[-1]], Ytest_1000yzF[:,-2]).numpy()/norm[idx_1[-1]]**2, # Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_100zF[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_200zF[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_500zF[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_750zF[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_1000zF[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_100yzF[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], 
Ytest_200yzF[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_500yzF[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_750yzF[:,-1]).numpy()/norm[idx_2[-1]]**2, Loss_Mse(u_exact[end+2:end+horizon+1,idx_2[-1]], Ytest_1000yzF[:,-1]).numpy()/norm[idx_2[-1]]**2]) #Normalized MSE in training set automatic differentiation b_2 = np.array([b_100z[-1]/norm[idx_2[-1]]**2,b_200z[-1]/norm[idx_2[-1]]**2, b_500z[-1]/norm[idx_2[-1]]**2,b_750z[-1]/norm[idx_2[-1]]**2, b_1000z[-1]/norm[idx_2[-1]]**2,b_100_z/norm[idx_2[-1]]**2, b_200_z/norm[idx_2[-1]]**2,b_500_z/norm[idx_2[-1]]**2, b_750_z/norm[idx_2[-1]]**2,b_1000_z/norm[idx_2[-1]]**2, # b_100[-1]/norm[idx_1[-1]]**2,b_200[-1]/norm[idx_1[-1]]**2, b_500[-1]/norm[idx_1[-1]]**2,b_750[-1]/norm[idx_1[-1]]**2, b_1000[-1]/norm[idx_1[-1]]**2,b_100_y/norm[idx_1[-1]]**2, b_200_y/norm[idx_1[-1]]**2.,b_500_y/norm[idx_1[-1]]**2, b_750_y/norm[idx_1[-1]]**2,b_1000_y/norm[idx_1[-1]]**2]) #Normalized MSE in training set forward euler b_2F = np.array([b_100zF[-1]/norm[idx_2[-1]]**2,b_200zF[-1]/norm[idx_2[-1]]**2, b_500zF[-1]/norm[idx_2[-1]]**2,b_750zF[-1]/norm[idx_2[-1]]**2, b_1000zF[-1]/norm[idx_2[-1]]**2,b_100_zF/norm[idx_2[-1]]**2, b_200_zF/norm[idx_2[-1]]**2,b_500_zF/norm[idx_2[-1]]**2, b_750_zF/norm[idx_2[-1]]**2,b_1000_zF/norm[idx_2[-1]]**2, # b_100F[-1]/norm[idx_1[-1]]**2,b_200F[-1]/norm[idx_1[-1]]**2, b_500F[-1]/norm[idx_1[-1]]**2,b_750F[-1]/norm[idx_1[-1]]**2, b_1000F[-1]/norm[idx_1[-1]]**2,b_100_yF/norm[idx_1[-1]]**2, b_200_yF/norm[idx_1[-1]]**2.,b_500_yF/norm[idx_1[-1]]**2, b_750_yF/norm[idx_1[-1]]**2,b_1000_yF/norm[idx_1[-1]]**2]) #panel (a)-(c) are results for FE and (b)-(d) for AD #dash-dotted lines are test set resutls and continuous lines training set ## plt.rcParams["figure.figsize"] = (15,5) plt.rcParams["font.size"] = 30 ax1, fig = plt.subplots(2,2) x_array = np.array([1,2,5,7.5,10]) axx = plt.subplot(2,2,3) plt.ylim(0.0099,0.1) plt.grid(True, 
which="both", axis='y', ls="-", alpha=0.3) # plt.xscale('log') plt.ylabel('NRMSE$(\phi_3)$',labelpad=15, fontsize=25) plt.plot(x_array,np.sqrt(b_1F[10:15]), c='darkgreen', marker='o', linestyle='-.', linewidth=1, markersize=8) plt.plot(x_array,np.sqrt(b_1F[15:20]), c='cornflowerblue', marker='o', linestyle='-.', linewidth=1, markersize=8) plt.plot(x_array,np.sqrt(b_2F[:5]), c='darkgreen', marker='o', linewidth=1, markersize=8, label='(ii) $\:\mathbf{x}=[\phi_1,\phi_2]$') plt.plot(x_array,np.sqrt(b_2F[5:10]), c='cornflowerblue', marker='o', linewidth=1, markersize=8, label='(iii) $\mathbf{x}=[\phi_1]$') axx.set_yscale('log') plt.tick_params(axis='y', labelsize=22) plt.legend(fontsize=25, loc='upper center', bbox_to_anchor=(.5,1.09), frameon=False, handletextpad=0.1,labelspacing=.4, handlelength=1) plt.annotate('(c)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right', xycoords='axes fraction', textcoords='offset points', fontsize=25) plt.xticks(x_array,['$100$', '$200$', '$500$', '$750$', '$1000$'],fontsize=22) plt.xlabel('$N_r$', fontsize=25) axx = plt.subplot(2,2,1) plt.grid(True, which="both", axis='y', ls="-", alpha=0.3) plt.yscale('log') plt.yticks([1e-4,9e-4,8e-4,7e-4,6e-4,5e-4,4e-4,3e-4,2e-4,9e-5,8e-5, 1e-3,9e-3,8e-3,7e-3,6e-3,5e-3,4e-3,3e-3,2e-3, 1e-2,2e-2]) # plt.xscale('log') plt.plot(x_array,np.sqrt(b_1F[0:5]), c='blueviolet', marker='o', linestyle='-.', linewidth=1 , markersize=8) plt.plot(x_array,np.sqrt(b_1F[5:10]), c='cornflowerblue', marker='o', linestyle='-.', linewidth=1 , markersize=8) plt.plot(x_array,np.sqrt(b_2F[10:15]), c='blueviolet', marker='o', linewidth=1, markersize=8, label='(i) $\;\,\mathbf{x}=[\phi_1,\phi_3]$') plt.plot(x_array,np.sqrt(b_2F[15:20]), c='cornflowerblue', marker='o', linewidth=1, markersize=8, label='(iii) $\mathbf{x}=[\phi_1]$') plt.ylim(7e-5, 3e-2) # plt.tick_params(axis='x', pad=7) plt.tick_params(axis='y', labelsize=22) plt.ylabel('NRMSE$(\phi_2)$', labelpad=15, fontsize=25) plt.tick_params( axis='x', # changes apply to 
the x-axis which='both', # both major and minor ticks are affected bottom=False, # ticks along the bottom edge are off top=False, # ticks along the top edge are off labelbottom=False) # axx.yaxis.tick_right() # axx.yaxis.set_label_position("right") plt.annotate('(a)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right', xycoords='axes fraction', textcoords='offset points', fontsize=25) plt.legend(fontsize=25, loc='lower center', bbox_to_anchor=(.5,-.09), frameon=False, handletextpad=0.1,labelspacing=.4, handlelength=1) axx = plt.subplot(2,2,4) plt.ylim(0.0099,0.1) plt.grid(True, which="both", axis='y', ls="-", alpha=0.3) plt.plot(x_array,np.sqrt(b_1[10:15]), c='darkgreen', marker='o', linestyle='-.', linewidth=1, markersize=8) plt.plot(x_array,np.sqrt(b_1[15:20]), c='cornflowerblue', marker='o', linestyle='-.', linewidth=1, markersize=8) plt.plot(x_array,np.sqrt(b_2[:5]), c='darkgreen', marker='o', linewidth=1, markersize=8, label='$\mathbf{x}=[\phi_1,\phi_2]$') plt.plot(x_array,np.sqrt(b_2[5:10]), c='cornflowerblue', marker='o', linewidth=1, markersize=8, label='$\mathbf{x}=[\phi_1]$') labels = [item.get_text() for item in axx.get_yticklabels()] empty_string_labels = ['']*len(labels) axx.set_yticklabels(empty_string_labels) axx.tick_params( axis='y', # changes apply to the x-axis which='both', # both major and minor ticks are affected left=False, # ticks along the bottom edge are off right=False, # ticks along the top edge are off labelleft=False) plt.tick_params(axis='y', labelsize=22) axx.set_yscale('log') # plt.legend(fontsize=25, loc='upper center', bbox_to_anchor=(.5,1.08), # frameon=False, handletextpad=0.1,labelspacing=.2) plt.annotate('(d)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right', xycoords='axes fraction', textcoords='offset points', fontsize=25) plt.xticks(x_array,['$100$', '$200$', '$500$', '$750$', '$1000$'],fontsize=22) plt.xlabel('$N_r$', fontsize=25) axx = plt.subplot(2,2,2) plt.grid(True, which="both", axis='y', ls="-", alpha=0.3) 
plt.yscale('log')
plt.plot(x_array, np.sqrt(b_1[0:5]), c='blueviolet', marker='o', linestyle='-.', linewidth=1, markersize=8)
plt.plot(x_array, np.sqrt(b_1[5:10]), c='cornflowerblue', marker='o', linestyle='-.', linewidth=1, markersize=8)
plt.plot(x_array, np.sqrt(b_2[10:15]), c='blueviolet', marker='o', linewidth=1, markersize=8, label='$\mathbf{x}=[\phi_1,\phi_3]$')
plt.plot(x_array, np.sqrt(b_2[15:20]), c='cornflowerblue', marker='o', linewidth=1, markersize=8, label='$\mathbf{x}=[\phi_1]$')
plt.ylim(7e-5, 3e-2)
plt.yticks([1e-4,9e-4,8e-4,7e-4,6e-4,5e-4,4e-4,3e-4,2e-4,9e-5,8e-5,
            1e-3,9e-3,8e-3,7e-3,6e-3,5e-3,4e-3,3e-3,2e-3,
            1e-2,2e-2])
labels = [item.get_text() for item in axx.get_yticklabels()]
empty_string_labels = [''] * len(labels)
axx.set_yticklabels(empty_string_labels)
plt.tick_params(
    axis='x',          # changes apply to the x-axis
    which='both',      # both major and minor ticks are affected
    bottom=False,      # ticks along the bottom edge are off
    top=False,         # ticks along the top edge are off
    labelbottom=False)
axx.tick_params(
    axis='y',          # changes apply to the y-axis
    which='both',      # both major and minor ticks are affected
    left=False,        # ticks along the left edge are off
    right=False,       # ticks along the right edge are off
    labelleft=False)
plt.tick_params(axis='y', labelsize=22)
plt.annotate('(b)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right',
             xycoords='axes fraction', textcoords='offset points', fontsize=25)
# plt.legend(fontsize=25, loc='upper center', bbox_to_anchor=(.5,1.09),
#            frameon=False, handletextpad=0.1, labelspacing=.2)
plt.tight_layout(pad=0.2)  # Axes objects have no tight_layout; use the pyplot/Figure version
plt.show()
```

## Accuracy of automatic differentiation and forward Euler

In a case where the entire state is known (no reconstruction), we compare the accuracy of automatic differentiation and forward Euler.

```
num_units = num_units1
sparseness = 1. - connectivity / (num_units1 - 1.)
# sparseness
idx = [0, 1, 2]

# Initialize the ESN cell
rng = np.random.RandomState(random_seed)
cell_new = EchoStateRNNCell(num_units=num_units,
                            num_inputs=Nx,
                            activation=activation,
                            decay=decay,            # decay (leakage) rate
                            rho=rho_spectral,       # spectral radius of echo matrix
                            sigma_in=sigma_in,      # scaling of input matrix
                            b_in=b_in,
                            sparseness=sparseness,  # sparsity of the echo matrix
                            rng=rng)
ESN_new = tf.keras.layers.RNN(cell=cell_new, dtype=tf.float64,
                              return_sequences=True, return_state=True)

# Run the network on the training set to obtain the output matrix
rnn_init_state = tf.constant(np.zeros([batches, num_units]))
rnn_inputs = wave[:-1, idx].reshape(1, end, Nx)
rnn_target = wave[1:, idx]
stored_outputs = outputs(rnn_inputs, rnn_init_state, ESN_new)[0][0].numpy()
XX_train = stored_outputs[begin:end, :]

# Since the entire state is known, the entire output matrix is computed via ridge regression
LHS = np.dot(XX_train.T, XX_train) + lmb * np.eye(num_units + Nx + 1)
RHS = np.dot(XX_train.T, rnn_target[begin:end, :])
Wout = np.linalg.solve(LHS, RHS)

# These files can be obtained by running API-ESN_Trainig.ipynb using idx=[0,1,2] up to
# 'Compute the output matrix for the hidden state'
hf = h5py.File('./data/Lorenz_Rec_drdt_' + str(idx) + '_' + str(num_units) + '_' + str(end) + '.h5', 'r')
dr = np.array(hf.get('drdt'))
r = np.array(hf.get('r'))
hf.close()

# Compute the output and derivative
y_pred = np.matmul(r[0, begin:end], Wout)    # output
fy = Lorenz_RHS(y_pred)                      # true derivative at the output
y_dot = np.matmul(dr[0, begin:end], Wout)    # derivative using automatic differentiation
y_eul = (y_pred[1:] - y_pred[:-1]) / dt      # derivative using forward Euler

# MSE in training set
RR_0 = Loss_Mse(y_pred[:-1], rnn_target[begin:end-1, :]).numpy()  # output
API_0 = Loss_Mse(y_dot[:-1], fy[:-1]).numpy()                     # derivative of the output with AD
PI_0 = Loss_Mse(y_eul, fy[:-1]).numpy()                           # derivative of the output with forward Euler
print('MSE in training set:', RR_0, API_0, PI_0)

# MSE in the training set for the output prediction for different sizes of the reservoir
RR = np.array([3.81e-7, 7.83e-9, 6.58e-10, 4.03e-10, 2.22e-10])
# Error in the output derivative using automatic differentiation (API) and forward Euler (PI)
API = np.array([1.63e-3, 2.99e-5, 3.04e-6, 1.38e-6, 9.01e-7])
PI = np.array([23.32, 23.32, 23.32, 23.32, 23.32])
# These 3 are obtained by running RR_0, API_0, PI_0 for all cases

# Plot
plt.rcParams["figure.figsize"] = (15, 4)
plt.rcParams["font.size"] = 30
from matplotlib.gridspec import *
Nr = np.array([100, 250, 500, 750, 1000])  # neurons
i1, i2 = [1000, 2000]                      # range to plot
x = np.arange(i1, i2) * 0.01
x1 = np.arange(i1, i2 - 1) * 0.01
axx = plt.figure()
gs = GridSpec(1, 2, width_ratios=[1., 1.75], figure=axx, wspace=0.01)
gs00 = GridSpecFromSubplotSpec(1, 2, width_ratios=[2., 1.5], subplot_spec=gs[1], wspace=0.01)
ax = axx.add_subplot(gs[0])
ax.tick_params(axis='both', labelsize=22)
ax.set_xlabel('Time [LT]', fontsize=25)
ax.set_ylim(-350, 350)
ax.set_xticks([10., 12, 14., 16., 18, 20])
ax.plot(x, fy[i1:i2, 0], 'k', linewidth=1, label='$\mathbf{f}(\mathbf{\hat{y}})_1$')
ax.plot(x, fy[i1:i2, 1], 'grey', linewidth=1, label='$\mathbf{f}(\mathbf{\hat{y}})_2$')
ax.plot(x, fy[i1:i2, 2], 'lightgray', linewidth=1, label='$\mathbf{f}(\mathbf{\hat{y}})_3$')
ax.grid(True, which="both", axis='y', ls="-", alpha=0.3)
ax.annotate('(a)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right',
            xycoords='axes fraction', textcoords='offset points', fontsize=25)
ax.legend(fontsize=25, loc='lower center', bbox_to_anchor=(0.5, -.09),
          frameon=False, handletextpad=0.1, labelspacing=.2, columnspacing=.5,
          ncol=3, handlelength=1.)
ax = axx.add_subplot(gs00[0]) ax.tick_params(axis='both', labelsize=22) ax.set_xlabel('Time [LT]', fontsize=25) ax.set_yscale('log') ax.set_ylim(1e-12,1e3) ax.set_yticks([1e-11,1e-8,1e-5,1e-2,1e1,1e4]) ax.set_xticks([10.,12,14.,16.,18,20]) ax.grid(True, which="both", axis='y', ls="-", alpha=0.3) ax.plot(x, np.linalg.norm(y_eul[i1-1:i2-1,:] - fy[i1:i2,:],axis=1)**2/3, c='C0', linestyle='--', linewidth=1, label='$\mathcal{L}_{\mathrm{FE}}(t)$') ax.plot(x, np.linalg.norm(y_dot[i1:i2,:] - fy[i1:i2,:],axis=1)**2/3, c='C1', linestyle='-.', linewidth=1, label='$\mathcal{L}_{\mathrm{AD}}(t)$') ax.plot(x, np.linalg.norm(y_pred[i1:i2,:] - rnn_inputs[0,begin+i1+1:begin+i2+1],axis=1)**2/3, c='C2', linewidth=1, label='$\mathcal{L}_{\mathrm{Y}}(t)$') ax.annotate('(b)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right', xycoords='axes fraction', textcoords='offset points', fontsize=25) ax.legend(fontsize=25, loc='lower center', bbox_to_anchor=(0.5,-.09), frameon=False, handletextpad=0.1,labelspacing=.2,columnspacing=.5, ncol=3, handlelength=1.) ax = axx.add_subplot(gs00[1]) ax.set_ylim(1e-12,1e3) ax.set_yscale('log') ax.plot(Nr, PI, label='$\overline{\mathcal{L}}_{\mathrm{FE}}$', linestyle='--', linewidth=2,marker='o',markersize=8) ax.plot(Nr, API, label='$\overline{\mathcal{L}}_{\mathrm{AD}}$', linestyle='-.', linewidth=2,marker='^',markersize=8) ax.plot(Nr, RR, label='$\overline{\mathcal{L}}_{\mathrm{Y}}$', linewidth=2, c='C2', marker='s',markersize=8) ax.tick_params(axis='both', labelsize=22) ax.set_xticks(Nr) ax.set_xlabel('$N_r$', fontsize=25) ax.set_yticks([1e-11,1e-8,1e-5,1e-2,1e1,1e4]) ax.set_xticklabels(['$100$', '$200$', '$500$', '$750$', '$1000$'],fontsize=22) ax.legend(fontsize=25, loc='upper right', bbox_to_anchor=(1.06,.88), frameon=False, handletextpad=0.5,labelspacing=.2, handlelength=1.) 
ax.grid(True, which="both", axis='y', ls="-", alpha=0.3) ax.annotate('(c)', xy=(1, 1), xytext=(-5, -5), va='top', ha='right', xycoords='axes fraction', textcoords='offset points', fontsize=25) ax.tick_params( axis='y', # changes apply to the x-axis which='both', # both major and minor ticks are affected left=False, # ticks along the bottom edge are off right=False, # ticks along the top edge are off labelleft=False) gs.tight_layout(axx,pad=0.1) plt.show() ```
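The panels above show the forward-Euler (FE) loss plateauing while the automatic-differentiation (AD) loss keeps shrinking with reservoir size. As a minimal, self-contained illustration of why (on a toy function of my choosing, not the ESN above): the forward-difference quotient carries an O(dt) truncation error that no amount of model accuracy can remove, whereas an exact derivative (which AD provides up to round-off) does not.

```python
import numpy as np

def fe_derivative(y, dt):
    # First-order forward-difference estimate of dy/dt
    return (y[1:] - y[:-1]) / dt

errors = []
for dt in [1e-2, 1e-3, 1e-4]:
    t = np.arange(0.0, 1.0, dt)
    y = np.sin(t)
    exact = np.cos(t)[:-1]  # analytic derivative, standing in for the AD result
    errors.append(np.max(np.abs(fe_derivative(y, dt) - exact)))

# Each 10x refinement of dt shrinks the FE error by roughly 10x: O(dt) truncation error
```

Since `dt` is fixed by the sampling of the data, the FE derivative error floors at O(dt), consistent with the constant `PI` values above.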
github_jupyter
``` # Dependencies and Setup import pandas as pd years = [2015,2016,2017,2018,2019] df={} # Looping through years list for year in years: # File to Load file = f"../Resources/{year}.csv" # Read each years File and store into Pandas data frame df[year] = pd.read_csv(file) # Assigning names to each item in the list of data frames df_2015,df_2016,df_2017,df_2018,df_2019 = df[2015],df[2016],df[2017],df[2018],df[2019] # Checking for null values in all data frame for year in years: print(f"{year} \n{df[year].isna().sum()}\n----------") # Replacing null value found in 2018 dataframe with 0 df_2018.fillna(0,inplace = True) df_2018.isna().sum() # Checking the number of rows and columns in each data frame for year in years: print(f"{year} \n{df[year].shape}\n----------") #Drop unnecessary columns in each data frame df_2015.drop(columns=['Region','Standard Error'], inplace= True) df_2016.drop(columns=['Region','Lower Confidence Interval','Upper Confidence Interval'], inplace= True) df_2017.drop(columns=['Whisker.high','Whisker.low'], inplace= True) # Rename columns to match with other data frames df_2017 = df_2017.rename(columns={"Happiness.Rank": "Happiness Rank", "Happiness.Score": "Happiness Score", "Economy..GDP.per.Capita.": "Economy (GDP per Capita) ", "Health..Life.Expectancy.": "Health (Life Expectancy)", "Trust..Government.Corruption.": "Trust (Government Corruption)", "Dystopia.Residual": "Dystopia Residual" }) df_2018 = df_2018.rename(columns={"Country or region": "Country", "Score": "Happiness Score", "Overall rank": "Happiness Rank", "GDP per capita": "Economy (GDP per Capita)", "Social support": "Family", "Healthy life expectancy": "Health (Life Expectancy)", "Perceptions of corruption": "Trust (Government Corruption)", "Freedom to make life choices": "Freedom" }) df_2019 = df_2019.rename(columns={"Overall rank": "Happiness Rank", "Country or region": "Country", "Score": "Happiness Score", "GDP per capita": "Economy (GDP per Capita)", "Social support": "Family", 
"Healthy life expectancy": "Health (Life Expectancy)", "Perceptions of corruption": "Trust (Government Corruption)", "Freedom to make life choices": "Freedom" }) # Calculating Distopia Residual which is missing in 2018 and 2019 data df_2018['Dystopia Residual'] = df_2018['Happiness Score'] - \ (df_2018['Economy (GDP per Capita)'] + \ df_2018['Family'] + \ df_2018['Health (Life Expectancy)'] + \ df_2018['Freedom'] + \ df_2018['Generosity'] + \ df_2018['Trust (Government Corruption)']) df_2019['Dystopia Residual'] = df_2019['Happiness Score'] - \ (df_2019['Economy (GDP per Capita)'] + \ df_2019['Family'] + \ df_2019['Health (Life Expectancy)'] + \ df_2019['Freedom'] + \ df_2019['Generosity'] + \ df_2019['Trust (Government Corruption)']) # Copying changes back to the list of data frames df[2015],df[2016],df[2017],df[2018],df[2019] = df_2015,df_2016,df_2017,df_2018,df_2019 # Checking information of rows and columns in each data frame for year in years: print(f"{year} \n{df[year].info()}\n----------") # Merging all data frames into one data frame on Country column merge_df = df_2015.merge(df_2016, on=['Country'], how='outer', suffixes=['_2015','_2016']) merge_df = merge_df.merge(df_2017, on=['Country'], how='outer') merge_df = merge_df.merge(df_2018, on=['Country'], how='outer', suffixes=['_2017','_2018']) merge_df = merge_df.merge(df_2019, on=['Country'], how='outer') # Adding suffix to 2019 columns merge_df = merge_df.rename(columns={"Happiness Rank": "Happiness Rank_2019", "Happiness Score": "Happiness Score_2019", "Economy (GDP per Capita)": "Economy (GDP per Capita)_2019", "Family": "Family_2019", "Health (Life Expectancy)": "Health (Life Expectancy)_2019", "Trust (Government Corruption)": "Trust (Government Corruption)_2019", "Freedom": "Freedom_2019", "Generosity": "Generosity_2019", "Dystopia Residual": "Dystopia Residual_2019" }) merge_df.info() # Checking for countries that has records in 2015, but not in 2016 df_2015[~df_2015.Country.isin(df_2016.Country)] # 
Checking for countries that have records in 2016, but not in 2015
df_2016[~df_2016.Country.isin(df_2015.Country)]

# Checking for countries that have records in later years, but not in 2015
merge_df[~merge_df.Country.isin(df_2015.Country)]

# Replacing null values found in the merged dataframe with 0
merge_df.fillna(0, inplace=True)
merge_df.isna().sum()

# Checking the country names which are in the merged data frame, but missing in any of the years' data frames
merge_df["Country"].loc[(~merge_df["Country"].isin(df_2015["Country"])) | \
                        (~merge_df["Country"].isin(df_2016["Country"])) | \
                        (~merge_df["Country"].isin(df_2017["Country"])) | \
                        (~merge_df["Country"].isin(df_2018["Country"])) | \
                        (~merge_df["Country"].isin(df_2019["Country"])) ].sort_values()

# Making the country names match for those referring to the same country
merge_df["Country"].loc[merge_df.Country == "Northern Cyprus"] = "North Cyprus"
merge_df["Country"].loc[merge_df.Country == "Macedonia"] = "North Macedonia"
merge_df["Country"].loc[merge_df.Country == "Hong Kong S.A.R., China"] = "Hong Kong"
merge_df["Country"].loc[merge_df.Country == "Taiwan Province of China"] = "Taiwan"
merge_df["Country"].loc[merge_df.Country == "Trinidad & Tobago"] = "Trinidad and Tobago"
merge_df["Country"].loc[merge_df.Country == "Somaliland Region"] = "Somaliland Region"
merge_df["Country"].loc[merge_df.Country == "South Sudan"] = "Sudan"
# merge_df["Country"].loc[merge_df.Country == "Somaliland Region"] = "Somalia"

# Checking the country names which were not present in any of the years' data frames
country_check_list = merge_df["Country"].loc[(~merge_df["Country"].isin(df_2015["Country"])) | \
                                             (~merge_df["Country"].isin(df_2016["Country"])) | \
                                             (~merge_df["Country"].isin(df_2017["Country"])) | \
                                             (~merge_df["Country"].isin(df_2018["Country"])) | \
                                             (~merge_df["Country"].isin(df_2019["Country"])) ].sort_values().tolist()
country_check_list = set(country_check_list)
country_check_list
merge_df.loc[merge_df["Country"].isin(country_check_list)] # Creating a list of country names duplicated dup_countries = merge_df['Country'].loc[merge_df['Country'].duplicated()].tolist() dup_countries # Merging duplicate country names row wise by getting sum of each column values # Loop through each duplicated country names in the list for country in dup_countries: # Making a new data frame having only the duplicated country names joined_rows = merge_df.loc[merge_df.Country == country] # Adding a row to the new data frame with the sum of each columns joined_rows.loc[country,:] = joined_rows.sum(axis=0) # Correcting the Country column value joined_rows['Country'] = country # Removing those rows from the merged data frame merge_df.drop(merge_df[merge_df.Country == country].index, inplace=True) # Concatenating the last row added(sum) to the original merged data frame merge_df = pd.concat([merge_df, joined_rows.tail(1)]) # Reseting the merged data frame's index merge_df.reset_index(drop=True, inplace=True) # Displaying the countries data which were missing in any of the year's data frame - after cleaning merge_df.loc[merge_df["Country"].isin(country_check_list)] merge_df.to_csv("../Output_Data/happiness.csv", index=False) ```
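The merging logic above relies on `how='outer'` (keep rows from every year) and `suffixes` (disambiguate identically named columns). As a small, hedged illustration on toy data (not the happiness files), `indicator=True` is a further pandas option, not used in the notebook, that makes the "records in one year but not another" check explicit:

```python
import pandas as pd

a = pd.DataFrame({"Country": ["Norway", "Denmark"], "Score": [7.5, 7.5]})
b = pd.DataFrame({"Country": ["Norway", "Finland"], "Score": [7.5, 7.6]})

# outer keeps rows from both sides; suffixes disambiguate the shared 'Score' column
m = a.merge(b, on="Country", how="outer", suffixes=["_2015", "_2016"])

# indicator=True adds a _merge column marking which side each row came from
flagged = a.merge(b, on="Country", how="outer", indicator=True)
only_a = flagged.loc[flagged["_merge"] == "left_only", "Country"].tolist()
```

The `_merge` column replaces the manual `~isin(...)` masks with a single lookup per direction.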
```
import os
os.chdir("../")
import pandas as pd
import glob
from scipy.stats import ttest_ind
import matplotlib.pyplot as plt

resultDir = 'results'
problem = 'cauctions'  # choices=['setcover', 'cauctions', 'facilities', 'indset']
sampling_Strategies = ['uniform5','depthK','depthK2']  # choices: uniform5, depthK, depthK2, depthK3
sampling_seed = 0

sampleTimes_allStrategies = pd.DataFrame()
for sampling_Strategy in sampling_Strategies:
    problem_folders = {
        'setcover': f'setcover/500r_1000c_0.05d({sampling_Strategy})/{sampling_seed}',
        'cauctions': f'cauctions/100_500({sampling_Strategy})/{sampling_seed}',
        'facilities': f'facilities/100_100_5({sampling_Strategy})/{sampling_seed}',
        'indset': f'indset/500_4({sampling_Strategy})/{sampling_seed}',
    }
    problem_folder = problem_folders[problem]
    depthTablePath = f'data/samples/{problem_folder}/depthTable(trainSol).csv'
    depthTable = pd.read_csv(depthTablePath, index_col=0)
    sampleTimes_allStrategies[f'{sampling_Strategy}'] = depthTable['sampleTimes']

bin_size = 5
binned = sampleTimes_allStrategies.groupby(sampleTimes_allStrategies.index // bin_size).sum()
binned = binned / binned.sum()
binned['GroupName'] = [f"[{i*bin_size},{(i+1)*bin_size-1}]" for i in binned.index]
binned
binned = binned.rename(columns={'uniform5': 'uniform sampling',
                                'depthK': 'heavy-head sampling',
                                'depthK2': 'depth-info-based sampling'})
inv_bins = binned.sort_index(ascending=False)
axe = inv_bins.plot.barh()
axe.set_yticklabels(inv_bins['GroupName'])
# axe.savefig(f'depthDist{problem}.pdf')
fig = plt.gcf()
fig.savefig(f'{resultDir}/depthDist_cauctions.pdf')
```

# Saving the accessTimes of a single depthTable

First read `depthTable(trainSol).csv`, which stores, for each sampling strategy, the depths visited and the samples taken during sampling. We only need to read the accessTimes column of the uniform5 depthTable.

```
sampleDir = 'data/samples'
problem = 'setcover'  # choices=['setcover', 'cauctions', 'facilities', 'indset']
# sampling_Strategy = 'uniform5'  # choices: uniform5, depthK, depthK2, depth_adaptive
sampling_seed = 0

depthTablePaths = {
    'setcover': f'setcover/500r_1000c_0.05d(uniform5)/{sampling_seed}',
    'cauctions': f'cauctions/100_500(uniform5)/{sampling_seed}',
    'facilities': f'facilities/100_100_5(uniform5)/{sampling_seed}',
    'indset': f'indset/500_4(uniform5)/{sampling_seed}',
}
depthTablePath = f'{sampleDir}/{depthTablePaths[problem]}/depthTable(trainSol).csv'
depthTable = pd.read_csv(depthTablePath, index_col=0)

bin_size = 5
binned = depthTable['accessTimes'].groupby(depthTable.index // bin_size).sum()
binned /= binned.sum()
binned.index.name = 'depth//5'
binned
binned.to_csv('depthTable_SolSB_binned5.csv')
dff = pd.read_csv('depthTable_SolSB_binned5.csv', index_col='depth//5')
dff
```
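The `groupby(index // bin_size)` idiom used in both cells above aggregates consecutive depths into fixed-width bins. A tiny self-contained illustration (toy counts of my own, not the real depth tables):

```python
import pandas as pd

# Toy per-depth counts: depths 0-4 each seen once, depths 5-9 each seen twice
counts = pd.Series([1, 1, 1, 1, 1, 2, 2, 2, 2, 2])
bin_size = 5

# index // bin_size maps rows 0-4 to group 0 and rows 5-9 to group 1
bin_totals = counts.groupby(counts.index // bin_size).sum()
bin_share = bin_totals / bin_totals.sum()  # normalise to a distribution, as in the notebook
```

Integer-dividing the index is what turns per-depth rows into `[0,4]`, `[5,9]`, ... groups without an explicit loop.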
# logging
- When first learning Python, we leave logs with print statements (often without even realizing that these are logs), but as a service or application grows, the amount of logging grows and becomes harder to manage
- Logging libraries were created for this purpose; the most representative one is Python's built-in logging module
- Uses
  - Checking the current state
  - Tracking bugs
  - Log analysis (checking frequencies)

## Creating a logger

```
import logging
import time

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("test")
```

### Log levels
- DEBUG : used when detailed information is needed, usually for debugging or problem analysis
- INFO : used to confirm that things are proceeding as expected
- WARNING : used when there are signs that a problem will occur soon (e.g., low disk space)
- ERROR : a problem has occurred and part of the program is not working
- CRITICAL : a serious problem has occurred and the system cannot operate normally

```
logger.debug("debug message")
logger.info("info message {a}".format(a=1))
logger.warning("Warn message")
logger.error("error message")
logger.critical("critical!!!!")
dir(logger)
```

### Adding the log creation time

```
mylogger = logging.getLogger("my")
mylogger.setLevel(logging.INFO)

formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

stream_handler = logging.StreamHandler()
stream_handler.setFormatter(formatter)
mylogger.addHandler(stream_handler)

file_handler = logging.FileHandler('my.log')
mylogger.addHandler(file_handler)

mylogger.info("server start")
```

### Saving logs to a file

```
logging.basicConfig(filename="test.log", filemode='a', level=logging.DEBUG)
```

# init

```
import logging
import optparse

LOGGING_LEVELS = {'critical': logging.CRITICAL,
                  'error': logging.ERROR,
                  'warning': logging.WARNING,
                  'info': logging.INFO,
                  'debug': logging.DEBUG}

def init():
    parser = optparse.OptionParser()
    parser.add_option('-l', '--logging-level', help='Logging level')
    parser.add_option('-f', '--logging-file', help='Logging file name')
    (options, args) = parser.parse_args()
    logging_level = LOGGING_LEVELS.get(options.logging_level, logging.NOTSET)
    logging.basicConfig(level=logging_level, filename=options.logging_file,
                        format='%(asctime)s %(levelname)s: %(message)s',
                        datefmt='%Y-%m-%d %H:%M:%S')

logging.debug("A log for debugging~~")
logging.info("Leaving helpful information~")
logging.warning("Something to watch out for!")
logging.error("Error!!!")
logging.critical("Serious error!!")

import logging
import logging.handlers

# Create a logger instance
logger = logging.getLogger('mylogger')

# Create a formatter
formatter = logging.Formatter('[%(levelname)s|%(filename)s:%(lineno)s] %(asctime)s > %(message)s')

# Create handlers that write logs to a stream and to a file, respectively.
fileHandler = logging.FileHandler('./myLoggerTest.log')
streamHandler = logging.StreamHandler()

# Attach the formatter to each handler.
fileHandler.setFormatter(formatter)
streamHandler.setFormatter(formatter)

# Attach the stream handler and the file handler to the logger instance.
logger.addHandler(fileHandler)
logger.addHandler(streamHandler)

# Emit logs through the logger instance.
logger.setLevel(logging.DEBUG)
logger.debug("===========================")
logger.info("TEST START")
logger.warning("This log goes to the stream~")
logger.error("It is written to the file too, so no worries~!")
logger.critical("Be sure to write critical bugs to a file, and even send them by email!")
logger.debug("===========================")
logger.info("TEST END!")
```
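A `FileHandler` like the one above grows its file without bound. As an extra, hedged sketch (not part of the original notebook), `logging.handlers.RotatingFileHandler` caps the file size and keeps numbered backups; the path and logger name here are my own:

```python
import logging
import logging.handlers
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "app.log")
logger = logging.getLogger("rotating_demo")
logger.setLevel(logging.INFO)

# Roll over when the file would exceed ~1 KB; keep at most 3 old files (app.log.1 .. .3)
handler = logging.handlers.RotatingFileHandler(log_path, maxBytes=1024, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

for i in range(100):  # enough records to force at least one rollover
    logger.info("event %d", i)
handler.close()
```

`TimedRotatingFileHandler` works the same way but rotates on a schedule (e.g., daily) instead of by size.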
# Time of Response and Practicing Effect
---
## 1. Introduction
The main difference between responding to the SDMT test on paper and responding digitally is that in the digital version we have the precise moment every patient performs each task, so we can discover new features such as time. We are interested in analysing each participant's time of response; in addition, we want to check for a practicing effect across trials over time.

```
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
%matplotlib inline
!pwd
sys.path.insert(0, '/Users/pedrohserrano/neuroscience/utils')
# internal packages
import buildms as ms
import statsms as stms
color_ms = '#386cb0'  # blue, the color chosen for patients with Multiple Sclerosis
color_hc = 'red'      # the color chosen for health control participants
```

---
##### Loading Datasets

```
!python ~/neuroscience/utils/create_dataset.py ~/neuroscience/data/processed/dataset.csv
df_measures = pd.read_csv('../data/interim/df_measures.csv', encoding="utf-8")
df_measures_users = pd.read_csv('../data/interim/df_measures_users.csv', encoding="utf-8")
df_symbols = pd.read_csv('../data/interim/df_symbols.csv', encoding="utf-8")
score_variable = 'avg_test_ms'
```

---
## 2. Methods
### 2.1 Expected Time per Person
In general, each person has their own distribution of response times, and that distribution has a mean and a median, so as a first feature we compute the expected time for every patient.
```
scores_sorted = df_measures_users.set_index('userId')[score_variable].sort_values(ascending=False)
ms_labels = [df_measures[df_measures['userId']==user].iloc[0]['ms'] for user in scores_sorted.index]
colores = [color_ms if ms==1 else color_hc for ms in ms_labels]
plt.figure(figsize=(18, 10))
position = range(len(scores_sorted.index))
plt.barh(position, scores_sorted, align='center', alpha=0.8, color=colores, label='HC')
plt.yticks(position, scores_sorted.index)
plt.legend(loc='best')
plt.title('Average Time of Response in Milliseconds')
plt.show()
```

As we can see, people with MS (blue) tend to be slower in their answers. It is natural that the average time of response is negatively correlated with the scores, since every test lasts exactly 90 seconds.

Using the same logic, we can compute an average time of response per group. As the table below shows, the Health Control group is, on average, half a second faster, which allows them to score more within the 90 seconds.

|Group|Average Time of Response|
|:--:|:--:|
|MS|3.82 Seconds|
|HC|3.35 Seconds|

### 2.2 Expected Time per Symbol
What if the time a person needs to respond to a task depends on the figure shown in the application? The goal of this section is to look for differences in the time elapsed depending on the symbol.
```
plt.figure(figsize=(16, 6))
symbols_ms = df_symbols[df_symbols['ms']==1].groupby(['symbol'])['response_ms'].describe()
symb_ms = symbols_ms['mean'].sort_values(ascending=False)
plt.subplot(1,2,1)
position_ms = range(len(symb_ms.index))
plt.title('Average Response MS Group')
plt.ylabel('Average Time in Milliseconds')
plt.bar(position_ms, symb_ms, align='center', alpha=0.5, color=color_ms, label='MS');
plt.ylim(0, 3000)
plt.errorbar(position_ms, symb_ms, yerr=symbols_ms['std'].sort_values(ascending=False).values,
             fmt='o', color=color_ms)
plt.xticks(position_ms, symb_ms.index.values, rotation=45);
plt.legend()
symbols_hc = df_symbols[df_symbols['ms']==0].groupby(['symbol'])['response_ms'].describe()
symb_hc = symbols_hc['mean'].sort_values(ascending=False)
plt.subplot(1,2,2)
position_hc = range(len(symb_hc.index))
plt.title('Average Response HC Group')
plt.ylabel('Average Time in Milliseconds')
plt.bar(position_hc, symb_hc, align='center', alpha=0.5, color=color_hc, label='HC');
plt.ylim(0, 3000)
plt.errorbar(position_hc, symb_hc, yerr=symbols_hc['std'].sort_values(ascending=False).values,
             fmt='o', color=color_hc)
plt.xticks(position_hc, symb_hc.index.values, rotation=45);
plt.legend()
plt.show()
```

As we saw in section 2.1, the Health Control group tends to be faster. The plots above show this difference, but they also show a clear step in speed between symbols, and the order of the velocities tends to be the same. It is noticeable that the circle is the "easiest" to learn in either case.

|Order |1|2|3|4|5|6|7|8|9|
|--|--|--|--|--|--|--|--|--|--|
|MS|**Mult**|Star|**Plus**|Triangle|Inf|Square|Hamburger|Window|Circle|
|HC|**Plus**|Star|**Mult**|Triangle|Inf|Square|Hamburger|Window|Circle|

So we might infer that, whether or not a person has MS, the speed of responding to each symbol follows roughly the same order of difficulty. In addition, it is clear that there is an important step after the triangle.

It looks like there are two levels of effort when a patient uses cognition to process each digit and symbol, so we might split the symbols into "Easy Symbols" (Inf, Square, Hamburger, Window, Circle) and "Hard Symbols" (Triangle, Plus, Star, Mult). We should statistically test this assumption by checking whether there is a significant difference.

```
df_symbols['difficulty'] = ['hard' if i in ['triangle','plus','star','mult'] else 'easy'
                            for i in df_symbols['symbol']]
ms_easy = df_symbols[(df_symbols['difficulty']=='easy') & (df_symbols['ms']==1)]
ms_hard = df_symbols[(df_symbols['difficulty']=='hard') & (df_symbols['ms']==1)]
hc_easy = df_symbols[(df_symbols['difficulty']=='easy') & (df_symbols['ms']==0)]
hc_hard = df_symbols[(df_symbols['difficulty']=='hard') & (df_symbols['ms']==0)]
d1 = stms.dCohen(ms_easy['response_ms'], ms_hard['response_ms']).effect_size()
d2 = stms.dCohen(hc_easy['response_ms'], hc_hard['response_ms']).effect_size()
print('Cohen´s d-value: \n MS: {} Small \n HC: {} Small'.format(round(d1,3), round(d2,3)))
```

Since the Cohen's d values measuring the effect size between the easy and the hard symbols are small, we cannot use this split as a feature: the statistical difference is small.

### 2.3 Median Time of Response
When measuring time as a variable, it is natural to ask how it evolves over the experiment; in other words: are the patients getting slower, faster, or neither? We can plot the time of response to follow its evolution; as the statistic for the 50th percentile of the times in each test we take the median, and then compute the SD.
We have to focus on the median over time and on its variance: if the patients are getting faster, then we should test a **practicing effect** assumption.

```
medians = df_symbols.groupby(['userId','timestamp'])['response_ms_med'].mean().reset_index()
stds = df_symbols.groupby(['userId','timestamp'])['response_ms'].std().reset_index()
medians['std'] = stds['response_ms'].values
ms_label = list(zip(df_measures_users['userId'],
                    [color_ms if ms==1 else color_hc for ms in df_measures_users['ms']]))
plt.figure(figsize=[16, 16])
for idx, i in enumerate(ms_label):
    plt.subplot(6,4,idx+1)
    medians_user = medians[medians['userId']==str(i[0])]
    plt.errorbar(range(len(medians_user)), medians_user['response_ms_med'],
                 yerr=medians_user['std'], color=str(i[1]), linestyle='-.')
    plt.xlabel('Number of Tests')
    plt.title(str(i[0]))
plt.tight_layout()
```

As we see in the plots above, it seems the participants are learning, and their variance is not increasing, so the improvement appears consistent.

---
## 3. Practicing Effect
### 3.1 Effect on Median Time of Response
We want to know whether there is a statistical difference between the first median time a person achieved and the last one, so we use the Cohen's d value to measure the effect, split into the MS and HC groups.

```
first_med, last_med = [], []
for i in df_measures_users['userId']:
    user = medians[medians['userId']==str(i)]
    first_med.append(user['response_ms_med'].head(1).values[0])
    last_med.append(user['response_ms_med'].tail(1).values[0])
table_med = {}
for i in range(len(df_measures_users['userId'])):
    table_med[i] = (first_med[i], last_med[i])
med_users = pd.DataFrame(table_med).transpose()
med_users.columns = ['first_med', 'last_med']
med_users['userId'] = df_measures_users['userId'].values
med_users['ms'] = df_measures_users['ms'].values
ms_med = med_users[med_users['ms']==1]
hc_med = med_users[med_users['ms']==0]
```

Taking all the MS and HC scores and comparing the first median against the last median gives the d value.

```
d1 = stms.dCohen(ms_med['first_med'], ms_med['last_med']).effect_size()
d2 = stms.dCohen(hc_med['first_med'], hc_med['last_med']).effect_size()
print('Cohen´s d-value, First Median vs Last Median: \n MS: {} Large \n HC: {} Large'.format(round(d1,3), round(d2,3)))
```

There is a large effect between the first and the last time performed; in the Health Control group it is even bigger, which means they learn even more.

```
def rate_change(value):
    pct = value * 100
    sign = '+' if pct > 0 else ''
    return '{}{:,.0f}%'.format(sign, pct)

change = (med_users['last_med'] - med_users['first_med']) / med_users['first_med']
med_users['rate_change'] = change.map(rate_change)
med_users.sort_values('rate_change')
plt.figure(figsize=[14, 8])
colores = [color_ms if ms==1 else color_hc for ms in df_measures_users['ms']]
for idx, i in enumerate(colores):
    plt.plot(range(2), med_users[['first_med','last_med']].iloc[idx:idx+1].values[0], color=i)
plt.xlabel('First Median Time vs Last Median Time')
plt.ylabel('Median Time Milliseconds')
```

- The literature shows that a practicing effect exists, and that it is more significant after longer testing periods
- It is not influenced by gender, age, relapses, disability progression or prior natalizumab treatment
- The practicing effect is smaller for patients who are more impaired on the EDSS scale

### 3.2 Effect on Scores
Comparing the same patient's numbers before and after treatment, we are effectively using each patient as their own control.
That way the correct rejection of the null hypothesis (here: of no difference made by the treatment) can become much more likely, with statistical power increasing simply because the random between-patient variation has now been eliminated $$ H_0: \mu_1 = \mu_2 \hspace{1cm} \\ H_a: \mu_1 \ne \mu_2 \hspace{1cm} $$ Let's conduct a paired t-test to see whether this difference is significant at a 95% confidence level **We take the scores performed on the Test-Retest experimentm also the information of the last scores performed** ``` import scipy.stats as stats group = df_measures_users['userId'].tolist() ms_builder = ms.MSscores(df_measures) df_scores = ms_builder.scores_table(score_variable, group) df_scores['ms'] = df_measures_users['ms'] df_scores_ms = df_scores[df_scores['ms']==1] df_scores_hc = df_scores[df_scores['ms']==0] df_firstlast = ms_builder.scores_first_last(score_variable, group) df_firstlast['ms'] = df_measures_users['ms'] df_firstlast_ms = df_firstlast[df_firstlast['ms']==1] df_firstlast_hc = df_firstlast[df_firstlast['ms']==0] ``` ##### MS Group ``` d1 = stms.dCohen(df_scores_ms['test'],df_scores_ms['re-test']).effect_size() print('Average Score of MS Group: \n 1rs Test: {} (SD {}) \n 2nd Test: {} (SD {}) \nCohen`s d-value: {} '.format( round(df_scores_ms['test'].mean(),2), round(df_scores_ms['test'].std(),2), round(df_scores_ms['re-test'].mean(),2), round(df_scores_ms['re-test'].std(),2),round(d1,3)), stats.ttest_rel(df_scores_ms['test'],df_scores_ms['re-test'])) d2 = stms.dCohen(df_firstlast_ms['first'],df_firstlast_ms['last']).effect_size() print('Average Score of MS Group: \n First: {} (SD {}) \n Last: {} (SD {}) \nCohen`s d-value: {} '.format( round(df_firstlast_ms['first'].mean(),2), round(df_firstlast_ms['first'].std(),2), round(df_firstlast_ms['first'].mean(),2), round(df_firstlast_ms['first'].std(),2),round(d2,3)), stats.ttest_rel(df_firstlast_ms['first'],df_firstlast_ms['last'])) ``` ##### HC Group ``` d3 = 
stms.dCohen(df_scores_hc['test'], df_scores_hc['re-test']).effect_size()
print('Average Score of HC Group: \n 1st Test: {} (SD {}) \n 2nd Test: {} (SD {}) \nCohen`s d-value: {} '.format(
    round(df_scores_hc['test'].mean(), 2), round(df_scores_hc['test'].std(), 2),
    round(df_scores_hc['re-test'].mean(), 2), round(df_scores_hc['re-test'].std(), 2), round(d3, 3)),
    stats.ttest_rel(df_scores_hc['test'], df_scores_hc['re-test']))

d4 = stms.dCohen(df_firstlast_hc['first'], df_firstlast_hc['last']).effect_size()
print('Average Score of HC Group: \n First: {} (SD {}) \n Last: {} (SD {}) \nCohen`s d-value: {} '.format(
    round(df_firstlast_hc['first'].mean(), 2), round(df_firstlast_hc['first'].std(), 2),
    round(df_firstlast_hc['last'].mean(), 2), round(df_firstlast_hc['last'].std(), 2), round(d4, 3)),
    stats.ttest_rel(df_firstlast_hc['first'], df_firstlast_hc['last']))
```

The p-value in the test output gives the probability of seeing a difference this large between sessions purely by chance. It seems that the HC group is learning.
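The pattern used throughout this section — an effect size reported alongside a paired t-test — can be sketched self-contained. `stms.dCohen` is a project-specific helper, so the Cohen's d below is the textbook pooled-standard-deviation version, and the toy numbers are illustrative only, not the study's data:

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d using the pooled (Bessel-corrected) standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled

first = np.array([10.0, 12.0, 11.0, 13.0, 12.5])  # first-session times (toy data)
last = np.array([8.0, 9.0, 8.5, 9.5, 9.0])        # last-session times, same subjects

d = cohens_d(first, last)
t, p = stats.ttest_rel(first, last)  # paired t-test: H0 is mu1 == mu2
print(round(d, 2), p < 0.05)  # → 3.08 True
```

By the usual convention, |d| ≥ 0.8 counts as a large effect, which is what the "Large" labels printed above refer to.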
# Anna KaRNNa

In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.

This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.

<img src="assets/charseq.jpeg" width="500">

```
import time
from collections import namedtuple

import numpy as np
import tensorflow as tf
```

First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.

```
with open('anna.txt', 'r') as f:
    text = f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
```

Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever.

```
text[:100]
```

And we can see the characters encoded as integers.

```
encoded[:100]
```

Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.

```
len(vocab)
```

## Making training mini-batches

Here is where we'll make our mini-batches for training.
Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:

<img src="assets/sequence_batching@1x.png" width=500px>

<br>

We start with our text encoded as integers in one long array in `encoded`. Let's create a function that will give us an iterator for our batches. I like using [generator functions](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/) to do this. Then we can pass `encoded` into this function and get our batch generator.

The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the total number of batches, $K$, that we can make from the array `arr`, you divide the length of `arr` by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from `arr`: $N * M * K$.

After that, we need to split `arr` into $N$ sequences. You can do this using `arr.reshape(size)` where `size` is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (`batch_size` below), so let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size; it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$.

Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the $N \times (M * K)$ array. For each subsequent batch, the window moves over by `n_steps`. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character.
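The arithmetic just described — keep $N \cdot M \cdot K$ characters, reshape to $N \times (M \cdot K)$, slice off $N \times M$ windows, shift targets by one — can be checked on a tiny array before writing the real function (a sketch with $N = 2$ sequences and $M = 3$ steps):

```python
import numpy as np

arr = np.arange(14)            # 14 "characters"
batch_size, n_steps = 2, 3     # N = 2 sequences, M = 3 steps per window

chars_per_batch = batch_size * n_steps
n_batches = len(arr) // chars_per_batch        # K = 2 full batches
arr = arr[:chars_per_batch * n_batches]        # keep N*M*K = 12 characters
arr = arr.reshape((batch_size, -1))            # shape (2, 6): N x (M*K)

x = arr[:, 0:n_steps]                          # first N x M window
y = np.roll(arr, -1, axis=1)[:, 0:n_steps]     # targets: inputs shifted by one
print(x.tolist(), y.tolist())  # → [[0, 1, 2], [6, 7, 8]] [[1, 2, 3], [7, 8, 9]]
```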
The way I like to do this window is to use `range` to take steps of size `n_steps` from $0$ to `arr.shape[1]`, the total number of steps in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `n_steps` wide.

> **Exercise:** Write the code for creating batches in the function below. The exercises in this notebook _will not be easy_. I've provided a notebook with solutions alongside this notebook. If you get stuck, check out the solutions. The most important thing is that you don't copy and paste the code into here, **type out the solution code yourself.**

```
def get_batches(arr, batch_size, n_steps):
    '''Create a generator that returns batches of size
       batch_size x n_steps from arr.

       Arguments
       ---------
       arr: Array you want to make batches from (N*M*K characters)
       batch_size: Batch size, the number of sequences per batch (N)
       n_steps: Number of sequence steps per batch (M)
    '''
    # Get the number of characters per batch and number of batches we can make
    characters_per_batch = batch_size * n_steps
    # // (integer division) is cleaner than floor(len(arr) / characters_per_batch)
    n_batches = len(arr) // characters_per_batch

    # Keep only enough characters to make full batches
    arr = arr[:characters_per_batch * n_batches]

    # Reshape into batch_size rows
    arr = arr.reshape((batch_size, -1))

    for n in range(0, arr.shape[1], n_steps):
        # The features: all rows, columns n to n+n_steps
        x = arr[:, n:n+n_steps]
        # The targets, shifted by one
        y_temp = arr[:, n+1:n+n_steps+1]
        y = np.zeros(x.shape, dtype=x.dtype)
        y[:, :y_temp.shape[1]] = y_temp
        yield x, y
```

Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
```
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
```

If you implemented `get_batches` correctly, the above output should look something like

```
x
 [[55 63 69 22  6 76 45  5 16 35]
 [ 5 69  1  5 12 52  6  5 56 52]
 [48 29 12 61 35 35  8 64 76 78]
 [12  5 24 39 45 29 12 56  5 63]
 [ 5 29  6  5 29 78 28  5 78 29]
 [ 5 13  6  5 36 69 78 35 52 12]
 [63 76 12  5 18 52  1 76  5 58]
 [34  5 73 39  6  5 12 52 36  5]
 [ 6  5 29 78 12 79  6 61  5 59]
 [ 5 78 69 29 24  5  6 52  5 63]]

y
 [[63 69 22  6 76 45  5 16 35 35]
 [69  1  5 12 52  6  5 56 52 29]
 [29 12 61 35 35  8 64 76 78 28]
 [ 5 24 39 45 29 12 56  5 63 29]
 [29  6  5 29 78 28  5 78 29 45]
 [13  6  5 36 69 78 35 52 12 43]
 [76 12  5 18 52  1 76  5 58 52]
 [ 5 73 39  6  5 12 52 36  5 78]
 [ 5 29 78 12 79  6 61  5 59 63]
 [78 69 29 24  5  6 52  5 63 76]]
```

although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.

## Building the model

Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.

<img src="assets/charRNN.png" width=500px>

### Inputs

First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called `keep_prob`. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.

> **Exercise:** Create the input placeholders in the function below.
```
def build_inputs(batch_size, num_steps):
    ''' Define placeholders for inputs, targets, and dropout

        Arguments
        ---------
        batch_size: Batch size, number of sequences per batch
        num_steps: Number of sequence steps in a batch
    '''
    # Declare placeholders we'll feed into the graph
    inputs = tf.placeholder(tf.int32, shape=[batch_size, num_steps], name='inputs')
    targets = tf.placeholder(tf.int32, shape=[batch_size, num_steps], name='targets')

    # Keep probability placeholder for drop out layers
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')

    return inputs, targets, keep_prob
```

### LSTM Cell

Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.

We first create a basic LSTM cell with

```python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
```

where `num_units` is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with

```python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
```

You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with [`tf.contrib.rnn.MultiRNNCell`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/rnn/MultiRNNCell). With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this

```python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
```

This might look a little weird if you know Python well because this will create a list of the same `cell` object. However, TensorFlow 1.0 will create different weight matrices for all `cell` objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list.
To get it to work in TensorFlow 1.1, it should look like

```python
def build_cell(num_units, keep_prob):
    lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    return drop

tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```

Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.

We also need to create an initial cell state of all zeros. This can be done like so

```python
initial_state = cell.zero_state(batch_size, tf.float32)
```

Below, we implement the `build_lstm` function to create these LSTM cells and the initial state.

```
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
    ''' Build LSTM cell.

        Arguments
        ---------
        keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
        lstm_size: Size of the hidden layers in the LSTM cells
        num_layers: Number of LSTM layers
        batch_size: Batch size
    '''
    def build_cell(num_units, keep_prob):
        ### Build the LSTM Cell
        # Use a basic LSTM cell
        lstm = tf.contrib.rnn.BasicLSTMCell(num_units)

        # Add dropout to the cell outputs
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        return drop

    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
    initial_state = cell.zero_state(batch_size, tf.float32)

    return cell, initial_state
```

### RNN Output

Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.

If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$.
The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.

We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.

We get the LSTM output as a list, `lstm_output`. First we need to concatenate this whole list into one array with [`tf.concat`](https://www.tensorflow.org/api_docs/python/tf/concat). Then, reshape it (with `tf.reshape`) to size $(M * N) \times L$.

Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with `tf.variable_scope(scope_name)` because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.

> **Exercise:** Implement the output layer in the function below.

```
def build_output(lstm_output, in_size, out_size):
    ''' Build a softmax layer, return the softmax output and logits.

        Arguments
        ---------
        lstm_output: List of output tensors from the LSTM layer
        in_size: Size of the input tensor, for example, size of the LSTM cells
        out_size: Size of this softmax layer
    '''
    # Reshape output so it's a bunch of rows, one row for each step for each sequence.
    # Concatenate lstm_output over axis 1 (the columns)
    seq_output = tf.concat(lstm_output, axis=1)
    # Reshape seq_output to a 2D tensor with lstm_size columns
    x = tf.reshape(seq_output, [-1, in_size])

    # Connect the RNN outputs to a softmax layer
    with tf.variable_scope('softmax'):
        # Create the weight and bias variables here
        softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(out_size))

    # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
    # of rows of logit outputs, one for each step and sequence
    logits = tf.matmul(x, softmax_w) + softmax_b

    # Use softmax to get the probabilities for predicted characters
    out = tf.nn.softmax(logits, name='predictions')

    return out, logits
```

### Training loss

Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.

Then we run the logits and targets through `tf.nn.softmax_cross_entropy_with_logits` and find the mean to get the loss.

>**Exercise:** Implement the loss calculation in the function below.

```
def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the loss from the logits and the targets.
        Arguments
        ---------
        logits: Logits from final fully connected layer
        targets: Targets for supervised learning
        lstm_size: Number of LSTM hidden units
        num_classes: Number of classes in targets
    '''
    # One-hot encode targets and reshape to match logits, one row per sequence per step
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())

    # Softmax cross entropy loss
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)

    return loss
```

### Optimizer

Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.

```
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optimizer for training, using gradient clipping.

        Arguments:
        loss: Network loss
        learning_rate: Learning rate for optimizer
    '''
    # Optimizer for training, using gradient clipping to control exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))

    return optimizer
```

### Build the network

Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use [`tf.nn.dynamic_rnn`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/nn/dynamic_rnn). This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state.
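The `tf.clip_by_global_norm` call in `build_optimizer` above scales *all* gradients by `grad_clip / global_norm` whenever their combined norm exceeds `grad_clip`; a plain-NumPy sketch of that rule (an illustration, not TensorFlow's implementation):

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    """Scale every gradient by clip_norm/global_norm when the global norm exceeds clip_norm."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, clip_norm / global_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]  # global norm = sqrt(9 + 16 + 144) = 13
clipped, gnorm = clip_by_global_norm(grads, clip_norm=5.0)
print(gnorm)  # → 13.0
```

After clipping, the gradients' global norm is exactly `clip_norm` (here 5), while their directions are preserved.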
We want to save this final state as `final_state` so we can pass it to the first LSTM cell in the next mini-batch run. For `tf.nn.dynamic_rnn`, we pass in the cell and initial state we get from `build_lstm`, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.

> **Exercise:** Use the functions you've implemented previously and `tf.nn.dynamic_rnn` to build the network.

```
class CharRNN:

    def __init__(self, num_classes, batch_size=64, num_steps=50,
                 lstm_size=128, num_layers=2, learning_rate=0.001,
                 grad_clip=5, sampling=False):

        # When we're using this network for sampling later, we'll be passing in
        # one character at a time, so providing an option for that
        if sampling == True:
            batch_size, num_steps = 1, 1

        tf.reset_default_graph()

        # Build the input placeholder tensors
        self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)

        # Build the LSTM cell
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

        ### Run the data through the RNN layers
        # First, one-hot encode the input tokens
        x_one_hot = tf.one_hot(self.inputs, num_classes)

        # Run each sequence step through the RNN with tf.nn.dynamic_rnn
        outputs, state = tf.nn.dynamic_rnn(
            cell, inputs=x_one_hot, initial_state=self.initial_state
        )
        self.final_state = state

        # Get softmax predictions and logits
        self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)

        # Loss and optimizer (with gradient clipping)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
```

## Hyperparameters

Here are the hyperparameters for the network.

* `batch_size` - Number of sequences running through the network in one pass.
* `num_steps` - Number of characters in the sequence the network is trained on. Larger is better typically; the network will learn more long range dependencies.
But it takes longer to train. 100 is typically a good number here.
* `lstm_size` - The number of units in the hidden layers.
* `num_layers` - Number of hidden LSTM layers to use
* `learning_rate` - Learning rate for training
* `keep_prob` - The dropout keep probability when training. If your network is overfitting, try decreasing this.

Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).

> ## Tips and Tricks

> ### Monitoring Validation Loss vs. Training Loss
> If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:

> - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
> - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)

> ### Approximate number of parameters

> The two most important parameters that control the model are `lstm_size` and `num_layers`. I would advise that you always use `num_layers` of either 2/3. The `lstm_size` can be adjusted based on how much data you have. The two important quantities to keep track of here are:

> - The number of parameters in your model. This is printed when you start training.
> - The size of your dataset. 1MB file is approximately 1 million characters.

> These two should be about the same order of magnitude.
It's a little tricky to tell. Here are some examples:

> - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `lstm_size` larger.
> - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.

> ### Best models strategy

> The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.

> It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.

> By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.

```
batch_size = 10        # Sequences per batch
num_steps = 50         # Number of sequence steps per batch
lstm_size = 128        # Size of hidden layers in LSTMs
num_layers = 2         # Number of LSTM layers
learning_rate = 0.01   # Learning rate
keep_prob = 0.5        # Dropout keep probability
```

## Time for training

This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch.
And every so often (set by `save_every_n`) I save a checkpoint. Here I'm saving checkpoints with the format `i{iteration number}_l{# hidden layer units}.ckpt`

> **Exercise:** Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.

```
epochs = 20
# Print losses every N iterations
print_every_n = 50

# Save every N iterations
save_every_n = 200

model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
                lstm_size=lstm_size, num_layers=num_layers,
                learning_rate=learning_rate)

saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')
    counter = 0
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for x, y in get_batches(encoded, batch_size, num_steps):
            counter += 1
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.loss,
                                                 model.final_state,
                                                 model.optimizer],
                                                feed_dict=feed)

            if (counter % print_every_n == 0):
                end = time.time()
                print('Epoch: {}/{}... '.format(e+1, epochs),
                      'Training Step: {}... '.format(counter),
                      'Training loss: {:.4f}... '.format(batch_loss),
                      '{:.4f} sec/batch'.format((end-start)))

            if (counter % save_every_n == 0):
                saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))

    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
```

#### Saved checkpoints

Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables

```
tf.train.get_checkpoint_state('checkpoints')
```

## Sampling

Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character.
We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.

The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.

```
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0
    p = p / np.sum(p)
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c

def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        for c in prime:
            x = np.zeros((1, 1))
            x[0,0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state],
                                        feed_dict=feed)

        c = pick_top_n(preds, len(vocab))
        samples.append(int_to_vocab[c])

        for i in range(n_samples):
            x[0,0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state],
                                        feed_dict=feed)

            c = pick_top_n(preds, len(vocab))
            samples.append(int_to_vocab[c])

    return ''.join(samples)
```

Here, pass in the path to a checkpoint and sample from the network.
```
tf.train.latest_checkpoint('checkpoints')

checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)

checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)

checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)

checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
```
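The top-N filtering inside `pick_top_n` can be exercised on its own, away from any TensorFlow session — the same logic on a hand-made distribution (toy probabilities, not the model's output):

```python
import numpy as np

def pick_top_n(preds, vocab_size, top_n=5):
    """Zero all but the top_n probabilities, renormalize, then sample one index."""
    p = np.squeeze(preds).astype(float).copy()
    p[np.argsort(p)[:-top_n]] = 0   # drop everything outside the top_n
    p = p / np.sum(p)               # renormalize to a valid distribution
    return np.random.choice(vocab_size, 1, p=p)[0]

preds = np.array([0.05, 0.40, 0.05, 0.30, 0.20])
c = pick_top_n(preds, vocab_size=5, top_n=2)
print(c)  # only index 1 or 3 can ever be drawn
```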
```
!pip install mnist

import numpy as np
import cv2
import random
import matplotlib.pyplot as plt
from PIL import Image
import mnist

random.seed(1)
np.random.seed(1)

from __future__ import print_function
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator

num_classes = 10

# input image dimensions
img_rows, img_cols = 28, 28

x_train = mnist.train_images()
y_train = mnist.train_labels()
x_test = mnist.test_images()
y_test = mnist.test_labels()

# the data, split between train and test sets
#(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

#x_train = x_train.astype('float32')
#x_test = x_test.astype('float32')
#x_train /= 255
#x_test /= 255

print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

from
keras.callbacks import ModelCheckpoint

batch_size = 128
epochs = 1000

datagen = ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=10,
    zoom_range=0.05)

# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(x_train)

filepath = 'weights-{epoch:02d}-{acc:.4f}-{val_acc:.4f}.hdf5'
checkpoint = ModelCheckpoint(filepath, monitor='acc', verbose=1, save_best_only=True, mode='max')

# fits the model on batches with real-time data augmentation:
model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    steps_per_epoch=len(x_train) / batch_size, epochs=epochs,
                    verbose=1, validation_data=(x_test, y_test), callbacks=[checkpoint])

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

!tar cf weights.tar *.hdf5
!ls -lh

from google.colab import drive
drive.mount('/content/drive')

!cp weights.tar "/content/drive/My Drive/imimic/wann"
!ls "/content/drive/My Drive/imimic/wann"
```
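`keras.utils.to_categorical`, used above to turn the integer labels into one-hot rows, is equivalent to a simple identity-matrix lookup — a NumPy sketch of the same transformation for 1-D labels:

```python
import numpy as np

def to_categorical_np(labels, num_classes):
    """One-hot encode 1-D integer labels: row i has a 1 at column labels[i]."""
    return np.eye(num_classes, dtype=np.float32)[labels]

y = np.array([0, 2, 1])
print(to_categorical_np(y, 3).tolist())  # → [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```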
#### - Sobhan Moradian Daghigh
#### - 12/3/2021
#### - PR - EX01 - Q6 - Part b.

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata
import lmfit
```

#### Reading data

```
dataset = pd.read_csv('./inputs/Q6/first_half_logs.csv',
                      names=['timestamp', 'tag_id', 'x_pos', 'y_pos', 'heading',
                             'direction', 'energy', 'speed', 'total_distance'])
dataset.head()

dataset.info()

players = dataset.groupby(by=dataset.tag_id)
players.first()

for grp, pdf in players:
    print('player: {} - total_distance: {}'.format(grp, pdf.iloc[:, -1].max()))
```

#### It seems there are some non-player captures, which I want to filter out.
#### I also decided to ignore one of the substitute players, to have 11 players in total.

```
dataset = dataset.drop(dataset[dataset.tag_id == 6].index)
dataset = dataset.drop(dataset[dataset.tag_id == 12].index)
dataset = dataset.drop(dataset[dataset.tag_id == 11].index)

players = dataset.groupby(by=dataset.tag_id)
players.first()

for grp, pdf in players:
    x_mean = pdf.loc[:, 'x_pos'].mean()
    y_mean = pdf.loc[:, 'y_pos'].mean()
    plt.scatter(x_mean, y_mean, label='player {}'.format(grp))

plt.legend(bbox_to_anchor=(1.3, 1.01))
plt.xlim([0, 105])
plt.ylim([0, 70])
plt.grid()
plt.show()
```

#### Part b.
```
fig, ax = plt.subplots(4, 3, figsize=(20, 20))

index = 0
for grp, pdf in players:
    x_pos = dataset[dataset.tag_id == grp].loc[:, 'x_pos']
    y_pos = dataset[dataset.tag_id == grp].loc[:, 'y_pos']

    xedges = list(range(0, 105, 1))
    yedges = list(range(0, 68, 1))
    H, xedges, yedges = np.histogram2d(x_pos, y_pos, bins=(xedges, yedges))
    X, Y = np.meshgrid(np.linspace(0, H.shape[0], H.shape[1]),
                       np.linspace(0, H.shape[1], H.shape[0]))

    # Flatten the H matrix into separate x, y, z columns
    x, y, z = np.array([]), np.array([]), np.array([])
    for i in range(0, H.shape[0]):
        for j in range(0, H.shape[1]):
            x = np.append(x, i)
            y = np.append(y, j)
            z = np.append(z, H[i][j])

    error = np.sqrt(z + 1)

    # Interpolate, using the cubic method
    Z = griddata((x, y), z, (X, Y), method='cubic')

    # Model fitting
    gaussian_model = lmfit.models.Gaussian2dModel()
    params = gaussian_model.guess(z, x, y)
    result = gaussian_model.fit(z, x=x, y=y, params=params, weights=1/error)
    fit = gaussian_model.func(X, Y, **result.best_values)

    ax[int(np.divide(index, 3)), np.mod(index, 3)].pcolor(X, Y, fit)
    ax[int(np.divide(index, 3)), np.mod(index, 3)].set_title('Gaussian Distribution - player {}'.format(grp))
    ax[int(np.divide(index, 3)), np.mod(index, 3)].grid(linewidth=0.3)
    index += 1

    meanx = round(result.params['centerx'].value, 2)
    meany = round(result.params['centery'].value, 2)
    varx = result.params["sigmax"].value
    vary = result.params["sigmay"].value
    covxx = round(np.multiply(varx, varx), 2)
    covxy = round(np.divide(np.sum(np.multiply(np.subtract(x_pos, x_pos.mean()),
                                               np.subtract(y_pos, y_pos.mean()))),
                            (len(x_pos) - 1)), 2)
    covyy = round(np.multiply(vary, vary), 2)
    print('Player {}:\n\tMean [x_pos: {}, y_pos: {}]\n\tCovariance [xx: {}, xy: {}\n\t\t    xy: {}, yy: {}]'
          .format(grp, meanx, meany, covxx, covxy, covxy, covyy))

# player 15
lmfit.report_fit(result)
```
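The manual sample-covariance term `covxy` computed above can be cross-checked against `np.cov`, which returns the full Bessel-corrected covariance matrix — a sketch on toy coordinates (not the tracking data):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

# Manual sample covariance, mirroring the covxy expression in the cell above
cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)

C = np.cov(x, y)  # 2x2 matrix: [[var(x), cov(x,y)], [cov(x,y), var(y)]]
print(cov_xy, C[0, 1])  # both ≈ 10/3
```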
CT Reconstruction (ADMM Plug-and-Play Priors w/ BM3D, SVMBIR+CG) ================================================================ This example demonstrates the use of class [admm.ADMM](../_autosummary/scico.optimize.rst#scico.optimize.ADMM) to solve a tomographic reconstruction problem using the Plug-and-Play Priors framework <cite data-cite="venkatakrishnan-2013-plugandplay2"/>, using BM3D <cite data-cite="dabov-2008-image"/> as a denoiser and SVMBIR <cite data-cite="svmbir-2020"/> for tomographic projection. This version uses the data fidelity term as the ADMM f, and thus the optimization with respect to the data fidelity uses CG rather than the prox of the SVMBIRSquaredL2Loss functional. ``` import numpy as np import jax import matplotlib.pyplot as plt import svmbir from xdesign import Foam, discrete_phantom import scico.numpy as snp from scico import metric, plot from scico.functional import BM3D, NonNegativeIndicator from scico.linop import Diagonal, Identity from scico.linop.radon_svmbir import SVMBIRSquaredL2Loss, TomographicProjector from scico.optimize.admm import ADMM, LinearSubproblemSolver from scico.util import device_info plot.config_notebook_plotting() ``` Generate a ground truth image. ``` N = 256 # image size density = 0.025 # attenuation density of the image np.random.seed(1234) x_gt = discrete_phantom(Foam(size_range=[0.05, 0.02], gap=0.02, porosity=0.3), size=N - 10) x_gt = x_gt / np.max(x_gt) * density x_gt = np.pad(x_gt, 5) x_gt[x_gt < 0] = 0 ``` Generate tomographic projector and sinogram. ``` num_angles = int(N / 2) num_channels = N angles = snp.linspace(0, snp.pi, num_angles, endpoint=False, dtype=snp.float32) A = TomographicProjector(x_gt.shape, angles, num_channels) sino = A @ x_gt ``` Impose Poisson noise on sinogram. Higher max_intensity means less noise. 
``` max_intensity = 2000 expected_counts = max_intensity * np.exp(-sino) noisy_counts = np.random.poisson(expected_counts).astype(np.float32) noisy_counts[noisy_counts == 0] = 1 # deal with 0s y = -np.log(noisy_counts / max_intensity) ``` Reconstruct using default prior of SVMBIR <cite data-cite="svmbir-2020"/>. ``` weights = svmbir.calc_weights(y, weight_type="transmission") x_mrf = svmbir.recon( np.array(y[:, np.newaxis]), np.array(angles), weights=weights[:, np.newaxis], num_rows=N, num_cols=N, positivity=True, verbose=0, )[0] ``` Set up an ADMM solver. ``` y, x0, weights = jax.device_put([y, x_mrf, weights]) ρ = 15 # ADMM penalty parameter σ = density * 0.18 # denoiser sigma f = SVMBIRSquaredL2Loss(y=y, A=A, W=Diagonal(weights), scale=0.5) g0 = σ * ρ * BM3D() g1 = NonNegativeIndicator() solver = ADMM( f=f, g_list=[g0, g1], C_list=[Identity(x_mrf.shape), Identity(x_mrf.shape)], rho_list=[ρ, ρ], x0=x0, maxiter=20, subproblem_solver=LinearSubproblemSolver(cg_kwargs={"tol": 1e-4, "maxiter": 100}), itstat_options={"display": True, "period": 1}, ) ``` Run the solver. ``` print(f"Solving on {device_info()}\n") x_bm3d = solver.solve() hist = solver.itstat_object.history(transpose=True) ``` Show the recovered image. ``` norm = plot.matplotlib.colors.Normalize(vmin=-0.1 * density, vmax=1.2 * density) fig, ax = plt.subplots(1, 3, figsize=[15, 5]) plot.imview(img=x_gt, title="Ground Truth Image", cbar=True, fig=fig, ax=ax[0], norm=norm) plot.imview( img=x_mrf, title=f"MRF (PSNR: {metric.psnr(x_gt, x_mrf):.2f} dB)", cbar=True, fig=fig, ax=ax[1], norm=norm, ) plot.imview( img=x_bm3d, title=f"BM3D (PSNR: {metric.psnr(x_gt, x_bm3d):.2f} dB)", cbar=True, fig=fig, ax=ax[2], norm=norm, ) fig.show() ``` Plot convergence statistics. ``` plot.plot( snp.vstack((hist.Prml_Rsdl, hist.Dual_Rsdl)).T, ptyp="semilogy", title="Residuals", xlbl="Iteration", lgnd=("Primal", "Dual"), ) ```
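The Poisson step earlier in this example follows the standard transmission model: detector counts are Poisson-distributed around `max_intensity * exp(-sino)`, and the noisy sinogram is the negative log of the normalized counts. A self-contained numpy-only sketch of the same model on a toy sinogram (the array shape and values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
sino = rng.uniform(0.0, 2.0, size=(64, 32)).astype(np.float32)  # toy line integrals

max_intensity = 2000  # expected photon count for an unattenuated ray
expected_counts = max_intensity * np.exp(-sino)   # Beer-Lambert attenuation
noisy_counts = rng.poisson(expected_counts).astype(np.float32)
noisy_counts[noisy_counts == 0] = 1               # avoid log(0)
y = -np.log(noisy_counts / max_intensity)         # noisy sinogram
```

Since the relative Poisson error scales as `1 / sqrt(counts)`, rays with larger line integrals (fewer surviving photons) come out noisier; that is what `svmbir.calc_weights(y, weight_type="transmission")` compensates for in the reconstruction step above.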
# Image similarity estimation using a Siamese Network with a contrastive loss **Author:** Mehdi<br> **Date created:** 2021/05/06<br> **Last modified:** 2021/05/06<br> **ORIGINAL SOURCE:** https://github.com/keras-team/keras-io/blob/master/examples/vision/ipynb/siamese_contrastive.ipynb<br> **Description:** Similarity learning using a siamese network trained with a contrastive loss. ## Introduction [Siamese Networks](https://en.wikipedia.org/wiki/Siamese_neural_network) are neural networks which share weights between two or more sister networks, each producing embedding vectors of its respective inputs. In supervised similarity learning, the networks are then trained to maximize the contrast (distance) between embeddings of inputs of different classes, while minimizing the distance between embeddings of similar classes, resulting in embedding spaces that reflect the class segmentation of the training inputs. ## Setup ``` import random import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt ``` ## Hyperparameters ``` epochs = 10 batch_size = 16 margin = 1 # Margin for contrastive loss. ``` ## Load the MNIST dataset ``` (x_train_val, y_train_val), (x_test, y_test) = keras.datasets.mnist.load_data() # Change the data type to a floating point format x_train_val = x_train_val.astype("float32") x_test = x_test.astype("float32") ``` ## Define training and validation sets ``` # Keep 50% of train_val in validation set x_train, x_val = x_train_val[:30000], x_train_val[30000:] y_train, y_val = y_train_val[:30000], y_train_val[30000:] del x_train_val, y_train_val ``` ## Create pairs of images We will train the model to differentiate between digits of different classes. For example, digit `0` needs to be differentiated from the rest of the digits (`1` through `9`), digit `1` - from `0` and `2` through `9`, and so on.
To carry this out, we will select N random images from class A (for example, for digit `0`) and pair them with N random images from another class B (for example, for digit `1`). Then, we can repeat this process for all classes of digits (until digit `9`). Once we have paired digit `0` with other digits, we can repeat this process for the remaining classes for the rest of the digits (from `1` until `9`). ``` def make_pairs(x, y): """Creates a tuple containing image pairs with corresponding label. Arguments: x: List containing images, each index in this list corresponds to one image. y: List containing labels, each label with datatype of `int`. Returns: Tuple containing two numpy arrays as (pairs_of_samples, labels), where pairs_of_samples' shape is (2*len(x), 2, n_features_dims) and labels are a binary array of shape (2*len(x)). """ num_classes = max(y) + 1 digit_indices = [np.where(y == i)[0] for i in range(num_classes)] pairs = [] labels = [] for idx1 in range(len(x)): # add a matching example x1 = x[idx1] label1 = y[idx1] idx2 = random.choice(digit_indices[label1]) x2 = x[idx2] pairs += [[x1, x2]] labels += [1] # add a non-matching example label2 = random.randint(0, num_classes - 1) while label2 == label1: label2 = random.randint(0, num_classes - 1) idx2 = random.choice(digit_indices[label2]) x2 = x[idx2] pairs += [[x1, x2]] labels += [0] return np.array(pairs), np.array(labels).astype("float32") # make train pairs pairs_train, labels_train = make_pairs(x_train, y_train) # make validation pairs pairs_val, labels_val = make_pairs(x_val, y_val) # make test pairs pairs_test, labels_test = make_pairs(x_test, y_test) ``` We get: **pairs_train.shape = (60000, 2, 28, 28)** - We have 60,000 pairs - Each pair contains 2 images - Each image has shape `(28, 28)` Split the training pairs ``` x_train_1 = pairs_train[:, 0] # x_train_1.shape is (60000, 28, 28) x_train_2 = pairs_train[:, 1] ``` Split the validation pairs ``` x_val_1 = pairs_val[:, 0] # x_val_1.shape = (60000, 28,
28) x_val_2 = pairs_val[:, 1] ``` Split the test pairs ``` x_test_1 = pairs_test[:, 0] # x_test_1.shape = (20000, 28, 28) x_test_2 = pairs_test[:, 1] ``` ## Visualize pairs and their labels ``` def visualize(pairs, labels, to_show=6, num_col=3, predictions=None, test=False): """Creates a plot of pairs and labels, and prediction if it's test dataset. Arguments: pairs: Numpy Array, of pairs to visualize, having shape (Number of pairs, 2, 28, 28). to_show: Int, number of examples to visualize (default is 6) `to_show` must be an integral multiple of `num_col`. Otherwise it will be trimmed if it is greater than num_col, and incremented if it is less than num_col. num_col: Int, number of images in one row - (default is 3) For test and train respectively, it should not exceed 3 and 7. predictions: Numpy Array of predictions with shape (to_show, 1) - (default is None) Must be passed when test=True. test: Boolean telling whether the dataset being visualized is train dataset or test dataset - (default False). Returns: None.
""" # Define num_row # If to_show % num_col != 0 # trim to_show, # to trim to_show limit num_row to the point where # to_show % num_col == 0 # # If to_show//num_col == 0 # then it means num_col is greater then to_show # increment to_show # to increment to_show set num_row to 1 num_row = to_show // num_col if to_show // num_col != 0 else 1 # `to_show` must be an integral multiple of `num_col` # we found num_row and we have num_col # to increment or decrement to_show # to make it integral multiple of `num_col` # simply set it equal to num_row * num_col to_show = num_row * num_col # Plot the images fig, axes = plt.subplots(num_row, num_col, figsize=(5, 5)) for i in range(to_show): # If the number of rows is 1, the axes array is one-dimensional if num_row == 1: ax = axes[i % num_col] else: ax = axes[i // num_col, i % num_col] ax.imshow(tf.concat([pairs[i][0], pairs[i][1]], axis=1), cmap="gray") ax.set_axis_off() if test: ax.set_title("True: {} | Pred: {:.5f}".format(labels[i], predictions[i][0])) else: ax.set_title("Label: {}".format(labels[i])) if test: plt.tight_layout(rect=(0, 0, 1.9, 1.9), w_pad=0.0) else: plt.tight_layout(rect=(0, 0, 1.5, 1.5)) plt.show() ``` Inspect training pairs ``` visualize(pairs_train[:-1], labels_train[:-1], to_show=4, num_col=4) ``` Inspect validation pairs ``` visualize(pairs_val[:-1], labels_val[:-1], to_show=4, num_col=4) ``` Inspect test pairs ``` visualize(pairs_test[:-1], labels_test[:-1], to_show=4, num_col=4) ``` ## Define the model There are be two input layers, each leading to its own network, which produces embeddings. A `Lambda` layer then merges them using an [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance) and the merged output is fed to the final network. ``` # Provided two tensors t1 and t2 # Euclidean distance = sqrt(sum(square(t1-t2))) def euclidean_distance(vects): """Find the Euclidean distance between two vectors. Arguments: vects: List containing two tensors of same length. 
Returns: Tensor containing euclidean distance (as floating point value) between vectors. """ x, y = vects sum_square = tf.math.reduce_sum(tf.math.square(x - y), axis=1, keepdims=True) return tf.math.sqrt(tf.math.maximum(sum_square, tf.keras.backend.epsilon())) input = layers.Input((28, 28, 1)) x = tf.keras.layers.BatchNormalization()(input) x = layers.Conv2D(4, (5, 5), activation="tanh")(x) x = layers.AveragePooling2D(pool_size=(2, 2))(x) x = layers.Conv2D(16, (5, 5), activation="tanh")(x) x = layers.AveragePooling2D(pool_size=(2, 2))(x) x = layers.Flatten()(x) x = tf.keras.layers.BatchNormalization()(x) x = layers.Dense(10, activation="tanh")(x) embedding_network = keras.Model(input, x) input_1 = layers.Input((28, 28, 1)) input_2 = layers.Input((28, 28, 1)) # As mentioned above, Siamese Networks share weights between # tower networks (sister networks). To allow this, we will use the # same embedding network for both tower networks. tower_1 = embedding_network(input_1) tower_2 = embedding_network(input_2) merge_layer = layers.Lambda(euclidean_distance)([tower_1, tower_2]) normal_layer = tf.keras.layers.BatchNormalization()(merge_layer) output_layer = layers.Dense(1, activation="sigmoid")(normal_layer) siamese = keras.Model(inputs=[input_1, input_2], outputs=output_layer) ``` ## Define the contrastive loss ``` def loss(margin=1): """Provides 'contrastive_loss' an enclosing scope with variable 'margin'. Arguments: margin: Integer, defines the baseline for distance for which pairs should be classified as dissimilar. - (default is 1). Returns: 'contrastive_loss' function with data ('margin') attached. """ # Contrastive loss = mean( (1-true_value) * square(prediction) + # true_value * square( max(margin-prediction, 0) )) def contrastive_loss(y_true, y_pred): """Calculates the contrastive loss. Arguments: y_true: List of labels, each label is of type float32. y_pred: List of predictions of same length as of y_true, each label is of type float32.
Returns: A tensor containing contrastive loss as floating point value. """ square_pred = tf.math.square(y_pred) margin_square = tf.math.square(tf.math.maximum(margin - (y_pred), 0)) return tf.math.reduce_mean( (1 - y_true) * square_pred + (y_true) * margin_square ) return contrastive_loss ``` ## Compile the model with the contrastive loss ``` siamese.compile(loss=loss(margin=margin), optimizer="RMSprop", metrics=["accuracy"]) siamese.summary() ``` ## Train the model ``` history = siamese.fit( [x_train_1, x_train_2], labels_train, validation_data=([x_val_1, x_val_2], labels_val), batch_size=batch_size, epochs=epochs, ) ``` ## Visualize results ``` def plt_metric(history, metric, title, has_valid=True): """Plots the given 'metric' from 'history'. Arguments: history: history attribute of History object returned from Model.fit. metric: Metric to plot, a string value present as key in 'history'. title: A string to be used as title of plot. has_valid: Boolean, true if valid data was passed to Model.fit else false. Returns: None. """ plt.plot(history[metric]) if has_valid: plt.plot(history["val_" + metric]) plt.legend(["train", "validation"], loc="upper left") plt.title(title) plt.ylabel(metric) plt.xlabel("epoch") plt.show() # Plot the accuracy plt_metric(history=history.history, metric="accuracy", title="Model accuracy") # Plot the contrastive loss plt_metric(history=history.history, metric="loss", title="Contrastive Loss") ``` ## Evaluate the model ``` results = siamese.evaluate([x_test_1, x_test_2], labels_test) print("test loss, test acc:", results) ``` ## Visualize the predictions ``` predictions = siamese.predict([x_test_1, x_test_2]) visualize(pairs_test, labels_test, to_show=3, predictions=predictions, test=True) ```
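Here the sigmoid output is trained toward 1 for matching pairs and 0 for non-matching ones (matching the labels produced by `make_pairs`), so a hard decision just thresholds the score. A minimal sketch with hypothetical scores standing in for real `siamese.predict` output:

```python
import numpy as np

# Hypothetical sigmoid similarity scores and true pair labels
# (1 = matching pair, 0 = non-matching pair).
predictions = np.array([[0.91], [0.12], [0.40], [0.34], [0.78]])
labels = np.array([1.0, 0.0, 1.0, 0.0, 1.0], dtype="float32")

# Threshold at 0.5 to turn each score into a same/different decision.
pred_labels = (predictions.ravel() > 0.5).astype("float32")
accuracy = float((pred_labels == labels).mean())

print(accuracy)  # 0.8 -- the third pair (score 0.40) is misclassified
```

The 0.5 cutoff mirrors what Keras's built-in `accuracy` metric does for a single sigmoid output; in practice you may want to tune the threshold on the validation pairs.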
# Magnetics - Directional derivatives, ASA and ASA2-PVD filtering #### as part of worked filters in NFIS PROJECT - CPRM-UFPR #### CPRM Intern researcher: Luizemara S. A. Szameitat (luizemara@gmail.com) Notebook from https://github.com/lszam/cprm-nfis. Last modified: Dec/2021 ___ ○ References: Richard Blakely (1996). Potential Theory in Gravity & Magnetic Applications. INPUT GRID FORMAT: The data must be a two-dimensional array, with the cell nodes in this order: from west to east, and from south to north. ``` import numpy as np import scipy.fftpack from math import radians, sin, cos, sqrt import matplotlib.pyplot as plt import seaborn as sb ``` ##### Functions ``` def Kvalue(i, j, nx, ny, dkx, dky): ''' Kvalue(i, j, nx, ny, dkx, dky) adapted from Blakely (1996) ''' nyqx = nx / 2 + 1 nyqy = ny / 2 + 1 kx = float() ky = float() if j <= nyqx: kx = (j-1) * dkx else: kx = (j-nx-1) * dkx if i <= nyqy: ky = (i-1) * dky else: ky = (i-ny-1) * dky return kx, ky def go_filter(grid, filter_type, nx, ny, dkx, dky): ''' Potential Field's filtering routines ''' # FFT gridfft = scipy.fftpack.fft2(grid) # From matrix to vector gridfft = np.reshape(gridfft, nx*ny) # Creating complex vector gridfft_filt = np.zeros(nx*ny).astype(complex) # Filtering if filter_type == 'dz1': for j in range(1, ny+1): for i in range(1, nx+1): ij = (j-1) * nx + i kx, ky = Kvalue(i, j, nx, ny, dkx, dky) k = sqrt(kx**2 + ky**2) gridfft_filt[ij-1] = gridfft[ij-1]*k**1 #filtering expression elif filter_type == 'dy1': for j in range(1, ny+1): for i in range(1, nx+1): ij = (j-1) * nx + i kx, ky = Kvalue(i, j, nx, ny, dkx, dky) k = kx gridfft_filt[ij-1] = gridfft[ij-1]*k*complex(0, 1) elif filter_type == 'dx1': for j in range(1, ny+1): for i in range(1, nx+1): ij = (j-1) * nx + i kx, ky = Kvalue(i, j, nx, ny, dkx, dky) k = ky gridfft_filt[ij-1] = gridfft[ij-1]*k*complex(0, 1) else: return print ('Not supported filter_type=', filter_type) #Return to matrix gridfft_filt = np.reshape(gridfft_filt, (ny,
nx)) #iFFT gridfft_filt = scipy.fftpack.ifft2(gridfft_filt) grid_out = np.reshape(gridfft_filt.real, (ny, nx)) return grid_out ``` ### Input data ``` # XYZ grid reading (CSV format, no header, no line numbers) filename = 'https://raw.githubusercontent.com/lszam/cprm-nfis/main/data/pentanomaly.csv' #reading csv columns - coordinates and physical property csv_values = np.genfromtxt(filename, delimiter=",") #choose the delimiter x = csv_values[:,0] y = csv_values[:,1] z = csv_values[:,2] #Getting x and y values as lists xnodes=[] ynodes=[] [xnodes.append(item) for item in x if not xnodes.count(item)] [ynodes.append(item) for item in y if not ynodes.count(item)] # Meshgrids of X and Y xgrid, ygrid = np.meshgrid(xnodes, ynodes) #Number of items in x and y directions nx = np.size(xgrid,1) ny = np.size(ygrid,0) #Arranging the physical property into a matrix Z = np.array(z) # data from list to array grid_tfa = np.reshape(Z, (ny, nx)) dX, dY = np.abs(xnodes[0]-xnodes[1]), np.abs(ynodes[0]-ynodes[1]) # cell size #Plot input data plt.figure(figsize=(4, 4)) sb_plot2 = sb.heatmap(grid_tfa, cmap="Spectral", cbar_kws={'label': 'TFA(nT)'}) sb_plot2.invert_yaxis() ``` ### Derivatives ``` π = np.pi dkx = 2. * π / (nx * dX) dky = 2. 
* π / (ny * dY) # Filtering gridX = go_filter(grid_tfa, 'dx1', nx, ny, dkx, dky) gridY = go_filter(grid_tfa, 'dy1', nx, ny, dkx, dky) gridZ = go_filter(grid_tfa, 'dz1', nx, ny, dkx, dky) # Plot plt.figure(figsize=(10.5, 3)) plt.subplot(1, 3, 1) sb_plot1 = sb.heatmap(gridX, cmap="Spectral", cbar_kws={'label': 'DX nT/m'}) sb_plot1.invert_yaxis() plt.subplot(1, 3, 2) sb_plot2 = sb.heatmap(gridY, cmap="Spectral", cbar_kws={'label': 'DY nT/m'}) sb_plot2.invert_yaxis() plt.subplot(1, 3, 3) sb_plot3 = sb.heatmap(gridZ, cmap="Spectral", cbar_kws={'label': 'DZ nT/m'}) sb_plot3.invert_yaxis() plt.tight_layout() ``` ### ASA filter ``` # ASA filter ''' Analytic signal amplitude (ASA) ''' grid_asa = np.sqrt((gridX**2)+(gridY**2)+(gridZ**2)) # Plot plt.figure(figsize=(4, 4)) sb_plot2 = sb.heatmap(grid_asa, cmap="Spectral", cbar_kws={'label': 'ASA'}) sb_plot2.invert_yaxis() ``` ### Pseudo-vertical derivative of the squared ASA (ASA2-PVD) ``` grid_asa2pvd = go_filter(grid_asa**2, 'dz1', nx, ny, dkx, dky) # Plot plt.figure(figsize=(4, 4)) sb_plot2 = sb.heatmap(grid_asa2pvd, cmap="Spectral", cbar_kws={'label': 'ASA2-PVD'}) sb_plot2.invert_yaxis() #Save results as txt file grid_output = np.reshape(grid_asa2pvd, ny*nx) np.savetxt(str('asa2pvd.xyz'),np.c_[x,y,grid_output]) ```
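The `Kvalue` routine reproduces the standard FFT wavenumber ordering by hand. As a sanity check, the same values (up to the sign convention at the Nyquist bin, which is irrelevant for `|k|` filters like `dz1`) come from `np.fft.fftfreq` scaled by 2π. This sketch uses a made-up grid size and cell spacing:

```python
import numpy as np

def kvalue_x(j, nx, dkx):
    # Same x-ordering as the Kvalue routine above (1-based j).
    nyqx = nx / 2 + 1
    return (j - 1) * dkx if j <= nyqx else (j - nx - 1) * dkx

nx, dX = 8, 250.0                  # hypothetical number of nodes and cell size (m)
dkx = 2.0 * np.pi / (nx * dX)      # same dkx definition as in the notebook

kx_manual = np.array([kvalue_x(j, nx, dkx) for j in range(1, nx + 1)])
kx_fft = 2.0 * np.pi * np.fft.fftfreq(nx, d=dX)

# Magnitudes agree bin by bin; only the sign at the Nyquist bin (nx/2)
# differs between the two conventions.
print(np.allclose(np.abs(kx_manual), np.abs(kx_fft)))  # True
```

This means the filtering loops could equally be vectorized with `np.fft.fftfreq`-based meshgrids; the explicit loops above stay closer to the FORTRAN-style presentation in Blakely (1996).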
# Building your own algorithm container With Amazon SageMaker, you can package your own algorithms that can then be trained and deployed in the SageMaker environment. This notebook will guide you through an example that shows you how to build a Docker container for SageMaker and use it for training and inference. By packaging an algorithm in a container, you can bring almost any code to the Amazon SageMaker environment, regardless of programming language, environment, framework, or dependencies. _**Note:**_ SageMaker now includes a [pre-built scikit container](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/scikit_learn_iris/Scikit-learn%20Estimator%20Example%20With%20Batch%20Transform.ipynb). We recommend the pre-built container be used for almost all cases requiring a scikit algorithm. However, this example remains relevant as an outline for bringing in other libraries to SageMaker as your own container. 1. [Building your own algorithm container](#Building-your-own-algorithm-container) 1. [When should I build my own algorithm container?](#When-should-I-build-my-own-algorithm-container%3F) 1. [Permissions](#Permissions) 1. [The example](#The-example) 1. [The presentation](#The-presentation) 1. [Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker](#Part-1%3A-Packaging-and-Uploading-your-Algorithm-for-use-with-Amazon-SageMaker) 1. [An overview of Docker](#An-overview-of-Docker) 1. [How Amazon SageMaker runs your Docker container](#How-Amazon-SageMaker-runs-your-Docker-container) 1. [Running your container during training](#Running-your-container-during-training) 1. [The input](#The-input) 1. [The output](#The-output) 1. [Running your container during hosting](#Running-your-container-during-hosting) 1. [The parts of the sample container](#The-parts-of-the-sample-container) 1. [The Dockerfile](#The-Dockerfile) 1. [Building and registering the container](#Building-and-registering-the-container) 1.
[Testing your algorithm on your local machine or on an Amazon SageMaker notebook instance](#Testing-your-algorithm-on-your-local-machine-or-on-an-Amazon-SageMaker-notebook-instance) 1. [Part 2: Using your Algorithm in Amazon SageMaker](#Part-2%3A-Using-your-Algorithm-in-Amazon-SageMaker) 1. [Set up the environment](#Set-up-the-environment) 1. [Create the session](#Create-the-session) 1. [Upload the data for training](#Upload-the-data-for-training) 1. [Create an estimator and fit the model](#Create-an-estimator-and-fit-the-model) 1. [Hosting your model](#Hosting-your-model) 1. [Deploy the model](#Deploy-the-model) 2. [Choose some data and use it for a prediction](#Choose-some-data-and-use-it-for-a-prediction) 3. [Optional cleanup](#Optional-cleanup) 1. [Run Batch Transform Job](#Run-Batch-Transform-Job) 1. [Create a Transform Job](#Create-a-Transform-Job) 2. [View Output](#View-Output) _or_ I'm impatient, just [let me see the code](#The-Dockerfile)! ## When should I build my own algorithm container? You may not need to create a container to bring your own code to Amazon SageMaker. When you are using a framework (such as Apache MXNet or TensorFlow) that has direct support in SageMaker, you can simply supply the Python code that implements your algorithm using the SDK entry points for that framework. This set of frameworks is continually expanding, so we recommend that you check the current list if your algorithm is written in a common machine learning environment. Even if there is direct SDK support for your environment or framework, you may find it more effective to build your own container. If the code that implements your algorithm is quite complex on its own or you need special additions to the framework, building your own container may be the right choice. If there isn't direct SDK support for your environment, don't worry. You'll see in this walk-through that building your own container is quite straightforward. 
## Permissions Running this notebook requires permissions in addition to the normal `SageMakerFullAccess` permissions. This is because we'll be creating new repositories in Amazon ECR. The easiest way to add these permissions is simply to add the managed policy `AmazonEC2ContainerRegistryFullAccess` to the role that you used to start your notebook instance. There's no need to restart your notebook instance when you do this; the new permissions will be available immediately. ## The example Here, we'll show how to package a simple Python example which showcases the [decision tree][] algorithm from the widely used [scikit-learn][] machine learning package. The example is purposefully fairly trivial since the point is to show the surrounding structure that you'll want to add to your own code so you can train and host it in Amazon SageMaker. The ideas shown here will work in any language or environment. You'll need to choose the right tools for your environment to serve HTTP requests for inference, but good HTTP environments are available in every language these days. In this example, we use a single image to support training and hosting. This is easy because it means that we only need to manage one image and we can set it up to do everything. Sometimes you'll want separate images for training and hosting because they have different requirements. Just separate the parts discussed below into separate Dockerfiles and build two images. Choosing whether to have a single image or two images is really a matter of which is more convenient for you to develop and manage. If you're only using Amazon SageMaker for training or hosting, but not both, there is no need to build the unused functionality into your container. [scikit-learn]: http://scikit-learn.org/stable/ [decision tree]: http://scikit-learn.org/stable/modules/tree.html ## The presentation This presentation is divided into two parts: _building_ the container and _using_ the container.
# Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker ### An overview of Docker If you're familiar with Docker already, you can skip ahead to the next section. For many data scientists, Docker containers are a new concept, but they are not difficult, as you'll see here. Docker provides a simple way to package arbitrary code into an _image_ that is totally self-contained. Once you have an image, you can use Docker to run a _container_ based on that image. Running a container is just like running a program on the machine except that the container creates a fully self-contained environment for the program to run. Containers are isolated from each other and from the host environment, so the way you set up your program is the way it runs, no matter where you run it. Docker is more powerful than environment managers like conda or virtualenv because (a) it is completely language independent and (b) it comprises your whole operating environment, including startup commands, environment variables, etc. In some ways, a Docker container is like a virtual machine, but it is much lighter weight. For example, a program running in a container can start in less than a second and many containers can run on the same physical machine or virtual machine instance. Docker uses a simple file called a `Dockerfile` to specify how the image is assembled. We'll see an example of that below. You can build your Docker images based on Docker images built by yourself or others, which can simplify things quite a bit. Docker has become very popular in the programming and devops communities for its flexibility and well-defined specification of the code to be run. It is the underpinning of many services built in the past few years, such as [Amazon ECS]. Amazon SageMaker uses Docker to allow users to train and deploy arbitrary algorithms. In Amazon SageMaker, Docker containers are invoked in a certain way for training and a slightly different way for hosting.
The following sections outline how to build containers for the SageMaker environment. Some helpful links: * [Docker home page](http://www.docker.com) * [Getting started with Docker](https://docs.docker.com/get-started/) * [Dockerfile reference](https://docs.docker.com/engine/reference/builder/) * [`docker run` reference](https://docs.docker.com/engine/reference/run/) [Amazon ECS]: https://aws.amazon.com/ecs/ ### How Amazon SageMaker runs your Docker container Because you can run the same image in training or hosting, Amazon SageMaker runs your container with the argument `train` or `serve`. How your container processes this argument depends on the container: * In the example here, we don't define an `ENTRYPOINT` in the Dockerfile so Docker will run the command `train` at training time and `serve` at serving time. In this example, we define these as executable Python scripts, but they could be any program that we want to start in that environment. * If you specify a program as an `ENTRYPOINT` in the Dockerfile, that program will be run at startup and its first argument will be `train` or `serve`. The program can then look at that argument and decide what to do. * If you are building separate containers for training and hosting (or building only for one or the other), you can define a program as an `ENTRYPOINT` in the Dockerfile and ignore (or verify) the first argument passed in. #### Running your container during training When Amazon SageMaker runs training, your `train` script is run just like a regular Python program. A number of files are laid out for your use, under the `/opt/ml` directory: /opt/ml |-- input | |-- config | | |-- hyperparameters.json | | `-- resourceConfig.json | `-- data | `-- <channel_name> | `-- <input data> |-- model | `-- <model files> `-- output `-- failure ##### The input * `/opt/ml/input/config` contains information to control how your program runs. `hyperparameters.json` is a JSON-formatted dictionary of hyperparameter names to values. 
These values will always be strings, so you may need to convert them. `resourceConfig.json` is a JSON-formatted file that describes the network layout used for distributed training. Since scikit-learn doesn't support distributed training, we'll ignore it here. * `/opt/ml/input/data/<channel_name>/` (for File mode) contains the input data for that channel. The channels are created based on the call to CreateTrainingJob but it's generally important that channels match what the algorithm expects. The files for each channel will be copied from S3 to this directory, preserving the tree structure indicated by the S3 key structure. * `/opt/ml/input/data/<channel_name>_<epoch_number>` (for Pipe mode) is the pipe for a given epoch. Epochs start at zero and go up by one each time you read them. There is no limit to the number of epochs that you can run, but you must close each pipe before reading the next epoch. ##### The output * `/opt/ml/model/` is the directory where you write the model that your algorithm generates. Your model can be in any format that you want. It can be a single file or a whole directory tree. SageMaker will package any files in this directory into a compressed tar archive file. This file will be available at the S3 location returned in the `DescribeTrainingJob` result. * `/opt/ml/output` is a directory where the algorithm can write a file `failure` that describes why the job failed. The contents of this file will be returned in the `FailureReason` field of the `DescribeTrainingJob` result. For jobs that succeed, there is no reason to write this file as it will be ignored. #### Running your container during hosting Hosting has a very different model than training because hosting is responding to inference requests that come in via HTTP. 
In this example, we use our recommended Python serving stack to provide robust and scalable serving of inference requests: ![Request serving stack](stack.png) This stack is implemented in the sample code here and you can mostly just leave it alone. Amazon SageMaker uses two URLs in the container: * `/ping` will receive `GET` requests from the infrastructure. Your program returns 200 if the container is up and accepting requests. * `/invocations` is the endpoint that receives client inference `POST` requests. The format of the request and the response is up to the algorithm. If the client supplied `ContentType` and `Accept` headers, these will be passed in as well. The container will have the model files in the same place they were written during training: /opt/ml `-- model `-- <model files> ### The parts of the sample container In the `container` directory are all the components you need to package the sample algorithm for Amazon SageMaker: . |-- Dockerfile |-- build_and_push.sh `-- decision_trees |-- nginx.conf |-- predictor.py |-- serve |-- train `-- wsgi.py Let's discuss each of these in turn: * __`Dockerfile`__ describes how to build your Docker container image. More details below. * __`build_and_push.sh`__ is a script that uses the Dockerfile to build your container image and then pushes it to ECR. We'll invoke the commands directly later in this notebook, but you can just copy and run the script for your own algorithms. * __`decision_trees`__ is the directory which contains the files that will be installed in the container. * __`local_test`__ is a directory that shows how to test your new container on any computer that can run Docker, including an Amazon SageMaker notebook instance. Using this method, you can quickly iterate using small datasets to eliminate any structural bugs before you use the container with Amazon SageMaker. We'll walk through local testing later in this notebook. In this simple application, we only install five files in the container.
You may only need that many, or, if you have many supporting routines, you may wish to install more. These five show the standard structure of our Python containers, although you are free to choose a different toolset and therefore could have a different layout. If you're writing in a different programming language, you'll certainly have a different layout depending on the frameworks and tools you choose.

The files that we'll put in the container are:

* __`nginx.conf`__ is the configuration file for the nginx front end. Generally, you should be able to take this file as-is.
* __`predictor.py`__ is the program that actually implements the Flask web server and the decision tree predictions for this app. You'll want to customize the actual prediction parts to your application. Since this algorithm is simple, we do all the processing here in this file, but you may choose to have separate files for implementing your custom logic.
* __`serve`__ is the program started when the container is started for hosting. It simply launches the gunicorn server, which runs multiple instances of the Flask app defined in `predictor.py`. You should be able to take this file as-is.
* __`train`__ is the program that is invoked when the container is run for training. You will modify this program to implement your training algorithm.
* __`wsgi.py`__ is a small wrapper used to invoke the Flask app. You should be able to take this file as-is.

In summary, the two files you will probably want to change for your application are `train` and `predictor.py`.

### The Dockerfile

The Dockerfile describes the image that we want to build. You can think of it as describing the complete operating system installation of the system that you want to run. A running Docker container is quite a bit lighter than a full operating system, however, because it takes advantage of Linux on the host machine for the basic operations.
For the Python science stack, we will start from a standard Ubuntu installation and run the normal tools to install the things needed by scikit-learn. Finally, we add the code that implements our specific algorithm to the container and set up the right environment for it to run under. Along the way, we clean up extra space. This makes the container smaller and faster to start.

Let's look at the Dockerfile for the example:

```
!cat container/Dockerfile
```

### Building and registering the container

The following shell code shows how to build the container image using `docker build` and push the container image to ECR using `docker push`. This code is also available as the shell script `container/build_and_push.sh`, which you can run as `build_and_push.sh decision_trees_sample` to build the image `decision_trees_sample`.

This code looks for an ECR repository in the account you're using and the current default region (if you're using a SageMaker notebook instance, this will be the region where the notebook instance was created). If the repository doesn't exist, the script will create it.

```
%%sh

# The name of our algorithm
algorithm_name=sagemaker-decision-trees

cd container

chmod +x decision_trees/train
chmod +x decision_trees/serve

account=$(aws sts get-caller-identity --query Account --output text)

# Get the region defined in the current configuration (default to us-west-2 if none defined)
region=$(aws configure get region)
region=${region:-us-west-2}

fullname="${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest"

# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${algorithm_name}" > /dev/null 2>&1
if [ $? -ne 0 ]
then
    aws ecr create-repository --repository-name "${algorithm_name}" > /dev/null
fi

# Get the login command from ECR and execute it directly
aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${fullname}

# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${algorithm_name} .
docker tag ${algorithm_name} ${fullname}
docker push ${fullname}
```

## Testing your algorithm on your local machine or on an Amazon SageMaker notebook instance

While you're first packaging an algorithm for use with Amazon SageMaker, you probably want to test it yourself to make sure it's working correctly. In the directory `container/local_test`, there is a framework for doing this. It includes three shell scripts for running and using the container and a directory structure that mimics the one outlined above.

The scripts are:

* `train_local.sh`: Run this with the name of the image and it will run training on the local tree. For example, you can run `$ ./train_local.sh sagemaker-decision-trees`. It will generate a model under the `/test_dir/model` directory. You'll want to modify the directory `test_dir/input/data/...` to be set up with the correct channels and data for your algorithm. Also, you'll want to modify the file `input/config/hyperparameters.json` to have the hyperparameter settings that you want to test (as strings).
* `serve_local.sh`: Run this with the name of the image once you've trained the model and it should serve the model. For example, you can run `$ ./serve_local.sh sagemaker-decision-trees`. It will run and wait for requests. Simply use the keyboard interrupt to stop it.
* `predict.sh`: Run this with the name of a payload file and (optionally) the HTTP content type you want. The content type will default to `text/csv`. For example, you can run `$ ./predict.sh payload.csv text/csv`.
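The core logic of a `train` program can also be sanity-checked outside the container entirely. The toy sketch below is not the sample's decision tree: it "trains" a majority-class model, and the `base_dir` parameter exists only so the sketch is testable outside `/opt/ml`. Only the directory layout matches what SageMaker (and `train_local.sh`) provide.

```python
import json
import os
import pickle
import tempfile
from collections import Counter

def train(base_dir="/opt/ml", channel="training"):
    """Toy train program: read hyperparameters and a CSV channel, save a model."""
    with open(os.path.join(base_dir, "input/config/hyperparameters.json")) as f:
        hyperparams = json.load(f)  # values are strings
    data_dir = os.path.join(base_dir, "input/data", channel)
    labels = []
    for name in os.listdir(data_dir):
        with open(os.path.join(data_dir, name)) as f:
            labels += [line.split(",")[0] for line in f if line.strip()]
    model = Counter(labels).most_common(1)[0][0]  # "train" the majority class
    os.makedirs(os.path.join(base_dir, "model"), exist_ok=True)
    with open(os.path.join(base_dir, "model", "model.pkl"), "wb") as f:
        pickle.dump(model, f)
    return model, hyperparams

# Exercise it against a throwaway directory tree shaped like /opt/ml.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "input/config"))
os.makedirs(os.path.join(base, "input/data/training"))
with open(os.path.join(base, "input/config/hyperparameters.json"), "w") as f:
    json.dump({"max_leaf_nodes": "5"}, f)
with open(os.path.join(base, "input/data/training/iris.csv"), "w") as f:
    f.write("setosa,5.1,3.5\nsetosa,4.9,3.0\nversicolor,7.0,3.2\n")
model, hp = train(base)
print(model, hp)  # setosa {'max_leaf_nodes': '5'}
```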
The directories as shipped are set up to test the decision trees sample algorithm presented here.

# Part 2: Using your Algorithm in Amazon SageMaker

Once you have your container packaged, you can use it to train models and use the model for hosting or batch transforms. Let's do that with the algorithm we made above.

## Set up the environment

Here we specify a bucket to use and the role that will be used for working with SageMaker.

```
# S3 prefix
prefix = "DEMO-scikit-byo-iris"

# Define IAM role
import boto3
import re
import os
import numpy as np
import pandas as pd
from sagemaker import get_execution_role

role = get_execution_role()
```

## Create the session

The session remembers our connection parameters to SageMaker. We'll use it to perform all of our SageMaker operations.

```
import sagemaker as sage
from time import gmtime, strftime

sess = sage.Session()
```

## Upload the data for training

When training large models with huge amounts of data, you'll typically use big data tools, like Amazon Athena, AWS Glue, or Amazon EMR, to create your data in S3. For the purposes of this example, we're using the classic [Iris dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set), which we have included.

We can use the tools provided by the SageMaker Python SDK to upload the data to a default bucket.

```
WORK_DIRECTORY = "data"

data_location = sess.upload_data(WORK_DIRECTORY, key_prefix=prefix)
```

## Create an estimator and fit the model

In order to use SageMaker to fit our algorithm, we'll create an `Estimator` that defines how to use the container to train. This includes the configuration we need to invoke SageMaker training:

* The __container name__. This is constructed as in the shell commands above.
* The __role__. As defined above.
* The __instance count__, which is the number of machines to use for training.
* The __instance type__, which is the type of machine to use for training.
* The __output path__ determines where the model artifact will be written.
* The __session__ is the SageMaker session object that we defined above.

Then we use `fit()` on the estimator to train against the data that we uploaded above.

```
account = sess.boto_session.client("sts").get_caller_identity()["Account"]
region = sess.boto_session.region_name
image = "{}.dkr.ecr.{}.amazonaws.com/sagemaker-decision-trees:latest".format(account, region)

tree = sage.estimator.Estimator(
    image,
    role,
    1,
    "ml.c4.2xlarge",
    output_path="s3://{}/output".format(sess.default_bucket()),
    sagemaker_session=sess,
)

tree.fit(data_location)
```

## Hosting your model

You can use a trained model to get real-time predictions via an HTTP endpoint. The following steps walk you through the process.

### Deploy the model

Deploying the model to SageMaker hosting just requires a `deploy` call on the fitted model. This call takes an instance count, an instance type, and optionally serializer and deserializer functions. These are used when the resulting predictor is created on the endpoint.

```
from sagemaker.predictor import csv_serializer

predictor = tree.deploy(1, "ml.m4.xlarge", serializer=csv_serializer)
```

### Choose some data and use it for a prediction

In order to do some predictions, we'll extract some of the data we used for training and do predictions against it. This is, of course, bad statistical practice, but a good way to see how the mechanism works.

```
shape = pd.read_csv("data/iris.csv", header=None)
shape.sample(3)

# drop the label column in the training set
shape.drop(shape.columns[[0]], axis=1, inplace=True)
shape.sample(3)

import itertools

a = [50 * i for i in range(3)]
b = [40 + i for i in range(10)]
indices = [i + j for i, j in itertools.product(a, b)]

test_data = shape.iloc[indices[:-1]]
```

Prediction is as easy as calling predict with the predictor we got back from deploy and the data we want to do predictions with. The serializers take care of doing the data conversions for us.
```
print(predictor.predict(test_data.values).decode("utf-8"))
```

### Optional cleanup

When you're done with the endpoint, you'll want to clean it up.

```
sess.delete_endpoint(predictor.endpoint)
```

## Run Batch Transform Job

You can use a trained model to get inference on large data sets by using [Amazon SageMaker Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html). A batch transform job takes your input data S3 location and outputs the predictions to the specified S3 output folder. As with hosting, you can run inference on the training data to test batch transform.

### Create a Transform Job

We'll create a `Transformer` that defines how to use the container to get inference results on a data set. This includes the configuration we need to invoke SageMaker batch transform:

* The __instance count__, which is the number of machines to use to extract inferences
* The __instance type__, which is the type of machine to use to extract inferences
* The __output path__, which determines where the inference results will be written

```
transform_output_folder = "batch-transform-output"
output_path = "s3://{}/{}".format(sess.default_bucket(), transform_output_folder)

transformer = tree.transformer(
    instance_count=1,
    instance_type="ml.m4.xlarge",
    output_path=output_path,
    assemble_with="Line",
    accept="text/csv",
)
```

We use `transform()` on the transformer to get inference results against the data that we uploaded. You can use these options when invoking the transformer.
* The __data_location__, which is the location of the input data
* The __content_type__, which is the content type set when making the HTTP request to the container to get a prediction
* The __split_type__, which is the delimiter used for splitting the input data
* The __input_filter__, which indicates that the first column (ID) of the input will be dropped before making the HTTP request to the container

```
transformer.transform(
    data_location, content_type="text/csv", split_type="Line", input_filter="$[1:]"
)
transformer.wait()
```

For more information on the configuration options, see the [CreateTransformJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).

### View Output

Let's read the results of the above transform job from the S3 output files and print them.

```
s3_client = sess.boto_session.client("s3")
s3_client.download_file(
    sess.default_bucket(), "{}/iris.csv.out".format(transform_output_folder), "/tmp/iris.csv.out"
)
with open("/tmp/iris.csv.out") as f:
    results = f.readlines()
print("Transform results: \n{}".format("".join(results)))
```
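The effect of `input_filter="$[1:]"` on each CSV record can be mimicked with plain Python slicing; the function name below is our own, purely to illustrate what the service does to a record before sending it to the container:

```python
def apply_input_filter(row, start=1):
    """Plain-Python equivalent of the JSONPath slice $[start:] on a CSV record."""
    return row.split(",")[start:]

# The label/ID column is dropped; only the features reach /invocations.
record = "setosa,5.1,3.5,1.4,0.2"
filtered = apply_input_filter(record)
print(filtered)  # ['5.1', '3.5', '1.4', '0.2']
```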
---
```
import os
os.chdir('../src/')
print(os.getcwd())

from traffic_analysis.d05_evaluation.chunk_evaluator import ChunkEvaluator
from traffic_analysis.d00_utils.load_confs import load_parameters
from traffic_analysis.d06_visualisation.plot_frame_level_map import plot_map_over_time
from traffic_analysis.d06_visualisation.plot_video_level_diff import plot_video_stats_diff_distribution
from traffic_analysis.d06_visualisation.plot_video_level_performance import plot_video_level_performance

import numpy as np
import pandas as pd

params = load_parameters()
pd.set_option('display.max_columns', 500)
%matplotlib inline
```

Read in video level data

```
xml_paths = ["C:\\Users\\Caroline Wang\\OneDrive\\DSSG\\air_pollution_estimation\\annotations\\15_2019-06-29_13-01-03.094068_00001.01252.xml",
             "C:\\Users\\Caroline Wang\\OneDrive\\DSSG\\air_pollution_estimation\\annotations\\14_2019-06-29_13-01-19.744908_00001.05900.xml"]

video_level_df = pd.read_csv("../data/carolinetemp/video_level_df.csv",
                             dtype={"camera_id": str},
                             parse_dates=["video_upload_datetime"])
del video_level_df['Unnamed: 0']
```

Read in frame level data

```
frame_level_df = pd.read_csv("../data/carolinetemp/frame_level_df.csv",
                             dtype={"camera_id": str},
                             converters={"bboxes": lambda x: [float(coord) for coord in x.strip("[]").split(", ")]},
                             parse_dates=["video_upload_datetime"])
del frame_level_df['Unnamed: 0']
```

Run evaluators

```
chunk_evaluator = ChunkEvaluator(annotation_xml_paths=xml_paths,
                                 selected_labels=params["selected_labels"],
                                 video_level_df=video_level_df,
                                 frame_level_df=frame_level_df,
                                 video_level_column_order=params["video_level_column_order"])
video_level_performance, video_level_diff = chunk_evaluator.evaluate_video_level()
frame_level_map = chunk_evaluator.evaluate_frame_level()

plot_video_level_performance(video_level_performance,
                             metrics={'bias': "../data/plots/video_bias_kcf.pdf",
                                      'rmse': "../data/plots/video_rmse_kcf.pdf",
                                      'mae': "../data/plots/video_mae_kcf.pdf"})
```

Plot video level diffs

```
plot_video_stats_diff_distribution(video_level_diff,
                                   video_stat_types=["counts", "stops", "starts"]
                                   # save_path="../data/plots/diff_dist_csrt.pdf"
                                   )
```

## Frame Level Plotting

```
plot_map_over_time(frame_level_map,
                   save_path="../data/plots/map_over_time.pdf")
```
---
# Key Points for Review in this Section

- Hyperlinks in Markdown
- magic: `!`, `%%bash`, `%%python` and `%%file`
- `shift-tab` to see help of function
- `zip()`
- `dict.get(key, otherwise)`
- immutability and identity -- box and things inside
- `functions(*list)` and `function(**dict)`
- `enumerate(iterable, start)`
- lazy evaluation: `range()` and `(x for x in ...)`
- `try` and `except`

# Crash course in Jupyter and Python

- Introduction to Jupyter
- Using Markdown
- Magic functions
- REPL
- Saving and exporting Jupyter notebooks
- Python
- Data types
- Operators
- Collections
- Functions and methods
- Control flow
- Loops, comprehension
- Packages and namespace
- Coding style
- Understanding error messages
- Getting help

## Class Repository

Course material will be posted here. Please make any personal modifications to a **copy** of the notebook to avoid merge conflicts.

https://github.com/cliburn/sta-663-2019.git

## Introduction to Jupyter

- [Official Jupyter docs](https://jupyter.readthedocs.io/en/latest/)
- User interface and kernels
- Notebook, editor, terminal
- Literate programming
- Code and markdown cells
- Menu and toolbar
- Key bindings
- Polyglot programming

```
%load_ext rpy2.ipython

import warnings
warnings.simplefilter('ignore', FutureWarning)

df = %R iris
df.head()

%%R -i df -o res
library(tidyverse)
res <- df %>% group_by(Species) %>% summarize_all(mean)
res
```

### Using Markdown

- What is markdown?
- Headers
- Formatting text
- Syntax-highlighted code
- Lists
- Hyperlinks and images
- LaTeX

See `Help | Markdown`

#### Hyperlinks

[I'm an inline-style link](https://www.google.com)

[I'm an inline-style link with title](https://www.google.com "Google's Homepage")

[I'm a reference-style link][Arbitrary case-insensitive reference text]

[I'm a relative reference to a repository file](../blob/master/LICENSE)

[You can use numbers for reference-style link definitions][1]

Or leave it empty and use the [link text itself].

URLs and URLs in angle brackets will automatically get turned into links. http://www.example.com or <http://www.example.com> and sometimes example.com (but not on Github, for example).

Some text to show that the reference links can follow later.

[arbitrary case-insensitive reference text]: https://www.mozilla.org
[1]: http://slashdot.org
[link text itself]: http://www.reddit.com

#### LaTeX

Using dollar signs for LaTeX: $\alpha$

### Magic functions

- [List of magic functions](https://ipython.readthedocs.io/en/stable/interactive/magics.html)
- `%magic`
- Shell access
- Convenience functions
- Quick and dirty text files

```
%magic

! echo "hello world"

%%bash
echo "hello world"

%%python
print(1+2)
```

### REPL

- Read, Eval, Print, Loop
- Learn by experimentation

```
1 + 2
```

### Saving and exporting Jupyter notebooks

- The File menu item
- Save and Checkpoint
- Exporting
- Close and Halt
- Cleaning up with the Running tab

## Introduction to Python

- [Official Python docs](https://docs.python.org/3/)
- [Why Python?](https://insights.stackoverflow.com/trends?tags=python%2Cjavascript%2Cjava%2Cc%2B%2B%2Cr%2Cjulia-lang%2Cscala&utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python)
- General purpose language (web, databases, introductory programming classes)
- Language for scientific computation (physics, engineering, statistics, ML, AI)
- Human readable
- Interpreted
- Dynamic typing
- Strong typing
- Multi-paradigm
- Implementations (CPython, PyPy, Jython, IronPython)

### Data types

- boolean
- int, double, complex
- strings
- None

```
True, False

1, 2, 3

import numpy as np
np.pi, np.e

3 + 4j

'hello, world'

"hell's bells"

"""三轮车跑的快
上面坐个老太太
要五毛给一块
你说奇怪不奇怪"""

None

None is None
```

### Operators

- mathematical
- logical
- bitwise
- membership
- identity
- assignment and in-place operators
- operator precedence

#### Arithmetic

```
2 ** 3

11 / 3

11 // 3

11 % 3
```

#### Logical

```
True and False

True or False

not (True or False)
```

#### Relational

```
2 == 2, 2 == 3, 2 != 3, 2 < 3, 2 <= 3, 2 > 3, 2 >= 3
```

#### Bitwise

```
format(10, '04b')

format(7, '04b')

x = 10 & 7
x, format(x, '04b')

x = 10 | 7
x, format(x, '04b')

x = 10 ^ 7
x, format(x, '04b')
```

#### Membership

```
'hell' in 'hello'

3 in range(5), 7 in range(5)

'a' in dict(zip('abc', range(3)))
```

#### Identity

```
x = [2,3]
y = [2,3]
x == y, x is y

id(x), id(y)

x = 'hello'
y = 'hello'
x == y, x is y

id(x), id(y)
```

#### Assignment

```
x = 2
x = x + 2
x

x *= 2
x
```

### Collections

- Sequence containers - list, tuple
- Mapping containers - set, dict
- The
[`collections`](https://docs.python.org/2/library/collections.html) module

#### Lists

```
xs = [1,2,3]
xs[0], xs[-1]

xs[1] = 9
xs
```

#### Tuples

```
ys = (1,2,3)
ys[0], ys[-1]

try:
    ys[1] = 9
except TypeError as e:
    print(e)
```

#### Sets

```
zs = [1,1,2,2,2,3,3,3,3]
set(zs)
```

#### Dictionaries

```
{'a': 0, 'b': 1, 'c': 2}

dict(a=0, b=1, c=2)

dict(zip('abc', range(3)))

list(zip([1, 2, 3], [4, 5, 6, 7], [8, 9, 10]))

d = {'a':1, 'b':2}
print(d.get('a'))
print(d.get('c', 0))  # get() returns a default instead of raising KeyError
```

### Functions and methods

- Anatomy of a function
- Docstrings
- Class methods

```
list(range(10))

[item for item in dir() if not item.startswith('_')]

def f(a, b):
    """Do something with a and b.

    Assume that the + and * operators are defined for a and b.
    """
    return 2*a + 3*b

f(2, 3)

f(3, 2)

f(b=3, a=2)

f(*(2,3))

f(**dict(a=2, b=3))

f('hello', 'world')

f([1,2,3], ['a', 'b', 'c'])

f((1, 2), ('a', 'b'))

print("sum() # shift-tab to see instructions")
```

### Control flow

- if and the ternary operator
- Checking conditions - what evaluates as true/false?
- if-elif-else
- while
- break, continue
- pass

```
if 1 + 1 == 2:
    print("Phew!")

'vegan' if 1 + 1 == 2 else 'carnivore'

'vegan' if 1 + 1 == 3 else 'carnivore'

if 1+1 == 3:
    print("oops")
else:
    print("Phew!")

for grade in [94, 79, 81, 57]:
    if grade > 90:
        print('A')
    elif grade > 80:
        print('B')
    elif grade > 70:
        print('C')
    else:
        print('Are you in the right class?')

i = 10
while i > 0:
    print(i)
    i -= 1

for i in range(1, 10):
    if i % 2 == 0:
        continue
    print(i)

for i in range(1, 10):
    if i % 2 == 0:
        break
    print(i)

for i in range(1, 10):
    if i % 2 == 0:
        pass
    else:
        print(i)
```

### Loops and comprehensions

- for, range, enumerate
- lazy and eager evaluation
- list, set, dict comprehensions
- generator expression

```
for i in range(1,5):
    print(i**2, end=',')

for i, x in enumerate(range(1,5)):
    print(i, x**2)

for i, x in enumerate(range(1,5), start=10):
    print(i, x**2)

range(5)

list(range(5))
```

#### Comprehensions

```
[x**3 % 3 for x in range(10)]

{x**3 % 3 for x in range(10)}

{k: v for k, v in enumerate('abcde')}

(x**3 for x in range(10))

list(x**3 for x in range(10))
```

### Packages and namespace

- Modules (file)
- Package (hierarchical modules)
- Namespace and naming conflicts
- Using `import`
- [Batteries included](https://docs.python.org/3/library/index.html)

```
%%file foo.py

def foo(x):
    return f"And FOO you too, {x}"

import foo
foo.foo("Winnie the Pooh")

from foo import foo
foo("Winnie the Pooh")

import numpy as np
np.random.randint(0, 10, (5,5))
```

### Coding style

- [PEP 8 — the Style Guide for Python Code](https://pep8.org/)
- Many code editors can be used with linters to check if your code conforms to PEP 8 style guidelines.
- E.g.
see [jupyter-autopep8](https://github.com/kenkoooo/jupyter-autopep8)

### Understanding error messages

- [Built-in exceptions](https://docs.python.org/3/library/exceptions.html)

```
try:
    1 / 0
except ZeroDivisionError as e:
    print(e)
```

### Getting help

- `?foo`, `foo?`, `help(foo)`
- Use a search engine
- Use `StackOverflow`
- Ask your TA

```
help(help)
```
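The `collections` module linked under Collections above is listed but never demonstrated; a quick taste of two of its workhorses (the sample words are arbitrary):

```python
from collections import Counter, defaultdict

# Counter tallies hashable items for you
counts = Counter("abracadabra")
print(counts.most_common(1))  # [('a', 5)]

# defaultdict removes the need for key-existence checks when grouping
groups = defaultdict(list)
for word in ["apple", "avocado", "banana", "blueberry"]:
    groups[word[0]].append(word)
print(dict(groups))  # {'a': ['apple', 'avocado'], 'b': ['banana', 'blueberry']}
```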
---
<img style="float: left;" src="./images/cemsf.png" width="100"/><img style="float: right;" src="./images/icons.png" width="500"/>

# Global ECMWF Fire Forecasting

## Harmonized danger classes

According to EFFIS [documentation and user guidelines](https://effis.jrc.ec.europa.eu/about-effis/technical-background/fire-danger-forecast/):

- In most European countries, the core of the wildfire season starts on 1st of March and ends on 31st of October.
- The EFFIS network adopts the Canadian Forest Fire Weather Index (FWI) System as the method to assess the fire danger level in a harmonized way throughout Europe.

**European** Fire Danger Classes (FWI ranges, upper bound excluded):

- Very low = 0 - 5.2
- Low = 5.2 - 11.2
- Moderate = 11.2 - 21.3
- High = 21.3 - 38.0
- Very high = 38.0 - 50.0
- Extreme > 50.0

In ECMWF's experience, the above thresholds are particularly suited to assessing fire danger in southern Europe, e.g. in the Mediterranean Region. Some countries tend to calibrate these thresholds depending on local vegetation characteristics and fire regimes. This requires local knowledge and/or experimentation.

For instance, **Portugal** uses the following thresholds for local-level assessments of fire danger:

- Reduced risk = 8.4
- Moderate risk = 17.2
- High risk = 24.6
- Maximum risk = 38.3

Northern European countries might be more inclined to test **Canadian** threshold levels for the purpose of local-level assessments of fire danger:

- Very Low = 0 - 1
- Low = 2 - 4
- Moderate = 5 - 8
- High = 9 - 16
- Very High = 17 - 30
- Extreme > 30

As another example, in **Indonesia** threshold levels are (probably due to the high level of humidity):

- Very Low = 0 - 3
- Low = 3 - 5
- Moderate = 5 - 10
- High = 10 - 17
- Very High = 17 - 28
- Extreme > 28

## Classified forecasts

Raw FWI forecast values are expressed as a continuous rating in the range [0, +Inf[ (very rarely above 100).
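The European thresholds above (upper bound excluded) translate directly into a small classifier. The function name and shape below are our own; EFFIS only publishes the thresholds themselves:

```python
def effis_danger_class(fwi):
    """Map a raw FWI value to the harmonized European danger classes
    (upper bounds excluded, as in the EFFIS tables)."""
    thresholds = [
        (5.2, "Very low"),
        (11.2, "Low"),
        (21.3, "Moderate"),
        (38.0, "High"),
        (50.0, "Very high"),
    ]
    for upper, label in thresholds:
        if fwi < upper:
            return label
    return "Extreme"

for value in (3.0, 21.3, 49.9, 75.0):
    print(value, effis_danger_class(value))
```

Note that a boundary value such as 21.3 falls into the higher class ("High"), because each range excludes its upper bound.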
In order to aid decision makers, raw forecasts are routinely converted into danger classes, based on the thresholds mentioned above, before being displayed by the EFFIS/GWIS viewer.

In this tutorial we are going to look at the predictive capability of the fire danger forecasts. Let us use the forecast issued on 14th July to see whether dangerous fire weather could have been predicted in the area where the Attica fires started burning on 23rd July (lead time = 10 days).

```
# Import the necessary libraries and enable inline displaying of plots
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import xarray as xr
%matplotlib inline

# Open raw RT HRES forecast for Attica (Greece), issued on 14th July 2018 (10 days before the Attica fires)
ds = xr.open_dataset("./eodata/geff/201807_Greece/rt_hr/ECMWF_FWI_20180714_1200_hr_fwi_rt.nc")

# Plot the raw forecast, Day 10
ds.fwi[9].plot();

# Plot the re-classified forecast, Day 10
ds.fwi[9].plot(levels = [0.0, 5.2, 11.2, 21.3, 38.0, 50.0],
               colors = ["#008000", "#FFFF00", "#FFA500", "#FF0000", "#654321", "#000000"],
               label = ['Very low', 'Low', 'Moderate', 'High', 'Very high', 'Extreme']);

# Highlight only cells above Very High danger
ds_vh = xr.where(cond = ds < 38.0, x = 0, y = ds)
ds_vh.fwi[9].plot(levels = [0, 1]);
```

# Other danger indicators

Given the different climatic conditions in Europe, EFFIS also publishes two indicators that provide information on the local/temporal variability of the FWI compared to a historical series of approximately 30 years. These indicators are: (a) the **ranking**, which provides percentiles of occurrence of the values, and (b) the **anomaly**, computed as a standard deviation from the historical mean values. More advanced tutorials will look at anomaly and ranking.
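Although the advanced tutorials cover these indicators properly, the arithmetic behind both fits in a few lines. The ten-value "climatology" below is made up for illustration (a real record spans roughly 30 years per grid cell):

```python
import numpy as np

# Synthetic historical FWI record for one grid cell
climatology = np.array([4.1, 7.9, 12.3, 15.0, 18.2, 22.7, 9.4, 30.1, 11.8, 25.5])
todays_fwi = 20.0

# Ranking: percentile of occurrence of today's value within the record
ranking = (climatology < todays_fwi).mean() * 100

# Anomaly: distance from the historical mean, in standard deviations
anomaly = (todays_fwi - climatology.mean()) / climatology.std()

print(f"ranking: {ranking:.0f}th percentile, anomaly: {anomaly:+.2f} sd")
# ranking: 70th percentile, anomaly: +0.55 sd
```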
---
``` # Import libraries import os import sys import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt from scipy import stats # Here are my rc parameters for matplotlib mpl.rc('font', serif='Helvetica Neue') mpl.rcParams.update({'font.size': 12}) mpl.rcParams['figure.figsize'] = 3.2, 2.8 mpl.rcParams['figure.dpi'] = 200 mpl.rcParams['xtick.direction'] = 'in' mpl.rcParams['ytick.direction'] = 'in' mpl.rcParams['lines.linewidth'] = 0.5 mpl.rcParams['axes.linewidth'] = 1.5 # This link shows you how to greyscale a cmap # https://jakevdp.github.io/PythonDataScienceHandbook/04.07-customizing-colorbars.html def find(name): home = os.path.expanduser("~") for root, dirs, files in os.walk(home): if name in dirs: return os.path.join(root, name) # First let's find all of our data whingPath = find('whingdingdilly') ipyPath = whingPath + '/ipython' rootPath = ipyPath + '/epsilon_1_find_diameter' epsOneMono = rootPath + '/mono/diamTxt' epsOneParFrac = rootPath + '/varyParticleFraction/diamTxts' epsOnePeRat = rootPath + '/5050mix/eps1_mix_txts' epsHSMono = rootPath + '/HS_mono/diamTxt' # Populate my file containers fileContainer = [] pathContainer = [] os.chdir(rootPath) fileContainer.append(os.listdir(epsOneMono)) fileContainer.append(os.listdir(epsOneParFrac)) fileContainer.append(os.listdir(epsOnePeRat)) fileContainer.append(os.listdir(epsHSMono)) pathContainer.append(epsOneMono) pathContainer.append(epsOneParFrac) pathContainer.append(epsOnePeRat) pathContainer.append(epsHSMono) nSweeps = len(fileContainer) # Get rid of ugly files for i in xrange(nSweeps): if '.DS_Store' in fileContainer[i]: fileContainer[i].remove('.DS_Store') # Functions to sort my data with def getFromTxt(fname, first, last): """Takes a string, text before and after desired text, outs text between""" start = fname.index( first ) + len( first ) end = fname.index( last, start ) myTxt = fname[start:end] return float(myTxt) def multiSort(arr1, arr2, arr3): """Sort an array 
the slow (but certain) way, returns original indices in sorted order""" # Doing this for PeR, PeS, xS in this case cpy1 = np.copy(arr1) cpy2 = np.copy(arr2) cpy3 = np.copy(arr3) ind = np.arange(0, len(arr1)) for i in xrange(len(cpy1)): for j in xrange(len(cpy1)): # Sort by first variable if cpy1[i] > cpy1[j] and i < j: # Swap copy array values cpy1[i], cpy1[j] = cpy1[j], cpy1[i] cpy2[i], cpy2[j] = cpy2[j], cpy2[i] cpy3[i], cpy3[j] = cpy3[j], cpy3[i] # Swap the corresponding indices ind[i], ind[j] = ind[j], ind[i] # If first variable is equal, resort to second variable elif cpy1[i] == cpy1[j] and cpy2[i] > cpy2[j] and i < j: # Swap copy array values cpy1[i], cpy1[j] = cpy1[j], cpy1[i] cpy2[i], cpy2[j] = cpy2[j], cpy2[i] cpy3[i], cpy3[j] = cpy3[j], cpy3[i] # Swap the corresponding indices ind[i], ind[j] = ind[j], ind[i] elif cpy1[i] == cpy1[j] and cpy2[i] == cpy2[j] and cpy3[i] > cpy3[j] and i < j: # Swap copy array values cpy1[i], cpy1[j] = cpy1[j], cpy1[i] cpy2[i], cpy2[j] = cpy2[j], cpy2[i] cpy3[i], cpy3[j] = cpy3[j], cpy3[i] # Swap the corresponding indices ind[i], ind[j] = ind[j], ind[i] return ind def indSort(arr1, arr2): """Take sorted index array, use to sort array""" # arr1 is array to sort # arr2 is index array cpy = np.copy(arr1) for i in xrange(len(arr1)): arr1[i] = cpy[arr2[i]] # Load data pre-sorted for i in xrange(nSweeps): paList = [] pbList = [] xfList = [] for j in xrange(len(fileContainer[i])): paList.append(getFromTxt(fileContainer[i][j], "pa", "_pb")) pbList.append(getFromTxt(fileContainer[i][j], "pb", "_xa")) try: xfList.append(getFromTxt(fileContainer[i][j], "xa", "_ep")) except: xfList.append(getFromTxt(fileContainer[i][j], "xa", ".txt")) xfList[j] = 100.0 - xfList[j] # Now sort the array of txtFile names indArr = multiSort(paList, pbList, xfList) indSort(fileContainer[i], indArr) # for i in xrange(len(fileContainer[0])): # print(fileContainer[0][i]) # Read the data for each parameter study into a pandas dataframe all_sims = [] for i in 
xrange(nSweeps): all_sims.append([]) os.chdir(pathContainer[i]) for j in xrange(len(fileContainer[i])): df = pd.read_csv(fileContainer[i][j], sep='\s+', header=0) all_sims[i].append(df) # Go back to the source folder (save figures here) os.chdir(rootPath) # Get the name of each of the headers in the dataframes list(all_sims[0][0]) # Make sure all data is chronological def chkSort(array): """Make sure array is chronological""" for i in xrange(len(array)-2): if array[i] > array[i+1]: print("{} is not greater than {} for indices=({},{})").format(array[i+1], array[i], i, i+1) return False return True # Check to see if timesteps are in order for i in xrange(nSweeps): for j in xrange(len(fileContainer[i])): myBool = chkSort(all_sims[i][j]['Timestep']) if myBool is False: print("{} is not chronologically sorted!").format(fileContainer[i][j]) exit(1) else: print("{} sorted... ").format(fileContainer[i][j]) display(all_sims[0][0]) # Function that will sort wrt one variable def singleSort(arr): for i in xrange(len(arr)): for j in xrange(len(arr)): if arr[i] < arr[j] and i > j: arr[i], arr[j] = arr[j], arr[i] # Function to get conversion from timesteps to Brownian time def computeTauPerTstep(epsilon): # This is actually independent of runtime :) # sigma = 1.0 # threeEtaPiSigma = 1.0 # runFor = 200 # tauBrown = 1.0 # tauLJ = ((sigma**2) * threeEtaPiSigma) / epsilon # dt = 0.00001 * tauLJ # simLength = runFor * tauBrown # totTsteps = int(simLength / dt) # tstepPerTau = int(totTsteps / float(simLength)) kBT = 1.0 tstepPerTau = int(epsilon / (kBT * 0.00001)) return tstepPerTau def theoryDenom(xF, peS, peF): xF /= 100.0 xS = 1.0 - xF return 4.0 * ((xS * peS) + (xF * peF)) def theory(xF, peS, peF): kappa = 4.05 xF /= 100.0 xS = 1.0 - xF return ((3.0 * (np.pi**2) * kappa) / (4.0 * ((xS * peS) + (xF * peF)))) # Make an additional frame that gives total number of particles, and simulation parameters params = [] for i in 
xrange(len(fileContainer[i])): partAll = all_sims[i][j]['Gas_tot'][0] partA = all_sims[i][j]['Gas_A'][0] partB = all_sims[i][j]['Gas_B'][0] pa = getFromTxt(fileContainer[i][j], "pa", "_pb") pb = getFromTxt(fileContainer[i][j], "pb", "_xa") # File naming convention is different for these files try: xa = getFromTxt(fileContainer[i][j], "xa", "_ep") except: xa = getFromTxt(fileContainer[i][j], "xa", ".txt") # Is epsilon in the filename? try: eps = getFromTxt(fileContainer[i][j], "ep", ".txt") except: eps = 0 xf = 100.0 - xa paramList.append((partAll, partA, partB, pa, pb, xf, eps)) # Put the data for this parameter sweep into it's own dataframe params.append(pd.DataFrame(paramList, columns=['partAll', 'partA', 'partB', 'peA', 'peB', 'xF', 'eps']) ) pd.set_option('display.max_rows', 2) display(params[i]) # Now get time-based steady state values all_SS = [] all_stdDev = [] all_var = [] startInd = 50 for i in xrange(nSweeps): # Make list of steady state column headers headers = list(all_sims[i][0]) headers.remove('Timestep') SS = pd.DataFrame(columns=headers) stdDev = pd.DataFrame(columns=headers) var = pd.DataFrame(columns=headers) # Initialize dataframes for j in xrange(len(fileContainer[i])): SS.loc[j] = [0] * len(headers) stdDev.loc[j] = [0] * len(headers) var.loc[j] = [0] * len(headers) # Make dataframe of steady-state data for j in xrange(len(fileContainer[i])): # Loop through each column (aside from tstep column) for k in range(1, len(all_sims[i][j].iloc[1])): # Compute mean of data after steady-state time (25tb) in jth column of ith file avg = np.mean(all_sims[i][j].iloc[startInd:-1, k]) SS[headers[k-1]][j] = avg # Compute the standard deviation and variance in this data stdDevor = np.std(all_sims[i][j].iloc[startInd:-1, k]) stdDev[headers[k-1]][j] = stdDevor var[headers[k-1]][j] = stdDevor ** 2 # Normalize by number of particles for j in xrange(len(fileContainer[i])): if params[i]['partA'][j] != 0: SS['Gas_A'][j] /= params[i]['partA'][j] SS['Dense_A'][j] /= 
params[i]['partA'][j] # Now my standard error is a percentage stdDev['Gas_A'][j] /= params[i]['partA'][j] stdDev['Dense_A'][j] /= params[i]['partA'][j] var['Gas_A'][j] /= params[i]['partA'][j] var['Dense_A'][j] /= params[i]['partA'][j] if params[i]['partB'][j] != 0: SS['Gas_B'][j] /= params[i]['partB'][j] SS['Dense_B'][j] /= params[i]['partB'][j] stdDev['Gas_B'][j] /= params[i]['partB'][j] stdDev['Dense_B'][j] /= params[i]['partB'][j] var['Gas_B'][j] /= params[i]['partB'][j] var['Dense_B'][j] /= params[i]['partB'][j] SS['Gas_tot'][:] /= params[i]['partAll'][:] SS['Dense_tot'][:] /= params[i]['partAll'][:] SS['Lg_clust'][:] /= params[i]['partAll'][:] SS['MCS'][:] /= params[i]['partAll'][:] stdDev['Gas_tot'][:] /= params[i]['partAll'][:] stdDev['Dense_tot'][:] /= params[i]['partAll'][:] stdDev['Lg_clust'][:] /= params[i]['partAll'][:] stdDev['MCS'][:] /= params[i]['partAll'][:] var['Gas_tot'][:] /= params[i]['partAll'][:] var['Dense_tot'][:] /= params[i]['partAll'][:] var['Lg_clust'][:] /= params[i]['partAll'][:] var['MCS'][:] /= params[i]['partAll'][:] SS['Gas_A'][:] *= 100.0 SS['Gas_B'][:] *= 100.0 SS['Gas_tot'][:] *= 100.0 SS['Dense_A'][:] *= 100.0 SS['Dense_B'][:] *= 100.0 SS['Dense_tot'][:] *= 100.0 SS['Lg_clust'][:] *= 100.0 SS['MCS'][:] *= 100.0 stdDev['Gas_A'][:] *= 100.0 stdDev['Gas_B'][:] *= 100.0 stdDev['Gas_tot'][:] *= 100.0 stdDev['Dense_A'][:] *= 100.0 stdDev['Dense_B'][:] *= 100.0 stdDev['Dense_tot'][:] *= 100.0 stdDev['Lg_clust'][:] *= 100.0 stdDev['MCS'][:] *= 100.0 var['Gas_A'][:] *= 100.0 var['Gas_B'][:] *= 100.0 var['Gas_tot'][:] *= 100.0 var['Dense_A'][:] *= 100.0 var['Dense_B'][:] *= 100.0 var['Dense_tot'][:] *= 100.0 var['Lg_clust'][:] *= 100.0 # Put these values into the modular container all_SS.append(SS) all_stdDev.append(stdDev) all_var.append(var) # Delete these and loop through next parameter sweep del SS del stdDev del var pd.set_option('display.max_rows', 6) display(all_SS[0]) # Now make figure plots for i in xrange(nSweeps): for j in 
xrange(len(fileContainer[i])): if params[i]['xF'][j] == 0 and params[i]['eps'][j] == 0: plt.scatter(params[i]['peA'][j], all_SS[i]['sigALL'][j], c='k') for i in xrange(nSweeps): for j in xrange(len(fileContainer[i])): if params[i]['xF'][j] == 0 and params[i]['eps'][j] != 0: plt.scatter(params[i]['peA'][j], all_SS[i]['sigALL'][j], c='r') plt.xlabel(r'Activity $(Pe)$') plt.ylabel(r'Effective diameter $(\delta_{mode})$') plt.show() # I need to find a functional fit to the epsilon = 1, sigma vs Pe relationship from scipy.optimize import curve_fit # Grab only the data you want to fit xfit = [] yfit = [] for i in xrange(nSweeps): for j in xrange(len(fileContainer[i])): if params[i]['xF'][j] == 0 and params[i]['eps'][j] == 0 and params[i]['peA'][j] >= 100: xfit.append(params[i]['peA'][j]) yfit.append(all_SS[i]['sigALL'][j]) # This is the function I'll fit to def fitFunction(x, a, b, c): return a * np.exp(-b * x) + c popt, pcov = curve_fit(fitFunction, xfit, yfit, bounds=(0, [10., 0.05, 10.])) # Now plot the data and the fit xFunc = np.arange(50.0, 500.0, 0.001) yFunc = [] for i in xrange(len(xFunc)): yFunc.append(fitFunction(xFunc[i], *popt)) plt.scatter(xfit, yfit) plt.plot(xFunc, yFunc) plt.xlabel(r'Activity $(Pe)$', fontsize=14) plt.ylabel(r'Mode center-to-center distance $(\delta)$', fontsize=14) plt.title(r'Monodisperse (exponential fit)', fontsize=14) plt.text(0.3, 0.5, 'a={:.3f}, b={:.3f}, c={:.3f}'.format(*popt), fontsize=16, transform=plt.gca().transAxes) plt.text(0.425, 0.6, r'$f(Pe)=a \cdot e^{-(b \cdot Pe)} + c$', fontsize=16, transform=plt.gca().transAxes) plt.xlim(50, 500) plt.ylim(0.725, 1.0) plt.show() # We're going to try alternate fitting functions (in log-log space) # from numpy.polynomial.polynomial import polyfit # Function for basic line def plotLine(x, m, b): return (m*x) + b def getRSquared(xdat, ydat, m, b): # Compute the average y value avg = np.mean(ydat) real = 0.0 pred = 0.0 # Get the distance from real and predicted values to mean for i in 
xrange(len(ydat)): real += (ydat[i] - avg)**2 pred += (plotLine(xdat[i], m, b) - avg)**2 return pred / real # Get log of each dataset xlog = np.log(xfit) ylog = np.log(yfit) # Get range of each dataset rangeNorm = np.arange(50.0, 500.0, 0.01) rangeLog = np.arange(np.log(50.0), np.log(500.0), 0.0001) # Fit line to each potential set of axes m, b = np.polyfit(xfit, yfit, 1) res = getRSquared(xfit, yfit, m, b) plt.plot(rangeNorm, plotLine(rangeNorm, m, b)) plt.scatter(xfit, yfit) plt.xlabel(r'Activity $(Pe)$', fontsize=14) plt.ylabel(r'Separation $(\delta)$', fontsize=14) plt.title(r'x, y', fontsize=14) plt.text(0.0675, 0.825,'m={:.6f}, b={:.3f}, r^2={:.4f}'.format(m, b, res), fontsize=16, transform=plt.gca().transAxes) plt.show() m, b = np.polyfit(xlog, yfit, 1) res = getRSquared(xlog, yfit, m, b) plt.plot(rangeLog, plotLine(rangeLog, m, b)) plt.scatter(xlog, yfit) plt.xlabel(r'ln-Activity $(\ln(Pe))$', fontsize=14) plt.ylabel(r'Separation $(\delta)$', fontsize=14) plt.title(r'log x, y', fontsize=14) plt.text(0.0675, 0.825,'m={:.6f}, b={:.3f}, r^2={:.4f}'.format(m, b, res), fontsize=16, transform=plt.gca().transAxes) plt.show() m, b = np.polyfit(xfit, ylog, 1) res = getRSquared(xfit, ylog, m, b) plt.plot(rangeNorm, plotLine(rangeNorm, m, b)) plt.scatter(xfit, ylog) plt.xlabel(r'Activity $(Pe)$', fontsize=14) plt.ylabel(r'ln-Separation $(\ln(\delta))$', fontsize=14) plt.title(r'x, log y', fontsize=14) plt.text(0.0675, 0.825,'m={:.6f}, b={:.3f}, r^2={:.4f}'.format(m, b, res), fontsize=16, transform=plt.gca().transAxes) plt.show() m, b = np.polyfit(xlog, ylog, 1) outM = m outB = b res = getRSquared(xlog, ylog, m, b) plt.plot(rangeLog, plotLine(rangeLog, m, b)) plt.scatter(xlog, ylog) plt.xlabel(r'ln-Activity $(\ln(Pe))$', fontsize=14) plt.ylabel(r'ln-Separation $(\ln(\delta))$', fontsize=14) plt.title(r'log x, log y', fontsize=14) plt.text(0.0675, 0.825,'m={:.6f}, b={:.3f}, r^2={:.4f}'.format(m, b, res), fontsize=16, transform=plt.gca().transAxes) plt.show() # It is 
clear the log-log scale fits best, find and plot this function def powerLaw(x, m, b): return (x**m)*np.exp(b) xsDat = [] ysDat = [] for i in xrange(nSweeps): for j in xrange(len(fileContainer[i])): if params[i]['xF'][j] == 0 and params[i]['eps'][j] == 0: xsDat.append(params[i]['peA'][j]) ysDat.append(all_SS[i]['sigALL'][j]) xs = np.arange(0.0, 500.0, 0.0001) plt.plot(xs, powerLaw(xs, outM, outB)) plt.scatter(xsDat, ysDat) plt.xlim(0, 500) plt.ylim(0.725, 1.0) plt.xlabel(r'Activity $(Pe)$', fontsize=14) plt.ylabel(r'Separation $(\delta)$', fontsize=14) plt.text(0.425, 0.75, r'$\delta (Pe)=(Pe)^{m}e^{b}$', fontsize=16, transform=plt.gca().transAxes) plt.text(0.2, 0.65,'m={:.6f}, b={:.3f}, r^2={:.4f}'.format(outM, outB, res), fontsize=14, transform=plt.gca().transAxes) plt.show() # At what activity does this give a diameter of 1.0 crossPe = np.exp(-outB/outM) print('Variables for fit are: m={}, and b={}'.format(outM, outB)) print('Diameter is 1.0, when Pe={:.1f}'.format(crossPe)) # Make the inset individually, then add figures together def deltaToForce(delta): sig = 1.0 eps = 1.0 ljForce = 24.0 * eps * ( (2*((sig**12)/(delta**13))) - ((sig**6)/(delta**7)) ) return ljForce def deltaToEpsReq(delta): FLJ = deltaToForce(delta) epsReq = FLJ / 24.0 return epsReq def netEpsilon(xF, peS, peF): epsBrown = 10.0 sig = 1.0 xF /= 100.0 xS = 1.0 - xF netEps = (4.0*((peF*sig*xF) + (peS*sig*xS)) / 24.0) + epsBrown return netEps for j in xrange(len(fileContainer[1])): if params[1]['xF'][j] % 100.0 != 0: epsReqAll = deltaToEpsReq(all_SS[1]['sigALL'][j]) epsReqAA = deltaToEpsReq(all_SS[1]['sigAA'][j]) epsReqAB = deltaToEpsReq(all_SS[1]['sigAB'][j]) epsReqBB = deltaToEpsReq(all_SS[1]['sigBB'][j]) plt.scatter(params[1]['xF'][j] / 100.0, epsReqAA, c='r', s=150, zorder=1) # plt.scatter(params[1]['xF'][j] / 100.0, epsReqAB, c='b', s=150, zorder=1) plt.scatter(params[1]['xF'][j] / 100.0, epsReqBB, c='g', s=150, zorder=1) # Get a line for the data xline = np.arange(0.0, 100.0, 0.001) yline =
np.zeros_like(xline) for i in xrange(len(xline)): yline[i] = netEpsilon(xline[i], 0.0, 500.0) plt.plot(xline / 100.0, yline, linestyle='--', c='k', lw=1.5, zorder=0) plt.xlabel(r'$x_{f}$') plt.ylabel(r'$\epsilon_{req}$') plt.xlim(0.05, 0.95) plt.ylim(20, 80) plt.show() # Let's combine those last two plots and make the figure look nice from matplotlib.ticker import MultipleLocator, FormatStrFormatter mpl.rcParams['axes.linewidth'] = 1.6 fig, ax = plt.subplots(1, 1, figsize=(5, 5)) fontsz = 16 # Larger figure for i in xrange(nSweeps): for j in xrange(len(fileContainer[i])): if params[i]['xF'][j] == 0 and params[i]['eps'][j] == 0: ax.scatter(params[i]['peA'][j], all_SS[i]['sigALL'][j], c='k', s=100) for i in xrange(nSweeps): for j in xrange(len(fileContainer[i])): if params[i]['xF'][j] == 0 and params[i]['eps'][j] != 0: ax.scatter(params[i]['peA'][j], all_SS[i]['sigALL'][j], c='r', s=100) ax.set_xlim(-20, 520) ax.set_ylim(0.735, 1.02) ax.set_xlabel(r'Activity $(Pe)$', fontsize=fontsz) ax.set_ylabel(r'Effective diameter $(\delta_{mode})$', fontsize=fontsz) # Inset figure left, bottom, width, height = [0.55, 0.40, 0.325, 0.325] ax2 = fig.add_axes([left, bottom, width, height]) for j in xrange(len(fileContainer[1])): if params[1]['xF'][j] % 100.0 != 0: epsReqAll = deltaToEpsReq(all_SS[1]['sigALL'][j]) epsReqAA = deltaToEpsReq(all_SS[1]['sigAA'][j]) epsReqAB = deltaToEpsReq(all_SS[1]['sigAB'][j]) epsReqBB = deltaToEpsReq(all_SS[1]['sigBB'][j]) ax2.scatter(params[1]['xF'][j] / 100.0, epsReqAll, c='r', s=100, zorder=1) # Get a line for the data xline = np.arange(0.0, 100.0, 0.001) yline = np.zeros_like(xline) for i in xrange(len(xline)): yline[i] = netEpsilon(xline[i], 0.0, 500.0) ax2.plot(xline / 100.0, yline, linestyle='--', c='k', lw=1.5, zorder=0) ax2.set_xlabel(r'$x_{f}$', fontsize=fontsz) ax2.set_ylabel(r'$\epsilon_{req}$', fontsize=fontsz) ax2.set_xlim(0.05, 0.95) ax2.set_ylim(20, 80) # Set ticks for first plot ax.xaxis.set_major_locator(MultipleLocator(100)) 
ax.xaxis.set_minor_locator(MultipleLocator(50)) ax.yaxis.set_major_locator(MultipleLocator(0.05)) ax.yaxis.set_minor_locator(MultipleLocator(0.025)) # Set tick dims ax.tick_params(which='major', length=6, width = 1.5) ax.tick_params(which='minor', length=5, width = 1.5) # Set ticks for second plot ax2.xaxis.set_major_locator(MultipleLocator(0.2)) ax2.xaxis.set_minor_locator(MultipleLocator(0.1)) ax2.yaxis.set_major_locator(MultipleLocator(20)) ax2.yaxis.set_minor_locator(MultipleLocator(10)) # Set tick dims ax2.tick_params(which='major', length=6, width = 1.5) ax2.tick_params(which='minor', length=5, width = 1.5) # Save the figure as pdf plt.savefig('diameter_data.pdf', dpi=2000, bbox_inches = 'tight', pad_inches = 0) # Remake this figure with inset showing each type of interaction # Color wheel to get third color # https://www.sessions.edu/color-calculator/ overlayColr = ['#ff00ff', '#00ffff', '#00ff00'] # [Slow, Slow-fast, Fast] overlayColr = ['#d8b365', '#c6609d', '#5ab4ac'] from matplotlib.ticker import MultipleLocator, FormatStrFormatter from matplotlib.lines import Line2D mpl.rcParams['axes.linewidth'] = 1.5 fig, ax = plt.subplots(1, 1, figsize=(5, 5)) fontsz = 16 lbsz = 12 # Larger figure ######################################### for i in xrange(nSweeps): for j in xrange(len(fileContainer[i])): if params[i]['xF'][j] == 0 and params[i]['eps'][j] == 0: ax.scatter(params[i]['peA'][j], all_SS[i]['sigALL'][j], c='k', s=100, marker='d') # for i in xrange(nSweeps): # for j in xrange(len(fileContainer[i])): # if params[i]['xF'][j] == 0 and params[i]['eps'][j] != 0: # ax.scatter(params[i]['peA'][j], all_SS[i]['sigALL'][j], # c=overlayColr[2], s=100, marker='d') for i in xrange(nSweeps): for j in xrange(len(fileContainer[i])): if params[i]['xF'][j] == 0 and params[i]['eps'][j] != 0: ax.scatter(params[i]['peA'][j], all_SS[i]['sigALL'][j], c='#c60000', s=100, marker='d') ax.set_xlim(-20, 520) ax.set_ylim(0.735, 1.02) ax.set_xlabel(r'Activity $(Pe)$', fontsize=fontsz) 
ax.set_ylabel(r'Effective diameter $(\sigma_{eff})$', fontsize=fontsz) # Inset figure ######################################### left, bottom, width, height = [0.55, 0.40, 0.325, 0.325] ax2 = fig.add_axes([left, bottom, width, height]) for j in xrange(len(fileContainer[1])): if params[1]['xF'][j] % 100.0 != 0: epsReqAll = deltaToEpsReq(all_SS[1]['sigALL'][j]) epsReqAA = deltaToEpsReq(all_SS[1]['sigAA'][j]) epsReqAB = deltaToEpsReq(all_SS[1]['sigAB'][j]) epsReqBB = deltaToEpsReq(all_SS[1]['sigBB'][j]) ax2.scatter(params[1]['xF'][j] / 100.0, epsReqAA, facecolor=(1, 1, 0, 0.0), s=100, zorder=1, edgecolors=overlayColr[0], lw=2) ax2.scatter(params[1]['xF'][j] / 100.0, epsReqAB, facecolor=(1, 1, 0, 0.0), s=100, zorder=1, edgecolors=overlayColr[1], lw=2) ax2.scatter(params[1]['xF'][j] / 100.0, epsReqBB, facecolor=(1, 1, 0, 0.0), s=100, zorder=1, edgecolors=overlayColr[2], lw=2) # Get a line for the data xline = np.arange(0.0, 100.0, 0.001) yline = np.zeros_like(xline) for i in xrange(len(xline)): yline[i] = netEpsilon(xline[i], 0.0, 500.0) ax2.plot(xline / 100.0, yline, linestyle='--', c='k', lw=1.5, zorder=0) ax2.set_xlabel(r'$x_{f}$', fontsize=fontsz) ax2.set_ylabel(r'$\epsilon_{req}$', fontsize=fontsz) ax2.set_xlim(0.05, 0.95) ax2.set_ylim(20, 80) # Set ticks for first plot ax.xaxis.set_major_locator(MultipleLocator(100)) ax.xaxis.set_minor_locator(MultipleLocator(50)) ax.yaxis.set_major_locator(MultipleLocator(0.05)) ax.yaxis.set_minor_locator(MultipleLocator(0.025)) # Set tick dims ax.tick_params(which='major', length=6, width = 1.5, labelsize=lbsz) ax.tick_params(which='minor', length=5, width = 1.5, labelsize=lbsz) # Set ticks for second plot ax2.xaxis.set_major_locator(MultipleLocator(0.2)) ax2.xaxis.set_minor_locator(MultipleLocator(0.1)) ax2.yaxis.set_major_locator(MultipleLocator(20)) ax2.yaxis.set_minor_locator(MultipleLocator(10)) # Set tick dims ax2.tick_params(which='major', length=6, width = 1.5, labelsize=lbsz) ax2.tick_params(which='minor', length=5, 
width = 1.5, labelsize=lbsz) # ax.set_facecolor('#d3d3d3') # ax2.set_facecolor('#d3d3d3') # Make a shape legend leg = [Line2D([0], [0], marker='o', color='k', linestyle='', label='Mixture', markerfacecolor='w', markersize=10, markeredgewidth=2), Line2D([0], [0], marker='d', color='k', linestyle='', label='Monodisperse', markerfacecolor='k', markersize=10)] ax.legend(handles=leg, loc = 2, bbox_to_anchor=(-0.03, 0.25), bbox_transform=ax.transAxes, frameon=False, handletextpad=-0.1, fontsize=16) plt.text(-0.17, 0.9625, '(c)', transform=ax.transAxes, fontsize=fontsz) # plt.text(0.25, 0.83, r'$\epsilon=HS$', fontsize=fontsz, transform=ax.transAxes, color='#5ab4ac') plt.text(0.25, 0.83, r'$\epsilon=HS$', fontsize=fontsz, transform=ax.transAxes, color='#c60000') plt.text(0.1, 0.65, r'$\epsilon=k_{B}T$', fontsize=fontsz, transform=ax.transAxes) # Save the figure as pdf plt.savefig('diameter_data_types.pdf', dpi=2000, bbox_inches = 'tight', pad_inches = 0.01) # This means that the best way to do this is with a weighted average # epsilon_ALL = (4*PeF*sigma*xF + 4*PeS*sigma*xS) / 24.0 # I just need one of the above plots ```
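The power-law fit in this notebook comes from a straight-line fit in log-log space: δ(Pe) = Pe^m · e^b, with the δ = 1 crossing at Pe = e^(−b/m). A minimal self-contained check of that recovery, using made-up values of m and b rather than the fitted ones from the simulation data:

```python
import numpy as np

def power_law(x, m, b):
    # Same functional form as powerLaw above: delta(Pe) = Pe**m * exp(b)
    return (x ** m) * np.exp(b)

# Synthetic "separation vs activity" data from a known power law
true_m, true_b = -0.05, 0.02          # hypothetical exponent and prefactor
pe = np.arange(100.0, 500.0, 10.0)
delta = power_law(pe, true_m, true_b)

# A line fit in log-log space recovers the exponent (slope) and prefactor (intercept)
m_fit, b_fit = np.polyfit(np.log(pe), np.log(delta), 1)

# Activity at which the effective diameter crosses 1.0: Pe**m * exp(b) = 1
cross_pe = np.exp(-b_fit / m_fit)
print(m_fit, b_fit, cross_pe)
```

Because the synthetic data is generated exactly from the model, the fitted slope and intercept match the true values to floating-point precision; on real data the residuals give the r² reported in the plots above.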
``` #hide from fastai.vision.all import * from utils import * matplotlib.rc('image', cmap='Greys') ``` # Under the Hood: Training a Digit Classifier ## Pixels: The Foundations of Computer Vision ## Sidebar: Tenacity and Deep Learning ## End sidebar ``` path = untar_data(URLs.MNIST_SAMPLE) #hide Path.BASE_PATH = path path.ls() (path/'train').ls() threes = (path/'train'/'3').ls().sorted() sevens = (path/'train'/'7').ls().sorted() threes im3_path = threes[1] im3 = Image.open(im3_path) im3 array(im3)[4:10,4:10] tensor(im3)[4:10,4:10] im3_t = tensor(im3) df = pd.DataFrame(im3_t[4:15,4:22]) df.style.set_properties(**{'font-size':'6pt'}).background_gradient('Greys') ``` ## First Try: Pixel Similarity ``` seven_tensors = [tensor(Image.open(o)) for o in sevens] three_tensors = [tensor(Image.open(o)) for o in threes] len(three_tensors),len(seven_tensors) show_image(three_tensors[1]); stacked_sevens = torch.stack(seven_tensors).float()/255 stacked_threes = torch.stack(three_tensors).float()/255 stacked_threes.shape len(stacked_threes.shape) stacked_threes.ndim mean3 = stacked_threes.mean(0) show_image(mean3); mean7 = stacked_sevens.mean(0) show_image(mean7); a_3 = stacked_threes[1] show_image(a_3); dist_3_abs = (a_3 - mean3).abs().mean() dist_3_sqr = ((a_3 - mean3)**2).mean().sqrt() dist_3_abs,dist_3_sqr dist_7_abs = (a_3 - mean7).abs().mean() dist_7_sqr = ((a_3 - mean7)**2).mean().sqrt() dist_7_abs,dist_7_sqr F.l1_loss(a_3.float(),mean7), F.mse_loss(a_3,mean7).sqrt() ``` ### NumPy Arrays and PyTorch Tensors ``` data = [[1,2,3],[4,5,6]] arr = array (data) tns = tensor(data) arr # numpy tns # pytorch tns[1] tns[:,1] tns[1,1:3] tns+1 tns.type() tns*1.5 ``` ## Computing Metrics Using Broadcasting ``` valid_3_tens = torch.stack([tensor(Image.open(o)) for o in (path/'valid'/'3').ls()]) valid_3_tens = valid_3_tens.float()/255 valid_7_tens = torch.stack([tensor(Image.open(o)) for o in (path/'valid'/'7').ls()]) valid_7_tens = valid_7_tens.float()/255 
valid_3_tens.shape,valid_7_tens.shape def mnist_distance(a,b): return (a-b).abs().mean((-1,-2)) mnist_distance(a_3, mean3) valid_3_dist = mnist_distance(valid_3_tens, mean3) valid_3_dist, valid_3_dist.shape tensor([1,2,3]) + tensor([1,1,1]) (valid_3_tens-mean3).shape def is_3(x): return mnist_distance(x,mean3) < mnist_distance(x,mean7) is_3(a_3), is_3(a_3).float() is_3(valid_3_tens) accuracy_3s = is_3(valid_3_tens).float() .mean() accuracy_7s = (1 - is_3(valid_7_tens).float()).mean() accuracy_3s,accuracy_7s,(accuracy_3s+accuracy_7s)/2 ``` ## Stochastic Gradient Descent (SGD) ``` gv(''' init->predict->loss->gradient->step->stop step->predict[label=repeat] ''') def f(x): return x**2 plot_function(f, 'x', 'x**2') plot_function(f, 'x', 'x**2') plt.scatter(-1.5, f(-1.5), color='red'); ``` ### Calculating Gradients ``` xt = tensor(3.).requires_grad_() yt = f(xt) yt yt.backward() xt.grad xt = tensor([3.,4.,10.]).requires_grad_() xt def f(x): return (x**2).sum() yt = f(xt) yt yt.backward() xt.grad ``` ### Stepping With a Learning Rate ### An End-to-End SGD Example ``` time = torch.arange(0,20).float(); time speed = torch.randn(20)*3 + 0.75*(time-9.5)**2 + 1 plt.scatter(time,speed); def f(t, params): a,b,c = params return a*(t**2) + (b*t) + c def mse(preds, targets): return ((preds-targets)**2).mean() ``` #### Step 1: Initialize the parameters ``` params = torch.randn(3).requires_grad_() #hide orig_params = params.clone() ``` #### Step 2: Calculate the predictions ``` preds = f(time, params) def show_preds(preds, ax=None): if ax is None: ax=plt.subplots()[1] ax.scatter(time, speed) ax.scatter(time, to_np(preds), color='red') ax.set_ylim(-300,100) show_preds(preds) ``` #### Step 3: Calculate the loss ``` loss = mse(preds, speed) loss ``` #### Step 4: Calculate the gradients ``` loss.backward() params.grad params.grad * 1e-5 params ``` #### Step 5: Step the weights. 
``` lr = 1e-5 params.data -= lr * params.grad.data params.grad = None preds = f(time,params) mse(preds, speed) show_preds(preds) def apply_step(params, prn=True): preds = f(time, params) loss = mse(preds, speed) loss.backward() params.data -= lr * params.grad.data params.grad = None if prn: print(loss.item()) return preds ``` #### Step 6: Repeat the process ``` for i in range(10): apply_step(params) #hide params = orig_params.detach().requires_grad_() _,axs = plt.subplots(1,4,figsize=(12,3)) for ax in axs: show_preds(apply_step(params, False), ax) plt.tight_layout() ``` #### Step 7: stop ### Summarizing Gradient Descent ``` gv(''' init->predict->loss->gradient->step->stop step->predict[label=repeat] ''') ``` ## The MNIST Loss Function ``` train_x = torch.cat([stacked_threes, stacked_sevens]).view(-1, 28*28) train_y = tensor([1]*len(threes) + [0]*len(sevens)).unsqueeze(1) train_x.shape,train_y.shape dset = list(zip(train_x,train_y)) x,y = dset[0] x.shape,y valid_x = torch.cat([valid_3_tens, valid_7_tens]).view(-1, 28*28) valid_y = tensor([1]*len(valid_3_tens) + [0]*len(valid_7_tens)).unsqueeze(1) valid_dset = list(zip(valid_x,valid_y)) def init_params(size, std=1.0): return (torch.randn(size)*std).requires_grad_() weights = init_params((28*28,1)) bias = init_params(1) (train_x[0]*weights.T).sum() + bias def linear1(xb): return xb@weights + bias preds = linear1(train_x) preds corrects = (preds>0.0).float() == train_y corrects corrects.float().mean().item() weights[0] *= 1.0001 preds = linear1(train_x) ((preds>0.0).float() == train_y).float().mean().item() trgts = tensor([1,0,1]) prds = tensor([0.9, 0.4, 0.2]) def mnist_loss(predictions, targets): return torch.where(targets==1, 1-predictions, predictions).mean() torch.where(trgts==1, 1-prds, prds) mnist_loss(prds,trgts) mnist_loss(tensor([0.9, 0.4, 0.8]),trgts) ``` ### Sigmoid ``` def sigmoid(x): return 1/(1+torch.exp(-x)) plot_function(torch.sigmoid, title='Sigmoid', min=-4, max=4) def mnist_loss(predictions, 
targets): predictions = predictions.sigmoid() return torch.where(targets==1, 1-predictions, predictions).mean() ``` ### SGD and Mini-Batches ``` coll = range(15) dl = DataLoader(coll, batch_size=5, shuffle=True) list(dl) ds = L(enumerate(string.ascii_lowercase)) ds dl = DataLoader(ds, batch_size=6, shuffle=True) list(dl) ``` ## Putting It All Together ``` weights = init_params((28*28,1)) bias = init_params(1) dl = DataLoader(dset, batch_size=256) xb,yb = first(dl) xb.shape,yb.shape valid_dl = DataLoader(valid_dset, batch_size=256) batch = train_x[:4] batch.shape preds = linear1(batch) preds loss = mnist_loss(preds, train_y[:4]) loss loss.backward() weights.grad.shape,weights.grad.mean(),bias.grad def calc_grad(xb, yb, model): preds = model(xb) loss = mnist_loss(preds, yb) loss.backward() calc_grad(batch, train_y[:4], linear1) weights.grad.mean(),bias.grad calc_grad(batch, train_y[:4], linear1) weights.grad.mean(),bias.grad weights.grad.zero_() bias.grad.zero_(); def train_epoch(model, lr, params): for xb,yb in dl: calc_grad(xb, yb, model) for p in params: p.data -= p.grad*lr p.grad.zero_() (preds>0.0).float() == train_y[:4] def batch_accuracy(xb, yb): preds = xb.sigmoid() correct = (preds>0.5) == yb return correct.float().mean() batch_accuracy(linear1(batch), train_y[:4]) def validate_epoch(model): accs = [batch_accuracy(model(xb), yb) for xb,yb in valid_dl] return round(torch.stack(accs).mean().item(), 4) validate_epoch(linear1) lr = 1. 
params = weights,bias train_epoch(linear1, lr, params) validate_epoch(linear1) for i in range(20): train_epoch(linear1, lr, params) print(validate_epoch(linear1), end=' ') ``` ### Creating an Optimizer ``` linear_model = nn.Linear(28*28,1) w,b = linear_model.parameters() w.shape,b.shape class BasicOptim: def __init__(self,params,lr): self.params,self.lr = list(params),lr def step(self, *args, **kwargs): for p in self.params: p.data -= p.grad.data * self.lr def zero_grad(self, *args, **kwargs): for p in self.params: p.grad = None opt = BasicOptim(linear_model.parameters(), lr) def train_epoch(model): for xb,yb in dl: calc_grad(xb, yb, model) opt.step() opt.zero_grad() validate_epoch(linear_model) def train_model(model, epochs): for i in range(epochs): train_epoch(model) print(validate_epoch(model), end=' ') train_model(linear_model, 20) linear_model = nn.Linear(28*28,1) opt = SGD(linear_model.parameters(), lr) train_model(linear_model, 20) dls = DataLoaders(dl, valid_dl) learn = Learner(dls, nn.Linear(28*28,1), opt_func=SGD, loss_func=mnist_loss, metrics=batch_accuracy) learn.fit(10, lr=lr) ``` ## Adding a Nonlinearity ``` def simple_net(xb): res = xb@w1 + b1 res = res.max(tensor(0.0)) res = res@w2 + b2 return res w1 = init_params((28*28,30)) b1 = init_params(30) w2 = init_params((30,1)) b2 = init_params(1) plot_function(F.relu) simple_net = nn.Sequential( nn.Linear(28*28,30), nn.ReLU(), nn.Linear(30,1) ) learn = Learner(dls, simple_net, opt_func=SGD, loss_func=mnist_loss, metrics=batch_accuracy) learn.fit(40, 0.1) plt.plot(L(learn.recorder.values).itemgot(2)); learn.recorder.values[-1][2] ``` ### Going Deeper ``` dls = ImageDataLoaders.from_folder(path) learn = cnn_learner(dls, resnet18, pretrained=False, loss_func=F.cross_entropy, metrics=accuracy) learn.fit_one_cycle(1, 0.1) ``` ## Jargon Recap ## Questionnaire 1. How is a grayscale image represented on a computer? How about a color image? 1. How are the files and folders in the `MNIST_SAMPLE` dataset structured? 
Why? 1. Explain how the "pixel similarity" approach to classifying digits works. 1. What is a list comprehension? Create one now that selects odd numbers from a list and doubles them. 1. What is a "rank-3 tensor"? 1. What is the difference between tensor rank and shape? How do you get the rank from the shape? 1. What are RMSE and L1 norm? 1. How can you apply a calculation on thousands of numbers at once, many thousands of times faster than a Python loop? 1. Create a 3×3 tensor or array containing the numbers from 1 to 9. Double it. Select the bottom-right four numbers. 1. What is broadcasting? 1. Are metrics generally calculated using the training set, or the validation set? Why? 1. What is SGD? 1. Why does SGD use mini-batches? 1. What are the seven steps in SGD for machine learning? 1. How do we initialize the weights in a model? 1. What is "loss"? 1. Why can't we always use a high learning rate? 1. What is a "gradient"? 1. Do you need to know how to calculate gradients yourself? 1. Why can't we use accuracy as a loss function? 1. Draw the sigmoid function. What is special about its shape? 1. What is the difference between a loss function and a metric? 1. What is the function to calculate new weights using a learning rate? 1. What does the `DataLoader` class do? 1. Write pseudocode showing the basic steps taken in each epoch for SGD. 1. Create a function that, if passed two arguments `[1,2,3,4]` and `'abcd'`, returns `[(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]`. What is special about that output data structure? 1. What does `view` do in PyTorch? 1. What are the "bias" parameters in a neural network? Why do we need them? 1. What does the `@` operator do in Python? 1. What does the `backward` method do? 1. Why do we have to zero the gradients? 1. What information do we have to pass to `Learner`? 1. Show Python or pseudocode for the basic steps of a training loop. 1. What is "ReLU"? Draw a plot of it for values from `-2` to `+2`. 1. What is an "activation function"? 
1. What's the difference between `F.relu` and `nn.ReLU`? 1. The universal approximation theorem shows that any function can be approximated as closely as needed using just one nonlinearity. So why do we normally use more? ### Further Research 1. Create your own implementation of `Learner` from scratch, based on the training loop shown in this chapter. 1. Complete all the steps in this chapter using the full MNIST datasets (that is, for all digits, not just 3s and 7s). This is a significant project and will take you quite a bit of time to complete! You'll need to do some of your own research to figure out how to overcome some obstacles you'll meet on the way.
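The broadcasting behavior the questionnaire asks about — and that `mnist_distance` relies on when it reduces over the last two axes — can be sketched without fastai or PyTorch, since NumPy follows the same rules (the 28×28 shapes match MNIST, but the pixel values here are stand-ins):

```python
import numpy as np

def mnist_distance(a, b):
    # Mean absolute difference over the last two (pixel) axes.
    # Broadcasting lets `a` be one image (28,28) or a batch (N,28,28).
    return np.abs(a - b).mean(axis=(-1, -2))

mean_img = np.zeros((28, 28))   # stand-in for the "ideal digit" mean image
one_img = np.ones((28, 28))     # a single image -> scalar distance
batch = np.ones((5, 28, 28))    # a batch -> one distance per image

print(mnist_distance(one_img, mean_img))       # 1.0
print(mnist_distance(batch, mean_img).shape)   # (5,)
```

The key point is that the same function handles both cases: the (28,28) mean image is virtually expanded to (5,28,28) without copying, which is what makes the validation-set accuracy computation a single vectorized call.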
``` import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn import metrics import matplotlib.pyplot as plt import seaborn as sns data = pd.read_pickle('../../data/processed/all_samples.pickle') data['datetime'] = pd.to_datetime(data.date) data['day'] = data.datetime.dt.weekday_name data = pd.get_dummies(data, prefix='day', columns=['day']) features = ['hour', 'daylight_yn', 'holiday_yn', 'rush_hour_yn', 'temp', 'wind_speed', 'precipitation', 'road_length', 'class_freeway', 'class_local', 'class_major', 'class_other', 'class_unimproved', 'day_Monday', 'day_Tuesday', 'day_Wednesday', 'day_Thursday', 'day_Friday', 'day_Saturday', 'day_Sunday'] labels = 'accident_yn' X = data[features] y = data[labels] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42) forest = RandomForestClassifier(n_estimators=100) forest.fit(X_train, y_train) y_pred = forest.predict(X_test) print('Random Forest (n=100)') print('Accuracy:', metrics.accuracy_score(y_test, y_pred)) print('Precision:', metrics.precision_score(y_test, y_pred)) print('Recall:', metrics.recall_score(y_test, y_pred)) feature_importance = pd.Series(forest.feature_importances_, index=X.columns).sort_values(ascending=False) print(feature_importance) forest_1k = RandomForestClassifier(n_estimators=1000) forest_1k.fit(X_train, y_train) y_pred_1k = forest_1k.predict(X_test) print('Random Forest (n=1000)') print('Accuracy:', metrics.accuracy_score(y_test, y_pred_1k)) print('Precision:', metrics.precision_score(y_test, y_pred_1k)) print('Recall:', metrics.recall_score(y_test, y_pred_1k)) feature_importance = pd.Series(forest_1k.feature_importances_, index=X.columns).sort_values(ascending=False) print(feature_importance) ``` **n=1,000 results** *Metrics* | Metric | Value | | --------- | ------------------ | | Accuracy | 0.8289608836950848 | | Precision | 0.680050627981696 | | Recall |
0.5907007425198315 | *Feature Importance* | Feature | Relative Importance | | ---------------- | ------------------- | | road_length | 0.421045 | | temp | 0.155326 | | wind_speed | 0.099657 | | class_major | 0.090212 | | hour | 0.081451 | | class_local | 0.069730 | | precipitation | 0.013593 | | daylight_yn | 0.012632 | | class_freeway | 0.010737 | | class_unimproved | 0.008278 | | holiday_yn | 0.006628 | | rush_hour_yn | 0.005235 | | day_Monday | 0.003692 | | day_Thursday | 0.003615 | | day_Sunday | 0.003610 | | day_Tuesday | 0.003590 | | day_Wednesday | 0.003531 | | day_Friday | 0.003319 | | day_Saturday | 0.003096 | | class_other | 0.001022 | ``` important_features = ['hour', 'daylight_yn', 'temp', 'wind_speed', 'precipitation', 'road_length', 'class_freeway', 'class_local', 'class_major'] X_important = data[important_features] X_train_imp, X_test_imp, y_train_imp, y_test_imp = train_test_split(X_important, y, test_size=0.4, random_state=42) forest_imp = RandomForestClassifier(n_estimators=100) forest_imp.fit(X_train_imp, y_train_imp) y_pred_imp = forest_imp.predict(X_test_imp) print('Random Forest (n=100)') print('Accuracy:', metrics.accuracy_score(y_test_imp, y_pred_imp)) print('Precision:', metrics.precision_score(y_test_imp, y_pred_imp)) print('Recall:', metrics.recall_score(y_test_imp, y_pred_imp)) feature_importance_imp = pd.Series(forest_imp.feature_importances_, index=X_important.columns).sort_values(ascending=False) print(feature_importance_imp) most_important_features = ['hour', 'temp', 'wind_speed', 'road_length', 'class_local', 'class_major'] X_most_important = data[most_important_features] X_train_most, X_test_most, y_train_most, y_test_most = train_test_split(X_most_important, y, test_size=0.4, random_state=42) forest_most = RandomForestClassifier(n_estimators=100) forest_most.fit(X_train_most, y_train_most) y_pred_most = forest_most.predict(X_test_most) print('Random Forest (n=100)') print('Accuracy:', metrics.accuracy_score(y_test_most, 
y_pred_most)) print('Precision:', metrics.precision_score(y_test_most, y_pred_most)) print('Recall:', metrics.recall_score(y_test_most, y_pred_most)) feature_importance_most = pd.Series(forest_most.feature_importances_, index=X_most_important.columns).sort_values(ascending=False) print(feature_importance_most) ```
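Instead of copying feature names from the printed importance table into `important_features` by hand, scikit-learn's `SelectFromModel` can threshold on `feature_importances_` directly. A sketch on synthetic data — the real accident feature matrix is assumed to behave similarly:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Toy stand-in for the accident feature matrix
X, y = make_classification(n_samples=200, n_features=8, n_informative=3,
                           random_state=42)

forest = RandomForestClassifier(n_estimators=50, random_state=42)
forest.fit(X, y)

# Keep only features whose importance exceeds the mean importance
selector = SelectFromModel(forest, threshold='mean', prefit=True)
X_reduced = selector.transform(X)
print(X_reduced.shape[1], 'of', X.shape[1], 'features kept')
```

`threshold` also accepts strings like `'median'` or scaled forms like `'0.5*mean'`, which would replace the two hand-built `important_features` / `most_important_features` lists with a single tunable cutoff.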
# First BigQuery ML models for Taxifare Prediction In this notebook, we will use BigQuery ML to build our first models for taxifare prediction. BigQuery ML provides a fast way to build ML models on large structured and semi-structured datasets. ## Learning Objectives 1. Choose the correct BigQuery ML model type and specify options 2. Evaluate the performance of your ML model 3. Improve model performance through data quality cleanup 4. Create a Deep Neural Network (DNN) using SQL Each learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/first_model.ipynb) -- try to complete that notebook first before reviewing this solution notebook. We'll start by creating a dataset to hold all the models we create in BigQuery. ### Import libraries ``` !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst !pip install --user google-cloud-bigquery==1.25.0 ``` **Restart** the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel). ``` import os ``` ### Set environment variables ``` %%bash export PROJECT=$(gcloud config list project --format "value(core.project)") echo "Your current GCP Project Name is: "$PROJECT PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 # Do not change these os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID os.environ["REGION"] = REGION if PROJECT == "your-gcp-project-here": print("Don't forget to update your PROJECT name! Currently:", PROJECT) ``` ## Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __serverlessml__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
``` %%bash ## Create a BigQuery dataset for serverlessml if it doesn't exist datasetexists=$(bq ls -d | grep -w serverlessml) if [ -n "$datasetexists" ]; then echo -e "BigQuery dataset already exists, let's not recreate it." else echo "Creating BigQuery dataset titled: serverlessml" bq --location=US mk --dataset \ --description 'Taxi Fare' \ $PROJECT:serverlessml echo "\nHere are your current datasets:" bq ls fi ## Create GCS bucket if it doesn't exist already... exists=$(gsutil ls -d | grep -w gs://${BUCKET}/) if [ -n "$exists" ]; then echo -e "Bucket exists, let's not recreate it." else echo "Creating a new GCS bucket." gsutil mb -l ${REGION} gs://${BUCKET} echo "\nHere are your current buckets:" gsutil ls fi ``` ## Model 1: Raw data Let's build a model using just the raw data. It's not going to be very good, but sometimes it is good to actually experience this. The model will take a minute or so to train. When it comes to ML, this is blazing fast. ``` %%bigquery CREATE OR REPLACE MODEL serverlessml.model1_rawdata OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg') AS SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count * 1.0 AS passengers FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1 ``` Once the training is done, visit the [BigQuery Cloud Console](https://console.cloud.google.com/bigquery) and look at the model that has been trained. Then, come back to this notebook. Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. 
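The `WHERE ABS(MOD(FARM_FINGERPRINT(...), 100000)) = 1` clause above is a repeatable sampling trick: every row whose hashed `pickup_datetime` lands in bucket 1 of 100,000 is selected, so reruns always see the same tiny slice. A Python sketch of the idea -- MD5 stands in purely for illustration here, it is not the FarmHash that `FARM_FINGERPRINT` actually uses:

```python
import hashlib

def hash_bucket(key: str, modulus: int = 100000) -> int:
    # Deterministic bucket for a string key, standing in for
    # ABS(MOD(FARM_FINGERPRINT(key), modulus)): same key, same bucket, always.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % modulus

timestamps = ["2014-07-08 10:12:00", "2014-07-08 10:13:00", "2015-01-02 09:00:00"]
train = [t for t in timestamps if hash_bucket(t) == 1]     # ~1/100,000 of rows
evaluate = [t for t in timestamps if hash_bucket(t) == 2]  # a disjoint slice
```

Because the split depends only on the key, the train and evaluate slices never overlap and never change between runs.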
We can look at eval statistics on that held-out data: ``` %%bigquery SELECT * FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata) ``` Let's report just the error we care about, the Root Mean Squared Error (RMSE) ``` %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata) ``` We told you it was not going to be good! Recall that our heuristic got 8.13, and our target is $6. Note that the error is going to depend on the dataset that we evaluate it on. We can also evaluate the model on our own held-out benchmark/test dataset, but we shouldn't make a habit of this (we want to keep our benchmark dataset as the final evaluation, not make decisions using it all along the way. If we do that, our test dataset won't be truly independent). ``` %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata, ( SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count * 1.0 AS passengers FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 2 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 )) ``` ## Model 2: Apply data cleanup Recall that we did some data cleanup in the previous lab. Let's do those before training. This is a dataset that we will need quite frequently in this notebook, so let's extract it first. 
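An aside on the metric: the `rmse` column in the queries above is simply the square root of BigQuery's `mean_squared_error`, which puts the error back in the label's own units (dollars of fare). The same quantity in plain Python, on made-up numbers:

```python
import math

def rmse(y_true, y_pred):
    # Root Mean Squared Error: sqrt of the mean of squared residuals
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

error = rmse([10.0, 7.5], [7.0, 3.5])  # residuals 3 and 4 -> sqrt(12.5)
```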
``` %%bigquery CREATE OR REPLACE TABLE serverlessml.cleaned_training_data AS SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 %%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM serverlessml.cleaned_training_data LIMIT 0 %%bigquery CREATE OR REPLACE MODEL serverlessml.model2_cleanup OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg') AS SELECT * FROM serverlessml.cleaned_training_data %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model2_cleanup) ``` ## Model 3: More sophisticated models What if we try a more sophisticated model? Let's try Deep Neural Networks (DNNs) in BigQuery: ### DNN To create a DNN, simply specify __dnn_regressor__ for the model_type and add your hidden layers. ``` %%bigquery -- This model type is in alpha, so it may not work for you yet. -- This training takes on the order of 15 minutes. CREATE OR REPLACE MODEL serverlessml.model3b_dnn OPTIONS(input_label_cols=['fare_amount'], model_type='dnn_regressor', hidden_units=[32, 8]) AS SELECT * FROM serverlessml.cleaned_training_data %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model3b_dnn) ``` Nice! ## Evaluate DNN on benchmark dataset Let's use the same validation dataset to evaluate -- remember that evaluation metrics depend on the dataset. 
You can not compare two models unless you have run them on the same withheld data. ``` %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model3b_dnn, ( SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count * 1.0 AS passengers, 'unused' AS key FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 )) ``` Wow! Later in this sequence of notebooks, we will get to below $4, but this is quite good, for very little work. In this notebook, we showed you how to use BigQuery ML to quickly build ML models. We will come back to BigQuery ML when we want to experiment with different types of feature engineering. The speed of BigQuery ML is very attractive for development. Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
github_jupyter
``` import pandas as pd fake_news_df = pd.DataFrame(dict(title=[], isFakeNews=[], src=[])) ``` ## COVID-19-rumor-dataset ``` covid_rumour_df = pd.read_csv('COVID-19-rumor-dataset/en_dup.csv') covid_rumour_df = covid_rumour_df[~(covid_rumour_df['label'] == 'U')] # Drop Unknown Label covid_rumour_df['label'] = covid_rumour_df['label'] == 'F' # return True if is Fake covid_rumour_df['src'] = 'COVID-19-rumor-dataset' # define src covid_rumour_df = covid_rumour_df[['label', 'content', 'src']].rename(columns={'label':'isFakeNews','content':'title'}) # filter and rename columns fake_news_df = fake_news_df.append(covid_rumour_df) ``` ## FakeNewsNet ``` politifact_real_df = pd.read_csv("FakeNewsNet/politifact_real.csv") politifact_fake_df = pd.read_csv("FakeNewsNet/politifact_fake.csv") politifact_real_df['isFakeNews'] = False politifact_fake_df['isFakeNews'] = True politifact_df = politifact_real_df.append(politifact_fake_df) politifact_df['src'] = 'FakeNewsNet/politifact' fake_news_df = fake_news_df.append(politifact_df[['title', 'isFakeNews', 'src']]) gossipcop_real_df = pd.read_csv("FakeNewsNet/gossipcop_real.csv") gossipcop_fake_df = pd.read_csv("FakeNewsNet/gossipcop_fake.csv") gossipcop_real_df['isFakeNews'] = False gossipcop_fake_df['isFakeNews'] = True gossipcop_df = gossipcop_real_df.append(gossipcop_fake_df) gossipcop_df['src'] = 'FakeNewsNet/gossipcop' fake_news_df = fake_news_df.append(gossipcop_df[['title', 'isFakeNews', 'src']]) ``` ## Fake-and-real-news-dataset ``` fake_df = pd.read_csv('Fake-and-real-news-dataset/Fake.csv') true_df = pd.read_csv('Fake-and-real-news-dataset/True.csv') fake_df['isFakeNews'] = True true_df['isFakeNews'] = False fake_df['src'] = 'Fake-and-real-news-dataset' true_df['src'] = 'Fake-and-real-news-dataset' fake_news_df = fake_news_df\ .append(fake_df[['title', 'isFakeNews', 'src']])\ .append(true_df[['title', 'isFakeNews', 'src']]) ``` ## FakeCovid ``` fakeCoviddf_jun = pd.read_csv("FakeCovid/FakeCovid_June2020.csv") 
fakeCoviddf_jul = pd.read_csv("FakeCovid/FakeCovid_July2020.csv") fakeCoviddf_jun = fakeCoviddf_jun[fakeCoviddf_jun['lang'] == 'English'].copy() fakeCoviddf_jul = fakeCoviddf_jul[fakeCoviddf_jul['lang'] == 'en'].copy() fakeCoviddf_jun.loc[fakeCoviddf_jun['class'] == 'TRUE', 'isFakeNews'] = False fakeCoviddf_jul.loc[fakeCoviddf_jul['class'].str.lower().str.contains("true"), 'isFakeNews'] = False fakeCoviddf_jun['isFakeNews'].fillna(True, inplace = True) fakeCoviddf_jul['isFakeNews'].fillna(True, inplace = True) fakeCoviddf_jun['src'] = 'FakeCovid' fakeCoviddf_jul['src'] = 'FakeCovid' fake_news_df = fake_news_df\ .append(fakeCoviddf_jun[['ref_title', 'isFakeNews','src']].rename(columns={'ref_title':'title'}))\ .append(fakeCoviddf_jul[['source_title', 'isFakeNews','src']].rename(columns={'source_title':'title'})) fake_news_df.dropna(inplace=True) fake_news_df.to_csv("fakeNews.csv", index=False) ```
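A portability note on the cells above: `DataFrame.append` was deprecated in pandas 1.4 and removed in 2.0, so on current pandas each `append` call needs to become a `pd.concat`. A minimal sketch of the same accumulation pattern (the rows here are toy values, not the real datasets):

```python
import pandas as pd

# Empty accumulator with the target schema, as at the top of the notebook
fake_news_df = pd.DataFrame({"title": [], "isFakeNews": [], "src": []})

# One cleaned source, reduced to the shared columns
part = pd.DataFrame({"title": ["some headline"],
                     "isFakeNews": [True],
                     "src": ["demo-source"]})

# pd.concat([...]) replaces fake_news_df.append(part)
fake_news_df = pd.concat([fake_news_df, part], ignore_index=True)
```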
github_jupyter
# Sim-launcher

This script shows how to launch sims using Python.

### 1. Package imports

```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from dotenv import load_dotenv
from pathlib import Path  # Python 3.6+ only
import os
import psycopg2
from psycopg2.extras import execute_values
import random
import time
```

### 2. Environment Variables

```
# Load the environment variables
env_path = Path('..') / '.env'
print(env_path)
load_dotenv(dotenv_path=env_path)

# Print this to see if the env variables are read now
os.getenv("COMPOSE_PROJECT_NAME")
```

### 3. Database connection (writer)

```
# Generic function to test the connection to the database
def connect():
    """ Connect to the PostgreSQL database server """
    conn = None
    try:
        # connect to the PostgreSQL server
        print('Connecting to the PostgreSQL database...')
        conn = psycopg2.connect(
            host=os.getenv("MAIN_HOST"),
            database=os.getenv("MAIN_DB"),
            user=os.getenv("DBWRITE_USER"),
            password=os.getenv("DBWRITE_PWD"),
            port=os.getenv("MAIN_PORT")
        )
        # create a cursor
        cur = conn.cursor()
        # execute a statement
        print('PostgreSQL database version:')
        cur.execute('SELECT version()')
        # display the PostgreSQL database server version
        db_version = cur.fetchone()
        print(db_version)
        # close the communication with the PostgreSQL
        cur.close()
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    finally:
        if conn is not None:
            conn.close()
            print('Database connection closed.')

# Make the test database connection
connect()

conn = psycopg2.connect(
    host=os.getenv("MAIN_HOST"),
    database=os.getenv("MAIN_DB"),
    user=os.getenv("DBWRITE_USER"),
    password=os.getenv("DBWRITE_PWD"),
    port=os.getenv("MAIN_PORT")
)
# create a cursor
cur = conn.cursor()
```

### 4.
Database queries #### 4.1 Base-case analysis ``` sql_set = 'INSERT INTO analysis_sets (description) VALUES (%s);' set_data = 'first set as test' sql_analysis = 'INSERT INTO analysis_record (user_id, status, include_tesla) VALUES (%s, %s, %s);' analysis_data = (os.getenv("AUTH0_USERID"), 'inserted', 'FALSE') sql_user = 'INSERT INTO user_details (user_id, user_name, email_id) VALUES (%s, %s, %s) ON CONFLICT (user_id) DO UPDATE SET last_submit_date = NOW();' user_data = (os.getenv("AUTH0_USERID"), os.getenv("AUTH0_USERNAME"), os.getenv("AUTH0_EMAIL")) sql_params = 'INSERT INTO analysis_params (param_id, param_value) VALUES %s' params_data = [(1, '123' ),(2, '70' ),(14, '10' ),(3, '80' ),(4, '100' ),(9, '40' ),(10, '50' ),(11, '25' ),(12, '23' ),(13, '20' ), (15, '1' ), ( 16, '10' ), (17, '80' ), (18, '0' ), (19, '60' ), (20, '20' ), (21, '200' )] ``` ##### Launch an analysis ``` # cur.mogrify(sql_set, (set_data,)) # All these should be executed together as a transaction ################### the following will launch a sim cur.execute(sql_set, (set_data, )) cur.execute(sql_analysis, analysis_data) cur.execute(sql_user, user_data) execute_values(cur, sql_params, params_data) conn.commit() ``` Launch a set of analysis requests ``` ################### The following will launch 30 sims with varying seed ####################################################################### create_new_set = True # a boolean to encode whether to create a new set for this analysis request or add this to the previous one number_of_sims = 30 # launch five sims for i in range(0, number_of_sims): seed = random.randint(1, 1000) if(create_new_set): set_data = 'Testing the effect of varying seed for basecase with everything else constant' cur.execute(sql_set, (set_data, )) cur.execute(sql_analysis, analysis_data) cur.execute(sql_user, user_data) # change the seed params_data.pop(0) # remove the current list element for parameter 'global_seed' (param_id = 1) params_data.insert(0, (1, str(seed))) 
execute_values(cur, sql_params, params_data) create_new_set = False # since the next 4 simulations belong to the same set time.sleep(3) # sleep for 3 seconds so the next analysis request is 3 seconds later conn.commit() print("sim with seed:" + str(seed) + " launched") ``` Launch a set of analysis requests with different critical distance ``` create_new_set = True # a boolean to encode whether to create a new set for this analysis request or add this to the previous one critical_dists = [20, 30, 40, 50, 60, 70, 80, 90, 100] # critical distance set for dist in critical_dists: if(create_new_set): set_data = 'Testing the effect of varying critical distance keeping everything else constant' cur.execute(sql_set, (set_data, )) cur.execute(sql_analysis, analysis_data) cur.execute(sql_user, user_data) # change the seed params_data.pop(1) # remove the current list element for parameter 'critical_distance' (param_id = 2), list pos 1 params_data.insert(1, (2, str(dist))) execute_values(cur, sql_params, params_data) create_new_set = False # since the next 4 simulations belong to the same set conn.commit() print("sim with critical_distance:" + str(dist) + " launched") time.sleep(3) # sleep for 3 seconds so the next analysis request is 3 seconds later ``` #### 4.2 Adding new chargers ``` # The order of columns in the csv is important new_evse_scenario = pd.read_csv('new_evse_scenario.csv') new_evse_scenario new_evse_scenario.dtypes new_evse_data = [tuple(row) for row in new_evse_scenario.itertuples(index=False)] sql_new_evse = """INSERT INTO new_evses (latitude, longitude, dcfc_plug_count, dcfc_power, level2_plug_count, level2_power, dcfc_fixed_charging_price, dcfc_var_charging_price_unit, dcfc_var_charging_price, dcfc_fixed_parking_price, dcfc_var_parking_price_unit, dcfc_var_parking_price, level2_fixed_charging_price, level2_var_charging_price_unit, level2_var_charging_price, level2_fixed_parking_price, level2_var_parking_price_unit, level2_var_parking_price, connector_code) 
VALUES %s""" # All these should be executed together as a transaction set_data = 'testing a new evse scenario using code' cur.execute(sql_set, (set_data, )) cur.execute(sql_analysis, analysis_data) cur.execute(sql_user, user_data) execute_values(cur, sql_params, params_data) if (len(new_evse_scenario.index) > 0): execute_values(cur, sql_new_evse, new_evse_data) # use execute values with new evses by creating an array of tuples conn.commit() ``` _____________________ ______________________ ### 4.3 Disabling chargers ``` # seeds = [123, 366, 495] disabled_charger_count = 30 sql_bevse_id = """select bevse_id from built_evse where dcfc_count >= 1 and connector_code IN (1, 2, 3);""" bevse_ids = pd.read_sql_query(sql=sql_bevse_id, con=conn, params=()) bevse_ids sql_set = 'INSERT INTO analysis_sets (description) VALUES (%s);' sql_analysis = 'INSERT INTO analysis_record (user_id, status, include_tesla) VALUES (%s, %s, %s);' analysis_data = (os.getenv("AUTH0_USERID"), 'inserted', 'FALSE') sql_user = 'INSERT INTO user_details (user_id, user_name, email_id) VALUES (%s, %s, %s) ON CONFLICT (user_id) DO UPDATE SET last_submit_date = NOW();' user_data = (os.getenv("AUTH0_USERID"), os.getenv("AUTH0_USERNAME"), os.getenv("AUTH0_EMAIL")) sql_params = 'INSERT INTO analysis_params (param_id, param_value) VALUES %s' params_data = [(1, '123' ),(2, '70' ),(14, '10' ),(3, '80' ),(4, '100' ),(9, '40' ),(10, '50' ),(11, '25' ),(12, '23' ),(13, '20' ), (15, '1' ), ( 16, '10' ), (17, '80' ), (18, '0' ), (19, '60' ), (20, '20' ), (21, '200' )] sql_bevse_subset = '''select bevse_id, longitude, latitude, connector_code, dcfc_count, dcfc_fixed_charging_price, dcfc_var_charging_price_unit, dcfc_var_charging_price, dcfc_fixed_parking_price, dcfc_var_parking_price_unit, dcfc_var_parking_price from built_evse where dcfc_count >= 1 and connector_code IN (1, 2, 3) and bevse_id NOT IN %s ''' sql_evses_now = 'INSERT INTO evses_now (evse_id, longitude, latitude, connector_code, dcfc_count, 
dcfc_fixed_charging_price, dcfc_var_charging_price_unit, dcfc_var_charging_price, dcfc_fixed_parking_price, dcfc_var_parking_price_unit, dcfc_var_parking_price) VALUES %s' ################### The following will launch 3 sims with varying disabled chargers ####################################################################### create_new_set = True # a boolean to encode whether to create a new set for this analysis request or add this to the previous one number_of_sims = 5 # len(seeds) # launch five sims for i in range(0, number_of_sims): # seed = seeds[i] disable_bevse_ids = [] rints = random.sample(range(len(bevse_ids)), disabled_charger_count) print(rints) disable_bevse_ids = bevse_ids['bevse_id'].take(rints) print(disable_bevse_ids.tolist()) # get evses that have not been disabled bevses_subset = pd.read_sql_query(sql=sql_bevse_subset, con=conn, params=(tuple(disable_bevse_ids.tolist()),)) # Rename columns and add 'b' to id bevses_subset.rename(columns={"bevse_id": "evse_id"}, inplace=True) bevses_subset['evse_id'] = 'b' + bevses_subset['evse_id'].astype(int).astype(str) print(bevses_subset) bevses_list = [] for index, row in bevses_subset.iterrows(): bevses_list.append((row['evse_id'], row['longitude'], row['latitude'], row['connector_code'], row['dcfc_count'], row['dcfc_fixed_charging_price'], row['dcfc_var_charging_price_unit'], row['dcfc_var_charging_price'], row['dcfc_fixed_parking_price'], row['dcfc_var_parking_price_unit'], row['dcfc_var_parking_price'])) print(bevses_list) query = cur.mogrify(sql_evses_now, (bevses_list,)) print(query) if(create_new_set): set_data = 'Testing the effect of disabling %s chargers' % str(disabled_charger_count) cur.execute(sql_set, (set_data, )) cur.execute(sql_analysis, analysis_data) cur.execute(sql_user, user_data) execute_values(cur, sql_params, params_data) execute_values(cur, sql_evses_now, bevses_list) create_new_set = False # since the next 4 simulations belong to the same set conn.commit() time.sleep(3) # sleep for 
3 seconds so the next analysis request is 3 seconds later ```
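One fragility worth flagging in the loops above: parameters are swapped by list position (`pop(0)`/`insert(0, ...)` for the seed, `pop(1)`/`insert(1, ...)` for the critical distance), which silently breaks if `params_data` is ever reordered. Looking the tuple up by `param_id` is safer; a sketch (`set_param` is a hypothetical helper, not part of this script):

```python
def set_param(params, param_id, value):
    # Return a copy of the (param_id, param_value) list with one value
    # replaced, located by id rather than by list position.
    return [(pid, value if pid == param_id else val) for pid, val in params]

params_data = [(1, '123'), (2, '70'), (14, '10')]
params_data = set_param(params_data, 2, '40')  # change critical_distance safely
```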
github_jupyter
# CNN with Bidirectional RNN - Char Classification Using TensorFlow

## TODO

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import tensorflow as tf
import tensorflow.contrib.seq2seq as seq2seq
import cv2

%matplotlib notebook
# Increase size of plots
plt.rcParams['figure.figsize'] = (9.0, 5.0)

# Helpers
from ocr.helpers import implt
from ocr.mlhelpers import TrainingPlot, DataSet
from ocr.imgtransform import coordinates_remap
from ocr.datahelpers import loadWordsData, correspondingShuffle
from ocr.tfhelpers import Graph, create_cell

tf.reset_default_graph()
sess = tf.InteractiveSession()

print("OpenCV: " + cv2.__version__)
print("Numpy: " + np.__version__)
print("TensorFlow: " + tf.__version__)
```

## Loading Images

```
images, _, gaplines = loadWordsData(['data/words/'], loadGaplines=True)
```

## Settings

```
PAD = 0  # Value for PADding images
POS = 1  # Values of positive and negative label 0/-1
NEG = 0
POS_SPAN = 1     # Number of positive values around true position (5 is too high)
POS_WEIGHT = 10  # Weighting positive values in loss counting

slider_size = (60, 30)  # Height is set to 60 by data and width should be even
slider_step = 2         # Number of pixels slider moving
N_INPUT = 1800          # Size of sequence input vector will depend on CNN

num_buckets = 10
n_classes = 2            # Number of different outputs

rnn_layers = 4           # 4 - 2 - 256
rnn_residual_layers = 2  # HAVE TO be smaller than encoder_layers
rnn_units = 128
attention_size = 64

learning_rate = 1e-4  # 1e-4
dropout = 0.4         # Percentage of dropped out data
train_set = 0.8       # Percentage of training data

TRAIN_STEPS = 500000  # Number of training steps!
TEST_ITER = 150 LOSS_ITER = 50 SAVE_ITER = 2000 BATCH_SIZE = 32 # EPOCH = 2000 # "Number" of batches in epoch ``` ## Dataset ``` # Shuffle data images, gaplines = correspondingShuffle([images, gaplines]) for i in range(len(images)): # Add border and offset gaplines - RUN ONLY ONCE images[i] = cv2.copyMakeBorder(images[i], 0, 0, int(slider_size[1]/2), int(slider_size[1]/2), cv2.BORDER_CONSTANT, value=0) gaplines[i] += int(slider_size[1] / 2) # Image standardization same as tf.image.per_image_standardization for i in range(len(images)): images[i] = (images[i] - np.mean(images[i])) / max(np.std(images[i]), 1.0/math.sqrt(images[i].size)) # Split data on train and test dataset div = int(train_set * len(images)) trainImages = images[0:div] testImages = images[div:] trainGaplines = gaplines[0:div] testGaplines = gaplines[div:] print("Training images:", div) print("Testing images:", len(images) - div) class BucketDataIterator(): """ Iterator for feeding seq2seq model during training """ def __init__(self, images, gaplines, gap_span, num_buckets=5, slider=(60, 30), slider_step=2, imgprocess=lambda x: x, train=True): self.train = train # self.slider = slider # self.slider_step = slider_step length = [(image.shape[1]-slider[1])//slider_step for image in images] # Creating indices from gaplines indices = gaplines - int(slider[1]/2) indices = indices // slider_step # Split images to sequence of vectors # + targets seq of labels per image in images seq images_seq = np.empty(len(images), dtype=object) targets_seq = np.empty(len(images), dtype=object) for i, img in enumerate(images): images_seq[i] = [imgprocess(img[:, loc * slider_step: loc * slider_step + slider[1]].flatten()) for loc in range(length[i])] targets_seq[i] = np.ones((length[i])) * NEG for offset in range(gap_span): ind = indices[i] + (-(offset % 2) * offset // 2) + ((1 - offset%2) * offset // 2) if ind[0] < 0: ind[0] = 0 if ind[-1] >= length[i]: ind[-1] = length[i] - 1 targets_seq[i][ind] = POS # Create pandas 
dataFrame and sort it by images seq lenght (length) # in_length == out_length self.dataFrame = pd.DataFrame({'length': length, 'images': images_seq, 'targets': targets_seq }).sort_values('length').reset_index(drop=True) bsize = int(len(images) / num_buckets) self.num_buckets = num_buckets # Create buckets by slicing parts by indexes self.buckets = [] for bucket in range(num_buckets-1): self.buckets.append(self.dataFrame.iloc[bucket * bsize: (bucket+1) * bsize]) self.buckets.append(self.dataFrame.iloc[(num_buckets-1) * bsize:]) self.buckets_size = [len(bucket) for bucket in self.buckets] # cursor[i] will be the cursor for the ith bucket self.cursor = np.array([0] * num_buckets) self.bucket_order = np.random.permutation(num_buckets) self.bucket_cursor = 0 self.shuffle() print("Iterator created.") def shuffle(self, idx=None): """ Shuffle idx bucket or each bucket separately """ for i in [idx] if idx is not None else range(self.num_buckets): self.buckets[i] = self.buckets[i].sample(frac=1).reset_index(drop=True) self.cursor[i] = 0 def next_batch(self, batch_size): """ Creates next training batch of size: batch_size Retruns: image seq, letter seq, image seq lengths, letter seq lengths """ i_bucket = self.bucket_order[self.bucket_cursor] # Increment cursor and shuffle in case of new round self.bucket_cursor = (self.bucket_cursor + 1) % self.num_buckets if self.bucket_cursor == 0: self.bucket_order = np.random.permutation(self.num_buckets) if self.cursor[i_bucket] + batch_size > self.buckets_size[i_bucket]: self.shuffle(i_bucket) # Handle too big batch sizes if (batch_size > self.buckets_size[i_bucket]): batch_size = self.buckets_size[i_bucket] res = self.buckets[i_bucket].iloc[self.cursor[i_bucket]: self.cursor[i_bucket]+batch_size] self.cursor[i_bucket] += batch_size # PAD input sequence and output # Pad sequences with <PAD> to same length max_length = max(res['length']) input_seq = np.zeros((batch_size, max_length, N_INPUT), dtype=np.float32) for i, img in 
enumerate(res['images']): input_seq[i][:res['length'].values[i]] = img input_seq = input_seq.swapaxes(0, 1) # Need to pad according to the maximum length output sequence targets = np.ones([batch_size, max_length], dtype=np.float32) * PAD for i, target in enumerate(targets): target[:res['length'].values[i]] = res['targets'].values[i] return input_seq, targets, res['length'].values def next_feed(self, size): """ Create feed directly for model training """ (inputs_, targets_, length_) = self.next_batch(size) return { inputs: inputs_, targets: targets_, length: length_, keep_prob: (1.0 - dropout) if self.train else 1.0 } # Create iterator for feeding BiRNN train_iterator = BucketDataIterator(trainImages, trainGaplines, POS_SPAN, num_buckets, slider_size, slider_step, train=True) test_iterator = BucketDataIterator(testImages, testGaplines, POS_SPAN, 2, slider_size, slider_step, train=False) ``` # Create classifier ## Inputs ``` # Input placehodlers # N_INPUT -> size of vector representing one image in sequence # Inputs shape (max_seq_length, batch_size, vec_size) - time major inputs = tf.placeholder(shape=(None, None, N_INPUT), dtype=tf.float32, name='inputs') length = tf.placeholder(shape=(None,), dtype=tf.int32, # EDITED: tf.int32 name='length') # required for training, not required for testing and application targets = tf.placeholder(shape=(None, None), dtype=tf.int64, name='targets') # Dropout value keep_prob = tf.placeholder(tf.float32, name='keep_prob') sequence_size, batch_size, _ = tf.unstack(tf.shape(inputs)) ``` ## Standardization + CNN ``` # Help functions for standard layers def conv2d(x, W, name=None): return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME', name=name) def max_pool_2x2(x, name=None): return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name=name) # 1. 
Layer - Convulation variables W_conv1 = tf.get_variable('W_conv1', shape=[5, 5, 1, 4], initializer=tf.contrib.layers.xavier_initializer()) b_conv1 = tf.Variable(tf.constant(0.1, shape=[4]), name='b_conv1') # 3. Layer - Convulation variables W_conv2 = tf.get_variable('W_conv2', shape=[5, 5, 4, 8], initializer=tf.contrib.layers.xavier_initializer()) b_conv2 = tf.Variable(tf.constant(0.1, shape=[8]), name='b_conv2') def CNN(x): # 1. Layer - Convulation h_conv1 = tf.nn.relu(conv2d(x, W_conv1) + b_conv1, name='h_conv1') # 2. Layer - Max Pool h_pool1 = max_pool_2x2(h_conv1, name='h_pool1') # 3. Layer - Convulation h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2, name='h_conv2') # 4. Layer - Max Pool return max_pool_2x2(h_conv2, name='h_pool2') # Input images CNN inpts = tf.map_fn( lambda seq: tf.map_fn( lambda img: tf.reshape( CNN(tf.reshape(img, [1, slider_size[0], slider_size[1], 1])), [-1]), seq), inputs, dtype=tf.float32) ``` ### Attention ``` # attention_states: size [batch_size, max_time, num_units] attention_states = tf.transpose(inpts, [1, 0, 2]) # Create an attention mechanism attention_mechanism = tf.contrib.seq2seq.LuongAttention( rnn_units, attention_states, memory_sequence_length=length) final_cell = create_cell(rnn_units, 2*rnn_layers, 2*rnn_residual_layers, is_dropout=True, keep_prob=keep_prob) final_cell = seq2seq.AttentionWrapper( final_cell, attention_mechanism, attention_layer_size=attention_size) final_initial_state = final_cell.zero_state(batch_size, tf.float32) attention_output, _ = tf.nn.dynamic_rnn( cell=final_cell, inputs=attention_states, sequence_length=length, # initial_state=final_initial_state, dtype = tf.float32) pred = tf.layers.dense(inputs=attention_output, units=2, name='pred') prediction = tf.argmax(pred, axis=-1, name='prediction') ``` ## Optimizer ``` # Define loss and optimizer # loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=pred, labels=targets), name='loss') weights = tf.multiply(targets, POS_WEIGHT) 
+ 1 loss = tf.reduce_mean(tf.losses.sparse_softmax_cross_entropy( logits=pred, labels=targets, weights=weights), name='loss') train_step = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss, name='train_step') # Evaluate model correct_pred = tf.equal(prediction, targets) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) # accuracy = tf.reduce_mean(prediction * targets) # Testing for only zero predictions ``` ## Training ``` sess.run(tf.global_variables_initializer()) saver = tf.train.Saver() # Creat plot for live stats ploting trainPlot = TrainingPlot(TRAIN_STEPS, TEST_ITER, LOSS_ITER) try: for i_batch in range(TRAIN_STEPS): fd = train_iterator.next_feed(BATCH_SIZE) train_step.run(fd) if i_batch % LOSS_ITER == 0: # Plotting loss tmpLoss = loss.eval(fd) trainPlot.updateCost(tmpLoss, i_batch // LOSS_ITER) if i_batch % TEST_ITER == 0: # Plotting accuracy fd_test = test_iterator.next_feed(BATCH_SIZE) accTest = accuracy.eval(fd_test) accTrain = accuracy.eval(fd) trainPlot.updateAcc(accTest, accTrain, i_batch // TEST_ITER) if i_batch % SAVE_ITER == 0: saver.save(sess, 'models/gap-clas/A-RNN/model') # if i_batch % EPOCH == 0: # fd_test = test_iterator.next_feed(BATCH_SIZE) # print('batch %r - loss: %r' % (i_batch, sess.run(loss, fd_test))) # predict_, target_ = sess.run([pred, targets], fd_test) # for i, (inp, pred) in enumerate(zip(target_, predict_)): # print(' expected > {}'.format(inp)) # print(' predicted > {}'.format(pred)) # break # print() except KeyboardInterrupt: saver.save(sess, 'models/gap-clas/A-RNN/model') print('Training interrupted, model saved.') fd_test = test_iterator.next_feed(2*BATCH_SIZE) accTest = accuracy.eval(fd_test) print("Training finished with accuracy:", accTest) % matplotlib inline num_examples = 5 # Shuffle test images testImages = testImages[np.random.permutation(len(testImages))] imgs = testImages[:num_examples] # Split images to sequence of vectors length = [(image.shape[1]-slider_size[1])//slider_step for image 
in imgs] images_seq = np.empty(num_examples, dtype=object) for i, img in enumerate(imgs): images_seq[i] = np.array([img[:, loc * slider_step: loc * slider_step + slider_size[1]].flatten() for loc in range(length[i])], dtype=np.float32) # Create predictions using trained model test_pred = [] for i, inpt in enumerate(images_seq): inpt = np.reshape(inpt, (inpt.shape[0], 1, inpt.shape[1])) img = imgs[i].copy() # img = cv2.cvtColor(imgs[i].astype(np.float32), cv2.COLOR_GRAY2RGB) pred = prediction.eval({'inputs:0': inpt, 'length:0': [len(inpt)], 'keep_prob:0': 1.0}) for pos, g in enumerate(pred[0]): if g == 1: cv2.line(img, ((int)(15 + pos*slider_step), 0), ((int)(15 + pos*slider_step), slider_size[0]), 1, 1) implt(img, 'gray', t=str(i)) ```
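The core of `BucketDataIterator` above -- sort examples by sequence length, slice into roughly equal buckets, and batch within a bucket so padding stays short -- can be sketched without pandas (`make_buckets` is a hypothetical helper, not from the notebook):

```python
def make_buckets(lengths, num_buckets):
    # Example indices sorted by sequence length, sliced into num_buckets
    # groups; the last bucket absorbs the remainder, as in BucketDataIterator.
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    bsize = len(lengths) // num_buckets
    buckets = [order[b * bsize:(b + 1) * bsize] for b in range(num_buckets - 1)]
    buckets.append(order[(num_buckets - 1) * bsize:])
    return buckets

# Each bucket holds indices of similarly long sequences
buckets = make_buckets([5, 2, 9, 4, 7, 1], num_buckets=3)
```

Batching within a bucket keeps the per-batch `max_length` (and hence the amount of `<PAD>` fill) small, which is the whole point of bucketing.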
# FVCOM vertical slice along transect ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt import matplotlib.tri as mtri import cartopy.crs as ccrs from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER import iris import warnings import pyugrid import seawater as sw #url = 'http://crow.marine.usf.edu:8080/thredds/dodsC/FVCOM-Nowcast-Agg.nc' #url ='http://www.smast.umassd.edu:8080/thredds/dodsC/fvcom/hindcasts/30yr_gom3/mean' #url = 'http://www.smast.umassd.edu:8080/thredds/dodsC/fvcom/hindcasts/30yr_gom3' url = 'http://www.smast.umassd.edu:8080/thredds/dodsC/FVCOM/NECOFS/Forecasts/NECOFS_GOM3_FORECAST.nc' ugrid = pyugrid.UGrid.from_ncfile(url) # [lon,lat] of start point [A] and endpoint [B] for transect A = [-84, 27] B = [-82.5, 25.5] A = [-70, 41] B = [-69, 42] A = [-70.11129, 43.479881] # portland B = [-66.240095, 40.834688] # offshore Georges Bank A = [-70.6, 42.2] # mass bay B = [-69.3, 42.5] lon = ugrid.nodes[:, 0] lat = ugrid.nodes[:, 1] triangles = ugrid.faces[:] triang = mtri.Triangulation(lon, lat, triangles=triangles) def make_map(projection=ccrs.PlateCarree()): fig, ax = plt.subplots(figsize=(8, 6), subplot_kw=dict(projection=projection)) ax.coastlines(resolution='10m') gl = ax.gridlines(draw_labels=True) gl.xlabels_top = gl.ylabels_right = False gl.xformatter = LONGITUDE_FORMATTER gl.yformatter = LATITUDE_FORMATTER return fig, ax def plt_triangle(triang, face, ax=None, **kw): if not ax: fig, ax = plt.subplots() ax.triplot(triang.x[triang.triangles[face]], triang.y[triang.triangles[face]], triangles=triang.triangles[face], **kw) fig, ax = make_map() kw = dict(marker='.', linestyle='-', alpha=0.25, color='darkgray') ax.triplot(triang, **kw) # or lon, lat, triangles buf = 1.0 extent = [lon.min(), lon.max(), lat.min(), lat.max()] extent = [A[0]-buf, B[0]+buf, B[1]-buf, A[1]+buf] ax.set_extent(extent) ax.plot(A[0], A[1], 'o') ax.plot(B[0], B[1], 'o') with warnings.catch_warnings(): warnings.simplefilter("ignore") cubes
= iris.load_raw(url) print(cubes) cube = cubes.extract_strict('sea_water_potential_temperature') print(cube) # Finding the right `num` is tricky. num = 60 xi = np.linspace(A[0], B[0], num=num, endpoint=True) yi = np.linspace(A[1], B[1], num=num, endpoint=True) dist = sw.dist(xi, yi, 'km')[0].cumsum() dist = np.insert(dist, 0, 0) # grab a 3D chunk of data at a specific time step t3d = cube[-1, ...].data # this uses the CF formula terms to compute the z positions in the vertical z3d = cube[-1, ...].coord('sea_surface_height_above_reference_ellipsoid').points # this uses the CF formula terms to compute the z positions in the vertical #z3d = [z for z in cube[-1,...].coords(axis='z') if z.units.is_convertible(cf_units.Unit('m'))][0].points def interpolate(triang, xi, yi, data, trifinder=None): import matplotlib.tri as mtri # We still need to iterate in the vertical :-( i, j = data.shape slices = [] for k in range(i): interp_lin = mtri.LinearTriInterpolator(triang, data[k, :], trifinder=trifinder) slices.append(interp_lin(xi, yi)) return np.array(slices) trifinder = triang.get_trifinder() zi = interpolate(triang, xi, yi, t3d, trifinder=trifinder) di = interpolate(triang, xi, yi, z3d, trifinder=trifinder) fig, ax = plt.subplots(figsize=(11, 3)) im = ax.pcolormesh(dist, di, zi, shading='gouraud', cmap='jet') fig.colorbar(im, orientation='vertical'); ```
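The `interpolate` helper above leans on `matplotlib.tri.LinearTriInterpolator` doing linear (barycentric) interpolation within each triangle of the unstructured grid. A minimal self-contained sketch on a toy grid (the coordinates and the "temperature" field here are made up for illustration); because the field is linear, the interpolator recovers it exactly along the transect:

```python
import numpy as np
import matplotlib.tri as mtri

# Four nodes of a unit square; a Delaunay triangulation is built automatically
lon = np.array([0.0, 1.0, 0.0, 1.0])
lat = np.array([0.0, 0.0, 1.0, 1.0])
triang = mtri.Triangulation(lon, lat)

# A linear "temperature" field: linear interpolation reproduces it exactly
field = 2.0 * lon + 3.0 * lat

# Sample along a straight transect, as done above for every vertical level
xi = np.linspace(0.1, 0.9, 5)
yi = np.linspace(0.1, 0.9, 5)
interp = mtri.LinearTriInterpolator(triang, field)
zi = interp(xi, yi)  # masked array; masked where (xi, yi) falls outside the grid

print(np.allclose(np.asarray(zi), 2.0 * xi + 3.0 * yi))  # True
```

The real field is only piecewise linear between nodes, which is why finding the right `num` of transect samples takes some trial and error.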
## SIMPLE CONVOLUTIONAL NEURAL NETWORK ``` import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data %matplotlib inline print ("PACKAGES LOADED") ``` # LOAD MNIST ``` mnist = input_data.read_data_sets('data/', one_hot=True) trainimg = mnist.train.images trainlabel = mnist.train.labels testimg = mnist.test.images testlabel = mnist.test.labels print ("MNIST ready") ``` # SELECT DEVICE TO BE USED ``` device_type = "/gpu:1" ``` # DEFINE CNN ``` with tf.device(device_type): # <= This is optional n_input = 784 n_output = 10 weights = { 'wc1': tf.Variable(tf.random_normal([3, 3, 1, 64], stddev=0.1)), 'wd1': tf.Variable(tf.random_normal([14*14*64, n_output], stddev=0.1)) } biases = { 'bc1': tf.Variable(tf.random_normal([64], stddev=0.1)), 'bd1': tf.Variable(tf.random_normal([n_output], stddev=0.1)) } def conv_simple(_input, _w, _b): # Reshape input _input_r = tf.reshape(_input, shape=[-1, 28, 28, 1]) # Convolution _conv1 = tf.nn.conv2d(_input_r, _w['wc1'], strides=[1, 1, 1, 1], padding='SAME') # Add-bias _conv2 = tf.nn.bias_add(_conv1, _b['bc1']) # Pass ReLu _conv3 = tf.nn.relu(_conv2) # Max-pooling _pool = tf.nn.max_pool(_conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') # Vectorize _dense = tf.reshape(_pool, [-1, _w['wd1'].get_shape().as_list()[0]]) # Fully-connected layer _out = tf.add(tf.matmul(_dense, _w['wd1']), _b['bd1']) # Return everything out = { 'input_r': _input_r, 'conv1': _conv1, 'conv2': _conv2, 'conv3': _conv3 , 'pool': _pool, 'dense': _dense, 'out': _out } return out print ("CNN ready") ``` # DEFINE COMPUTATIONAL GRAPH ``` # tf Graph input x = tf.placeholder(tf.float32, [None, n_input]) y = tf.placeholder(tf.float32, [None, n_output]) # Parameters learning_rate = 0.001 training_epochs = 10 batch_size = 100 display_step = 1 # Functions! 
with tf.device(device_type): # <= This is optional _pred = conv_simple(x, weights, biases)['out'] cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(_pred, y)) optm = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) _corr = tf.equal(tf.argmax(_pred,1), tf.argmax(y,1)) # Count corrects accr = tf.reduce_mean(tf.cast(_corr, tf.float32)) # Accuracy init = tf.initialize_all_variables() # Saver save_step = 1; savedir = "nets/" saver = tf.train.Saver(max_to_keep=3) print ("Network Ready to Go!") ``` # OPTIMIZE ## DO TRAIN OR NOT ``` do_train = 1 sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) sess.run(init) if do_train == 1: for epoch in range(training_epochs): avg_cost = 0. total_batch = int(mnist.train.num_examples/batch_size) # Loop over all batches for i in range(total_batch): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # Fit training using batch data sess.run(optm, feed_dict={x: batch_xs, y: batch_ys}) # Compute average loss avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys})/total_batch # Display logs per epoch step if epoch % display_step == 0: print ("Epoch: %03d/%03d cost: %.9f" % (epoch, training_epochs, avg_cost)) train_acc = sess.run(accr, feed_dict={x: batch_xs, y: batch_ys}) print (" Training accuracy: %.3f" % (train_acc)) test_acc = sess.run(accr, feed_dict={x: testimg, y: testlabel}) print (" Test accuracy: %.3f" % (test_acc)) # Save Net if epoch % save_step == 0: saver.save(sess, "nets/cnn_mnist_simple.ckpt-" + str(epoch)) print ("Optimization Finished.") ``` # RESTORE ``` if do_train == 0: epoch = training_epochs-1 saver.restore(sess, "nets/cnn_mnist_simple.ckpt-" + str(epoch)) print ("NETWORK RESTORED") ``` # LET'S SEE HOW CNN WORKS ``` with tf.device(device_type): conv_out = conv_simple(x, weights, biases) input_r = sess.run(conv_out['input_r'], feed_dict={x: trainimg[0:1, :]}) conv1 = sess.run(conv_out['conv1'], feed_dict={x: trainimg[0:1, :]}) conv2 = 
sess.run(conv_out['conv2'], feed_dict={x: trainimg[0:1, :]}) conv3 = sess.run(conv_out['conv3'], feed_dict={x: trainimg[0:1, :]}) pool = sess.run(conv_out['pool'], feed_dict={x: trainimg[0:1, :]}) dense = sess.run(conv_out['dense'], feed_dict={x: trainimg[0:1, :]}) out = sess.run(conv_out['out'], feed_dict={x: trainimg[0:1, :]}) ``` # Input ``` # Let's see 'input_r' print ("Size of 'input_r' is %s" % (input_r.shape,)) label = np.argmax(trainlabel[0, :]) print ("Label is %d" % (label)) # Plot ! plt.matshow(input_r[0, :, :, 0], cmap=plt.get_cmap('gray')) plt.title("Label of this image is " + str(label) + "") plt.colorbar() plt.show() ``` # Conv1 (convolution) ``` # Let's see 'conv1' print ("Size of 'conv1' is %s" % (conv1.shape,)) # Plot ! for i in range(3): plt.matshow(conv1[0, :, :, i], cmap=plt.get_cmap('gray')) plt.title(str(i) + "th conv1") plt.colorbar() plt.show() ``` # Conv2 (+bias) ``` # Let's see 'conv2' print ("Size of 'conv2' is %s" % (conv2.shape,)) # Plot ! for i in range(3): plt.matshow(conv2[0, :, :, i], cmap=plt.get_cmap('gray')) plt.title(str(i) + "th conv2") plt.colorbar() plt.show() ``` # Conv3 (ReLU) ``` # Let's see 'conv3' print ("Size of 'conv3' is %s" % (conv3.shape,)) # Plot ! for i in range(3): plt.matshow(conv3[0, :, :, i], cmap=plt.get_cmap('gray')) plt.title(str(i) + "th conv3") plt.colorbar() plt.show() ``` # Pool (max_pool) ``` # Let's see 'pool' print ("Size of 'pool' is %s" % (pool.shape,)) # Plot ! for i in range(3): plt.matshow(pool[0, :, :, i], cmap=plt.get_cmap('gray')) plt.title(str(i) + "th pool") plt.colorbar() plt.show() ``` # Dense ``` # Let's see 'dense' print ("Size of 'dense' is %s" % (dense.shape,)) # Let's see 'out' print ("Size of 'out' is %s" % (out.shape,)) ``` # Convolution filters ``` # Let's see weight! wc1 = sess.run(weights['wc1']) print ("Size of 'wc1' is %s" % (wc1.shape,)) # Plot ! 
for i in range(3): plt.matshow(wc1[:, :, 0, i], cmap=plt.get_cmap('gray')) plt.title(str(i) + "th conv filter") plt.colorbar() plt.show() ```
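A quick sanity check on why `wd1` is shaped `[14*14*64, n_output]`: with `'SAME'` padding the 3x3 convolution preserves the 28x28 spatial size, and the 2x2 max-pool with stride 2 halves it to 14x14 over 64 channels. The arithmetic, sketched:

```python
import math

def same_out_size(size, stride):
    # TensorFlow 'SAME' padding rule: output = ceil(input / stride)
    return math.ceil(size / stride)

h = w = 28
h, w = same_out_size(h, 1), same_out_size(w, 1)  # conv, stride 1  -> 28 x 28
h, w = same_out_size(h, 2), same_out_size(w, 2)  # pool, stride 2  -> 14 x 14
print(h, w, h * w * 64)  # 14 14 12544, matching wd1's first dimension
```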
# Checklist * Make sure to have a clock visible * Check network connectivity * Displays mirrored * Slides up * This notebook * ~170% zoom * Ideally using 3.7-pre because it has better error messages: demo-env/bin/jupyter notebook pycon-notebook.ipynb * Full screened (F11) * Hide header and toolbar * Turn on line numbers * Kernel → Restart and clear output * Examples: * getaddrinfo: on vorpus.org or blank * clear the async/await example and the happy eyeballs (maybe leaving the function prototype to seed things) * Two terminals ([tilix](https://gnunn1.github.io/tilix-web/)) with large font and * `nc -l -p 12345` * `nc -l -p 54321` * (For the `nc` included with MacOS, you leave out the `-p`, for example: `nc -l 12345`.) * No other windows on the same desktop * scrolled down to the getaddrinfo example <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> # `getaddrinfo` example ``` import socket socket.getaddrinfo("debian.org", "https", type=socket.SOCK_STREAM) ``` <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> # Demo: bidirectional proxy ``` import trio async def proxy_one_way(source, sink): while True: data = await source.receive_some(1024) if not data: await sink.send_eof() break await sink.send_all(data) async def proxy_two_way(a, b): async with trio.open_nursery() as nursery: nursery.start_soon(proxy_one_way, a, b) nursery.start_soon(proxy_one_way, b, a) async def main(): with trio.move_on_after(10): # 10 second time limit a = await trio.open_tcp_stream("localhost", 12345) b = await trio.open_tcp_stream("localhost", 54321) async with a, b: await proxy_two_way(a, b) print("all done!") trio.run(main) async def sleepy(): print("going to sleep") await trio.sleep(1) print("woke up") async def sleepy_twice(): await sleepy() 
await sleepy() trio.run(sleepy_twice) ``` <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> # Happy Eyeballs! ``` async def open_tcp_socket(hostname, port, *, max_wait_time=0.250): targets = await trio.socket.getaddrinfo( hostname, port, type=trio.socket.SOCK_STREAM) failed_attempts = [trio.Event() for _ in targets] winning_socket = None async def attempt(target_idx, nursery): # wait for previous one to finish, or timeout to expire if target_idx > 0: with trio.move_on_after(max_wait_time): await failed_attempts[target_idx - 1].wait() # start next attempt if target_idx + 1 < len(targets): nursery.start_soon(attempt, target_idx + 1, nursery) # try to connect to our target try: *socket_config, _, target = targets[target_idx] socket = trio.socket.socket(*socket_config) await socket.connect(target) # if fails, tell next attempt to go ahead except OSError: failed_attempts[target_idx].set() else: # if succeeds, save winning socket nonlocal winning_socket winning_socket = socket # and cancel other attempts nursery.cancel_scope.cancel() async with trio.open_nursery() as nursery: nursery.start_soon(attempt, 0, nursery) if winning_socket is None: raise OSError("ruh-oh") else: return winning_socket # Let's try it out: async def main(): print(await open_tcp_socket("debian.org", "https")) trio.run(main) ``` <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> # Happy eyeballs (pre-prepared for timing emergencies) ``` async def open_connection(hostname, port, *, max_wait_time=0.250): targets = await trio.socket.getaddrinfo( hostname, port, type=trio.socket.SOCK_STREAM) attempt_failed = [trio.Event() for _ in targets] winning_socket = None async def attempt_one(target_idx, nursery): # wait for previous attempt to fail, or timeout if target_idx > 0: with
trio.move_on_after(max_wait_time): await attempt_failed[target_idx - 1].wait() # kick off next attempt if target_idx + 1 < len(targets): nursery.start_soon(attempt_one, target_idx + 1, nursery) # try to connect to our target *socket_config, _, target = targets[target_idx] try: sock = trio.socket.socket(*socket_config) await sock.connect(target) # if fail, tell next attempt to go ahead except OSError: attempt_failed[target_idx].set() # if succeed, cancel other attempts and save winning socket else: nursery.cancel_scope.cancel() nonlocal winning_socket winning_socket = sock async with trio.open_nursery() as nursery: nursery.start_soon(attempt_one, 0, nursery) if winning_socket is None: raise OSError("failed") else: return winning_socket trio.run(open_connection, "debian.org", "https") ``` <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> # async/await demo cheat sheet ``` async def sleep_one(): print("I'm tired") await trio.sleep(1) print("slept!") async def sleep_twice(): await sleep_one() await sleep_one() trio.run(sleep_twice) ``` # `trio.Event` example ``` async def sleeper(event): print("sleeper: going to sleep!") await trio.sleep(5) print("sleeper: woke up! let's tell everyone") event.set() async def waiter(event, i): print(f"waiter {i}: waiting for the sleeper") await event.wait() print(f"waiter {i}: received notification!") async def main(): async with trio.open_nursery() as nursery: event = trio.Event() nursery.start_soon(sleeper, event) nursery.start_soon(waiter, event, 1) nursery.start_soon(waiter, event, 2) trio.run(main) ```
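The core of the happy-eyeballs pattern above is the staggered start: attempt N+1 begins once attempt N has failed or `max_wait_time` has elapsed, whichever comes first. A library-agnostic sketch of just that timing logic, using the standard library's `asyncio` (an analogue for illustration, not the talk's trio code, and without the winner-cancels-losers step):

```python
import asyncio

async def staggered(attempts, max_wait_time=0.05):
    """Run attempts concurrently but staggered: each one waits for its
    predecessor to fail, or for max_wait_time, before starting."""
    winners = []
    failed = [asyncio.Event() for _ in attempts]

    async def run_one(i):
        if i > 0:
            try:
                await asyncio.wait_for(failed[i - 1].wait(), max_wait_time)
            except asyncio.TimeoutError:
                pass  # predecessor is slow: start anyway
        if await attempts[i]():
            winners.append(i)
        else:
            failed[i].set()  # let the next attempt proceed immediately

    await asyncio.gather(*(run_one(i) for i in range(len(attempts))))
    return winners

async def failing():
    return False

async def succeeding():
    return True

print(asyncio.run(staggered([failing, succeeding])))  # [1]
```

In the trio version the nursery's `cancel_scope.cancel()` additionally tears down all still-running attempts as soon as one socket connects.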
# Standardization, covariance and correlation ``` import numpy as np import pandas as pd import scipy.stats import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline df = pd.read_csv('iris-data.csv', index_col=0) df.columns df.tipo_flor.value_counts() y = df['lar.petalo'] fig, axis = plt.subplots() axis.set_title('Original variable') axis.hist(y, bins=30) axis.axvline(x = np.mean(y), c='k', label='Mean', linestyle='--') axis.axvline(x = np.mean(y) + np.std(y), c='r', label='Std. dev.', linestyle='--') axis.legend() # First we center the variable: subtract the mean from each of the values of y. y = df['lar.petalo'] fig, axis = plt.subplots() axis.set_title('Centered variable') axis.hist(y - np.mean(y), bins=30) axis.axvline(x = np.mean(y - np.mean(y)), c='k', label='Mean', linestyle='--') axis.axvline(x = np.std(y), c='r', label='Std. dev.', linestyle='--') axis.legend() # Scaling (reduction) of the variable y = df['lar.petalo'] fig, axis = plt.subplots() axis.set_title('Standardized variable') axis.hist((y - np.mean(y))/np.std(y), bins=30) axis.axvline(x = np.mean((y - np.mean(y))/np.std(y)), c='k', label='Mean', linestyle='--') axis.axvline(x = np.mean((y - np.mean(y))/np.std(y)) + np.std((y - np.mean(y))/np.std(y)), c='r', label='Std. dev.', linestyle='--') axis.legend() fig, axis = plt.subplots() axis.scatter(df['lar.petalo'], df['lar.sepalo'], alpha=0.7) axis.set_xlabel('Petal length') axis.set_ylabel('Sepal length') axis.autoscale() np.cov(df['lar.petalo'], df['lar.sepalo']) # The result is a 2x2 matrix: the covariance matrix. # The relationship between the two variables is in positions (0,1) and (1,0). # Positions (0,0) and (1,1) hold the variance of each individual variable. # What we can read from this is that the variables have a positive relationship of magnitude 1.2.
# Correlation: the magnitude of the strength of the relationship df.corr(method = 'spearman') # The correlation between sepal length and petal length is strong, since it is quite close to 1. corr = df.corr(method = 'spearman') sns.heatmap(corr, xticklabels=corr.columns, yticklabels = corr.columns) # Making the plot easier to read. plt.subplots() mask = np.zeros_like(df.corr(), dtype = bool) mask[np.triu_indices_from(mask)] = True sns.heatmap(df.corr(), cmap = sns.diverging_palette(20, 220, n = 200), mask = mask, annot = True, center = 0) # Keeping the main diagonal, as a delimiter corrK = df.corr(method='kendall') mask = np.zeros_like(corrK, dtype = bool) mask[np.triu_indices(len(corrK), 1)] = True sns.heatmap(df.corr(), cmap = sns.diverging_palette(20, 220, n = 200), mask = mask, annot = True, center = 0) # Sorting by flower type plt.figure(figsize=(5,5)) sns.heatmap(df.corr()[['tipo_flor']].sort_values(by=['tipo_flor'], ascending=False).head(50), vmin=-1, annot=True) ```
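Standardization as performed above (centering, then dividing by the standard deviation) can be checked numerically: the result always has mean 0 and standard deviation 1. A minimal sketch with synthetic data (the distribution parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=2.0, size=1000)  # arbitrary synthetic sample

centered = y - np.mean(y)            # centering: the mean becomes 0
standardized = centered / np.std(y)  # reduction: the std becomes 1

print(np.isclose(np.mean(standardized), 0.0, atol=1e-12))  # True
print(np.isclose(np.std(standardized), 1.0))               # True
```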
``` # look at tools/set_up_magics.ipynb yandex_metrica_allowed = True ; get_ipython().run_cell('# one_liner_str\n\nget_ipython().run_cell_magic(\'javascript\', \'\', \n \'// setup cpp code highlighting\\n\'\n \'IPython.CodeCell.options_default.highlight_modes["text/x-c++src"] = {\\\'reg\\\':[/^%%cpp/]} ;\'\n \'IPython.CodeCell.options_default.highlight_modes["text/x-cmake"] = {\\\'reg\\\':[/^%%cmake/]} ;\'\n)\n\n# creating magics\nfrom IPython.core.magic import register_cell_magic, register_line_magic\nfrom IPython.display import display, Markdown, HTML\nimport argparse\nfrom subprocess import Popen, PIPE\nimport random\nimport sys\nimport os\nimport re\nimport signal\nimport shutil\nimport shlex\nimport glob\nimport time\n\n@register_cell_magic\ndef save_file(args_str, cell, line_comment_start="#"):\n parser = argparse.ArgumentParser()\n parser.add_argument("fname")\n parser.add_argument("--ejudge-style", action="store_true")\n args = parser.parse_args(args_str.split())\n \n cell = cell if cell[-1] == \'\\n\' or args.no_eof_newline else cell + "\\n"\n cmds = []\n with open(args.fname, "w") as f:\n f.write(line_comment_start + " %%cpp " + args_str + "\\n")\n for line in cell.split("\\n"):\n line_to_write = (line if not args.ejudge_style else line.rstrip()) + "\\n"\n if line.startswith("%"):\n run_prefix = "%run "\n if line.startswith(run_prefix):\n cmds.append(line[len(run_prefix):].strip())\n f.write(line_comment_start + " " + line_to_write)\n continue\n comment_prefix = "%" + line_comment_start\n if line.startswith(comment_prefix):\n cmds.append(\'#\' + line[len(comment_prefix):].strip())\n f.write(line_comment_start + " " + line_to_write)\n continue\n raise Exception("Unknown %%save_file subcommand: \'%s\'" % line)\n else:\n f.write(line_to_write)\n f.write("" if not args.ejudge_style else line_comment_start + r" line without \\n")\n for cmd in cmds:\n if cmd.startswith(\'#\'):\n display(Markdown("\\#\\#\\#\\# `%s`" % cmd[1:]))\n else:\n display(Markdown("Run: 
`%s`" % cmd))\n get_ipython().system(cmd)\n\n@register_cell_magic\ndef cpp(fname, cell):\n save_file(fname, cell, "//")\n \n@register_cell_magic\ndef cmake(fname, cell):\n save_file(fname, cell, "#")\n\n@register_cell_magic\ndef asm(fname, cell):\n save_file(fname, cell, "//")\n \n@register_cell_magic\ndef makefile(fname, cell):\n fname = fname or "makefile"\n assert fname.endswith("makefile")\n save_file(fname, cell.replace(" " * 4, "\\t"))\n \n@register_line_magic\ndef p(line):\n line = line.strip() \n if line[0] == \'#\':\n display(Markdown(line[1:].strip()))\n else:\n try:\n expr, comment = line.split(" #")\n display(Markdown("`{} = {}` # {}".format(expr.strip(), eval(expr), comment.strip())))\n except:\n display(Markdown("{} = {}".format(line, eval(line))))\n \n \ndef show_log_file(file, return_html_string=False):\n obj = file.replace(\'.\', \'_\').replace(\'/\', \'_\') + "_obj"\n html_string = \'\'\'\n <!--MD_BEGIN_FILTER-->\n <script type=text/javascript>\n var entrance___OBJ__ = 0;\n var errors___OBJ__ = 0;\n function halt__OBJ__(elem, color)\n {\n elem.setAttribute("style", "font-size: 14px; background: " + color + "; padding: 10px; border: 3px; border-radius: 5px; color: white; "); \n }\n function refresh__OBJ__()\n {\n entrance___OBJ__ -= 1;\n if (entrance___OBJ__ < 0) {\n entrance___OBJ__ = 0;\n }\n var elem = document.getElementById("__OBJ__");\n if (elem) {\n var xmlhttp=new XMLHttpRequest();\n xmlhttp.onreadystatechange=function()\n {\n var elem = document.getElementById("__OBJ__");\n console.log(!!elem, xmlhttp.readyState, xmlhttp.status, entrance___OBJ__);\n if (elem && xmlhttp.readyState==4) {\n if (xmlhttp.status==200)\n {\n errors___OBJ__ = 0;\n if (!entrance___OBJ__) {\n if (elem.innerHTML != xmlhttp.responseText) {\n elem.innerHTML = xmlhttp.responseText;\n }\n if (elem.innerHTML.includes("Process finished.")) {\n halt__OBJ__(elem, "#333333");\n } else {\n entrance___OBJ__ += 1;\n console.log("req");\n window.setTimeout("refresh__OBJ__()", 
300); \n }\n }\n return xmlhttp.responseText;\n } else {\n errors___OBJ__ += 1;\n if (!entrance___OBJ__) {\n if (errors___OBJ__ < 6) {\n entrance___OBJ__ += 1;\n console.log("req");\n window.setTimeout("refresh__OBJ__()", 300); \n } else {\n halt__OBJ__(elem, "#994444");\n }\n }\n }\n }\n }\n xmlhttp.open("GET", "__FILE__", true);\n xmlhttp.setRequestHeader("Cache-Control", "no-cache");\n xmlhttp.send(); \n }\n }\n \n if (!entrance___OBJ__) {\n entrance___OBJ__ += 1;\n refresh__OBJ__(); \n }\n </script>\n\n <p id="__OBJ__" style="font-size: 14px; background: #000000; padding: 10px; border: 3px; border-radius: 5px; color: white; ">\n </p>\n \n </font>\n <!--MD_END_FILTER-->\n <!--MD_FROM_FILE __FILE__.md -->\n \'\'\'.replace("__OBJ__", obj).replace("__FILE__", file)\n if return_html_string:\n return html_string\n display(HTML(html_string))\n\n \nclass TInteractiveLauncher:\n tmp_path = "./interactive_launcher_tmp"\n def __init__(self, cmd):\n try:\n os.mkdir(TInteractiveLauncher.tmp_path)\n except:\n pass\n name = str(random.randint(0, 1e18))\n self.inq_path = os.path.join(TInteractiveLauncher.tmp_path, name + ".inq")\n self.log_path = os.path.join(TInteractiveLauncher.tmp_path, name + ".log")\n \n os.mkfifo(self.inq_path)\n open(self.log_path, \'w\').close()\n open(self.log_path + ".md", \'w\').close()\n\n self.pid = os.fork()\n if self.pid == -1:\n print("Error")\n if self.pid == 0:\n exe_cands = glob.glob("../tools/launcher.py") + glob.glob("../../tools/launcher.py")\n assert(len(exe_cands) == 1)\n assert(os.execvp("python3", ["python3", exe_cands[0], "-l", self.log_path, "-i", self.inq_path, "-c", cmd]) == 0)\n self.inq_f = open(self.inq_path, "w")\n interactive_launcher_opened_set.add(self.pid)\n show_log_file(self.log_path)\n\n def write(self, s):\n s = s.encode()\n assert len(s) == os.write(self.inq_f.fileno(), s)\n \n def get_pid(self):\n n = 100\n for i in range(n):\n try:\n return int(re.findall(r"PID = (\\d+)", open(self.log_path).readline())[0])\n 
except:\n if i + 1 == n:\n raise\n time.sleep(0.1)\n \n def input_queue_path(self):\n return self.inq_path\n \n def wait_stop(self, timeout):\n for i in range(int(timeout * 10)):\n wpid, status = os.waitpid(self.pid, os.WNOHANG)\n if wpid != 0:\n return True\n time.sleep(0.1)\n return False\n \n def close(self, timeout=3):\n self.inq_f.close()\n if not self.wait_stop(timeout):\n os.kill(self.get_pid(), signal.SIGKILL)\n os.waitpid(self.pid, 0)\n os.remove(self.inq_path)\n # os.remove(self.log_path)\n self.inq_path = None\n self.log_path = None \n interactive_launcher_opened_set.remove(self.pid)\n self.pid = None\n \n @staticmethod\n def terminate_all():\n if "interactive_launcher_opened_set" not in globals():\n globals()["interactive_launcher_opened_set"] = set()\n global interactive_launcher_opened_set\n for pid in interactive_launcher_opened_set:\n print("Terminate pid=" + str(pid), file=sys.stderr)\n os.kill(pid, signal.SIGKILL)\n os.waitpid(pid, 0)\n interactive_launcher_opened_set = set()\n if os.path.exists(TInteractiveLauncher.tmp_path):\n shutil.rmtree(TInteractiveLauncher.tmp_path)\n \nTInteractiveLauncher.terminate_all()\n \nyandex_metrica_allowed = bool(globals().get("yandex_metrica_allowed", False))\nif yandex_metrica_allowed:\n display(HTML(\'\'\'<!-- YANDEX_METRICA_BEGIN -->\n <script type="text/javascript" >\n (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)};\n m[i].l=1*new Date();k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)})\n (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym");\n\n ym(59260609, "init", {\n clickmap:true,\n trackLinks:true,\n accurateTrackBounce:true\n });\n </script>\n <noscript><div><img src="https://mc.yandex.ru/watch/59260609" style="position:absolute; left:-9999px;" alt="" /></div></noscript>\n <!-- YANDEX_METRICA_END -->\'\'\'))\n\ndef make_oneliner():\n html_text = \'("В этот ноутбук встроен код Яндекс Метрики для 
сбора статистики использований. Если вы не хотите, чтобы по вам собиралась статистика, исправьте: yandex_metrica_allowed = False" if yandex_metrica_allowed else "")\'\n html_text += \' + "<""!-- MAGICS_SETUP_PRINTING_END -->"\'\n return \'\'.join([\n \'# look at tools/set_up_magics.ipynb\\n\',\n \'yandex_metrica_allowed = True ; get_ipython().run_cell(%s);\' % repr(one_liner_str),\n \'display(HTML(%s))\' % html_text,\n \' #\'\'MAGICS_SETUP_END\'\n ])\n \n\n');display(HTML(("В этот ноутбук встроен код Яндекс Метрики для сбора статистики использований. Если вы не хотите, чтобы по вам собиралась статистика, исправьте: yandex_metrica_allowed = False" if yandex_metrica_allowed else "") + "<""!-- MAGICS_SETUP_PRINTING_END -->")) #MAGICS_SETUP_END ``` # Ints & Floats <table width=100%> <tr> <th width=20%> <b>Seminar video recording &rarr; </b> </th> <th> <a href="https://youtu.be/MTDgiATnXlc"> <img src="video.jpg" width="320" height="160" align="left" alt="Seminar video"> </a> </th> <th> </th> </tr> </table> [Yakovlev's reading: Integer arithmetic](https://github.com/victor-yacovlev/mipt-diht-caos/tree/master/practice/integers) <br>[Yakovlev's reading: Floating-point arithmetic](https://github.com/victor-yacovlev/mipt-diht-caos/tree/master/practice/ieee754) On today's agenda: * <a href="#int" style="color:#856024"> Integers </a> * <a href="#float" style="color:#856024"> Floating-point numbers </a> ## <a name="int"></a> Integers When performing `+`, `-`, `*` on the standard integer types, we are effectively working in $\mathbb{Z}_{2^k}$, where $k$ is the number of bits in the number. Moreover, this holds for both signed and unsigned numbers: the processor executes the same instruction to add signed and unsigned values.
``` k = 3 # min 0, max 7 = (1 << 3) - 1 m = (1 << k) def normalize(x): return ((x % m) + m) % m def format_n(x): x = normalize(x) return "unsigned %d, signed % 2d, bytes %s" % (x, x if x < (m >> 1) else x - m, bin(x + m)[3:]) for i in range(0, m): print("i=%d -> %s" % (i, format_n(i))) def show_add(a, b): print("%d + %d = %d" % (a, b, a + b)) print(" (%s) + (%s) = (%s)" % (format_n(a), format_n(b), format_n(a + b))) show_add(2, 1) show_add(2, -1) def show_mul(a, b): print("%d * %d = %d" % (a, b, a * b)) print(" (%s) * (%s) = (%s)" % (format_n(a), format_n(b), format_n(a * b))) show_mul(2, 3) show_mul(-2, -3) show_mul(-1, -1) ``` There are some subtleties, though. If you write C/C++ code, the compiler treats **signed overflow as UB** (undefined behavior). Unsigned overflow, in contrast, is a legal operation in which the high bits are simply discarded (or the value is taken modulo $2^k$, or the operation is simply performed in $\mathbb{Z}_{2^k}$ - pick whichever way of looking at it you find most convenient).
``` %%cpp lib.c %run gcc -O3 -shared -fPIC lib.c -o lib.so int check_increment(int x) { return x + 1 > x; } int unsigned_check_increment(unsigned int x) { return x + 1 > x; } import ctypes int32_max = (1 << 31) - 1 uint32_max = (1 << 32) - 1 lib = ctypes.CDLL("./lib.so") lib.check_increment.argtypes = [ctypes.c_int] lib.unsigned_check_increment.argtypes = [ctypes.c_uint] %p lib.check_increment(1) %p lib.check_increment(int32_max) %p lib.unsigned_check_increment(1) %p lib.unsigned_check_increment(uint32_max) !gdb lib.so -batch -ex="disass check_increment" -ex="disass unsigned_check_increment" # gcc's UB sanitizer does not catch this :| # clang, however, does :) !clang -O0 -shared -fPIC -fsanitize=undefined lib.c -o lib_ubsan.so %%save_file run_ub.py %run LD_PRELOAD=$(gcc -print-file-name=libubsan.so) python3 run_ub.py import ctypes int32_max = (1 << 31) - 1 lib = ctypes.CDLL("./lib_ubsan.so") lib.check_increment.argtypes = [ctypes.c_int] print(lib.check_increment(int32_max)) %%cpp code_sample // an imaginary situation in which overflow is undesirable isize = 100000 n, m = 100000 for (int i = 0; i < isize && i < saturation_multiplication(n, m); ++i) { } ``` Sometimes you want to handle overflow in a sensible way, for example with saturation: ``` %%cpp main.c %run gcc -O3 main.c -o a.exe %run ./a.exe #include <assert.h> #include <stdint.h> unsigned int satsum(unsigned int x, unsigned int y) { unsigned int z; // A builtin that handles the overflow flag set by the processor and returns it explicitly if (__builtin_uadd_overflow(x, y, &z)) { return ~0u; } return z; } int main() { assert(satsum(2000000000L, 2000000000L) == 4000000000L); assert(satsum(4000000000L, 4000000000L) == (unsigned int)-1); return 0; } ``` For comparison and division of integers, it unambiguously matters whether they are signed or not.
``` %%cpp lib2.c %run gcc -O3 -shared -fPIC lib2.c -o lib2.so typedef unsigned int uint; int sum(int x, int y) { return x + y; } uint usum(uint x, uint y) { return x + y; } int mul(int x, int y) { return x * y; } uint umul(uint x, uint y) { return x * y; } int cmp(int x, int y) { return x < y; } int ucmp(uint x, uint y) { return x < y; } int div(int x, int y) { return x / y; } int udiv(uint x, uint y) { return x / y; } # The sum and usum functions are identical !gdb lib2.so -batch -ex="disass sum" -ex="disass usum" !gdb lib2.so -batch -ex="disass mul" -ex="disass umul" # The cmp and ucmp functions differ! !gdb lib2.so -batch -ex="disass cmp" -ex="disass ucmp" !gdb lib2.so -batch -ex="disass div" -ex="disass udiv" ``` ## <a name="size"></a> On int sizes and signedness Installing the various prerequisites. For `-m32`: `sudo apt-get install g++-multilib libc6-dev-i386` For `qemu-arm`: `sudo apt-get install qemu-system-arm qemu-user` `sudo apt-get install lib32z1` To build and run for ARM: `wget http://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/arm-linux-gnueabi/gcc-linaro-7.3.1-2018.05-i686_arm-linux-gnueabi.tar.xz` `tar xvf gcc-linaro-7.3.1-2018.05-i686_arm-linux-gnueabi.tar.xz` ``` # Add path to compilers to PATH import os os.environ["PATH"] = os.environ["PATH"] + ":" + \ "/home/pechatnov/arm/gcc-linaro-7.3.1-2018.05-i686_arm-linux-gnueabi/bin" %%cpp size.c %// Compile the usual way %run gcc size.c -o size.exe %run ./size.exe %// For the 32-bit architecture %run gcc -m32 size.c -o size.exe %run ./size.exe %// For ARM %run arm-linux-gnueabi-gcc -marm size.c -o size.exe %run qemu-arm -L ~/arm/gcc-linaro-7.3.1-2018.05-i686_arm-linux-gnueabi/arm-linux-gnueabi/libc ./size.exe #include <stdio.h> int main() { printf("is char signed = %d, ", (int)((char)(-1) < 0)); printf("sizeof(long int) = %d\n", (int)sizeof(long int)); } ``` What conclusion can we draw from this?
If you want a type with predictable behavior, use types with a fixed size and signedness: uint64_t and the like.

## <a name="bit"></a> Bitwise operations

`^`, `|`, `&`, `~`, `>>`, `<<`

```
a = 0b0110

def my_bin(x, digits=4):
    m = (1 << digits)
    x = ((x % m) + m) % m
    return bin(x + m).replace('0b1', '0b')

%p my_bin(     a)  # A 4-bit number
%p my_bin(    ~a)  # Its bitwise negation
%p my_bin(a >> 1)  # Shifted right by 1
%p my_bin(a << 1)  # Shifted left by 1

x = 0b0011
y = 0b1001
%p my_bin(x    )  # X
%p my_bin(y    )  # Y
%p my_bin(x | y)  # Bitwise OR
%p my_bin(x ^ y)  # Bitwise XOR
%p my_bin(x & y)  # Bitwise AND
```

Exercises:
1. Extract bit `i` of the number `a`
2. Set bit `i` in the integer `a`
3. Clear bit `i` in the integer `a`
4. Flip bit `i` in the integer `a`
5. Extract bits `i` through `j` (exclusive) of `a` as an unsigned number
6. Copy the low-order bits of `b` into bits `i` through `j` (exclusive) of `a`

<details>
<summary> Solutions from the seminar </summary>
<pre>
<code>
1. (a >> i) & 1u
2. a | (1u << i)
3. a & ~(1u << i)
4. a ^ (1u << i)
5. (a >> i) & ((1u << (j - i)) - 1u)
6.
   i = 2, j = 5
   a=0b00011000
   b=0b01010101
   m = (1u << (j - i)) - 1u
   (a & ~(m << i)) | ((b & m) << i)
</code>
</pre>
</details>

## <a name="float"></a> Floating-point numbers

Let's just look at the bit representations of floating-point numbers and find the patterns :)

```
%%cpp stand.h
// No need to dig into this, it is just a bit printer
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <string.h>

#define EXTRA_INFO  // enables more detailed output

#if defined(EXTRA_INFO)
#define IS_VLINE_POINT(i) (i == 63 || i == 52)
#define DESCRIBE(d) describe_double(d)

typedef union {
    double double_val;
    struct {
        uint64_t mantissa_val : 52;
        uint64_t exp_val : 11;
        uint64_t sign_val : 1;
    };
} double_parser_t;

void describe_double(double x) {
    double_parser_t parser = {.double_val = x};
    printf(" (-1)^%d * 2^(%d) * 0x1.%013llx",
           (int)parser.sign_val,
           (int)(parser.exp_val - 1023),
           (long long unsigned int)parser.mantissa_val);
}
#else
#define IS_VLINE_POINT(i) 0
#define DESCRIBE(d) (void)(d)
#endif

inline uint64_t bits_of_double(double d) {
    uint64_t result;
    memcpy(&result, &d, sizeof(result));
    return result;
}

inline void print_doubles(double* dds) {
    char line_1[70] = {0}, line_2[70] = {0}, hline[70] = {0};
    int j = 0;
    for (int i = 63; i >= 0; --i) {
        line_1[j] = (i % 10 == 0) ? ('0' + (i / 10)) : ' ';
        line_2[j] = '0' + (i % 10);
        hline[j] = '-';
        ++j;
        if (IS_VLINE_POINT(i)) {
            line_1[j] = line_2[j] = '|';
            hline[j] = '-';
            ++j;
        }
    }
    printf("Bit numbers: %s\n", line_1);
    printf("             %s (-1)^S * 2^(E-B) * (1+M/(2^Mbits))\n", line_2);
    printf("             %s\n", hline);
    for (double* d = dds; *d; ++d) {
        printf("%10.4lf ", *d);
        uint64_t m = bits_of_double(*d);
        for (int i = 63; i >= 0; --i) {
            printf("%d", (int)((m >> i) & 1));
            if (IS_VLINE_POINT(i)) {
                printf("|");
            }
        }
        DESCRIBE(*d);
        printf("\n");
    }
}
```

##### Let's look at pairs of numbers x and -x

```
%%cpp stand.cpp
%run gcc stand.cpp -o stand.exe
%run ./stand.exe
#include "stand.h"

int main() {
    double dd[] = {1, -1, 132, -132, 3.1415, -3.1415, 0};
    print_doubles(dd);
}
```

##### Let's look at powers of 2

```
%%cpp stand.cpp
%run gcc stand.cpp -o stand.exe
%run ./stand.exe
#include "stand.h"

int main() {
    double dd[] = {0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 0};
    print_doubles(dd);
}
```

##### Let's look at numbers of the form $ 1 + i \cdot 2^{(-k)}$

```
%%cpp stand.cpp
%run gcc stand.cpp -o stand.exe
%run ./stand.exe
#include "stand.h"

int main() {
    double t8 = 1.0 / 8;
    double dd[] = {1 + 0 * t8, 1 + 1 * t8, 1 + 2 * t8, 1 + 3 * t8,
                   1 + 4 * t8, 1 + 5 * t8, 1 + 6 * t8, 1 + 7 * t8, 0};
    print_doubles(dd);
}
```

##### Let's look at numbers of the form $ 1 + i \cdot 2^{(-52)}$

```
%%cpp stand.cpp
%run gcc stand.cpp -o stand.exe
%run ./stand.exe
#include "stand.h"

int main() {
    double eps = 1.0 / (1LL << 52);
    double dd[] = {1 + 0 * eps, 1 + 1 * eps, 1 + 2 * eps, 1 + 3 * eps, 1 + 4 * eps, 0};
    print_doubles(dd);
}
```

##### Let's look at substantially different double values

```
%%cpp stand.cpp
%run gcc stand.cpp -o stand.exe
%run ./stand.exe
#include <math.h>
#include "stand.h"

int main() {
    double dd[] = {1.5, 100, NAN, -NAN, 0.0 / 0.0, INFINITY, -INFINITY, 0};
    print_doubles(dd);
}
```

I hope the examples got the idea across.
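The same sign/exponent/mantissa layout can be verified from Python with the `struct` module (a standalone sketch, not part of the C stand above):

```python
import struct

def double_fields(x):
    """Split an IEEE 754 double into (sign, biased exponent, 52-bit mantissa)."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    return bits >> 63, (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)

# 1.0  = (-1)^0 * 2^(1023-1023) * 1.0 -> sign 0, biased exponent 1023, mantissa 0
# -2.0 = (-1)^1 * 2^(1024-1023) * 1.0 -> sign 1, biased exponent 1024, mantissa 0
print(double_fields(1.0))   # (0, 1023, 0)
print(double_fields(-2.0))  # (1, 1024, 0)
```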
For more on the theory, see [Yakovlev's reading notes on floating-point arithmetic](https://github.com/victor-yacovlev/mipt-diht-caos/tree/master/practice/ieee754)

# Addendum on bitcast

```
%%cpp bitcast.c
%run gcc -O2 -Wall bitcast.c -o bitcast.exe
%run ./bitcast.exe
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

uint64_t bit_cast_memcpy(double d) {
    uint64_t result;
    memcpy(&result, &d, sizeof(result));  // Rock-solid approach, but slightly harder for the optimizer
    return result;
}

typedef union {
    double double_val;
    uint64_t ui64_val;
} converter_t;

uint64_t bit_cast_union(double d) {
    converter_t conv;
    conv.double_val = d;
    return conv.ui64_val;
    //return ((converter_t){.double_val = d}).ui64_val;  // Seems (?) like a good solution
}

uint64_t bit_cast_ptr(double d) {
    return *(uint64_t*)(void*)&d;  // Simple, but dubious because of aliasing rules
}

int main() {
    double d = 3.15;
    printf("%" PRId64 "\n", bit_cast_memcpy(d));
    printf("%" PRId64 "\n", bit_cast_union(d));
    printf("%" PRId64 "\n", bit_cast_ptr(d));
}

!gdb bitcast.exe -batch -ex="disass bit_cast_memcpy" -ex="disass bit_cast_union" -ex="disass bit_cast_ptr"
```

In such cases I would recommend memcpy. [That is how std::bit_cast does it](https://en.cppreference.com/w/cpp/numeric/bit_cast)

[On C++ aliasing, clever optimizations, and sneaky bugs / Habr](https://habr.com/ru/post/114117/)
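The same bitcast is available in Python through numpy's `view`, which reinterprets the underlying bytes without copying (an illustrative sketch, cross-checked against `struct`):

```python
import numpy as np
import struct

d = np.array([3.15], dtype=np.float64)
# Reinterpret the 8 bytes of the double as a uint64: a bitcast, not a value conversion
bits = d.view(np.uint64)[0]
print(hex(int(bits)))

# struct packs/unpacks the same bytes (native byte order), so the values must agree
assert int(bits) == struct.unpack("=Q", struct.pack("=d", 3.15))[0]
```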
# Optimization with scipy.optimize

When we want to optimize something, we of course do not need to start everything from scratch. It is good to know how algorithms work, but if the development of new algorithms is not the main point, then one can just use packages and libraries that have been premade. In Python, there are multiple packages for optimization. In this lecture, we are going to take a look at the *scipy.optimize* package.

## Starting up

When we want to study a package in Python, we can import it:

```
from scipy.optimize import minimize
```

If we want to see the documentation, we can write the name of the function followed by two question marks and hit enter:

```
minimize??
```

## Optimization of multiple variables

Let us define again our friendly objective function:

```
def f_simple(x):
    return (x[0] - 10.0)**2 + (x[1] + 5.0)**2 + x[0]**2
```

### Method: `Nelder-Mead`

The documentation has the following to say:
<pre>
    Method :ref:`Nelder-Mead <optimize.minimize-neldermead>` uses the
    Simplex algorithm [1]_, [2]_. This algorithm has been successful
    in many applications but other algorithms using the first and/or
    second derivatives information might be preferred for their better
    performances and robustness in general.
    ...
    References
    ----------
    .. [1] Nelder, J A, and R Mead. 1965. A Simplex Method for Function
        Minimization. The Computer Journal 7: 308-13.
    .. [2] Wright M H. 1996. Direct search methods: Once scorned, now
        respectable, in Numerical Analysis 1995: Proceedings of the 1995
        Dundee Biennial Conference in Numerical Analysis (Eds. D F
        Griffiths and G A Watson). Addison Wesley Longman, Harlow, UK.
        191-208.
</pre>

```
res = minimize(f_simple, [0, 0], method='Nelder-Mead',
               options={'disp': True})
print(res.x)
print(type(res))
print(res)
print(res.message)
```

### Method: `CG`

The documentation has the following to say:
<pre>
    Method :ref:`CG <optimize.minimize-cg>` uses a nonlinear conjugate
    gradient algorithm by Polak and Ribiere, a variant of the
    Fletcher-Reeves method described in [5]_ pp. 120-122. Only the
    first derivatives are used.
    ...
    References
    ----------
    ...
    .. [5] Nocedal, J, and S J Wright. 2006. Numerical Optimization.
        Springer New York.
</pre>

The conjugate gradient method needs the gradient. The documentation has the following to say:
<pre>
    jac : bool or callable, optional
        Jacobian (gradient) of objective function. Only for CG, BFGS,
        Newton-CG, L-BFGS-B, TNC, SLSQP, dogleg, trust-ncg. If `jac` is a
        Boolean and is True, `fun` is assumed to return the gradient along
        with the objective function. If False, the gradient will be
        estimated numerically. `jac` can also be a callable returning the
        gradient of the objective. In this case, it must accept the same
        arguments as `fun`.
</pre>

### Estimating the gradient numerically:

```
import numpy as np
res = minimize(f_simple, [0, 0], method='CG',  # Conjugate gradient method
               options={'disp': True})
print(res.x)
```

### Giving the gradient with ad

```
import ad
res = minimize(f_simple, [0, 0], method='CG',  # Conjugate gradient method
               options={'disp': True}, jac=ad.gh(f_simple)[0])
print(res.x)
```

### Method: `Newton-CG`

The Newton-CG method uses a Newton-CG algorithm [5] pp. 168 (also known as the truncated Newton method). It uses a CG method to compute the search direction. See also the *TNC* method for a box-constrained minimization with a similar algorithm.

References
----------
.. [5] Nocedal, J, and S J Wright. 2006. Numerical Optimization. Springer New York.

The Newton-CG algorithm needs the Jacobian and the Hessian. The documentation has the following to say:
<pre>
    hess, hessp : callable, optional
        Hessian (matrix of second-order derivatives) of objective function
        or Hessian of objective function times an arbitrary vector p. Only
        for Newton-CG, dogleg, trust-ncg. Only one of `hessp` or `hess`
        needs to be given. If `hess` is provided, then `hessp` will be
        ignored. If neither `hess` nor `hessp` is provided, then the
        Hessian product will be approximated using finite differences on
        `jac`. `hessp` must compute the Hessian times an arbitrary vector.
</pre>

### Trying without reading the documentation

```
import numpy as np
res = minimize(f_simple, [0, 0], method='Newton-CG',  # Newton-CG method
               options={'disp': True})
print(res.x)
```

### Giving the gradient

```
import ad
res = minimize(f_simple, [0, 0], method='Newton-CG',  # Newton-CG method
               options={'disp': True}, jac=ad.gh(f_simple)[0])
print(res.x)
```

### Giving also the hessian

```
import ad
res = minimize(f_simple, [0, 0], method='Newton-CG',  # Newton-CG method
               options={'disp': True}, jac=ad.gh(f_simple)[0],
               hess=ad.gh(f_simple)[1])
print(res.x)
```

## Line search

```
def f_singlevar(x):
    return 2 + (1 - x)**2

from scipy.optimize import minimize_scalar
minimize_scalar??
```

### Method: `Golden`

The documentation has the following to say:
<pre>
    Method :ref:`Golden <optimize.minimize_scalar-golden>` uses the
    golden section search technique. It uses analog of the bisection
    method to decrease the bracketed interval. It is usually
    preferable to use the *Brent* method.
</pre>

```
minimize_scalar(f_singlevar, method='golden', tol=0.00001)
```

### Method: `Brent`

The documentation has the following to say about the Brent method:

Method *Brent* uses Brent's algorithm to find a local minimum. The algorithm uses inverse parabolic interpolation when possible to speed up convergence of the golden section method.

```
minimize_scalar(f_singlevar, method='brent')
```
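For comparison, here is a short self-contained sketch running the three multivariate methods on the same `f_simple`, with a hand-written gradient instead of the `ad` package; the exact minimizer of $(x_0-10)^2 + (x_1+5)^2 + x_0^2$ is $(5, -5)$:

```python
import numpy as np
from scipy.optimize import minimize

def f_simple(x):
    return (x[0] - 10.0)**2 + (x[1] + 5.0)**2 + x[0]**2

def grad(x):
    # d/dx0 = 2(x0 - 10) + 2*x0,  d/dx1 = 2(x1 + 5)
    return np.array([2 * (x[0] - 10.0) + 2 * x[0], 2 * (x[1] + 5.0)])

for method in ("Nelder-Mead", "CG", "Newton-CG"):
    # Newton-CG requires a gradient; CG converges faster with one
    jac = grad if method in ("CG", "Newton-CG") else None
    res = minimize(f_simple, [0, 0], method=method, jac=jac)
    print(method, res.x)  # each should approach the true minimizer (5, -5)
```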
![terrainbento logo](../../../../media/terrainbento_logo.png)

# terrainbento model BasicRt steady-state solution

This model shows example usage of the BasicRt model from the TerrainBento package. BasicRt modifies Basic by allowing for two lithologies:

$\frac{\partial \eta}{\partial t} = - K(\eta,\eta_C) Q^{1/2}S + D\nabla^2 \eta$

$K(\eta, \eta_C ) = w K_1 + (1 - w) K_2$

$w = \frac{1}{1+\exp \left( -\frac{(\eta -\eta_C )}{W_c}\right)}$

where $Q$ is the local stream discharge, $S$ is the local slope, $W_c$ is the contact-zone width, $K_1$ and $K_2$ are the erodibilities of the upper and lower lithologies, and $D$ is the regolith transport parameter. $w$ is a weight used to calculate the effective erodibility $K(\eta, \eta_C)$ based on the depth to the contact zone and the width of the contact zone. Refer to [Barnhart et al. (2019)](https://www.geosci-model-dev.net/12/1267/2019/) for further explanation.

For detailed information about creating a BasicRt model, see [the detailed documentation](https://terrainbento.readthedocs.io/en/latest/source/terrainbento.derived_models.model_basicRt.html).

This notebook (a) shows the initialization and running of this model, (b) saves a NetCDF file of the topography, which we will use to make an oblique Paraview image of the landscape, and (c) creates a slope-area plot at steady state.

```
# import required modules
from terrainbento import BasicRt, Clock, NotCoreNodeBaselevelHandler
from landlab.io.netcdf import write_netcdf
from landlab.values import random
import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from landlab import imshow_grid, RasterModelGrid

np.random.seed(4897)

# Ignore warnings
import warnings
warnings.filterwarnings('ignore')

# in this example we will create a grid, a clock, a boundary handler,
# and an output writer. We will then use these to construct the model.
grid = RasterModelGrid((25, 40), xy_spacing=40)
z = random(grid, "topographic__elevation", where="CORE_NODE")

contact = grid.add_zeros("node", "lithology_contact__elevation")
contact[grid.node_y > 500.0] = 10.
contact[grid.node_y <= 500.0] = -100000000.

clock = Clock(start=0, step=10, stop=1e7)
ncnblh = NotCoreNodeBaselevelHandler(grid, modify_core_nodes=True, lowering_rate=-0.001)

# the tolerance here is high, so that this can run on binder and for tests.
# (recommended value = 0.001 or lower).
tolerance = 20.0

# we can use an output writer to run until the model reaches steady state.
class run_to_steady(object):
    def __init__(self, model):
        self.model = model
        self.last_z = self.model.z.copy()
        self.tolerance = tolerance

    def run_one_step(self):
        if model.model_time > 0:
            diff = (self.model.z[model.grid.core_nodes] -
                    self.last_z[model.grid.core_nodes])
            if max(abs(diff)) <= self.tolerance:
                self.model.clock.stop = model._model_time
                print("Model reached steady state in " +
                      str(model._model_time) + " time units\n")
            else:
                self.last_z = self.model.z.copy()
                if model._model_time <= self.model.clock.stop - self.model.output_interval:
                    self.model.clock.stop += self.model.output_interval

# initialize the model by passing the correct arguments and
# keyword arguments.
model = BasicRt(clock,
                grid,
                boundary_handlers={"NotCoreNodeBaselevelHandler": ncnblh},
                output_interval=1e4,
                save_first_timestep=True,
                output_prefix="output/basicRt",
                fields=["topographic__elevation"],
                water_erodibility_lower=0.001,
                water_erodibility_upper=0.01,
                m_sp=0.5,
                n_sp=1.0,
                regolith_transport_parameter=0.1,
                contact_zone__width=1.0,
                output_writers={"class": [run_to_steady]})

# to run the model as specified, execute the following line:
model.run()

# MAKE SLOPE-AREA PLOT

# plot nodes that are not on the boundary or adjacent to it
core_not_boundary = np.array(
    model.grid.node_has_boundary_neighbor(model.grid.core_nodes)) == False
plotting_nodes = model.grid.core_nodes[core_not_boundary]
upper_plotting_nodes = plotting_nodes[model.grid.node_y[plotting_nodes] > 500.0]
lower_plotting_nodes = plotting_nodes[model.grid.node_y[plotting_nodes] < 500.0]

# assign area_array and slope_array for ROCK
area_array_upper = model.grid.at_node["drainage_area"][upper_plotting_nodes]
slope_array_upper = model.grid.at_node["topographic__steepest_slope"][upper_plotting_nodes]

# assign area_array and slope_array for TILL
area_array_lower = model.grid.at_node["drainage_area"][lower_plotting_nodes]
slope_array_lower = model.grid.at_node["topographic__steepest_slope"][lower_plotting_nodes]

# instantiate figure and plot
fig = plt.figure(figsize=(6, 3.75))
slope_area = plt.subplot()

# plot the data for ROCK
slope_area.scatter(area_array_lower,
                   slope_array_lower,
                   marker="s",
                   edgecolor="0",
                   color="1",
                   label="Model BasicRt, Lower Layer")

# plot the data for TILL
slope_area.scatter(area_array_upper,
                   slope_array_upper,
                   color="k",
                   label="Model BasicRt, Upper Layer")

# make axes log and set limits
slope_area.set_xscale("log")
slope_area.set_yscale("log")
slope_area.set_xlim(9 * 10**1, 3 * 10**5)
slope_area.set_ylim(1e-5, 1e-1)

# set x and y labels
slope_area.set_xlabel(r"Drainage area [m$^2$]")
slope_area.set_ylabel("Channel slope [-]")

slope_area.legend(scatterpoints=1, prop={"size": 12})
slope_area.tick_params(axis="x", which="major", pad=7)
plt.show()

# # Save stack of all netcdfs for Paraview to use.
# model.save_to_xarray_dataset(filename="basicRt.nc",
#                              time_unit="years",
#                              reference_time="model start",
#                              space_unit="meters")

# remove temporary netcdfs
model.remove_output_netcdfs()

# make a plot of the final steady state topography
plt.figure()
imshow_grid(model.grid, "topographic__elevation", cmap='terrain',
            grid_units=("m", "m"), var_name="Elevation (m)")
plt.show()
```

## Challenge

Create a LEM with a hard bedrock layer underlying a soft cover in the eastern side of the domain or (extra challenge) as a square in the middle of the domain.

## Next Steps

- [Welcome page](../Welcome_to_TerrainBento.ipynb)

- There are three additional introductory tutorials:

    1) [Introduction terrainbento](../example_usage/Introduction_to_terrainbento.ipynb)

    2) [Introduction to boundary conditions in terrainbento](../example_usage/introduction_to_boundary_conditions.ipynb)

    3) [Introduction to output writers in terrainbento](../example_usage/introduction_to_output_writers.ipynb).

- Five examples of steady state behavior in coupled process models can be found in the following notebooks:

    1) [Basic](model_basic_steady_solution.ipynb) the simplest landscape evolution model in the terrainbento package.

    2) [BasicVm](model_basic_var_m_steady_solution.ipynb) which permits the drainage area exponent to change

    3) [BasicCh](model_basicCh_steady_solution.ipynb) which uses a non-linear hillslope erosion and transport law

    4) [BasicVs](model_basicVs_steady_solution.ipynb) which uses variable source area hydrology

    5) **This Notebook**: [BasicRt](model_basicRt_steady_solution.ipynb) which allows for two lithologies with different K values

    6) [RealDEM](model_basic_realDEM.ipynb) Run the basic terrainbento model with a real DEM as initial condition.
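The contact-zone weighting from the equations at the top of this notebook can be sketched in plain numpy (a standalone illustration of the math, not terrainbento's internal code):

```python
import numpy as np

def effective_erodibility(eta, eta_C, W_c, K1, K2):
    """Depth-weighted erodibility K(eta, eta_C) = w*K1 + (1-w)*K2 from BasicRt."""
    w = 1.0 / (1.0 + np.exp(-(eta - eta_C) / W_c))
    return w * K1 + (1.0 - w) * K2

# Far above the contact the upper erodibility K1 dominates (w -> 1);
# far below it, the lower erodibility K2 dominates (w -> 0).
print(effective_erodibility(np.array([100.0, -100.0]), 0.0, 1.0, 0.01, 0.001))
```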
# 100 numpy exercises This is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach. If you find an error or think you've a better way to solve some of them, feel free to open an issue at <https://github.com/rougier/numpy-100> #### 1. Import the numpy package under the name `np` (★☆☆) ``` import numpy as np ``` #### 2. Print the numpy version and the configuration (★☆☆) ``` print(np.__version__) np.show_config() ``` #### 3. Create a null vector of size 10 (★☆☆) ``` Z = np.zeros(10) print(Z) ``` #### 4. How to find the memory size of any array (★☆☆) ``` Z = np.zeros((10,10)) print("%d bytes" % (Z.size * Z.itemsize)) ``` #### 5. How to get the documentation of the numpy add function from the command line? (★☆☆) ``` %run `python -c "import numpy; numpy.info(numpy.add)"` ``` #### 6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆) ``` Z = np.zeros(10) Z[4] = 1 print(Z) ``` #### 7. Create a vector with values ranging from 10 to 49 (★☆☆) ``` Z = np.arange(10,50) print(Z) ``` #### 8. Reverse a vector (first element becomes last) (★☆☆) ``` Z = np.arange(50) Z = Z[::-1] print(Z) ``` #### 9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆) ``` Z = np.arange(9).reshape(3,3) print(Z) ``` #### 10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆) ``` nz = np.nonzero([1,2,0,0,4,0]) print(nz) ``` #### 11. Create a 3x3 identity matrix (★☆☆) ``` Z = np.eye(3) print(Z) ``` #### 12. Create a 3x3x3 array with random values (★☆☆) ``` Z = np.random.random((3,3,3)) print(Z) ``` #### 13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆) ``` Z = np.random.random((10,10)) Zmin, Zmax = Z.min(), Z.max() print(Zmin, Zmax) ``` #### 14. 
Create a random vector of size 30 and find the mean value (★☆☆) ``` Z = np.random.random(30) m = Z.mean() print(m) ``` #### 15. Create a 2d array with 1 on the border and 0 inside (★☆☆) ``` Z = np.ones((10,10)) Z[1:-1,1:-1] = 0 print(Z) ``` #### 16. How to add a border (filled with 0's) around an existing array? (★☆☆) ``` Z = np.ones((5,5)) Z = np.pad(Z, pad_width=1, mode='constant', constant_values=0) print(Z) ``` #### 17. What is the result of the following expression? (★☆☆) ``` print(0 * np.nan) print(np.nan == np.nan) print(np.inf > np.nan) print(np.nan - np.nan) print(np.nan in set([np.nan])) print(0.3 == 3 * 0.1) ``` #### 18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆) ``` Z = np.diag(1+np.arange(4),k=-1) print(Z) ``` #### 19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆) ``` Z = np.zeros((8,8),dtype=int) Z[1::2,::2] = 1 Z[::2,1::2] = 1 print(Z) ``` #### 20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element? ``` print(np.unravel_index(100,(6,7,8))) ``` #### 21. Create a checkerboard 8x8 matrix using the tile function (★☆☆) ``` Z = np.tile( np.array([[0,1],[1,0]]), (4,4)) print(Z) ``` #### 22. Normalize a 5x5 random matrix (★☆☆) ``` Z = np.random.random((5,5)) Z = (Z - np.mean (Z)) / (np.std (Z)) print(Z) ``` #### 23. Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆) ``` color = np.dtype([("r", np.ubyte, 1), ("g", np.ubyte, 1), ("b", np.ubyte, 1), ("a", np.ubyte, 1)]) ``` #### 24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆) ``` Z = np.dot(np.ones((5,3)), np.ones((3,2))) print(Z) # Alternative solution, in Python 3.5 and above Z = np.ones((5,3)) @ np.ones((3,2)) ``` #### 25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆) ``` # Author: Evgeni Burovski Z = np.arange(11) Z[(3 < Z) & (Z <= 8)] *= -1 print(Z) ``` #### 26. What is the output of the following script? 
(★☆☆) ``` # Author: Jake VanderPlas print(sum(range(5),-1)) from numpy import * print(sum(range(5),-1)) ``` #### 27. Consider an integer vector Z, which of these expressions are legal? (★☆☆) ``` Z**Z 2 << Z >> 2 Z <- Z 1j*Z Z/1/1 Z<Z>Z ``` #### 28. What are the results of the following expressions? ``` print(np.array(0) / np.array(0)) print(np.array(0) // np.array(0)) print(np.array([np.nan]).astype(int).astype(float)) ``` #### 29. How to round away from zero a float array? (★☆☆) ``` # Author: Charles R Harris Z = np.random.uniform(-10,+10,10) print (np.copysign(np.ceil(np.abs(Z)), Z)) ``` #### 30. How to find common values between two arrays? (★☆☆) ``` Z1 = np.random.randint(0,10,10) Z2 = np.random.randint(0,10,10) print(np.intersect1d(Z1,Z2)) ``` #### 31. How to ignore all numpy warnings (not recommended)? (★☆☆) ``` # Suicide mode on defaults = np.seterr(all="ignore") Z = np.ones(1) / 0 # Back to sanity _ = np.seterr(**defaults) # An equivalent way, with a context manager: with np.errstate(divide='ignore'): Z = np.ones(1) / 0 ``` #### 32. Is the following expression true? (★☆☆) ``` np.sqrt(-1) == np.emath.sqrt(-1) ``` #### 33. How to get the dates of yesterday, today and tomorrow? (★☆☆) ``` yesterday = np.datetime64('today', 'D') - np.timedelta64(1, 'D') today = np.datetime64('today', 'D') tomorrow = np.datetime64('today', 'D') + np.timedelta64(1, 'D') ``` #### 34. How to get all the dates corresponding to the month of July 2016? (★★☆) ``` Z = np.arange('2016-07', '2016-08', dtype='datetime64[D]') print(Z) ``` #### 35. How to compute ((A+B)\*(-A/2)) in place (without copy)? (★★☆) ``` A = np.ones(3)*1 B = np.ones(3)*2 C = np.ones(3)*3 np.add(A,B,out=B) np.divide(A,2,out=A) np.negative(A,out=A) np.multiply(A,B,out=A) ``` #### 36. Extract the integer part of a random array using 5 different methods (★★☆) ``` Z = np.random.uniform(0,10,10) print (Z - Z%1) print (np.floor(Z)) print (np.ceil(Z)-1) print (Z.astype(int)) print (np.trunc(Z)) ``` #### 37.
Create a 5x5 matrix with row values ranging from 0 to 4 (★★☆) ``` Z = np.zeros((5,5)) Z += np.arange(5) print(Z) ``` #### 38. Consider a generator function that generates 10 integers and use it to build an array (★☆☆) ``` def generate(): for x in range(10): yield x Z = np.fromiter(generate(),dtype=float,count=-1) print(Z) ``` #### 39. Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆) ``` Z = np.linspace(0,1,11,endpoint=False)[1:] print(Z) ``` #### 40. Create a random vector of size 10 and sort it (★★☆) ``` Z = np.random.random(10) Z.sort() print(Z) ``` #### 41. How to sum a small array faster than np.sum? (★★☆) ``` # Author: Evgeni Burovski Z = np.arange(10) np.add.reduce(Z) ``` #### 42. Consider two random array A and B, check if they are equal (★★☆) ``` A = np.random.randint(0,2,5) B = np.random.randint(0,2,5) # Assuming identical shape of the arrays and a tolerance for the comparison of values equal = np.allclose(A,B) print(equal) # Checking both the shape and the element values, no tolerance (values have to be exactly equal) equal = np.array_equal(A,B) print(equal) ``` #### 43. Make an array immutable (read-only) (★★☆) ``` Z = np.zeros(10) Z.flags.writeable = False Z[0] = 1 ``` #### 44. Consider a random 10x2 matrix representing cartesian coordinates, convert them to polar coordinates (★★☆) ``` Z = np.random.random((10,2)) X,Y = Z[:,0], Z[:,1] R = np.sqrt(X**2+Y**2) T = np.arctan2(Y,X) print(R) print(T) ``` #### 45. Create random vector of size 10 and replace the maximum value by 0 (★★☆) ``` Z = np.random.random(10) Z[Z.argmax()] = 0 print(Z) ``` #### 46. Create a structured array with `x` and `y` coordinates covering the \[0,1\]x\[0,1\] area (★★☆) ``` Z = np.zeros((5,5), [('x',float),('y',float)]) Z['x'], Z['y'] = np.meshgrid(np.linspace(0,1,5), np.linspace(0,1,5)) print(Z) ``` #### 47. 
Given two arrays, X and Y, construct the Cauchy matrix C (Cij =1/(xi - yj)) ``` # Author: Evgeni Burovski X = np.arange(8) Y = X + 0.5 C = 1.0 / np.subtract.outer(X, Y) print(np.linalg.det(C)) ``` #### 48. Print the minimum and maximum representable value for each numpy scalar type (★★☆) ``` for dtype in [np.int8, np.int32, np.int64]: print(np.iinfo(dtype).min) print(np.iinfo(dtype).max) for dtype in [np.float32, np.float64]: print(np.finfo(dtype).min) print(np.finfo(dtype).max) print(np.finfo(dtype).eps) ``` #### 49. How to print all the values of an array? (★★☆) ``` np.set_printoptions(threshold=np.nan) Z = np.zeros((16,16)) print(Z) ``` #### 50. How to find the closest value (to a given scalar) in a vector? (★★☆) ``` Z = np.arange(100) v = np.random.uniform(0,100) index = (np.abs(Z-v)).argmin() print(Z[index]) ``` #### 51. Create a structured array representing a position (x,y) and a color (r,g,b) (★★☆) ``` Z = np.zeros(10, [ ('position', [ ('x', float, 1), ('y', float, 1)]), ('color', [ ('r', float, 1), ('g', float, 1), ('b', float, 1)])]) print(Z) ``` #### 52. Consider a random vector with shape (100,2) representing coordinates, find point by point distances (★★☆) ``` Z = np.random.random((10,2)) X,Y = np.atleast_2d(Z[:,0], Z[:,1]) D = np.sqrt( (X-X.T)**2 + (Y-Y.T)**2) print(D) # Much faster with scipy import scipy # Thanks Gavin Heverly-Coulson (#issue 1) import scipy.spatial Z = np.random.random((10,2)) D = scipy.spatial.distance.cdist(Z,Z) print(D) ``` #### 53. How to convert a float (32 bits) array into an integer (32 bits) in place? ``` Z = np.arange(10, dtype=np.float32) Z = Z.astype(np.int32, copy=False) print(Z) ``` #### 54. How to read the following file? (★★☆) ``` from io import StringIO # Fake file s = StringIO("""1, 2, 3, 4, 5\n 6, , , 7, 8\n , , 9,10,11\n""") Z = np.genfromtxt(s, delimiter=",", dtype=np.int) print(Z) ``` #### 55. What is the equivalent of enumerate for numpy arrays? 
(★★☆) ``` Z = np.arange(9).reshape(3,3) for index, value in np.ndenumerate(Z): print(index, value) for index in np.ndindex(Z.shape): print(index, Z[index]) ``` #### 56. Generate a generic 2D Gaussian-like array (★★☆) ``` X, Y = np.meshgrid(np.linspace(-1,1,10), np.linspace(-1,1,10)) D = np.sqrt(X*X+Y*Y) sigma, mu = 1.0, 0.0 G = np.exp(-( (D-mu)**2 / ( 2.0 * sigma**2 ) ) ) print(G) ``` #### 57. How to randomly place p elements in a 2D array? (★★☆) ``` # Author: Divakar n = 10 p = 3 Z = np.zeros((n,n)) np.put(Z, np.random.choice(range(n*n), p, replace=False),1) print(Z) ``` #### 58. Subtract the mean of each row of a matrix (★★☆) ``` # Author: Warren Weckesser X = np.random.rand(5, 10) # Recent versions of numpy Y = X - X.mean(axis=1, keepdims=True) # Older versions of numpy Y = X - X.mean(axis=1).reshape(-1, 1) print(Y) ``` #### 59. How to sort an array by the nth column? (★★☆) ``` # Author: Steve Tjoa Z = np.random.randint(0,10,(3,3)) print(Z) print(Z[Z[:,1].argsort()]) ``` #### 60. How to tell if a given 2D array has null columns? (★★☆) ``` # Author: Warren Weckesser Z = np.random.randint(0,3,(3,10)) print((~Z.any(axis=0)).any()) ``` #### 61. Find the nearest value from a given value in an array (★★☆) ``` Z = np.random.uniform(0,1,10) z = 0.5 m = Z.flat[np.abs(Z - z).argmin()] print(m) ``` #### 62. Considering two arrays with shape (1,3) and (3,1), how to compute their sum using an iterator? (★★☆) ``` A = np.arange(3).reshape(3,1) B = np.arange(3).reshape(1,3) it = np.nditer([A,B,None]) for x,y,z in it: z[...] = x + y print(it.operands[2]) ``` #### 63. Create an array class that has a name attribute (★★☆) ``` class NamedArray(np.ndarray): def __new__(cls, array, name="no name"): obj = np.asarray(array).view(cls) obj.name = name return obj def __array_finalize__(self, obj): if obj is None: return self.info = getattr(obj, 'name', "no name") Z = NamedArray(np.arange(10), "range_10") print (Z.name) ``` #### 64. 
Consider a given vector, how to add 1 to each element indexed by a second vector (be careful with repeated indices)? (★★★) ``` # Author: Brett Olsen Z = np.ones(10) I = np.random.randint(0,len(Z),20) Z += np.bincount(I, minlength=len(Z)) print(Z) # Another solution # Author: Bartosz Telenczuk np.add.at(Z, I, 1) print(Z) ``` #### 65. How to accumulate elements of a vector (X) to an array (F) based on an index list (I)? (★★★) ``` # Author: Alan G Isaac X = [1,2,3,4,5,6] I = [1,3,9,3,4,1] F = np.bincount(I,X) print(F) ``` #### 66. Considering a (w,h,3) image of (dtype=ubyte), compute the number of unique colors (★★★) ``` # Author: Nadav Horesh w,h = 16,16 I = np.random.randint(0,2,(h,w,3)).astype(np.ubyte) #Note that we should compute 256*256 first. #Otherwise numpy will only promote F.dtype to 'uint16' and overflow will occur F = I[...,0]*(256*256) + I[...,1]*256 +I[...,2] n = len(np.unique(F)) print(n) ``` #### 67. Considering a four-dimensional array, how to get the sum over the last two axes at once? (★★★) ``` A = np.random.randint(0,10,(3,4,3,4)) # solution by passing a tuple of axes (introduced in numpy 1.7.0) sum = A.sum(axis=(-2,-1)) print(sum) # solution by flattening the last two dimensions into one # (useful for functions that don't accept tuples for axis argument) sum = A.reshape(A.shape[:-2] + (-1,)).sum(axis=-1) print(sum) ``` #### 68. Considering a one-dimensional vector D, how to compute means of subsets of D using a vector S of same size describing subset indices? (★★★) ``` # Author: Jaime Fernández del Río D = np.random.uniform(0,1,100) S = np.random.randint(0,10,100) D_sums = np.bincount(S, weights=D) D_counts = np.bincount(S) D_means = D_sums / D_counts print(D_means) # Pandas solution as a reference due to more intuitive code import pandas as pd print(pd.Series(D).groupby(S).mean()) ``` #### 69. How to get the diagonal of a dot product?
(★★★) ``` # Author: Mathieu Blondel A = np.random.uniform(0,1,(5,5)) B = np.random.uniform(0,1,(5,5)) # Slow version np.diag(np.dot(A, B)) # Fast version np.sum(A * B.T, axis=1) # Faster version np.einsum("ij,ji->i", A, B) ``` #### 70. Consider the vector \[1, 2, 3, 4, 5\], how to build a new vector with 3 consecutive zeros interleaved between each value? (★★★) ``` # Author: Warren Weckesser Z = np.array([1,2,3,4,5]) nz = 3 Z0 = np.zeros(len(Z) + (len(Z)-1)*(nz)) Z0[::nz+1] = Z print(Z0) ``` #### 71. Consider an array of dimension (5,5,3), how to multiply it by an array with dimensions (5,5)? (★★★) ``` A = np.ones((5,5,3)) B = 2*np.ones((5,5)) print(A * B[:,:,None]) ``` #### 72. How to swap two rows of an array? (★★★) ``` # Author: Eelco Hoogendoorn A = np.arange(25).reshape(5,5) A[[0,1]] = A[[1,0]] print(A) ``` #### 73. Consider a set of 10 triplets describing 10 triangles (with shared vertices), find the set of unique line segments composing all the triangles (★★★) ``` # Author: Nicolas P. Rougier faces = np.random.randint(0,100,(10,3)) F = np.roll(faces.repeat(2,axis=1),-1,axis=1) F = F.reshape(len(F)*3,2) F = np.sort(F,axis=1) G = F.view( dtype=[('p0',F.dtype),('p1',F.dtype)] ) G = np.unique(G) print(G) ``` #### 74. Given an array C that is a bincount, how to produce an array A such that np.bincount(A) == C? (★★★) ``` # Author: Jaime Fernández del Río C = np.bincount([1,1,2,3,4,4,6]) A = np.repeat(np.arange(len(C)), C) print(A) ``` #### 75. How to compute averages using a sliding window over an array? (★★★) ``` # Author: Jaime Fernández del Río def moving_average(a, n=3) : ret = np.cumsum(a, dtype=float) ret[n:] = ret[n:] - ret[:-n] return ret[n - 1:] / n Z = np.arange(20) print(moving_average(Z, n=3)) ``` #### 76.
Consider a one-dimensional array Z, build a two-dimensional array whose first row is (Z\[0\],Z\[1\],Z\[2\]) and each subsequent row is shifted by 1 (last row should be (Z\[-3\],Z\[-2\],Z\[-1\]) (★★★) ``` # Author: Joe Kington / Erik Rigtorp from numpy.lib import stride_tricks def rolling(a, window): shape = (a.size - window + 1, window) strides = (a.itemsize, a.itemsize) return stride_tricks.as_strided(a, shape=shape, strides=strides) Z = rolling(np.arange(10), 3) print(Z) ``` #### 77. How to negate a boolean, or to change the sign of a float inplace? (★★★) ``` # Author: Nathaniel J. Smith Z = np.random.randint(0,2,100) np.logical_not(Z, out=Z) Z = np.random.uniform(-1.0,1.0,100) np.negative(Z, out=Z) ``` #### 78. Consider 2 sets of points P0,P1 describing lines (2d) and a point p, how to compute distance from p to each line i (P0\[i\],P1\[i\])? (★★★) ``` def distance(P0, P1, p): T = P1 - P0 L = (T**2).sum(axis=1) U = -((P0[:,0]-p[...,0])*T[:,0] + (P0[:,1]-p[...,1])*T[:,1]) / L U = U.reshape(len(U),1) D = P0 + U*T - p return np.sqrt((D**2).sum(axis=1)) P0 = np.random.uniform(-10,10,(10,2)) P1 = np.random.uniform(-10,10,(10,2)) p = np.random.uniform(-10,10,( 1,2)) print(distance(P0, P1, p)) ``` #### 79. Consider 2 sets of points P0,P1 describing lines (2d) and a set of points P, how to compute distance from each point j (P\[j\]) to each line i (P0\[i\],P1\[i\])? (★★★) ``` # Author: Italmassov Kuanysh # based on distance function from previous question P0 = np.random.uniform(-10, 10, (10,2)) P1 = np.random.uniform(-10,10,(10,2)) p = np.random.uniform(-10, 10, (10,2)) print(np.array([distance(P0,P1,p_i) for p_i in p])) ``` #### 80. 
Consider an arbitrary array, write a function that extract a subpart with a fixed shape and centered on a given element (pad with a `fill` value when necessary) (★★★) ``` # Author: Nicolas Rougier Z = np.random.randint(0,10,(10,10)) shape = (5,5) fill = 0 position = (1,1) R = np.ones(shape, dtype=Z.dtype)*fill P = np.array(list(position)).astype(int) Rs = np.array(list(R.shape)).astype(int) Zs = np.array(list(Z.shape)).astype(int) R_start = np.zeros((len(shape),)).astype(int) R_stop = np.array(list(shape)).astype(int) Z_start = (P-Rs//2) Z_stop = (P+Rs//2)+Rs%2 R_start = (R_start - np.minimum(Z_start,0)).tolist() Z_start = (np.maximum(Z_start,0)).tolist() R_stop = np.maximum(R_start, (R_stop - np.maximum(Z_stop-Zs,0))).tolist() Z_stop = (np.minimum(Z_stop,Zs)).tolist() r = [slice(start,stop) for start,stop in zip(R_start,R_stop)] z = [slice(start,stop) for start,stop in zip(Z_start,Z_stop)] R[r] = Z[z] print(Z) print(R) ``` #### 81. Consider an array Z = \[1,2,3,4,5,6,7,8,9,10,11,12,13,14\], how to generate an array R = \[\[1,2,3,4\], \[2,3,4,5\], \[3,4,5,6\], ..., \[11,12,13,14\]\]? (★★★) ``` # Author: Stefan van der Walt Z = np.arange(1,15,dtype=np.uint32) R = stride_tricks.as_strided(Z,(11,4),(4,4)) print(R) ``` #### 82. Compute a matrix rank (★★★) ``` # Author: Stefan van der Walt Z = np.random.uniform(0,1,(10,10)) U, S, V = np.linalg.svd(Z) # Singular Value Decomposition rank = np.sum(S > 1e-10) print(rank) ``` #### 83. How to find the most frequent value in an array? ``` Z = np.random.randint(0,10,50) print(np.bincount(Z).argmax()) ``` #### 84. Extract all the contiguous 3x3 blocks from a random 10x10 matrix (★★★) ``` # Author: Chris Barker Z = np.random.randint(0,5,(10,10)) n = 3 i = 1 + (Z.shape[0]-3) j = 1 + (Z.shape[1]-3) C = stride_tricks.as_strided(Z, shape=(i, j, n, n), strides=Z.strides + Z.strides) print(C) ``` #### 85. Create a 2D array subclass such that Z\[i,j\] == Z\[j,i\] (★★★) ``` # Author: Eric O. 
Lebigot # Note: only works for 2d array and value setting using indices class Symetric(np.ndarray): def __setitem__(self, index, value): i,j = index super(Symetric, self).__setitem__((i,j), value) super(Symetric, self).__setitem__((j,i), value) def symetric(Z): return np.asarray(Z + Z.T - np.diag(Z.diagonal())).view(Symetric) S = symetric(np.random.randint(0,10,(5,5))) S[2,3] = 42 print(S) ``` #### 86. Consider a set of p matrices wich shape (n,n) and a set of p vectors with shape (n,1). How to compute the sum of of the p matrix products at once? (result has shape (n,1)) (★★★) ``` # Author: Stefan van der Walt p, n = 10, 20 M = np.ones((p,n,n)) V = np.ones((p,n,1)) S = np.tensordot(M, V, axes=[[0, 2], [0, 1]]) print(S) # It works, because: # M is (p,n,n) # V is (p,n,1) # Thus, summing over the paired axes 0 and 0 (of M and V independently), # and 2 and 1, to remain with a (n,1) vector. ``` #### 87. Consider a 16x16 array, how to get the block-sum (block size is 4x4)? (★★★) ``` # Author: Robert Kern Z = np.ones((16,16)) k = 4 S = np.add.reduceat(np.add.reduceat(Z, np.arange(0, Z.shape[0], k), axis=0), np.arange(0, Z.shape[1], k), axis=1) print(S) ``` #### 88. How to implement the Game of Life using numpy arrays? (★★★) ``` # Author: Nicolas Rougier def iterate(Z): # Count neighbours N = (Z[0:-2,0:-2] + Z[0:-2,1:-1] + Z[0:-2,2:] + Z[1:-1,0:-2] + Z[1:-1,2:] + Z[2: ,0:-2] + Z[2: ,1:-1] + Z[2: ,2:]) # Apply rules birth = (N==3) & (Z[1:-1,1:-1]==0) survive = ((N==2) | (N==3)) & (Z[1:-1,1:-1]==1) Z[...] = 0 Z[1:-1,1:-1][birth | survive] = 1 return Z Z = np.random.randint(0,2,(50,50)) for i in range(100): Z = iterate(Z) print(Z) ``` #### 89. How to get the n largest values of an array (★★★) ``` Z = np.arange(10000) np.random.shuffle(Z) n = 5 # Slow print (Z[np.argsort(Z)[-n:]]) # Fast print (Z[np.argpartition(-Z,n)[:n]]) ``` #### 90. 
Given an arbitrary number of vectors, build the cartesian product (every combination of every item) (★★★)

```
# Author: Stefan Van der Walt

def cartesian(arrays):
    arrays = [np.asarray(a) for a in arrays]
    shape = (len(x) for x in arrays)

    ix = np.indices(shape, dtype=int)
    ix = ix.reshape(len(arrays), -1).T

    for n, arr in enumerate(arrays):
        ix[:, n] = arrays[n][ix[:, n]]

    return ix

print(cartesian(([1, 2, 3], [4, 5], [6, 7])))
```

#### 91. How to create a record array from a regular array? (★★★)

```
Z = np.array([("Hello", 2.5, 3),
              ("World", 3.6, 2)])
R = np.core.records.fromarrays(Z.T,
                               names='col1, col2, col3',
                               formats='S8, f8, i8')
print(R)
```

#### 92. Consider a large vector Z, compute Z to the power of 3 using 3 different methods (★★★)

```
# Author: Ryan G.

x = np.random.rand(int(5e7))  # the size argument must be an integer

%timeit np.power(x,3)
%timeit x*x*x
%timeit np.einsum('i,i,i->i',x,x,x)
```

#### 93. Consider two arrays A and B of shape (8,3) and (2,2). How to find rows of A that contain elements of each row of B regardless of the order of the elements in B? (★★★)

```
# Author: Gabe Schwartz

A = np.random.randint(0,5,(8,3))
B = np.random.randint(0,5,(2,2))

C = (A[..., np.newaxis, np.newaxis] == B)
rows = np.where(C.any((3,1)).all(1))[0]
print(rows)
```

#### 94. Considering a 10x3 matrix, extract rows with unequal values (e.g. \[2,2,3\]) (★★★)

```
# Author: Robert Kern

Z = np.random.randint(0,5,(10,3))
print(Z)

# solution for arrays of all dtypes (including string arrays and record arrays)
E = np.all(Z[:,1:] == Z[:,:-1], axis=1)
U = Z[~E]
print(U)

# solution for numerical arrays only, will work for any number of columns in Z
U = Z[Z.max(axis=1) != Z.min(axis=1),:]
print(U)
```

#### 95. Convert a vector of ints into a matrix binary representation (★★★)

```
# Author: Warren Weckesser

I = np.array([0, 1, 2, 3, 15, 16, 32, 64, 128])
B = ((I.reshape(-1,1) & (2**np.arange(8))) != 0).astype(int)
print(B[:,::-1])

# Author: Daniel T.
McDonald I = np.array([0, 1, 2, 3, 15, 16, 32, 64, 128], dtype=np.uint8) print(np.unpackbits(I[:, np.newaxis], axis=1)) ``` #### 96. Given a two dimensional array, how to extract unique rows? (★★★) ``` # Author: Jaime Fernández del Río Z = np.random.randint(0,2,(6,3)) T = np.ascontiguousarray(Z).view(np.dtype((np.void, Z.dtype.itemsize * Z.shape[1]))) _, idx = np.unique(T, return_index=True) uZ = Z[idx] print(uZ) # Author: Andreas Kouzelis # NumPy >= 1.13 uZ = np.unique(Z, axis=0) print(uZ) ``` #### 97. Considering 2 vectors A & B, write the einsum equivalent of inner, outer, sum, and mul function (★★★) ``` # Author: Alex Riley # Make sure to read: http://ajcr.net/Basic-guide-to-einsum/ A = np.random.uniform(0,1,10) B = np.random.uniform(0,1,10) np.einsum('i->', A) # np.sum(A) np.einsum('i,i->i', A, B) # A * B np.einsum('i,i', A, B) # np.inner(A, B) np.einsum('i,j->ij', A, B) # np.outer(A, B) ``` #### 98. Considering a path described by two vectors (X,Y), how to sample it using equidistant samples (★★★)? ``` # Author: Bas Swinckels phi = np.arange(0, 10*np.pi, 0.1) a = 1 x = a*phi*np.cos(phi) y = a*phi*np.sin(phi) dr = (np.diff(x)**2 + np.diff(y)**2)**.5 # segment lengths r = np.zeros_like(x) r[1:] = np.cumsum(dr) # integrate path r_int = np.linspace(0, r.max(), 200) # regular spaced path x_int = np.interp(r_int, r, x) # integrate path y_int = np.interp(r_int, r, y) ``` #### 99. Given an integer n and a 2D array X, select from X the rows which can be interpreted as draws from a multinomial distribution with n degrees, i.e., the rows which only contain integers and which sum to n. (★★★) ``` # Author: Evgeni Burovski X = np.asarray([[1.0, 0.0, 3.0, 8.0], [2.0, 0.0, 1.0, 1.0], [1.5, 2.5, 1.0, 0.0]]) n = 4 M = np.logical_and.reduce(np.mod(X, 1) == 0, axis=-1) M &= (X.sum(axis=-1) == n) print(X[M]) ``` #### 100. 
Compute bootstrapped 95% confidence intervals for the mean of a 1D array X (i.e., resample the elements of an array with replacement N times, compute the mean of each sample, and then compute percentiles over the means). (★★★) ``` # Author: Jessica B. Hamrick X = np.random.randn(100) # random 1D array N = 1000 # number of bootstrap samples idx = np.random.randint(0, X.size, (N, X.size)) means = X[idx].mean(axis=1) confint = np.percentile(means, [2.5, 97.5]) print(confint) ```
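As a quick sanity check (not part of the original solution), the same percentile-bootstrap pattern can be exercised with a seeded generator; the resulting interval must be ordered and, by the definition of the 2.5th and 97.5th percentiles, must bracket the median of the bootstrap means:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(100)   # stand-in for the 1-D data array
N = 1000                       # number of bootstrap resamples

idx = rng.integers(0, X.size, (N, X.size))  # N resamples with replacement
means = X[idx].mean(axis=1)                 # mean of each resample
lo, hi = np.percentile(means, [2.5, 97.5])

print(lo, hi)
```

Because percentiles are monotone in the quantile level, `lo < hi` and the median of `means` always falls inside the interval.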
# Lecture 1 - Introduction, Variables, and Print Statement *Monday, June 1st 2020* *Rahul Dani* In this lecture, we will cover many fundamental topics to get you started! ## Topic 1 : Variables Variables can be considered as an item that holds a value. You can put any value inside this item. Note that **Python is case-sensitive** (UPPERCASE names and lowercase names are different from each other). Be careful about using letters in different cases. The format of the variables look like: ``` name = value ``` Some examples: ``` x = 5 y = 10 car = 73 bottle = 21 ``` There are different types of variables: **int** : *integer* (Example : 5, 14, -56) **float** : *decimal numbers* (Example: 7.33, 1.5, 9.0, -13.67234 ) **bool** : *boolean values* (Example: True, False) **str** : *text values/strings* (Example: 'hi', 'India', 'purple', 'ABC!', 'usa23'). Strings always need to have quotes(' ') around them. Some examples of variables definition using different types: ``` city = 'tampa' year = 2020 temperature = 75.6 is_hot = True username = 'cat123' number_one = '1' ``` **Task :** Can someone tell me what the types of each of the above variables are? <!-- Answers: * City is a string * Year is an int * Temperature is a float * is_hot is a boolean * username is a string * number_one is a string --> **Literals** are the values by itself. For example: ``` 3 'New York' 67.4 True ``` The way to check the type of a given variable using python is: ``` type(variable_name) ``` so if you wanted to check the type of the variable *city* the code looks like: ``` type(city) ``` Make sure to use the parenthesis after the word 'type' to run the type command on the city variable. **Task :** Find the types of all the variables we used above. Make sure to define the variable first! 
<!-- Answer: ``` city = 'tampa' year = 2020 temperature = 75.6 is_hot = True username = 'cat123' number_one = '1' print(type(city)) print(type(year)) print(type(temperature)) print(type(is_hot)) print(type(username)) print(type(number_one)) ``` --> To check if two items or variables are the same use the **==** command. Example: ``` 'cat' == 'cat' abc == abc 123 == 123 True == False ``` You can also check the variable types of two variables using the == command. Example: ``` type(2) == type(5) type('name') == type('chicago') type(word) == type(tampa) type(True) == type(False) ``` This would tell us if the two items have the same type. **Task :** Make a new variable called 'color_one' equal to your favorite color. Then set another variable called 'color_two' to your second favorite color. Then check if the two colors are the same. <!-- Answer: ``` #False Case color_one = 'red' color_two = 'black' color_one == color_two #True Case color_one = 'red' color_two = 'red' color_one == color_two ``` --> **Task :** Make a new variable called 'place' equal to your favorite city. Then check if place and 4.9 have the same type. What about place and 'miami', do they have the same type? <!-- Answer: ``` place = 'gainesville' print(type(place) == type(4.9)) print(type(place) == type('miami')) ``` --> ## Topic 2: Comment and Print **Comments** are used to write notes about your code without actually running that line. To write a comment put **#** before that line of code. Example: ``` # This is a comment # This line will not run This line will run because it is not a comment ``` Comments are also handy for testing a program. You can comment lines of code and see how they behave without those lines (Will cover this in detail in a future lecture). **Task :** Make two variables called 'height' and 'weight' and set them as two float values respectively. Comment the weight variable. 
<!-- Answer:
```
height = 6.3
#weight = 180.3
```
-->

**Print** statements are used to print out/display contents to the screen. Example of syntax:

```
print('Hi')
print('This is a sentence.')
print(1+2)
print(367.123)
print(car)
```

Print can be used on both variables and literals. Even expressions such as `6*5` can be used within the print statement. As with type, make sure to use the parentheses after the word 'print' to run the print command on the variable or literal you choose.

**Task :** Print the string 'Hello World'. Create a new variable called name and make it equal to your name. Print the name variable and also print the sum of 15 and 3 to the screen.

<!-- Answer:
```
print('Hello World')
name = 'Rahul'
print(name)
print(15+3)
```
-->

## Topic 3: Using Python as a Calculator

Python can be used as a calculator. Calculations such as addition, subtraction, multiplication, and division can be performed. We will practice doing this today.

**Addition (+), Subtraction (-), Multiplication (*), and Division (/)**

**Task:** Find the sum of 3 and 2.

<!-- Answer : 3+2 -->

**Task:** Print x + y where x = 4 - 2 and y = 5 + 3.

<!-- Answer:
x = 4 - 2
y = 5 + 3
print(x+y)
-->

**Task:** Print z + m where z = 3 * 7 and m = 6/2.

<!-- Answer:
z = 3 * 7
m = 6 / 2
print(z+m)
-->

**Modulus/Remainder (%)** finds the remainder when two values are divided.

**Power (\*\*)** finds the power of the first value to the second value. Ex. 5 \*\* 2 is the same as 5^2.

**Floor/Integer Division (//)** divides the two numbers and then rounds down to the lower whole number.

**Task:** Find the remainder of 7 divided by 3.

<!-- Answer: 7 % 3 -->

**Task:** Find 17^3 and the floored result of 10 divided by 3. Print both answers on separate lines.

<!-- Answer:
print(17**3)
print(10//3)
-->
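All of the arithmetic operators introduced above can be verified in one runnable snippet:

```python
# Remainder, power, and floor division — the three operators introduced above
print(7 % 3)    # remainder of 7 divided by 3 → 1
print(17 ** 3)  # 17 to the power 3 → 4913
print(10 // 3)  # floor division, rounds down → 3
print(5 ** 2)   # same as 5^2 in math notation → 25
```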
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS-109A Introduction to Data Science ## Lab 2: Linear Regression and k-NN **Harvard University**<br/> **Fall 2019**<br/> **Authors:** Rahul Dave, David Sondak, Will Claybaugh, Pavlos Protopapas, Chris Tanner, Arpit Panda --- ``` ## RUN THIS CELL TO GET THE RIGHT FORMATTING import requests from IPython.core.display import HTML styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text HTML(styles) ``` ## Table of Contents <ol start="0"> <li> Building a model with statsmodels and sklearn </li> <li> Example: simple linear regression with automobile data </li> <li> $k$-nearest neighbors</li> <li> Polynomial Regression, and the Cab Data</li> <li> Multiple regression and exploring the Football data </li> </ol> ## Learning Goals After this lab, you should be able to - Feel comfortable with simple linear regression - Feel comfortable with $k$ nearest neighbors - Explain the difference between train/validation/test data and WHY we have each. - Implement arbitrary multiple regression models in both SK-learn and Statsmodels. 
- Interpret the coefficent estimates produced by each model, including transformed and dummy variables ``` # import the necessary libraries import warnings warnings.filterwarnings('ignore') import numpy as np import scipy as sp import matplotlib as mpl import matplotlib.cm as cm import matplotlib.pyplot as plt import pandas as pd import statsmodels.api as sm from statsmodels.api import OLS from sklearn import preprocessing from sklearn.preprocessing import PolynomialFeatures from sklearn.metrics import r2_score from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from pandas.plotting import scatter_matrix import time pd.set_option('display.width', 500) pd.set_option('display.max_columns', 100) pd.set_option('display.notebook_repr_html', True) import seaborn as sns from sklearn import preprocessing from sklearn.preprocessing import PolynomialFeatures from sklearn.metrics import r2_score from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler %matplotlib inline ``` <a class="anchor" id="fourth-bullet"></a> ## Part 1 - Simple Linear Regression with `statsmodels` and `sklearn` Let's learn two `python` packages to do simple linear regression for us: * [statsmodels](http://www.statsmodels.org/stable/regression.html) and * [scikit-learn (sklearn)](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html). Our goal is to show how to implement simple linear regression with these packages. For the purposes of this lab, `statsmodels` and `sklearn` do the same thing. More generally though, `statsmodels` tends to be easier for inference \[finding the values of the slope and intercept and dicussing uncertainty in those values\], whereas `sklearn` has machine-learning algorithms and is better for prediction \[guessing y values for a given x value\]. 
(Note that both packages make the same guesses, it's just a question of which activity they provide more support for. **Note:** `statsmodels` and `sklearn` are different packages! Unless we specify otherwise, you can use either one. ### Linear regression with a toy dataset We first examine a toy problem, focusing our efforts on fitting a linear model to a small dataset with three observations. Each observation consists of one predictor $x_i$ and one response $y_i$ for $i = 1, 2, 3$, \begin{align*} (x , y) = \{(x_1, y_1), (x_2, y_2), (x_3, y_3)\}. \end{align*} To be very concrete, let's set the values of the predictors and responses. \begin{equation*} (x , y) = \{(1, 2), (2, 2), (3, 4)\} \end{equation*} There is no line of the form $\beta_0 + \beta_1 x = y$ that passes through all three observations, since the data are not collinear. Thus our aim is to find the line that best fits these observations in the *least-squares sense*, as discussed in lecture. <div class="exercise"><b>Q1.1</b></div> * Make two numpy arrays out of this data, x_train and y_train * Check the dimentions of these arrays * Be sure to reshape input array to 2-D because this becomes important later! ``` #your code here x_train = np.array([1, 2, 3]) y_train = np.array([2, 2, 4]) x_train = x_train.reshape(-1, 1) ``` Below is the code for `statsmodels`. `Statsmodels` does not by default include the column of ones in the $X$ matrix, so we include it manually with `sm.add_constant`. 
``` import statsmodels.api as sm # create the X matrix by appending a column of ones to x_train X = sm.add_constant(x_train) # build the OLS model (ordinary least squares) from the training data toyregr_sm = sm.OLS(y_train, X) # do the fit and save regression info (parameters, etc) in results_sm results_sm = toyregr_sm.fit() # pull the beta parameters out from results_sm beta0_sm = results_sm.params[0] beta1_sm = results_sm.params[1] print(f'The regression coef from statsmodels are: beta_0 = {beta0_sm:8.6f} and beta_1 = {beta1_sm:8.6f}') ``` Besides the beta parameters, `results_sm` contains a ton of other potentially useful information. ``` import warnings warnings.filterwarnings('ignore') print(results_sm.summary()) ``` Now let's turn our attention to the `sklearn` library. ``` from sklearn import linear_model # build the least squares model toyregr = linear_model.LinearRegression() # save regression info (parameters, etc) in results_skl results = toyregr.fit(x_train, y_train) # pull the beta parameters out from results_skl beta0_skl = toyregr.intercept_ beta1_skl = toyregr.coef_[0] print("The regression coefficients from the sklearn package are: beta_0 = {0:8.6f} and beta_1 = {1:8.6f}".format(beta0_skl, beta1_skl)) ``` <div class="exercise"><b>Q1.2</b></div> Do the values of beta_0 and beta_1 seem reasonable? Plot the training data using a scatter plot. Plot the best fit line with beta0 and beta1 together with the training data. ``` fig_scat, ax_scat = plt.subplots(1,1, figsize=(10,6)) beta_0 = beta0_skl beta_1 = beta1_skl # Plot best-fit line x_train = np.array([[1, 2, 3]]).T best_fit = beta_0 + beta_1 * x_train ax_scat.scatter(x_train, y_train, s=300, label='Training Data') ax_scat.plot(x_train, best_fit, ls='--', label='Best Fit Line') ax_scat.set_xlabel(r'$x_{train}$') ax_scat.set_ylabel(r'$y$'); ``` We should feel pretty good about ourselves now, and we're ready to move on to a real problem! 
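As an independent cross-check on the fitted values (a sketch, not part of the lab's required code), the same least-squares line can be recovered directly with numpy's `lstsq` on the toy data; for the points (1, 2), (2, 2), (3, 4) the closed-form answer is $\beta_0 = 2/3$ and $\beta_1 = 1$:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 2.0, 4.0])

# Design matrix with a column of ones (the constant that sm.add_constant appends)
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta)  # ≈ [0.6667, 1.0], matching statsmodels and sklearn
```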
### Let's use `scikit-learn` Before diving right in to a "real" problem, we really ought to discuss more of the details of `sklearn`. We do this now. Along the way, we'll import the real-world dataset. `Scikit-learn` is the main `python` machine learning library. It consists of many learners which can learn models from data, as well as a lot of utility functions such as `train_test_split`. It can be used in `python` by the incantation `import sklearn`. In scikit-learn, an **estimator** is a Python object that implements the methods fit(X, y) and predict(T) Let's see the structure of `scikit-learn` needed to make these fits. `.fit` always takes two arguments: ```python estimator.fit(Xtrain, ytrain) ``` We will consider two estimators in this lab: `LinearRegression` and `KNeighborsRegressor`. Critically, `Xtrain` must be in the form of an *array of arrays* (or a 2x2 array) with the inner arrays each corresponding to one sample, and whose elements correspond to the feature values for that sample (visuals coming in a moment). `ytrain` on the other hand is a simple array of responses. These are continuous for regression problems. ![](images/sklearn2.jpg) ### Practice with `sklearn` We begin by loading up the `mtcars` dataset and cleaning it up a little bit. ``` import pandas as pd #load mtcars dfcars = pd.read_csv("../data/mtcars.csv") dfcars = dfcars.rename(columns={"Unnamed: 0":"car name"}) dfcars.head() ``` Next, let's split the dataset into a training set and test set. 
```
# split into training set and testing set
from sklearn.model_selection import train_test_split

# set random_state to get the same split every time
traindf, testdf = train_test_split(dfcars, test_size=0.2, random_state=42)

# testing set is around 20% of the total data; training set is around 80%
print("Shape of full dataset is: {0}".format(dfcars.shape))
print("Shape of training dataset is: {0}".format(traindf.shape))
print("Shape of test dataset is: {0}".format(testdf.shape))
```

Now we have training and test data. We still need to select a predictor and a response from this dataset. Keep in mind that we need to choose the predictor and response from both the training and test set. You will do this in the exercises below. However, we provide some starter code for you to get things going.

```
# Extract the response variable that we're interested in
y_train = traindf.mpg
```

First, let's reshape y_train to be an array of arrays using the reshape method. We want the first dimension of y_train to be size 25 and the second dimension to be size 1.

```
y_train_reshape = y_train.values.reshape(-1,1)
y_train_reshape.shape
```

### Simple linear regression with automobile data

We will now use `sklearn` to predict automobile mileage per gallon (mpg) and evaluate these predictions. We already loaded the data and split them into a training set and a test set.

<div class="exercise"><b>Q1.2</b></div>

* Pick one variable to use as a predictor for simple linear regression. Create a markdown cell below and discuss your reasons.
* Justify your choice with some visualizations.
* Is there a second variable you'd like to use? For example, we're not doing multiple linear regression here, but if we were, is there another variable you'd like to include if we were using two predictors?
``` # Your code here y_mpg = dfcars.mpg x_wt = dfcars.wt x_hp = dfcars.hp fig_wt, ax_wt = plt.subplots(1,1, figsize=(10,6)) ax_wt.scatter(x_wt, y_mpg) ax_wt.set_xlabel(r'Car Weight') ax_wt.set_ylabel(r'Car MPG') fig_hp, ax_hp = plt.subplots(1,1, figsize=(10,6)) ax_hp.scatter(x_hp, y_mpg) ax_hp.set_xlabel(r'Car HP') ax_hp.set_ylabel(r'Car MPG') ``` Both weight and horsepower seem to have moderately (if not fully) strong negative relationships with fuel efficiency: larger automobiles and ones with more powerful engines have worse fuel efficiency. It would be interesting to fit a multiple regression model with both predictors in to see if these associations are confounded with each other: do we really just need to use one of these predictors to build a reasonable model, or can adding the 2nd one add more interpretive and predictive power? <div class="exercise"><b>Q1.3</b></div> * Use `sklearn` to fit the training data using simple linear regression. * Use the model to make mpg predictions on the test set. **Hints:** * Use the following to perform the analysis: ```python from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error ``` ``` # Your code here from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error traindf, testdf = train_test_split(dfcars, test_size=0.2, random_state=42) y_train = np.array(traindf.mpg) X_train = np.array(traindf.wt) X_train = X_train.reshape(X_train.shape[0], 1) y_test = np.array(testdf.mpg) X_test = np.array(testdf.wt) X_test = X_test.reshape(X_test.shape[0], 1) #create linear model regression = LinearRegression() #fit linear model regression.fit(X_train, y_train) predicted_y = regression.predict(X_test) r2 = regression.score(X_test, y_test) print(r2) ``` <div class="exercise"><b>Q1.4</b></div> * Plot the data and the prediction. 
* Print out the mean squared error for the training set and the test set and compare.

```
# Your code here
fig, ax = plt.subplots(1,1, figsize=(10,6))
ax.plot(y_test, predicted_y, 'o')
grid = np.linspace(np.min(dfcars.mpg), np.max(dfcars.mpg), 100)
ax.plot(grid, grid, color="black") # 45 degree line
ax.set_xlabel("actual y")
ax.set_ylabel("predicted y")

fig1, ax1 = plt.subplots(1,1, figsize=(10,6))
ax1.plot(dfcars.wt, dfcars.mpg, 'o')
xgrid = np.linspace(np.min(dfcars.wt), np.max(dfcars.wt), 100)
ax1.plot(xgrid, regression.predict(xgrid.reshape(100, 1)))

# Your data here
print(regression.score(X_train, y_train))
print(mean_squared_error(predicted_y, y_test))
print(mean_squared_error(y_train, regression.predict(X_train)))
print('Coefficients: \n', regression.coef_[0], regression.intercept_)
```

MSE for the training set is 7.77 mpg$^2$, while it is much larger on the test set: 12.48 mpg$^2$. This is not a surprising result, as the linear regression model is fit to minimize this error on the data used to estimate the $\beta$ coefficients (possibly a little bit of overfitting to the training set).

## Part 2 - $k$-nearest neighbors

Now that you're familiar with `sklearn`, you're ready to do a KNN regression. Sklearn's regressor is called `sklearn.neighbors.KNeighborsRegressor`. Its main parameter is the number of nearest neighbors. There are other parameters, such as the distance metric (the default, with order $p=2$, is the Euclidean distance). For a list of all the parameters see the [Sklearn kNN Regressor Documentation](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html).

Let's use $5$ nearest neighbors.
``` # Import the library from sklearn.neighbors import KNeighborsRegressor # Set number of neighbors k = 5 knnreg = KNeighborsRegressor(n_neighbors=k) ``` <div class="exercise"><b>2.1</b></div> Calculate the model's prediction on the test set and print the $R^{2}$ score ``` # Your code here # Fit the regressor - make sure your numpy arrays are the right shape knnreg.fit(X_train, y_train) # Evaluate the outcome on the train set using R^2 r2_train = knnreg.score(X_train, y_train) # Print results print(f'kNN model with {k} neighbors gives R^2 on the train set: {r2_train:.5}') ``` Pretty good, but can we do better? Let's vary the number of neighbors and see what we get. ``` # Make our lives easy by storing the different regressors in a dictionary regdict = {} # Make our lives easier by entering the k values from a list k_list = [1, 2, 4, 15] # Do a bunch of KNN regressions for k in k_list: knnreg = KNeighborsRegressor(n_neighbors=k) knnreg.fit(X_train, y_train) # Store the regressors in a dictionary regdict[k] = knnreg # Print the dictionary to see what we have regdict ``` Now let's plot all the predictions from using these $k$ values in the same plot. ``` fig, ax = plt.subplots(1,1, figsize=(10,6)) ax.plot(dfcars.wt, dfcars.mpg, 'o', label="data") xgrid = np.linspace(np.min(dfcars.wt), np.max(dfcars.wt), 100) # let's unpack the dictionary to its elements (items) which is the k and Regressor for k, regressor in regdict.items(): predictions = regressor.predict(xgrid.reshape(-1,1)) ax.plot(xgrid, predictions, label="{}-NN".format(k)) ax.legend(); ``` <div class="exercise"><b>2.2</b></div> Explain what you see in the graph. **Hint** Notice how the $1$-NN goes through nearly every point on the training set but utterly fails elsewhere. #Your explanation The 1-NN is way too *jumpy* (too complex) to be useful for out-of-sample prediction (for example, it would be unexpected that estimated mpg should jump up from 10 to 14 at the highest weights (around 5.2)). 
The 15-NN is likely too smooth (too simple a model) as the predicted values are quite flat at the extremes. Most likely 4-NN would be the best model to predict out-of-sample as it describes the signal (general trend) from the noise (jumpiness) the best. Lets look at the scores on the training set. ``` ks = range(1, 15) # Grid of k's scores_train = [] # R2 scores for k in ks: # Create KNN model knnreg = KNeighborsRegressor(n_neighbors=k) # Fit the model to training data knnreg.fit(X_train, y_train) # Calculate R^2 score score_train = knnreg.score(X_train, y_train) scores_train.append(score_train) # Plot fig, ax = plt.subplots(1,1, figsize=(12,8)) ax.plot(ks, scores_train,'o-') ax.set_xlabel(r'$k$') ax.set_ylabel(r'$R^{2}$') ``` <div class="exercise"><b>2.3</div> * Why do we get a perfect $R^2$ at k=1 for the training set? * Make the same plot as above on the *test* set. * What is the best $k$? ``` #Your code here ks = range(1, 15) # Grid of k's scores_test = [] # R2 scores for k in ks: knnreg = KNeighborsRegressor(n_neighbors=k) # Create KNN model knnreg.fit(X_train, y_train) # Fit the model to training data score_test = knnreg.score(X_test, y_test) # Calculate R^2 score scores_test.append(score_test) # Plot fig, ax = plt.subplots(1,1, figsize=(12,8)) ax.plot(ks, scores_test,'o-', ms=12) ax.set_xlabel(r'$k$') ax.set_ylabel(r'$R^{2}$') ``` $R^2$ is perfect on the training set for 1-NN is the observations are predicted perfectly when using the single closest point: the actual point itself. The plot above shows that $k=5$ provides the best $R^2$ score on the test set (of around $R^2=0.70$). ## Part 3: Polynomial Regression, and Exploring the Cab Data Polynomial regression uses a **linear model** to estimate a **non-linear function** (i.e., a function with polynomial terms). For example: $y = \beta_0 + \beta_1x_i + \beta_1x_i^{2}$ It is a linear model because we are still solving a linear equation (the _linear_ aspect refers to the beta coefficients). 
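The point that a polynomial fit is still *linear* in the $\beta$'s can be sketched with plain numpy (a hand-rolled stand-in for what `PolynomialFeatures` automates): expand $x$ into columns $[1, x, x^2]$ and solve an ordinary least-squares problem, which recovers the coefficients of a known quadratic exactly:

```python
import numpy as np

# Noise-free data generated from y = 1 + 2x + 3x^2
x = np.linspace(-2, 2, 20)
y = 1 + 2 * x + 3 * x ** 2

# Hand-built polynomial feature matrix: columns [1, x, x^2]
X = np.column_stack([np.ones_like(x), x, x ** 2])

# An ordinary linear least-squares solve — nothing nonlinear in the betas
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta)  # ≈ [1.0, 2.0, 3.0]
```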
```
# read in the data, break into train and test
cab_df = pd.read_csv("../data/dataset_1.txt")
train_data, test_data = train_test_split(cab_df, test_size=.2, random_state=42)
cab_df.head()
cab_df.shape

# do some data cleaning
X_train = train_data['TimeMin'].values.reshape(-1,1)/60  # transforms it to being hour-based
y_train = train_data['PickupCount'].values

X_test = test_data['TimeMin'].values.reshape(-1,1)/60  # hour-based
y_test = test_data['PickupCount'].values

def plot_cabs(cur_model, poly_transformer=None):
    # build the x values for the prediction line
    x_vals = np.arange(0, 24, .1).reshape(-1,1)

    # optionally use the passed-in transformer
    if poly_transformer is not None:
        dm = poly_transformer.fit_transform(x_vals)
    else:
        dm = x_vals

    # make the prediction at each x value
    prediction = cur_model.predict(dm)

    # plot the prediction line, and the test data
    plt.plot(x_vals, prediction, color='k', label="Prediction")
    plt.scatter(X_test, y_test, label="Test Data")

    # label your plots
    plt.ylabel("Number of Taxi Pickups")
    plt.xlabel("Time of Day (Hours Past Midnight)")
    plt.legend()
    plt.show()

from sklearn.linear_model import LinearRegression
fitted_cab_model0 = LinearRegression().fit(X_train, y_train)
plot_cabs(fitted_cab_model0)

fitted_cab_model0.score(X_test, y_test)
```

We can see that there's still a lot of variation in cab pickups that's not being captured by a linear fit. Further, the linear fit is predicting massively more pickups at 11:59pm than at 12:00am. This is a bad property, and it's the consequence of having a straight line with a non-zero slope. However, we can add columns to our data for $TimeMin^2$ and $TimeMin^3$ and so on, allowing a curvy polynomial line to hopefully fit the data better.

We'll be using `sklearn`'s `PolynomialFeatures()` function to take some of the tedium out of building the expanded input data.
In fact, if all we want is a formula like $y \approx \beta_0 + \beta_1 x + \beta_2 x^2 + ...$, it will directly return a new copy of the data in this format!

```
from sklearn.preprocessing import PolynomialFeatures

transformer_3 = PolynomialFeatures(3, include_bias=False)
expanded_train = transformer_3.fit_transform(X_train)  # TRANSFORMS it to polynomial features
pd.DataFrame(expanded_train).describe()  # notice that the columns now contain x, x^2, x^3 values
```

A few notes on `PolynomialFeatures`:

- The interface is a bit strange. `PolynomialFeatures` is a _'transformer'_ in sklearn. We'll be using several transformers that learn a transformation on the training data, and then we will apply those transformations on future data. With `PolynomialFeatures`, the `.fit()` is pretty trivial, and we often fit and transform in one command, as seen above with `.fit_transform()`.
- You rarely want to `include_bias` (a column of all 1's), since _**sklearn**_ will add it automatically. Remember, when using _**statsmodels**_, you can just `.add_constant()` right before you fit the data.
- If you want polynomial features for several different variables (i.e., multinomial regression), you should call `.fit_transform()` separately on each column and append all the results to a copy of the data (unless you also want interaction terms between the newly-created features). See `np.concatenate()` for joining arrays.

```
fitted_cab_model3 = LinearRegression().fit(expanded_train, y_train)
print("fitting expanded_train:", expanded_train)
plot_cabs(fitted_cab_model3, transformer_3)
```

<div class="exercise"><b>3.1</b></div>

**Questions**:
1. Calculate the polynomial model's $R^2$ performance on the test set.
2. Does the polynomial model improve on the purely linear model?
3. Make a residual plot for the polynomial model. What does this plot tell us about the model?
```
# ANSWER 1
expanded_test = transformer_3.fit_transform(X_test)
print("Test R-squared:", fitted_cab_model3.score(expanded_test, y_test))

# NOTE 1: unlike statsmodels, sklearn models have a .score() method
# NOTE 2: fit_transform() is a nifty function that fits the transformer on the data,
# then transforms it; for test data, plain .transform() with the transformer already
# fitted on the training data is the safer habit (it makes no difference for
# polynomial features, which learn nothing from the data)
```

ANSWER 2: does it? Yes, it appears to be an improvement, since (1) the curve better approximates the general trend of the data and (2) the out-of-sample prediction is more accurate ($R^2_{test} = 0.334$ for the $3^{rd}$ order polynomial vs. 0.241 for the linear model).

```
# ANSWER 3 (class discussion about the residuals)
x_matrix = transformer_3.fit_transform(X_train)
prediction = fitted_cab_model3.predict(x_matrix)
residual = y_train - prediction

plt.scatter(X_train, residual, label="Residual")
plt.axhline(0, color='k')
plt.title("Residuals for the Cubic Model")
plt.ylabel("Residual Number of Taxi Pickups")
plt.xlabel("Time of Day (Hours Past Midnight)")
plt.legend()
```

This plot shows that the pattern in the residuals is much better now: the non-linearities are at least improved or minimized. There is still some signal here (non-constant variance), which may indicate that other predictors could be considered to capture this signal (why are there outliers at 1am to 5am in the morning?).

#### Other features

Polynomial features are not the only constructed features that help fit the data. Because these data have a 24-hour cycle, we may want to build features that follow such a cycle. For example, $\sin\left(2\pi\frac{x}{24}\right)$, $\sin\left(2\pi\frac{x}{12}\right)$, $\sin\left(2\pi\frac{x}{8}\right)$. Other feature transformations are appropriate to other types of data. For instance, certain feature transformations have been developed for geographical data.

### Scaling Features

When using polynomials, we are explicitly trying to use the higher-order values for a given feature.
However, these polynomial features can take on drastically large values, making it difficult for the system to learn an appropriate bias weight due to their large values and potentially large variance. To counter this, one may want to scale the values of a given feature.

For our ongoing taxi-pickup example, using polynomial features improved our model. If we wished to scale the features, we could use `sklearn`'s `StandardScaler()`:

```
# SCALES THE EXPANDED/POLY TRANSFORMED DATA
from sklearn.preprocessing import StandardScaler

# we don't need to convert to a pandas dataframe, but it can be useful for scaling select columns
train_copy = pd.DataFrame(expanded_train.copy())
test_copy = pd.DataFrame(expanded_test.copy())

# Fit the scaler on the training data
scaler = StandardScaler().fit(train_copy)

# Scale both the test and training data
train_scaled = scaler.transform(expanded_train)
test_scaled = scaler.transform(expanded_test)

# we could optionally run a new regression model on this scaled data
fitted_scaled_cab = LinearRegression().fit(train_scaled, y_train)
fitted_scaled_cab.score(test_scaled, y_test)
```

<hr style="height:3px">

## Part 4: Multiple regression and exploring the Football (aka soccer) data

Let's move on to a different dataset! The data imported below were scraped by [Shubham Maurya](https://www.kaggle.com/mauryashubham/linear-regression-to-predict-market-value/data) and record various facts about players in the English Premier League. Our goal will be to fit models that predict the players' market value (how much a team would pay for their services), as estimated by https://www.transfermarkt.us.
`name`: Name of the player
`club`: Club of the player
`age`: Age of the player
`position`: The usual position on the pitch
`position_cat`: 1 for attackers, 2 for midfielders, 3 for defenders, 4 for goalkeepers
`market_value`: As on transfermarkt.com on July 20th, 2017
`page_views`: Average daily Wikipedia page views from September 1, 2016 to May 1, 2017
`fpl_value`: Value in Fantasy Premier League as on July 20th, 2017
`fpl_sel`: % of FPL players who have selected that player in their team
`fpl_points`: FPL points accumulated over the previous season
`region`: 1 for England, 2 for EU, 3 for Americas, 4 for Rest of World
`nationality`: Player's nationality
`new_foreign`: Whether a new signing from a different league, for 2017/18 (till 20th July)
`age_cat`: a categorical version of the Age feature
`club_id`: a numerical version of the Club feature
`big_club`: Whether one of the Top 6 clubs
`new_signing`: Whether a new signing for 2017/18 (till 20th July)

As always, we first import, verify, split, and explore the data.

## Import and verification and grouping

```
league_df = pd.read_csv("../data/league_data.txt")
print(league_df.dtypes)

# QUESTION: what would you guess is the mean age? mean salary?
#league_df.head()
league_df.shape
#league_df.describe()
```

### (Stratified) train/test split

We want to make sure that the training and test data have appropriate representation of each region; it would be bad for the training data to entirely miss a region. This is especially important because some regions are rather rare.

<div class="exercise"><b>4.1</b></div>

**Questions**:
1. Use the `train_test_split()` function, while (a) ensuring the test size is 20% of the data, and (b) using the `stratify` argument to split the data (look up the documentation online), keeping equal representation of each region. This doesn't work by default, correct? What is the issue?
2. Deal with the issue you encountered above.
Hint: you may find numpy's `.isnan()` and pandas' `.dropna()` functions useful!
3. How did you deal with the error generated by `train_test_split`? How did you justify your action?

*your answer here*:

```
# Your code
league_df_old = league_df.copy()

try:
    # Doesn't work: a value is missing
    train_data, test_data = train_test_split(league_df, test_size = 0.2, stratify=league_df['region'])
except ValueError:
    # Count the missing lines and drop them
    missing_rows = np.isnan(league_df['region'])
    print("Uh oh, {} lines missing data! Dropping them".format(np.sum(missing_rows)))
    league_df = league_df.dropna(subset=['region'])
    train_data, test_data = train_test_split(league_df, test_size = 0.2, stratify=league_df['region'])

# here's the observation we dropped:
league_df_old[league_df_old['region'].isna()]

train_data.shape, test_data.shape
```

Now that we won't be peeking at the test set, let's explore and look for patterns! We'll introduce a number of useful pandas and numpy functions along the way.

### Groupby

Pandas' `.groupby()` function is a wonderful tool for data analysis. It allows us to analyze each of several subgroups. Many times, `.groupby()` is combined with `.agg()` to get a summary statistic for each subgroup. For instance: what is the average market value, median page views, and maximum fpl for each player position?

```
train_data.groupby('position').agg({
    'market_value': np.mean,
    'page_views': np.median,
    'fpl_points': np.max
})

train_data.position.unique()

train_data.groupby(['big_club', 'position']).agg({
    'market_value': np.mean,
    'page_views': np.mean,
    'fpl_points': np.mean
})
```

<hr style="height:3px">

## Part 4.2: Linear regression on the football data

This section of the lab focuses on fitting a model to the football (soccer) data and interpreting the model results.
The model we'll use is $$\text{market_value} \approx \beta_0 + \beta_1\text{fpl_points} + \beta_2\text{age} + \beta_3\text{age}^2 + \beta_4log_2\left(\text{page_views}\right) + \beta_5\text{new_signing} +\beta_6\text{big_club} + \beta_7\text{position_cat}$$ We're including a 2nd degree polynomial in age because we expect pay to increase as a player gains experience, but then decrease as they continue aging. We're taking the log of page views because they have such a large, skewed range and the transformed variable will have fewer outliers that could bias the line. We choose the base of the log to be 2 just to make interpretation cleaner. <div class="exercise"><b>4.2</b></div> **Questions**: 1. Build the data and fit this model to it. How good is the overall model? ``` # Q1: we'll do most of it for you ... y_train = train_data['market_value'] y_test = test_data['market_value'] def build_football_data(df): x_matrix = df[['fpl_points','age','new_signing','big_club','position_cat']].copy() x_matrix['log_views'] = np.log2(df['page_views']) # CREATES THE AGE SQUARED COLUMN x_matrix['age_squared'] = df['age']**2 # OPTIONALLY WRITE CODE to adjust the ordering of the columns, just so that it corresponds with the equation above x_matrix = x_matrix[['fpl_points','age','age_squared','log_views','new_signing','big_club','position_cat']] # add a constant x_matrix = sm.add_constant(x_matrix) return x_matrix # use build_football_data() to transform both the train_data and test_data train_transformed = build_football_data(train_data) test_transformed = build_football_data(test_data) fitted_model_1 = OLS(endog= y_train, exog=train_transformed, hasconst=True).fit() fitted_model_1.summary() # WRITE CODE TO RUN r2_score(), then answer the above question about the overall goodness of the model r2_score(y_test,fitted_model_1.predict(test_transformed)) ``` Note: $R^2$ here illustrates the typical pattern: the model performs a little worse on the test set than the train set ($R^2=0.632$ in 
the test vs. $R^2=0.685$ in the train, a roughly 8% worse job in test).

<div class="exercise"><b>4.3</b></div>

Interpret the regression model. What is the meaning of the coefficient for:

- age and age$^2$
- $log_2($page_views$)$
- big_club

```
#Your code here
fitted_model_1 = OLS(endog= y_train, exog=train_transformed, hasconst=True).fit()
fitted_model_1.summary()

# Q2: let's use the age coefficients to show the effect age has on one's market value;
# we can get the age and age^2 coefficients via:
agecoef = fitted_model_1.params.age
age2coef = fitted_model_1.params.age_squared

# let's set our x-axis (corresponding to age) to span the observed ages in the training data
x_vals = np.linspace(np.min(train_data['age']), np.max(train_data['age']), 100)
y_vals = agecoef*x_vals + age2coef*x_vals**2

# WRITE CODE TO PLOT x_vals vs y_vals
plt.plot(x_vals, y_vals)
plt.title("Effect of Age")
plt.xlabel("Age")
plt.ylabel("Contribution to Predicted Market Value")
plt.show()

# Q2A: WHAT HAPPENS IF WE USED ONLY AGE (not AGE^2) in our model (what's the r2?); make the same plot of age vs market value
# Q2B: WHAT HAPPENS IF WE USED ONLY AGE^2 (not age) in our model (what's the r2?); make the same plot of age^2 vs market value
# Q2C: PLOT page views vs market value
```

The plot above illustrates how age and age$^2$ relate to the market value in tandem (they are difficult to interpret separately): market value peaks at an age around 25-26 years old, and is much lower at older ages (in the thirties) or younger ages (in the teens or early twenties). This makes sense, since the linear effect of age is positive (so market value starts out increasing for small positive values of age) and the quadratic effect is negative (which starts to dominate the relationship and makes it negative at higher values of age). This relationship holds after controlling for the other predictors in the model.
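The base-2 log feature has a handy property worth checking numerically: doubling a positive predictor raises its $\log_2$ by exactly one unit, so the fitted coefficient reads directly as "the change per doubling." A quick sanity check (the coefficient value here is just illustrative, taken from our regression run):

```python
import numpy as np

# Doubling any positive quantity raises its log2 by exactly one unit
views = 50_000.0
print(np.log2(2 * views) - np.log2(views))  # 1.0

# So with the fitted coefficient on log_2(page_views) (about 2.32 in our run),
# a doubling of page views shifts the predicted market value by about 2.32
# units, holding everything else fixed.
coef_log_views = 2.32  # illustrative value from the regression summary above
print(coef_log_views * (np.log2(2 * views) - np.log2(views)))
```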
The coefficient for $log_2$(page_views) is 2.3230, which means a doubling in page_views is associated with a 2.32 unit increase in market value, controlling for the other predictors in the model. And signing with a big_club is associated with an 8.03 unit increase in market value vs. signing with a non-big_club, after controlling for the other factors in the model.

<div class="exercise"><b>4.4</b></div>

What should a player do in order to improve their market value? How many page views should a player get to increase their market value by 10?

Assuming one cannot control the age at which they hit the market (hitting it at age 25 or 26 would be ideal), this model suggests a player can increase their market value by playing for a big_club, having a greater media presence, and being a more productive fantasy player. Position is also important, but it is difficult to interpret this variable since it should be treated as a set of binary indicators (3 binary indicators for the 4 categories) instead of a single quantitative predictor.

<hr style='height:3px'>

### Part 4.3: Turning Categorical Variables into multiple binary variables

Of course, we have an error in how we've included player position. Even though the variable is numeric (1,2,3,4) and the model runs without issue, the value we're getting back is garbage. The interpretation, such as it is, is that there is an equal effect of moving from position category 1 to 2, from 2 to 3, and from 3 to 4, and that this effect is probably between -0.5 and -1 (depending on your run).

In reality, we don't expect moving from one position category to another to be equivalent, nor for a move from category 1 to category 3 to be twice as important as a move from category 1 to category 2. We need to introduce better features to model this variable. We'll use `pd.get_dummies` to do the work for us.
```
train_design_recoded = pd.get_dummies(train_transformed, columns=['position_cat'], drop_first=True)
test_design_recoded = pd.get_dummies(test_transformed, columns=['position_cat'], drop_first=True)

train_design_recoded.head()
```

We've removed the original `position_cat` column and created three new ones.

#### Why only three new columns? Why does pandas give us the option to drop the first category?

<div class="exercise"><b>Exercise</b></div>

**Questions**:
1. If we're fitting a model without a constant, should we have three dummy columns or four dummy columns?
2. Fit a model on the new, recoded data, then interpret the coefficient of `position_cat_2`.

```
# Your code
### SOLUTION:
resu = OLS(y_train, train_design_recoded).fit()
resu.summary()

print("r2:", r2_score(y_test, resu.predict(test_design_recoded)))
print("position_cat_2 coef:", resu.params.position_cat_2)

train_design_recoded.shape, y_train.shape
```

**Answers**: Pandas allows us to drop the first category because it will serve as the reference group in the regression (it can sort of be thought of as the intercept *absorbing* this group).
1. If there is no intercept, then we should fully include all four dummy indicators as predictors (to estimate the 4 groups separately, rather than 3 groups in comparison to the reference group).
2. The output above estimates the coefficient for `position_cat_2` to be -1.05, which suggests that midfielders are valued about 1 unit below attackers, on average (holding the other predictors constant when comparing these two groups).

## BONUS EXERCISE:

We have provided a spreadsheet of Boston housing prices (data/boston_housing.csv). The 14 columns are as follows:

1. CRIM: per capita crime rate by town
2. ZN: proportion of residential land zoned for lots over 25,000 sq.ft.
3. INDUS: proportion of non-retail business acres per town
4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
5. NOX: nitric oxides concentration (parts per 10 million)
6. RM: average number of rooms per dwelling
7. AGE: proportion of owner-occupied units built prior to 1940
8. DIS: weighted distances to five Boston employment centers
9. RAD: index of accessibility to radial highways
10. TAX: full-value property-tax rate per \$10,000
11. PTRATIO: pupil-teacher ratio by town
12. B: $1000(Bk - 0.63)^2$ where Bk is the proportion of blacks by town
13. LSTAT: % lower status of the population
14. MEDV: Median value of owner-occupied homes in \$1000s

We can see that the input attributes have a mixture of units. There are 450 observations.

<div class="exercise"><b>Exercise</b></div>

Using the above file, try your best to predict **housing prices (the 14th column)**. We have provided a test set `data/boston_housing_test.csv`, but refrain from looking at the file or evaluating on it until you have finalized and trained a model.

1. Load in the data. It is tab-delimited. Quickly look at a summary of the data to familiarize yourself with it and ensure nothing is too egregious.
2. Use a previously-discussed function to automatically partition the data into a training and validation (aka development) set. It is up to you to choose how large these two portions should be.
3. Train a basic model on just a subset of the features. What is the performance on the validation set?
4. Train a basic model on all of the features. What is the performance on the validation set?
5. Toy with the model until you feel your results are reasonably good.
6. Perform cross-validation with said model, and measure the average performance. Are the results what you expected? Were the average results better or worse than those from your original single validation set?
7. Experiment with other models, and for each, perform 10-fold cross-validation. Which model yields the best average performance? Select this as your final model.
8. Use this model to evaluate your performance on the testing set. What is your performance (MSE)? Is this what you expected?
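The cross-validation steps above can be sketched as follows. This is a hedged starting point on synthetic stand-in data (so nothing here peeks at the real housing files): `make_regression` simply fabricates a problem of comparable shape, and `LinearRegression` stands in for whatever model you settle on.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the housing table: 450 rows, 13 predictors (illustrative only)
X, y = make_regression(n_samples=450, n_features=13, noise=10.0, random_state=0)

# 10-fold cross-validation; sklearn reports negated MSE so that higher is better
scores = cross_val_score(LinearRegression(), X, y,
                         cv=10, scoring='neg_mean_squared_error')
mse_per_fold = -scores
print(mse_per_fold.mean())  # average validation MSE across the 10 folds
```

Comparing `mse_per_fold.mean()` across candidate models is the selection step in item 7; only the final winner gets evaluated once on the held-out test file.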
## Spatial Adaptive Graph Neural Network for POI Graph Learning

In this tutorial, we will go through how to run the Spatial Adaptive Graph Neural Network (SA-GNN) to learn on the POI graph. If you are interested in more details, please refer to the paper "Competitive analysis for points of interest".

```
import os
import pgl
import pickle
import pandas as pd
import numpy as np
from random import shuffle

import paddle
import paddle.nn as nn
import paddle.nn.functional as F

from sagnn import SpatialOrientedAGG, SpatialAttnProp

paddle.set_device('cpu')
```

### Load dataset and construct the POI graph

```
def load_dataset(file_path, dataset):
    """
    1) Load the POI dataset from four files: edges file, two-dimensional
       coordinate file, POI feature file and label file.
    2) Construct the PGL-based graph and return the pgl.Graph instance.
    """
    edges = pd.read_table(os.path.join(file_path, '%s.edge' % dataset), header=None, sep=' ')
    edges = list(zip(edges[:][0], edges[:][1]))

    coords = pd.read_table(os.path.join(file_path, '%s.coord' % dataset), header=None, sep=' ')
    coords = np.array(coords)

    feat_path = os.path.join(file_path, '%s.feat' % dataset)  # pickle file
    if os.path.exists(feat_path):
        with open(feat_path, 'rb') as f:
            features = pickle.load(f)
    else:
        features = np.eye(len(coords))

    graph = pgl.Graph(edges,
                      num_nodes=len(coords),
                      node_feat={"feat": features, 'coord': coords})

    ind_labels = pd.read_table(os.path.join(file_path, '%s.label' % dataset), header=None, sep=' ')
    inds_1 = np.array(ind_labels)[:,0]
    inds_2 = np.array(ind_labels)[:,1]
    labels = np.array(ind_labels)[:,2:]

    return graph, (inds_1, inds_2), labels
```

### Build the SA-GNN model for link prediction

```
class DenseLayer(nn.Layer):
    def __init__(self, in_dim, out_dim, activation=F.relu, bias=True):
        super(DenseLayer, self).__init__()
        self.activation = activation
        if not bias:
            self.fc = nn.Linear(in_dim, out_dim, bias_attr=False)
        else:
            self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, input_feat):
        return self.activation(self.fc(input_feat))


class SAGNNModel(nn.Layer):
    def __init__(self, infeat_dim, hidden_dim=128, dense_dims=[128,128],
                 num_heads=4, feat_drop=0.2, num_sectors=4,
                 max_dist=0.1, grid_len=0.001, num_convs=1):
        super(SAGNNModel, self).__init__()
        self.num_convs = num_convs
        self.agg_layers = nn.LayerList()
        self.prop_layers = nn.LayerList()

        in_dim = infeat_dim
        for i in range(num_convs):
            agg = SpatialOrientedAGG(in_dim, hidden_dim, num_sectors, transform=False, activation=None)
            prop = SpatialAttnProp(hidden_dim, hidden_dim, num_heads, feat_drop, max_dist, grid_len, activation=None)
            self.agg_layers.append(agg)
            self.prop_layers.append(prop)
            in_dim = num_heads * hidden_dim

        self.mlp = nn.LayerList()
        for hidden_dim in dense_dims:
            self.mlp.append(DenseLayer(in_dim, hidden_dim, activation=F.relu))
            in_dim = hidden_dim

        self.output_layer = nn.Linear(2 * in_dim, 1)

    def forward(self, graph, inds):
        feat_h = graph.node_feat['feat']
        for i in range(self.num_convs):
            feat_h = self.agg_layers[i](graph, feat_h)
            graph = graph.tensor()
            feat_h = self.prop_layers[i](graph, feat_h)

        for fc in self.mlp:
            feat_h = fc(feat_h)

        feat_h = paddle.concat([paddle.gather(feat_h, inds[0]), paddle.gather(feat_h, inds[1])], axis=-1)
        output = F.sigmoid(self.output_layer(feat_h))
        return output
```

Here we load a mock dataset for demonstration; you can load the full dataset as you want. Note that all needed files should include: 1) one edge file (dataset.edge) for POI graph construction; 2) one coordinate file (dataset.coord) for POI position; 3) one label file (dataset.label) for training the model; 4) one feature file (dataset.feat) for POI feature loading, which is optional. If there is no dataset.feat, the default POI feature is the one-hot vector.
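To make the expected file layout concrete, here is a small sketch that writes a tiny hypothetical three-node dataset in the space-separated format `load_dataset()` reads (the directory and the `mock_poi` name are invented for the demo):

```python
import os
import tempfile

import pandas as pd

# A tiny sketch of the four-file layout that load_dataset() above expects
d = tempfile.mkdtemp()
with open(os.path.join(d, 'mock_poi.edge'), 'w') as f:
    f.write("0 1\n1 2\n2 0\n")                   # one "src dst" edge per line
with open(os.path.join(d, 'mock_poi.coord'), 'w') as f:
    f.write("0.0 0.0\n0.001 0.0\n0.0 0.001\n")   # one coordinate pair per node
with open(os.path.join(d, 'mock_poi.label'), 'w') as f:
    f.write("0 1 1\n1 2 0\n")                    # two node indices plus a label

# Read the files back the same way load_dataset() does
edges = pd.read_table(os.path.join(d, 'mock_poi.edge'), header=None, sep=' ')
coords = pd.read_table(os.path.join(d, 'mock_poi.coord'), header=None, sep=' ')
print(len(edges), coords.shape)  # 3 (3, 2)
```

With no `mock_poi.feat` present, `load_dataset()` would fall back to the identity matrix as one-hot node features.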
```
graph, inds, labels = load_dataset('./data/', 'mock_poi')

ids = [i for i in range(len(labels))]
shuffle(ids)
train_num = int(0.6*len(labels))
train_inds = (inds[0][ids[:train_num]], inds[1][ids[:train_num]])
test_inds = (inds[0][ids[train_num:]], inds[1][ids[train_num:]])
train_labels = labels[ids[:train_num]]
test_labels = labels[ids[train_num:]]
print("dataset num: %s" % (len(labels)), "training num: %s" % (len(train_labels)))

infeat_dim = graph.node_feat['feat'].shape[0]
model = SAGNNModel(infeat_dim)
optim = paddle.optimizer.Adam(0.001, parameters=model.parameters())
```

### Start training

```
def train(model, graph, inds, labels, optim):
    model.train()
    graph = graph.tensor()
    inds = paddle.to_tensor(inds, 'int64')
    labels = paddle.to_tensor(labels, 'float32')
    preds = model(graph, inds)
    bce_loss = paddle.nn.BCELoss()
    loss = bce_loss(preds, labels)
    loss.backward()
    optim.step()
    optim.clear_grad()
    return loss.numpy()[0]

def evaluate(model, graph, inds, labels):
    model.eval()
    graph = graph.tensor()
    inds = paddle.to_tensor(inds, 'int64')
    labels = paddle.to_tensor(labels, 'float32')
    preds = model(graph, inds)
    bce_loss = paddle.nn.BCELoss()
    loss = bce_loss(preds, labels)
    return loss.numpy()[0], 1.0*np.sum(preds.numpy().astype(int) == labels.numpy().astype(int), axis=0) / len(labels)

for epoch_id in range(5):
    train_loss = train(model, graph, train_inds, train_labels, optim)
    print("epoch:%d train/loss:%s" % (epoch_id, train_loss))

test_loss, test_acc = evaluate(model, graph, test_inds, test_labels)
print("test loss: %s, test accuracy: %s" % (test_loss, test_acc))
```
# Coding outside of Jupyter notebooks

To be able to run Python on your own computer, I recommend installing [Anaconda](https://www.continuum.io/downloads), which contains basic packages for you to be up and running. While you are downloading things, also try the text editor [Atom](https://atom.io/).

We have used Jupyter notebooks in this class as a useful tool for integrating text and interactive, usable code. However, many people coding in real life would not use Jupyter notebooks, but would instead type code in text files and run them via a terminal window or in iPython. Many of you have done this before, perhaps analogously in Matlab by writing your code in a .m file and then running it in the Matlab GUI. Writing code in a separate file allows heavier computations, as well as allowing that code to be reused more easily than when it is written in a Jupyter notebook.

Later, we will demonstrate typing Python code in a .py file and then running it in an iPython window. First, a few of the other options...

# Google's Colaboratory

Google recently announced a partnership with Jupyter and has put out the Jupyter Notebook in its [own environment](https://colab.research.google.com/notebook). You can use it just like our in-class notebooks, share them through Google Drive, and even install packages. This may be the way of the future of teaching Python.

# Jupyter itself

The Jupyter project is becoming more and more sophisticated. The next project coming out is called [JupyterLab](https://towardsdatascience.com/jupyterlab-you-should-try-this-data-science-ui-for-jupyter-right-now-a799f8914bb3) and aims to more cleanly integrate modules that are already available in the server setup we have been using: notebooks, terminals, a text editor, file hierarchy, etc.
You can see a pre-alpha release image of it below: ![JupyterLab](https://cdn-images-1.medium.com/max/1600/1*D5L0HltRGVqcPoDfHjKxEA.png) # MATLAB-like GUIs Two options for using Python in a clickable, graphical user interface are [Spyder](https://pythonhosted.org/spyder/) and [Canopy](https://www.enthought.com/products/canopy/). Spyder is open source and Canopy is not, though the company that puts Canopy together (Enthought) does make a version available for free. Both are shown below. They are generally similar from the perspective of what we've been using so far. They have a console for getting information when you run your code (equivalent to running a cell in your notebook), typing in your file, getting more information about code, having nice code syntax coloring, maybe being able to examine Python objects. Note that many of these features are being integrated in less formal GUI tools like this — you'll see even in the terminal window in iPython you have access to many nice features. ![Spyder](https://pythonhosted.org/spyder/_images/editor3.png) ![Canopy](https://static.enthought.com/etw/img/interactive_graphical_python_code_debugger_in_enthought_canopy.png?8feb477) # Using iPython in a terminal window Here we have code in our notebook: ``` import numpy as np import matplotlib.pyplot as plt # just for jupyter notebooks %matplotlib inline x = np.linspace(0, 10) y = x**2 fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x, y, 'k', lw=3) ``` Now let's switch to a terminal window and a text file... ## Open iPython Get Anaconda downloaded and opened up on your machine if you want. Open a terminal window, or use the one that Anaconda opens, and type: > ipython Or, you can use redfish. On `redfish`: Go to the home menu on redfish, on the right-hand-side under "New", choose "Terminal" to open a terminal window that is running on `redfish`. 
To run Python 3 in this terminal window, you'll need to use the command `ipython3` instead of `ipython`, due to the way the alias to the program is set up: > ipython3 Note that we will use this syntax to mean that it is something to be typed in your terminal window/command prompt (or it is part of an exercise). To open ipython with some niceties added so that you don't have to import them by hand each time you open the program (numpy and matplotlib in particular), open it with > ipython --pylab Once you have done this, you'll see a series of text lines indicating that you are now in the program ipython. You can now type code as if you were in a code window in a Jupyter notebook (but without an ability to integrate text easily). --- ### *Exercise* > Copy in the code to define `x` and `y` from above, then make the figure. If you haven't opened `ipython` with the option flag `--pylab`, you will need to still do the import statements, but not `%matplotlib inline` since that is only for notebooks. > Notice how the figure appears as a separate window. Play with the figure window — you can change the size and properties of the plot using the GUI buttons, and you can zoom. --- ## Text editor A typical coding set up is to have a terminal window with `ipython` running alongside a text window where you type your code. You can then go back and forth, trying things out in iPython, and keeping what works in the text window so that you finish with a working script, which can be run independently in the future. This is, of course, what you've been doing when you use Jupyter notebooks, except everything is combined into one place in that sort of setup. If you are familiar with Matlab, this is what you are used to when you have your Matlab window with your `*.m` text window alongside a "terminal window" where you can type things. (There is also a variable viewer included in Matlab.) This is also what you can do in a single program with Jupyterlab. 
A good text editor should be able to highlight syntax – that is, use color and font style to differentiate between special key words and types for a given programming language. This is what has been happening in our Jupyter notebooks, when strings are colored red, for example. The editors will also key off typical behaviors in the language to try to be helpful, such as when in a Jupyter notebook if you write an `if` statement with a colon at the end and push `enter`, the next line will be automatically indented so that you can just start typing. These behaviors can be adjusted by changing user settings. Some options are [TextMate](https://macromates.com/) for Mac, which costs money, and [Sublime Text](https://www.sublimetext.com/) which works on all operating systems and also costs money (after a free trial). For this class, we recommend using [Atom](https://atom.io/), which is free, works across operating systems, and integrates with GitHub since they wrote it. So, go download Atom and start using it, unless you have a preferred alternative you want to use. --- ### *Exercise*: run a script > If you are running python locally on your machine with Anaconda, copy and paste the code from above into a new text file in your text editor. Save it, then run the file in ipython with > run [filename] > If you are sticking with `redfish`, you can type out text in a text file from the home window (under New), and you can get most but not all functionality this way. Or you can try one of the GUIs. --- ## Package managing The advantage of Anaconda is being able to really easily add packages, with > conda install [packagename] This will look for the Python package you want in the places known to conda. You may also tell it to look in another channel, which other people and groups can maintain. 
For example, `cartopy` is available through the `scitools` channel:

> conda install -c scitools cartopy

Sometimes it is better or necessary to use `pip` to install packages, which links to the PyPI collection of packages that anyone can place there for other people to use. For example, you can get the `cmocean` colormaps package from [PyPI](https://pypi.python.org/pypi/cmocean) with

> pip install cmocean

## Running Jupyter notebooks on your own server

We've been running our notebooks on a TAMU server all semester. You can do this on your own machine pretty easily once you have Anaconda. There should be a place for you to double-click for it to open, or you can open a terminal window and type:

> jupyter notebook

This opens a page in your browser that should look familiar. The difference is that instead of connecting to a remote server (on redfish), you are connecting to a local server on your own machine that you started running with the `jupyter notebook` command.

# Run a script

When you use the command `run` in iPython, parts of the code in that file are executed as if they were typed in the iPython window itself. Code at the 0 indentation level (outside of any function) will run; import statements and variable definitions are common examples. Functions are not called, but they are read into your local namespace so that they can be used. Code inside the block `if __name__ == '__main__':` will also be run. This syntax is available so that default run commands can be built into your script; it is often used to provide example or test code that can be easily run.

Note that anytime you are accessing a saved file from iPython, you need to have at least one of the following be true:

* you are in the same directory in your terminal window as the file;
* you reference the file with either its full path or a relative path;
* the path to your file has been appended to the environment variable PYTHONPATH.
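A tiny, made-up script makes the `run` behavior concrete:

```python
# example_script.py -- hypothetical file illustrating what `run` executes
"""Top-level code runs; functions are defined but not called; the
__main__ block runs under `run` but not under `import`."""

print("top level: this always runs")  # 0 indentation level, so it runs


def square(x):
    """Read into your namespace by `run`, but not called automatically."""
    return x * x


if __name__ == '__main__':
    # Executed by `run example_script` (or `python example_script.py`),
    # skipped when the file is imported as a module.
    print("main block: square(4) =", square(4))
```

After `run example_script`, you can call `square(3)` directly in your iPython session.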
--- ### *Exercise* > There is example code available to try at https://github.com/kthyng/python4geosciences/blob/master/data/optimal_interpolation.py. Copy this code into your text editor and save it to the same location on your computer as your iPython window is open. > Within your iPython window, run optimal_interpolation.py with `run optimal_interpolation` (if you saved it in the same directory as your iPython window). Which part of the code actually runs? Why? > Add some `print` statements into the script at 0 indentation as well as below the line `if __name__ == '__main__':` and see what comes through when you run the code. Can you access the class `oi_1d`? --- # Importing your own code The point of importing code is to be able to then use the functions and classes that you've written in other files. Importing your own code is just like importing `numpy` or any other package. You use the same syntax and you have the same ability to query the built-in methods. > import numpy or: > import [your_code] When you import a package, any code at the 0 indentation level will run; however, the code within `if __name__ == '__main__':` will not run. When you are using a single script for analysis, you may just use `run` to use your code. However, as you build up complexity in your work, you'll probably want to make separate, independent code bases that can be called from subsequent code by importing them (again, just like us using the capabilities in `numpy`). When you import a package, a `*.pyc` file is created which holds compiled code that is subsequently read when the package is again imported, in order to save time. When you are in a single session in iPython, that `*.pyc` will be used and not updated. If you have changed the code and want it to be updated, you either need to exit iPython and reopen it, or you need to `reload` the package. 
The syntax for this differs depending on the version of Python you are using, but we are using Python 3 (>3.4) so we will do the following:

**For >= Python 3.4:**

> import importlib
> importlib.reload([code to reload])

---

### *Exercise*

> Import `optimal_interpolation.py`. Add a print statement with 0 indentation level in the code. Import the package again. Does the print statement run? Reload the package. How about now?

> What about if you run it instead of importing it?

### *Exercise*

> Write your own simple script with a function in it — your function should take at least one input (maybe several numbers) and return something (maybe a number that is the result of some calculation).

> Now, use your code in several ways. Run the code in ipython with

> run [filename]

> Make sure you have a `__name__` definition for this (`if __name__ == '__main__':`, etc.). Now import the code and use it:

> import [filename]

> Add a docstring to the top of the file and reload your package, then query the code. Do you see the docstring? Add a docstring to the function in the file, reload, and query. Do you see the docstring?

> You should have been able to run your code both ways: running it directly, and importing it and then using a function that is within the code.

---

# Unit testing

The idea of unit testing is to develop tests for your code as you develop it, so that you automatically know whether it is working properly as you make changes. In fact, some coders prefer to write the unit tests first, to drive the proper development of their code and to know when it is working. Of course, the quality of the testing depends on which tests you include and which aspects of your code they exercise.

Here are some unit test [guidelines](http://docs.python-guide.org/en/latest/writing/tests/):

* Generally, you want to write unit tests that each test one small aspect of your code's functionality, as separately as possible from other parts of it.
Then write many of these tests to cover all aspects of functionality.
* Make sure your unit tests run very quickly, since you may end up with many of them.
* Always run the full test suite before a coding session, and run it again after. This will give you more confidence that you did not break anything in the rest of the code.
* You can now run a program like [Travis CI](https://travis-ci.com/) through GitHub, which runs your test suite when you push your code to your repository or merge a pull request.
* Use long and descriptive names for testing functions. The style guide here is slightly different than that of running code, where short names are often preferred, because testing functions are never called explicitly. `square()` or even `sqr()` is ok in running code, but in testing code you would have names such as `test_square_of_number_2()` or `test_square_negative_number()`. These function names are displayed when a test fails, and should be as descriptive as possible.
* Include detailed docstrings and comments throughout your testing files, since these may be read more than the original code.

How to set up a suite of unit tests:

1. make a `tests` directory in your code (or, for simple code, just keep your test file in the same directory);
1. make a new file to hold your tests, called `test_*.py` — its name must start with "test" for it to be noticed by testing programs;
1. inside `test_*.py`, write a test function called `test_*()` — the testing programs look for functions with these names in particular and ignore other functions;
1. use `assert` statements, and functions like `np.allclose`, for numeric comparisons of function outputs and for checking output types;
1. run testing programs on your code. I recommend [nosetests](http://nose.readthedocs.org/en/latest/usage.html) or [pytest](http://pytest.org/latest/). You use these by running `nosetests` or `py.test` from the terminal window in the directory with your test code in it (or pointing to the directory).
Next version will be `nose2`.

You can load files into Jupyter notebooks using the magic command `%load`. You can then run the code inside the notebook if you want (though the `import` statement will be an issue here), or just look at it.

```
# %load ../data/package.py
def add(x, y):
    """doc """
    return x+y

print(add(1, 2))

if __name__ == '__main__':
    print(add(1, 1))

# %load ../data/test.py
"""Test package.py"""

import package
import numpy as np


def test_add_12():
    """Test package with inputs 1, 2"""
    assert package.add(1, 2) == np.sum([1, 2])
```

Now, run the test. We can do this by escaping to the terminal, or we can go to our terminal window and run it there. Note: starting a line of code with `!` runs it in the terminal (shell) rather than in Python. Some commands are so common that you don't need to use the `!` (like `ls`), but in general you need it.

```
!nosetests ../data/
```

---

### *Exercise*

> Start another file called `test_*.py`. In it, write a test function, `test_*()`, that checks the accuracy of your original function in some way. Then run it with `nosetests`.

---

# PEP 0008

A [PEP](https://www.python.org/dev/peps/pep-0001/) is a Python Enhancement Proposal, describing ideas for design or processes to the Python community. The list of the PEPs is [available online](https://www.python.org/dev/peps/). [PEP 0008](https://www.python.org/dev/peps/pep-0008/) is a set of style guidelines for writing good, clear, readable code, written with the assumption that code is read more often than it is written. These address questions such as: when a statement spans more than one line, how should it be indented? And speaking of lines of code, how long should one be? Note that even in this document, they emphasize that these are guidelines and sometimes should be trumped by what has already been happening in a project or by other considerations. But, generally, follow what they say here.
Here is a list of some style guidelines to follow, but check out the full guide for a wealth of good ideas: * indent with spaces, not tabs * indent with 4 spaces * limit all lines to a maximum of 79 characters * put a space after a comma * avoid trailing whitespace anywhere Note that you can tell your text editor to enforce pep8 style guidelines to help you learn. This is called a linter. I do this with a plug-in in Sublime Text. You can [get one](https://github.com/AtomLinter/linter-pep8) for Atom. --- ### *Exercise* > Go back and clean up your code you've been writing so that it follows pep8 standards. --- # Docstrings Docstrings should be provided at the top of a code file for the whole package, and then for each function/class within the package. ## Overall style Overall style for docstrings is given in [PEP 0257](https://www.python.org/dev/peps/pep-0257/), and includes the following guidelines: * one liners: for really obvious functions. Keep the docstring completely on one line: > def simple_function(): > """Simple functionality.""" * multiliners: Multi-line docstrings consist of a summary line just like a one-line docstring, followed by a blank line, followed by a more elaborate description. The summary line may be used by automatic indexing tools; it is important that it fits on one line and is separated from the rest of the docstring by a blank line. The summary line may be on the same line as the opening quotes or on the next line. > def complex_function(): > """ > One liner describing overall. > > Now more involved description of inputs and outputs. > Possibly usage example(s) too. > """ ## Styles for inputs/outputs For the more involved description in the multi line docstring, there are several standards used. (These are summarized nicely in a [post](http://stackoverflow.com/questions/3898572/what-is-the-standard-python-docstring-format) on Stack Overflow; this list is copied from there.) 1. 
[reST](https://www.python.org/dev/peps/pep-0287/) > def complex_function(param1, param2): > """ > This is a reST style. > > :param param1: this is a first param > :param param2: this is a second param > :returns: this is a description of what is returned > :raises keyError: raises an exception > """ 1. [Google](http://google.github.io/styleguide/pyguide.html#Python_Language_Rules) > def complex_function(param1, param2): > """ > This is an example of Google style. > > Args: > param1: This is the first param. > param2: This is a second param. > > Returns: > This is a description of what is returned. > > Raises: > KeyError: Raises an exception. > """ 1. [Numpydoc](https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt) > def complex_function(first, second, third='value'): > """ > Numpydoc format docstring. > > Parameters > ---------- > first : array_like > the 1st param name `first` > second : > the 2nd param > third : {'value', 'other'}, optional > the 3rd param, by default 'value' > > Returns > ------- > string > a value in a string > > Raises > ------ > KeyError > when a key error > OtherError > when an other error > """ # Documentation generation [Sphinx](http://www.sphinx-doc.org/en/stable/index.html) is a program that can be run to generate documentation for your project from your docstrings. You basically run the program and if you use the proper formatting in your docstrings, they will all be properly pulled out and presented nicely in a coherent way. There are various additions you can use with Sphinx in order to be able to write your docstrings in different formats (as shown above) and still have Sphinx be able to interpret them. For example, you can use [Napoleon](http://sphinxcontrib-napoleon.readthedocs.org/en/latest/) with Sphinx to be able to write using the Google style instead of reST, meaning that you can have much more readable docstrings and still get nicely-generated documentation out. 
Once you have generated this documentation, you can publish it using [Read the docs](https://readthedocs.org/). Here is documentation on readthedocs for a package that converts between colorspaces, [Colorspacious](https://colorspacious.readthedocs.org/en/latest/). Another approach is to use Sphinx, but link it with [GitHub Pages](https://pages.github.com/), which is hosted directly from your GitHub repo page. Separately from documentation, I use GitHub Pages for [my own website](http://kristenthyng.com). I also use one for documentation for a package of mine, [cmocean](http://matplotlib.org/cmocean/) that provides colormaps for oceanography. To get this running, I followed instructions [online](http://gisellezeno.com/tutorials/sphinx-for-python-documentation.html). Note that GitHub pages is built using Jekyll but in this case we tell it not to use Jekyll and instead use Sphinx. We can see that the docstrings in the code are nicely interpreted into documentation for the functions by comparing the [module docs](http://matplotlib.org/cmocean/cmocean.html#module-cmocean.tools) with the code below. ``` # %load https://raw.githubusercontent.com/matplotlib/cmocean/master/cmocean/tools.py ''' Plot up stuff with colormaps. ''' import numpy as np import matplotlib as mpl def print_colormaps(cmaps, N=256, returnrgb=True, savefiles=False): '''Print colormaps in 256 RGB colors to text files. :param returnrgb=False: Whether or not to return the rgb array. Only makes sense to do if print one colormaps' rgb. ''' rgb = [] for cmap in cmaps: rgbtemp = cmap(np.linspace(0, 1, N))[np.newaxis, :, :3][0] if savefiles: np.savetxt(cmap.name + '-rgb.txt', rgbtemp) rgb.append(rgbtemp) if returnrgb: return rgb def get_dict(cmap, N=256): '''Change from rgb to dictionary that LinearSegmentedColormap expects. 
    Code from https://mycarta.wordpress.com/2014/04/25/convert-color-palettes-to-python-matplotlib-colormaps/
    and http://nbviewer.ipython.org/github/kwinkunks/notebooks/blob/master/Matteo_colourmaps.ipynb
    '''

    x = np.linspace(0, 1, N)  # position of sample n - ranges from 0 to 1
    rgb = cmap(x)

    # flip colormap to follow matplotlib standard
    if rgb[0, :].sum() < rgb[-1, :].sum():
        rgb = np.flipud(rgb)

    b3 = rgb[:, 2]  # value of blue at sample n
    b2 = rgb[:, 2]  # value of blue at sample n

    # Setting up columns for tuples
    g3 = rgb[:, 1]
    g2 = rgb[:, 1]
    r3 = rgb[:, 0]
    r2 = rgb[:, 0]

    # Creating tuples
    R = list(zip(x, r2, r3))
    G = list(zip(x, g2, g3))
    B = list(zip(x, b2, b3))

    # Creating dictionary
    k = ['red', 'green', 'blue']
    LinearL = dict(zip(k, [R, G, B]))

    return LinearL


def cmap(rgbin, N=256):
    '''Input an array of rgb values to generate a colormap.

    :param rgbin: An [mx3] array, where m is the number of input color triplets which
         are interpolated between to make the colormap that is returned. hex values
         can be input instead, as [mx1] in single quotes with a #.
    :param N=10: The number of levels to be interpolated to.
    '''

    # rgb inputs here
    if not mpl.cbook.is_string_like(rgbin[0]):
        # normalize to be out of 1 if out of 256 instead
        if rgbin.max() > 1:
            rgbin = rgbin/256.

    cmap = mpl.colors.LinearSegmentedColormap.from_list('mycmap', rgbin, N=N)

    return cmap
```

# Debugging

You can use the package [`pdb`](https://docs.python.org/3/library/pdb.html) while running your code to pause it intermittently and poke around to check variable values and understand what is going on. A few key commands to get you started are:

* `pdb.set_trace()` pauses the code run at this location, then allows you to type in the iPython window. You can print variables, check their shapes, etc. This is how you can dig into code.
* Once you have stopped your code at a trace, use: * `n` to move to the next line; * `s` to step into a function if that is the next line and you want to move into that function as opposed to just running the function; * `c` to continue until there is another trace, the code ends, it reaches the end of a function, or an error occurs; * `q` to quit out of the debugger, which will also quit out of the code being run. --- ### *Exercise* > Use `pdb` to investigate variables after starting your code running. --- # Make a package To make a Python package that you want to be a bit more official because you plan to use it long-term, and/or you want to share it with other people and make it easy for them to use, you are going to want to get it on GitHub, provide documentation, and get it on PyPI (this is how you are able to then easily install it with `pip install [package_name]`). There are also a number of technical steps you'll need to do. More information about this sort of process is [available online](http://python-packaging.readthedocs.org/en/latest/minimal.html).
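As a hedged sketch of the first technical step in that process, a minimal `setup.py` in the spirit of the linked packaging guide might look like this (every name below is a placeholder, not a real package):

```python
# setup.py -- minimal packaging sketch; all names are placeholders
from setuptools import setup, find_packages

setup(
    name='mypackage',               # the name used in `pip install mypackage`
    version='0.1.0',
    description='A short one-line description.',
    packages=find_packages(),       # finds mypackage/ via its __init__.py
    install_requires=['numpy'],     # third-party dependencies, if any
)
```

With this file at the repository root, `pip install .` installs the package locally, which is a good test before uploading anything to PyPI.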
``` %cd .. import os os.environ['SEED'] = "42" import dataclasses from pathlib import Path import warnings import nlp import torch import numpy as np import torch.nn.functional as F from transformers import ( BertForSequenceClassification, DistilBertForSequenceClassification ) from torch.optim.lr_scheduler import CosineAnnealingLR from sklearn.model_selection import train_test_split from pytorch_helper_bot import ( BaseBot, MovingAverageStatsTrackerCallback, CheckpointCallback, LearningRateSchedulerCallback, MultiStageScheduler, Top1Accuracy, LinearLR, Callback ) try: from apex import amp APEX_AVAILABLE = True except ModuleNotFoundError: APEX_AVAILABLE = False CACHE_DIR = Path("cache/") CACHE_DIR.mkdir(exist_ok=True) class SST2Dataset(torch.utils.data.Dataset): def __init__(self, entries_dict, temperature=1): super().__init__() self.entries_dict = entries_dict self.temperature = temperature def __len__(self): return len(self.entries_dict["label"]) def __getitem__(self, idx): return ( self.entries_dict["input_ids"][idx], self.entries_dict["attention_mask"][idx], { "label": self.entries_dict["label"][idx], "logits": self.entries_dict["logits"][idx] / self.temperature } ) train_dict, valid_dict, test_dict = torch.load(str(CACHE_DIR / "distill-dicts-augmented.jbl")) # Instantiate a PyTorch Dataloader around our dataset TEMPERATURE = 2. 
train_loader = torch.utils.data.DataLoader(SST2Dataset(train_dict, temperature=TEMPERATURE), batch_size=64, shuffle=True) valid_loader = torch.utils.data.DataLoader(SST2Dataset(valid_dict, temperature=TEMPERATURE), batch_size=64, drop_last=False) test_loader = torch.utils.data.DataLoader(SST2Dataset(test_dict, temperature=1.), batch_size=64, drop_last=False) ALPHA = 0 DISTILL_OBJECTIVE = torch.nn.MSELoss() def cross_entropy(logits, targets): targets = F.softmax(targets, dim=-1) return -(targets * F.log_softmax(logits, dim=-1)).sum(dim=1).mean() def distill_loss(logits, targets): # distill_part = F.binary_cross_entropy_with_logits( # logits[:, 1], targets["logits"][:, 1] # ) distill_part = cross_entropy( logits, targets["logits"] ) classification_part = F.cross_entropy( logits, targets["label"] ) return ALPHA * classification_part + (1-ALPHA) * distill_part bert_model = BertForSequenceClassification.from_pretrained(str(CACHE_DIR / "sst2_bert_uncased")).cpu() bert_model.bert.embeddings.word_embeddings.weight.shape distill_bert_model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") bert_model.bert.embeddings.position_embeddings.weight.data.shape # distill_bert_model.distilbert.embeddings.word_embeddings.weight.data = bert_model.bert.embeddings.word_embeddings.weight.data # distill_bert_model.distilbert.embeddings.position_embeddings.weight.data = bert_model.bert.embeddings.position_embeddings.weight.data[:128] # Freeze the embedding layer # for param in distill_bert_model.distilbert.embeddings.parameters(): # param.requires_grad = False distill_bert_model =distill_bert_model.cuda() del bert_model optimizer = torch.optim.Adam(distill_bert_model.parameters(), lr=2e-5, betas=(0.9, 0.99)) if APEX_AVAILABLE: distill_bert_model, optimizer = amp.initialize( distill_bert_model, optimizer, opt_level="O1" ) class DistillTop1Accuracy(Top1Accuracy): def __call__(self, truth, pred): truth = truth["label"] return super().__call__(truth, pred) 
@dataclasses.dataclass class SST2Bot(BaseBot): log_dir = CACHE_DIR / "logs" def __post_init__(self): super().__post_init__() self.loss_format = "%.6f" @staticmethod def extract_prediction(output): return output[0] total_steps = len(train_loader) * 5 checkpoints = CheckpointCallback( keep_n_checkpoints=1, checkpoint_dir=CACHE_DIR / "distill_model_cache/", monitor_metric="loss" ) lr_durations = [ int(total_steps*0.2), int(np.ceil(total_steps*0.8)) ] break_points = [0] + list(np.cumsum(lr_durations))[:-1] callbacks = [ MovingAverageStatsTrackerCallback( avg_window=len(train_loader) // 8, log_interval=len(train_loader) // 10 ), LearningRateSchedulerCallback( MultiStageScheduler( [ LinearLR(optimizer, 0.01, lr_durations[0]), CosineAnnealingLR(optimizer, lr_durations[1]) ], start_at_epochs=break_points ) ), checkpoints ] bot = SST2Bot( log_dir = CACHE_DIR / "distill_logs", model=distill_bert_model, train_loader=train_loader, valid_loader=valid_loader, clip_grad=10., optimizer=optimizer, echo=True, criterion=distill_loss, callbacks=callbacks, pbar=False, use_tensorboard=False, use_amp=APEX_AVAILABLE, metrics=(DistillTop1Accuracy(),) ) print(total_steps) bot.train( total_steps=total_steps, checkpoint_interval=len(train_loader) // 2 ) bot.load_model(checkpoints.best_performers[0][1]) checkpoints.remove_checkpoints(keep=0) bot.eval(valid_loader) bot.eval(test_loader) ```
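Since `ALPHA = 0` here, the training loss reduces to the soft cross-entropy between the temperature-softened teacher distribution and the student's predictions. A dependency-free sketch of that term (pure Python with invented numbers; note the notebook applies the temperature only to the teacher logits inside the dataset, while this sketch applies it on both sides, as in the standard distillation formulation):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; T > 1 softens the distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_cross_entropy(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened targets and the student's
    log-probabilities -- the `distill_part` of the loss above (helper names made up)."""
    targets = softmax(teacher_logits, temperature)
    log_probs = [math.log(p) for p in softmax(student_logits, temperature)]
    return -sum(t * lp for t, lp in zip(targets, log_probs))

teacher = [3.0, -1.0]
agree = soft_cross_entropy([3.0, -1.0], teacher)     # student matches the teacher
disagree = soft_cross_entropy([-1.0, 3.0], teacher)  # student flips the answer
# agree < disagree: the loss is minimized when the student mimics the teacher
```

The temperature matters because it exposes the teacher's relative confidence between classes, not just its argmax.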
``` %matplotlib inline import gym import itertools import matplotlib import numpy as np import sys import tensorflow as tf import collections if "../" not in sys.path: sys.path.append("../") from lib.envs.cliff_walking import CliffWalkingEnv from lib import plotting matplotlib.style.use('ggplot') env = CliffWalkingEnv() class PolicyEstimator(): """ Policy Function approximator. """ def __init__(self, learning_rate=0.01, scope="policy_estimator"): with tf.variable_scope(scope): self.state = tf.placeholder(tf.int32, [], "state") self.action = tf.placeholder(dtype=tf.int32, name="action") self.target = tf.placeholder(dtype=tf.float32, name="target") # This is just table lookup estimator state_one_hot = tf.one_hot(self.state, int(env.observation_space.n)) self.output_layer = tf.contrib.layers.fully_connected( inputs=tf.expand_dims(state_one_hot, 0), num_outputs=env.action_space.n, activation_fn=None, weights_initializer=tf.zeros_initializer) self.action_probs = tf.squeeze(tf.nn.softmax(self.output_layer)) self.picked_action_prob = tf.gather(self.action_probs, self.action) # Loss and train op self.loss = -tf.log(self.picked_action_prob) * self.target self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) self.train_op = self.optimizer.minimize( self.loss, global_step=tf.contrib.framework.get_global_step()) def predict(self, state, sess=None): sess = sess or tf.get_default_session() return sess.run(self.action_probs, { self.state: state }) def update(self, state, target, action, sess=None): sess = sess or tf.get_default_session() feed_dict = { self.state: state, self.target: target, self.action: action } _, loss = sess.run([self.train_op, self.loss], feed_dict) return loss class ValueEstimator(): """ Value Function approximator. 
""" def __init__(self, learning_rate=0.1, scope="value_estimator"): with tf.variable_scope(scope): self.state = tf.placeholder(tf.int32, [], "state") self.target = tf.placeholder(dtype=tf.float32, name="target") # This is just table lookup estimator state_one_hot = tf.one_hot(self.state, int(env.observation_space.n)) self.output_layer = tf.contrib.layers.fully_connected( inputs=tf.expand_dims(state_one_hot, 0), num_outputs=1, activation_fn=None, weights_initializer=tf.zeros_initializer) self.value_estimate = tf.squeeze(self.output_layer) self.loss = tf.squared_difference(self.value_estimate, self.target) self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) self.train_op = self.optimizer.minimize( self.loss, global_step=tf.contrib.framework.get_global_step()) def predict(self, state, sess=None): sess = sess or tf.get_default_session() return sess.run(self.value_estimate, { self.state: state }) def update(self, state, target, sess=None): sess = sess or tf.get_default_session() feed_dict = { self.state: state, self.target: target } _, loss = sess.run([self.train_op, self.loss], feed_dict) return loss def actor_critic(env, estimator_policy, estimator_value, num_episodes, discount_factor=1.0): """ Actor Critic Algorithm. Optimizes the policy function approximator using policy gradient. Args: env: OpenAI environment. estimator_policy: Policy Function to be optimized estimator_value: Value function approximator, used as a baseline num_episodes: Number of episodes to run for discount_factor: Time-discount factor Returns: An EpisodeStats object with two numpy arrays for episode_lengths and episode_rewards. 
""" # Keeps track of useful statistics stats = plotting.EpisodeStats( episode_lengths=np.zeros(num_episodes), episode_rewards=np.zeros(num_episodes)) Transition = collections.namedtuple("Transition", ["state", "action", "reward", "next_state", "done"]) for i_episode in range(num_episodes): # Reset the environment and pick the fisrst action state = env.reset() episode = [] # One step in the environment for t in itertools.count(): # Take a step action_probs = estimator_policy.predict(state) action = np.random.choice(np.arange(len(action_probs)), p=action_probs) next_state, reward, done, _ = env.step(action) # Keep track of the transition episode.append(Transition( state=state, action=action, reward=reward, next_state=next_state, done=done)) # Update statistics stats.episode_rewards[i_episode] += reward stats.episode_lengths[i_episode] = t # Calculate TD Target value_next = estimator_value.predict(next_state) td_target = reward + discount_factor * value_next td_error = td_target - estimator_value.predict(state) # Update the value estimator estimator_value.update(state, td_target) # Update the policy estimator # using the td error as our advantage estimate estimator_policy.update(state, td_error, action) # Print out which step we're on, useful for debugging. print("\rStep {} @ Episode {}/{} ({})".format( t, i_episode + 1, num_episodes, stats.episode_rewards[i_episode - 1]), end="") if done: break state = next_state return stats tf.reset_default_graph() global_step = tf.Variable(0, name="global_step", trainable=False) policy_estimator = PolicyEstimator() value_estimator = ValueEstimator() with tf.Session() as sess: sess.run(tf.initialize_all_variables()) # Note, due to randomness in the policy the number of episodes you need to learn a good # policy may vary. ~300 seemed to work well for me. stats = actor_critic(env, policy_estimator, value_estimator, 300) plotting.plot_episode_stats(stats, smoothing_window=10) ```
# Example of physics analysis with IPython

```
%pylab inline

import numpy
import pandas
import root_numpy

folder = '/moosefs/notebook/datasets/Manchester_tutorial/'
```

## Reading simulation data

```
def load_data(filenames, preselection=None):
    # not setting treename, it's detected automatically
    data = root_numpy.root2array(filenames, selection=preselection)
    return pandas.DataFrame(data)

sim_data = load_data(folder + 'PhaseSpaceSimulation.root', preselection=None)
```

Looking at the data, taking the first rows:

```
sim_data.head()
```

### Plotting some feature

```
# hist data will contain all information from histogram
hist_data = hist(sim_data.H1_PX, bins=40, range=[-100000, 100000])
```

## Adding interesting features

For each particle we compute its P, PT and energy (under the assumption that it is a kaon).

```
def add_momenta_and_energy(dataframe, prefix, compute_energy=False):
    """Adding P, PT and E of particle with given prefix, say, 'H1_' """
    pt_squared = dataframe[prefix + 'PX'] ** 2. + dataframe[prefix + 'PY'] ** 2.
    dataframe[prefix + 'PT'] = numpy.sqrt(pt_squared)
    p_squared = pt_squared + dataframe[prefix + 'PZ'] ** 2.
    dataframe[prefix + 'P'] = numpy.sqrt(p_squared)
    if compute_energy:
        E_squared = p_squared + dataframe[prefix + 'M'] ** 2.
        dataframe[prefix + 'E'] = numpy.sqrt(E_squared)

for prefix in ['H1_', 'H2_', 'H3_']:
    # setting Kaon mass to each of particles:
    sim_data[prefix + 'M'] = 493
    add_momenta_and_energy(sim_data, prefix, compute_energy=True)
```

## Adding features of $B$

We are able to compute the 4-momentum of B, given the 4-momenta of the produced particles.

```
def add_B_features(data):
    for axis in ['PX', 'PY', 'PZ', 'E']:
        data['B_' + axis] = data['H1_' + axis] + data['H2_' + axis] + data['H3_' + axis]
    add_momenta_and_energy(data, prefix='B_', compute_energy=False)
    data['B_M'] = data.eval('(B_E ** 2 - B_PX ** 2 - B_PY ** 2 - B_PZ ** 2) ** 0.5')

add_B_features(sim_data)
```

Looking at the result (with added features):

```
sim_data.head()

_ = hist(sim_data['B_M'], range=[5260, 5280], bins=100)
```

# Dalitz plot

Computing Dalitz variables and checking that there are no resonances in the simulation.

```
def add_dalitz_variables(data):
    """function to add Dalitz variables, names of products are H1, H2, H3"""
    for i, j in [(1, 2), (1, 3), (2, 3)]:
        momentum = pandas.DataFrame()
        for axis in ['E', 'PX', 'PY', 'PZ']:
            momentum[axis] = data['H{}_{}'.format(i, axis)] + data['H{}_{}'.format(j, axis)]
        data['M_{}{}'.format(i,j)] = momentum.eval('(E ** 2 - PX ** 2 - PY ** 2 - PZ ** 2) ** 0.5')

add_dalitz_variables(sim_data)

scatter(sim_data.M_12, sim_data.M_13, alpha=0.05)
```

# Working with real data

## Preselection

```
preselection = """
H1_IPChi2 > 1 && H2_IPChi2 > 1 && H3_IPChi2 > 1
&& H1_IPChi2 + H2_IPChi2 + H3_IPChi2 > 500
&& B_VertexChi2 < 12
&& H1_ProbPi < 0.5 && H2_ProbPi < 0.5 && H3_ProbPi < 0.5
&& H1_ProbK > 0.9 && H2_ProbK > 0.9 && H3_ProbK > 0.9
&& !H1_isMuon && !H2_isMuon && !H3_isMuon
"""
preselection = preselection.replace('\n', '')

real_data = load_data([folder + 'B2HHH_MagnetDown.root', folder + 'B2HHH_MagnetUp.root'],
                      preselection=preselection)
```

### adding features

```
for prefix in ['H1_', 'H2_', 'H3_']:
    # setting Kaon mass:
    real_data[prefix + 'M'] = 493
    add_momenta_and_energy(real_data, prefix,
compute_energy=True)

add_B_features(real_data)

_ = hist(real_data.B_M, bins=50)
```

### additional preselection which uses added features

```
momentum_preselection = """
(H1_PT > 100) && (H2_PT > 100) && (H3_PT > 100)
&& (H1_PT + H2_PT + H3_PT > 4500)
&& H1_P > 1500 && H2_P > 1500 && H3_P > 1500
&& B_M > 5050 && B_M < 6300
"""
momentum_preselection = momentum_preselection.replace('\n', '').replace('&&', '&')

real_data = real_data.query(momentum_preselection)

_ = hist(real_data.B_M, bins=50)
```

## Adding Dalitz plot for real data

```
add_dalitz_variables(real_data)

# check that 2nd and 3rd particles have the same sign
numpy.mean(real_data.H2_Charge * real_data.H3_Charge)

scatter(real_data['M_12'], real_data['M_13'], alpha=0.1)
xlabel('M_12'), ylabel('M_13')
show()

# lazy way for plots
real_data.plot('M_12', 'M_13', kind='scatter', alpha=0.1)
```

### Ordering Dalitz variables

Let's reorder the particles so the first Dalitz variable is always the greater one.

```
scatter(numpy.maximum(real_data['M_12'], real_data['M_13']),
        numpy.minimum(real_data['M_12'], real_data['M_13']), alpha=0.1)
xlabel('max(M12, M13)'), ylabel('min(M12, M13)')
show()
```

### Binned Dalitz plot

Let's plot the same in bins, as physicists like.

```
hist2d(numpy.maximum(real_data['M_12'], real_data['M_13']),
       numpy.minimum(real_data['M_12'], real_data['M_13']), bins=8)
colorbar()
xlabel('max(M12, M13)'), ylabel('min(M12, M13)')
show()
```

## Looking at local CP-asymmetry

Adding one more column:

```
real_data['B_Charge'] = real_data.H1_Charge + real_data.H2_Charge + real_data.H3_Charge

hist(real_data.B_M[real_data.B_Charge == +1].values, bins=30, range=[5050, 5500], alpha=0.5)
hist(real_data.B_M[real_data.B_Charge == -1].values, bins=30, range=[5050, 5500], alpha=0.5)
pass
```

## Leaving only signal region in mass

```
signal_charge = real_data.query('B_M > 5200 & B_M < 5320').B_Charge
```

Counting the number of positively and negatively charged B particles:

```
n_plus = numpy.sum(signal_charge == +1)
n_minus =
numpy.sum(signal_charge == -1) print(n_plus, n_minus, n_plus - n_minus) print('asymmetry =', (n_plus - n_minus) / float(n_plus + n_minus)) ``` ### Estimating significance of deviation (approximately) we will assume that $N_{+} + N_{-}$ is fixed, and that under the null hypothesis each observation is a positive or negative particle with probability $p=0.5$. So, under these assumptions, $N_{+}$ is distributed as a binomial random variable. ``` # computing properties of n_plus according to the H_0 hypothesis n_mean = len(signal_charge) * 0.5 n_std = numpy.sqrt(len(signal_charge) * 0.25) print('significance =', (n_plus - n_mean) / n_std) ``` # Subtracting background using RooFit to fit a mixture of exponential (background) and Gaussian (signal) distributions. Based on the fit, we estimate the number of events in the mass region ``` # Lots of ROOT imports for fitting and plotting from rootpy import asrootpy, log from rootpy.plotting import Hist, Canvas, set_style, get_style from ROOT import (RooFit, RooRealVar, RooDataHist, RooArgList, RooArgSet, RooAddPdf, TLatex, RooGaussian, RooExponential ) def compute_n_signal_by_fitting(data_for_fit): """ Computing the amount of signal within the region [x_min, x_max] returns: canvas with fit, n_signal in mass region """ # fit limits hmin, hmax = data_for_fit.min(), data_for_fit.max() hist = Hist(100, hmin, hmax, drawstyle='EP') root_numpy.fill_hist(hist, data_for_fit) # Declare observable x x = RooRealVar("x", "x", hmin, hmax) dh = RooDataHist("dh", "dh", RooArgList(x), RooFit.Import(hist)) frame = x.frame(RooFit.Title("D^{0} mass")) # this will show histogram data points on canvas dh.plotOn(frame, RooFit.MarkerColor(2), RooFit.MarkerSize(0.9), RooFit.MarkerStyle(21)) # Signal PDF mean = RooRealVar("mean", "mean", 5300, 0, 6000) width = RooRealVar("width", "width", 10, 0, 100) gauss = RooGaussian("gauss", "gauss", x, mean, width) # Background PDF cc = RooRealVar("cc", "cc", -0.01, -100, 100) exp = RooExponential("exp", "exp", x, cc) # Combined model d0_rate = RooRealVar("D0_rate",
"rate of D0", 0.9, 0, 1) model = RooAddPdf("model", "exp+gauss", RooArgList(gauss, exp), RooArgList(d0_rate)) # Fitting model result = asrootpy(model.fitTo(dh, RooFit.Save(True))) mass = result.final_params['mean'].value hwhm = result.final_params['width'].value # this will show the fit overlay on the canvas model.plotOn(frame, RooFit.Components("exp"), RooFit.LineStyle(3), RooFit.LineColor(3)) model.plotOn(frame, RooFit.LineColor(4)) # Draw all frames on a canvas canvas = Canvas() frame.GetXaxis().SetTitle("m_{K#pi#pi} [GeV]") frame.GetXaxis().SetTitleOffset(1.2) frame.Draw() # Draw the mass and error label label = TLatex(0.6, 0.8, "m = {0:.2f} #pm {1:.2f} GeV".format(mass, hwhm)) label.SetNDC() label.Draw() # Calculate the rate of background below the signal curve inside (x_min, x_max) x_min, x_max = 5200, 5330 x.setRange(hmin, hmax) bkg_total = exp.getNorm(RooArgSet(x)) sig_total = gauss.getNorm(RooArgSet(x)) x.setRange(x_min, x_max) bkg_level = exp.getNorm(RooArgSet(x)) sig_level = gauss.getNorm(RooArgSet(x)) bkg_ratio = bkg_level / bkg_total sig_ratio = sig_level / sig_total n_elements = hist.GetEntries() # TODO - normally get this parameter from fit_result sig_part = (d0_rate.getVal()) bck_part = (1 - d0_rate.getVal()) # estimating the ratio of signal and background bck_sig_ratio = (bkg_ratio * n_elements * bck_part) / (sig_ratio * n_elements * sig_part) # n_events in (x_min, x_max) n_events_in_mass_region = numpy.sum((data_for_fit > x_min) & (data_for_fit < x_max)) n_signal_in_mass_region = n_events_in_mass_region / (1.
+ bck_sig_ratio) return canvas, n_signal_in_mass_region B_mass_range = [5050, 5500] mass_for_fitting_plus = real_data.query('(B_M > 5050) & (B_M < 5500) & (B_Charge == +1)').B_M mass_for_fitting_minus = real_data.query('(B_M > 5050) & (B_M < 5500) & (B_Charge == -1)').B_M canvas_plus, n_positive_signal = compute_n_signal_by_fitting(mass_for_fitting_plus) canvas_plus canvas_minus, n_negative_signal = compute_n_signal_by_fitting(mass_for_fitting_minus) canvas_minus ``` ## Computing asymmetry with subtracted background ``` print(n_positive_signal, n_negative_signal) print((n_positive_signal - n_negative_signal) / (n_positive_signal + n_negative_signal)) n_mean = 0.5 * (n_positive_signal + n_negative_signal) n_std = numpy.sqrt(0.25 * (n_positive_signal + n_negative_signal)) print((n_positive_signal - n_mean) / n_std) ```
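The back-of-the-envelope significance estimate above can be sketched as a small standalone function. Note that the counts below are made-up illustrative values, not the analysis's actual yields:

```python
import math

def charge_asymmetry_significance(n_plus, n_minus):
    """Asymmetry and approximate z-score under H0: p = 0.5.

    Under the null hypothesis n_plus is binomial(n, 0.5), so its mean
    is n/2 and its standard deviation is sqrt(n/4).
    """
    n = n_plus + n_minus
    asymmetry = (n_plus - n_minus) / float(n)
    n_mean = 0.5 * n
    n_std = math.sqrt(0.25 * n)
    return asymmetry, (n_plus - n_mean) / n_std

# toy counts (illustrative only)
asym, z = charge_asymmetry_significance(520, 480)
print(asym)  # 0.04
print(z)     # ~1.26, i.e. not significant on its own
```

With 520 versus 480 counts the deviation is only about 1.3 standard deviations, which illustrates why the background-subtracted fit above is needed before claiming any asymmetry.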
# FloPy ## MODFLOW 6 (MF6) Support The Flopy library contains classes for creating, saving, running, loading, and modifying MF6 simulations. The MF6 portion of the flopy library is located in: *flopy.mf6* While there are a number of classes in flopy.mf6, to get started you only need to use the main classes summarized below: flopy.mf6.MFSimulation * MODFLOW Simulation Class. Entry point into any MODFLOW simulation. flopy.mf6.ModflowGwf * MODFLOW Groundwater Flow Model Class. Represents a single model in a simulation. flopy.mf6.Modflow[pc] * MODFLOW package classes where [pc] is the abbreviation of the package name. Each package is a separate class. For packages that are part of a groundwater flow model, the abbreviation begins with "Gwf". For example, "flopy.mf6.ModflowGwfdis" is the Discretization package. ``` import os import sys from shutil import copyfile import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt try: import flopy except ImportError: fpth = os.path.abspath(os.path.join('..', '..')) sys.path.append(fpth) import flopy print(sys.version) print('numpy version: {}'.format(np.__version__)) print('matplotlib version: {}'.format(mpl.__version__)) print('flopy version: {}'.format(flopy.__version__)) ``` # Creating an MF6 Simulation An MF6 simulation is created by first creating a simulation object "MFSimulation". When you create the simulation object you can define the simulation's name, version, executable name, workspace path, and the name of the tdis file. All of these are optional parameters, and if not defined each one will default to the following: sim_name='modflowtest' version='mf6' exe_name='mf6.exe' sim_ws='.' sim_tdis_file='modflow6.tdis' ``` sim_name = 'example_sim' sim_path = os.path.join('data', 'example_project') sim = flopy.mf6.MFSimulation(sim_name=sim_name, version='mf6', exe_name='mf6', sim_ws=sim_path) ``` The next step is to create a tdis package object "ModflowTdis".
The first parameter of the ModflowTdis class is a simulation object, which ties a ModflowTdis object to a specific simulation. The other parameters and their definitions can be found in the docstrings. ``` tdis = flopy.mf6.ModflowTdis(sim, pname='tdis', time_units='DAYS', nper=2, perioddata=[(1.0, 1, 1.0), (10.0, 5, 1.0)]) ``` Next, one or more models are created using the ModflowGwf class. The first parameter of the ModflowGwf class is the simulation object that the model will be a part of. ``` model_name = 'example_model' model = flopy.mf6.ModflowGwf(sim, modelname=model_name, model_nam_file='{}.nam'.format(model_name)) ``` Next, create one or more Iterative Model Solution (IMS) files. ``` ims_package = flopy.mf6.ModflowIms(sim, pname='ims', print_option='ALL', complexity='SIMPLE', outer_hclose=0.00001, outer_maximum=50, under_relaxation='NONE', inner_maximum=30, inner_hclose=0.00001, linear_acceleration='CG', preconditioner_levels=7, preconditioner_drop_tolerance=0.01, number_orthogonalizations=2) ``` Each ModflowGwf object needs to be associated with a ModflowIms object. This is done by calling the MFSimulation object's "register_ims_package" method. The first parameter in this method is the ModflowIms object and the second parameter is a list of model names (strings) for the models to be associated with the ModflowIms object. ``` sim.register_ims_package(ims_package, [model_name]) ``` Next, add packages to each model. The first package added needs to be a spatial discretization package, since flopy uses information from the spatial discretization package to help you build other packages.
There are three spatial discretization packages to choose from: DIS (ModflowGwfdis) - Structured discretization DISV (ModflowGwfdisv) - Discretization with vertices DISU (ModflowGwfdisu) - Unstructured discretization ``` dis_package = flopy.mf6.ModflowGwfdis(model, pname='dis', length_units='FEET', nlay=2, nrow=2, ncol=5, delr=500.0, delc=500.0, top=100.0, botm=[50.0, 20.0], fname='{}.dis'.format(model_name)) ``` ## Accessing Namefiles Namefiles are automatically built for you by flopy. However, there are some options contained in the namefiles that you may want to set. To get the namefile object, access the name_file attribute of either a simulation or model object. ``` # set the nocheck property in the simulation namefile sim.name_file.nocheck = True # set the print_input option in the model namefile model.name_file.print_input = True ``` ## Specifying Options Options that appear alone are assigned a boolean value, like the print_input option above. Options that have additional optional parameters are assigned using a tuple, with the entries containing the names of the optional parameters to turn on. Use a tuple with an empty string to indicate no optional parameters and use a tuple with None to turn the option off. ``` # Turn Newton option on with under relaxation model.name_file.newtonoptions = ('UNDER_RELAXATION') # Turn Newton option on without under relaxation model.name_file.newtonoptions = ('') # Turn off Newton option model.name_file.newtonoptions = (None) ``` ## MFArray Templates Lastly, define all other packages needed. Note that flopy supports a number of ways to specify data for a package. A template, which defines the data array shape for you, can be used to specify the data. Templates are built by calling the empty method of the data type you are building.
For example, to build a template for k in the npf package you would call: ModflowGwfnpf.k.empty() The empty method for "MFArray" data templates (data templates whose size is based on the structure of the model grid) takes up to four parameters: * model - The model object that the data is a part of. A valid model object with a discretization package is required in order to build the proper array dimensions. This parameter is required. * layered - True or False indicating whether the data is layered. * data_storage_type_list - List of data storage types, one for each model layer. If the template is not layered, only one data storage type needs to be specified. There are three data storage types supported: internal_array, internal_constant, and external_file. * default_value - The initial value for the array. ``` # build a data template for k that stores the first layer as an internal array and the second # layer as a constant with the default value of k for all layers set to 100.0 layer_storage_types = [flopy.mf6.data.mfdata.DataStorageType.internal_array, flopy.mf6.data.mfdata.DataStorageType.internal_constant] k_template = flopy.mf6.ModflowGwfnpf.k.empty(model, True, layer_storage_types, 100.0) # set the data and factor for the first layer k_template[0]['data'] = [65.0, 60.0, 55.0, 50.0, 45.0, 40.0, 35.0, 30.0, 25.0, 20.0] k_template[0]['factor'] = 1.5 print(k_template) # create npf package using the k template to define k npf_package = flopy.mf6.ModflowGwfnpf(model, pname='npf', save_flows=True, icelltype=1, k=k_template) ``` ## Specifying MFArray Data MFArray data can also be specified as a numpy array, a list of values, or a single value. Below strt (starting heads) are defined as a single value, 100.0, which is interpreted as an internal constant storage type of value 100.0.
Strt could also be defined as a list defining a value for every model cell: strt=[100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0] Or as a list defining a value or values for each model layer: strt=[100.0, 90.0] or: strt=[[100.0], [90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0]] MFArray data can also be stored in an external file by using a dictionary with the keys 'filename' to specify the file name relative to the model folder and 'data' to specify the data. The optional 'factor', 'iprn', and 'binary' keys may also be used. strt={'filename': 'strt.txt', 'factor':1.0, 'data':[100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0]} If the 'data' key is omitted from the dictionary, flopy will try to read the data from the existing file 'filename'. Any relative paths for loading data from a file should be specified relative to the MF6 simulation folder. ``` strt={'filename': 'strt.txt', 'factor':1.0, 'data':[100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0]} ic_package = flopy.mf6.ModflowGwfic(model, pname='ic', strt=strt, fname='{}.ic'.format(model_name)) # move external file data into model folder icv_data_path = os.path.join('..', 'data', 'mf6', 'notebooks', 'iconvert.txt') copyfile(icv_data_path, os.path.join(sim_path, 'iconvert.txt')) # create storage package sto_package = flopy.mf6.ModflowGwfsto(model, pname='sto', save_flows=True, iconvert={'filename':'iconvert.txt'}, ss=[0.000001, 0.000002], sy=[0.15, 0.14, 0.13, 0.12, 0.11, 0.11, 0.12, 0.13, 0.14, 0.15, 0.15, 0.14, 0.13, 0.12, 0.11, 0.11, 0.12, 0.13, 0.14, 0.15]) ``` ## MFList Templates Flopy supports specifying record and recarray "MFList" data in a number of ways. Templates can be created that define the shape of the data.
The empty method for "MFList" data templates takes up to seven parameters. * model - The model object that the data is a part of. A valid model object with a discretization package is required in order to build the proper array dimensions. This parameter is required. * maxbound - The number of rows in the recarray. If not specified one row is returned. * aux_vars - List of auxiliary variable names. If not specified auxiliary variables are not used. * boundnames - True/False if boundnames is to be used. * nseg - Number of segments (only relevant for a few data types) * timeseries - True/False indicates that time series data will be used. * stress_periods - List of integer stress periods to be used (transient MFList data only). If not specified for transient data, the template will only be defined for stress period 1. MFList transient data templates are numpy recarrays stored in a dictionary with the dictionary key an integer zero based stress period value (stress period - 1). In the code below the well package is set up using a transient MFList template to help build the well's stress_period_data.
``` maxbound = 2 # build a stress_period_data template with 2 wells over stress periods 1 and 2 with boundnames # and three aux variables wel_periodrec = flopy.mf6.ModflowGwfwel.stress_period_data.empty(model, maxbound=maxbound, boundnames=True, aux_vars=['var1', 'var2', 'var3'], stress_periods=[0,1]) # define the two wells for stress period one wel_periodrec[0][0] = ((0,1,2), -50.0, -1, -2, -3, 'First Well') wel_periodrec[0][1] = ((1,1,4), -25.0, 2, 3, 4, 'Second Well') # define the two wells for stress period two wel_periodrec[1][0] = ((0,1,2), -200.0, -1, -2, -3, 'First Well') wel_periodrec[1][1] = ((1,1,4), -4000.0, 2, 3, 4, 'Second Well') # build the well package wel_package = flopy.mf6.ModflowGwfwel(model, pname='wel', print_input=True, print_flows=True, auxiliary=[('var1', 'var2', 'var3')], maxbound=maxbound, stress_period_data=wel_periodrec, boundnames=True, save_flows=True) ``` ## Cell IDs Cell IDs always appear as tuples in an MFList. For a structured grid, cell IDs appear as: (<layer>, <row>, <column>) For vertex-based grids, cell IDs appear as: (<layer>, <intralayer_cell_id>) For unstructured grids, cell IDs appear as: (<cell_id>) ## Specifying MFList Data MFList data can also be defined as a list of tuples, with each tuple being a row of the recarray. For transient data the list of tuples can be stored in a dictionary with the dictionary key an integer zero based stress period value. If only a list of tuples is specified for transient data, the data is assumed to apply to stress period 1. Additional stress periods can be added with the add_transient_key method. The code below defines saverecord and printrecord as a list of tuples. ``` # printrecord data as a list of tuples. Since no stress # period is specified it will default to stress period 1 printrec_tuple_list = [('HEAD', 'ALL'), ('BUDGET', 'ALL')] # saverecord data as a dictionary of lists of tuples for # stress periods 1 and 2.
saverec_dict = {0:[('HEAD', 'ALL'), ('BUDGET', 'ALL')],1:[('HEAD', 'ALL'), ('BUDGET', 'ALL')]} # create oc package oc_package = flopy.mf6.ModflowGwfoc(model, pname='oc', budget_filerecord=[('{}.cbc'.format(model_name),)], head_filerecord=[('{}.hds'.format(model_name),)], saverecord=saverec_dict, printrecord=printrec_tuple_list) # add stress period two to the print record oc_package.printrecord.add_transient_key(1) # set the data for stress period two in the print record oc_package.printrecord.set_data([('HEAD', 'ALL'), ('BUDGET', 'ALL')], 1) ``` ### Specifying MFList Data in an External File MFList data can be specified in an external file using a dictionary with the 'filename' key. If the 'data' key is also included in the dictionary and is not None, flopy will create the file with the data contained in the 'data' key. The code below creates a chd package which creates and references an external file containing data for stress period 1 and stores the data internally in the chd package file for stress period 2. ``` stress_period_data = {0: {'filename': 'chd_sp1.dat', 'data': [[(0, 0, 0), 70.]]}, 1: [[(0, 0, 0), 60.]]} chd = flopy.mf6.ModflowGwfchd(model, maxbound=1, stress_period_data=stress_period_data) ``` ## Packages that Support both List-based and Array-based Data The recharge and evapotranspiration packages can be specified using list-based or array-based input. 
The array packages have an "a" on the end of their name: `ModflowGwfrch` - list based recharge package `ModflowGwfrcha` - array based recharge package `ModflowGwfevt` - list based evapotranspiration package `ModflowGwfevta` - array based evapotranspiration package ``` rch_recarray = {0:[((0,0,0), 'rch_1'), ((1,1,1), 'rch_2')], 1:[((0,0,0), 'rch_1'), ((1,1,1), 'rch_2')]} rch_package = flopy.mf6.ModflowGwfrch(model, pname='rch', fixed_cell=True, print_input=True, ts_filerecord='recharge_rates.ts', obs_filerecord='example_model.rch.obs', maxbound=2, stress_period_data=rch_recarray) ``` ## Utility Files (TS, TAS, OBS, TAB) Utility files are MF6 formatted files that are referenced by packages; they include time series, time array series, observation, and tab files. The file names for utility files are specified in the package that references them (see ts_filerecord in the code above). The utility files themselves are created in the same way as packages. When creating utility files you must pass to the constructor both the model and the package that they are a part of. See code below.
``` # build a time series array for the recharge package ts_recarray = [(0.0, 0.015, 0.0017), (1.0, 0.016, 0.0019), (2.0, 0.012, 0.0015), (3.0, 0.020, 0.0014), (4.0, 0.015, 0.0021), (5.0, 0.013, 0.0012), (6.0, 0.022, 0.0012), (7.0, 0.016, 0.0014), (8.0, 0.013, 0.0011), (9.0, 0.021, 0.0011), (10.0, 0.017, 0.0016), (11.0, 0.012, 0.0015)] rch_ts_file = flopy.mf6.ModflowUtlts(model, pname='rch_ts', time_series_namerecord=[('rch_1', 'rch_2')], timeseries=ts_recarray, fname='recharge_rates.ts', parent_file=rch_package, interpolation_methodrecord=[('stepwise', 'stepwise')]) # build a recharge observation package that outputs the western recharge to a binary file and the eastern # recharge to a text file obs_recarray = {('rch_west.csv', 'binary'): [('rch_1_1_1', 'RCH', (0, 0, 0)), ('rch_1_2_1', 'RCH', (0, 1, 0))], 'rch_east.csv': [('rch_1_1_5', 'RCH', (0, 0, 4)), ('rch_1_2_5', 'RCH', (0, 1, 4))]} rch_obs_package = flopy.mf6.ModflowUtlobs(model, fname='example_model.rch.obs', parent_file=rch_package, digits=10, print_input=True, continuous=obs_recarray) ``` # Saving and Running an MF6 Simulation Saving and running a simulation are done with the MFSimulation class's write_simulation and run_simulation methods. ``` # write simulation to new location sim.write_simulation() # run simulation sim.run_simulation() ``` # Loading an Existing MF6 Simulation Loading a simulation can be done with the flopy.mf6.MFSimulation.load static method. ``` # load the simulation loaded_sim = flopy.mf6.MFSimulation.load(sim_name, 'mf6', 'mf6', sim_path) ``` # Retrieving Data and Modifying an Existing MF6 Simulation Data can be easily retrieved from a simulation. Data can be retrieved using two methods. One method is to retrieve the data object from a master simulation dictionary that keeps track of all the data.
The master simulation dictionary is accessed by accessing a simulation's "simulation_data" property and then the "mfdata" property: sim.simulation_data.mfdata[<data path>] The data path is the path to the data stored as a tuple containing the model name, package name, block name, and data name. The second method is to get the data from the package object. If you do not already have the package object, you can work your way down the simulation structure, from the simulation to the correct model, to the correct package, and finally to the data object. These methods are demonstrated in the code below. ``` # get hydraulic conductivity data object from the data dictionary hk = sim.simulation_data.mfdata[(model_name, 'npf', 'griddata', 'k')] # get specific yield data object from the storage package sy = sto_package.sy # get the model object from the simulation object using the get_model method, # which takes a string with the model's name and returns the model object mdl = sim.get_model(model_name) # get the package object from the model object using the get_package method, # which takes a string with the package's name or type ic = mdl.get_package('ic') # get the data object from the initial condition package object strt = ic.strt ``` Once you have the appropriate data object there are a number of methods to retrieve data from that object. Data retrieved can either be the data as it appears in the model file or the data with any factor specified in the model file applied to it. To get the raw data without applying a factor use the get_data method. To get the data with the factor already applied use .array. Note that MFArray data is always a copy of the data stored by flopy. Modifying the copy of the flopy data will have no effect on the data stored in flopy. Non-constant internal MFList data is returned as a reference to a numpy recarray. Modifying this recarray will modify the data stored in flopy.
``` # get the data without applying any factor hk_data_no_factor = hk.get_data() print('Data without factor:\n{}\n'.format(hk_data_no_factor)) # get data with factor applied hk_data_factor = hk.array print('Data with factor:\n{}\n'.format(hk_data_factor)) ``` Data can also be retrieved from the data object using `[]`. For unlayered data the `[]` can be used to slice the data. ``` # slice layer one, column three print('SY slice of layer one, column three\n{}\n'.format(sy[0,:,2])) ``` For layered data specify the layer number within the brackets. This will return a "LayerStorage" object which lets you change attributes of an individual layer. ``` # get layer one LayerStorage object hk_layer_one = hk[0] # change the print code and factor for layer one hk_layer_one.iprn = '2' hk_layer_one.factor = 1.1 print('Layer one data without factor:\n{}\n'.format(hk_layer_one.get_data())) print('Data with new factor:\n{}\n'.format(hk.array)) ``` ## Modifying Data Data can be modified in several ways. One way is to set data for a given layer within a LayerStorage object, like the one accessed in the code above. Another way is to set the data attribute to the new data. Yet another way is to call the data object's set_data method. ``` # set data within a LayerStorage object hk_layer_one.set_data([120.0, 100.0, 80.0, 70.0, 60.0, 50.0, 40.0, 30.0, 25.0, 20.0]) print('New HK data no factor:\n{}\n'.format(hk.get_data())) # set data attribute to new data ic_package.strt = 150.0 print('New strt values:\n{}\n'.format(ic_package.strt.array)) # call set_data sto_package.ss.set_data([0.000003, 0.000004]) print('New ss values:\n{}\n'.format(sto_package.ss.array)) ``` ## Modifying the Simulation Path The simulation path folder can be changed by using the set_sim_path method in the MFFileMgmt object.
The MFFileMgmt object can be obtained from the simulation object through properties: sim.simulation_data.mfpath ``` # create new path save_folder = os.path.join(sim_path, 'sim_modified') # change simulation path sim.simulation_data.mfpath.set_sim_path(save_folder) # create folder if not os.path.isdir(save_folder): os.makedirs(save_folder) ``` ## Adding a Model Relative Path A model relative path lets you put all of the files associated with a model in a folder relative to the simulation folder. Warning: this will override all of your file paths to model package files and will also override any relative file paths to external model data files. ``` # Change path of model files relative to the simulation folder model.set_model_relative_path('model_folder') # create folder if not os.path.isdir(save_folder): os.makedirs(os.path.join(save_folder,'model_folder')) # write simulation to new folder sim.write_simulation() # run simulation from new folder sim.run_simulation() ``` ## Post-Processing the Results Results can be retrieved from the master simulation dictionary using a tuple key that identifies the data to be retrieved. For head data use the key ('<model name>', 'HDS', 'HEAD') where `<model name>` is the name of your model. For cell by cell budget data use the key ('<model name>', 'CBC', '<flow data name>') where `<flow data name>` is the name of the flow data to be retrieved (ex. 'FLOW-JA-FACE'). All available output keys can be retrieved using the output_keys method. ``` keys = sim.simulation_data.mfdata.output_keys() ``` The entries in the list above are keys for data in the head file "HDS" and data in the cell by cell flow file "CBC". Keys in this list are not guaranteed to be in any particular order. The code below uses the head file key to retrieve head data and then plots head data using matplotlib.
``` import matplotlib.pyplot as plt import numpy as np # get all head data head = sim.simulation_data.mfdata['example_model', 'HDS', 'HEAD'] # get the head data from the end of the model run head_end = head[-1] # plot the head data from the end of the model run levels = np.arange(160,162,1) extent = (0.0, 1000.0, 2500.0, 0.0) plt.contour(head_end[0, :, :],extent=extent) plt.show() ``` Results can also be retrieved using the existing binaryfile method. ``` # get head data using old flopy method hds_path = os.path.join(sim_path, model_name + '.hds') hds = flopy.utils.HeadFile(hds_path) # get heads after 1.0 days head = hds.get_data(totim=1.0) # plot head data plt.contour(head[0, :, :],extent=extent) plt.show() ```
# Seasonal Naive Approach Benchmark model that simply forecasts the same value from the previous seasonal period. ``` %matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np import pandas as pd matplotlib.rcParams['figure.figsize'] = (16, 9) pd.options.display.max_columns = 999 ``` ## Load Dataset ``` df = pd.read_csv('../datasets/hourly-weather-wind_direction.csv', parse_dates=[0], index_col='DateTime') print(df.shape) df.head() ``` ## Define Parameters Make predictions for 24-hour period using a seasonality of 24-hours. ``` dataset_name = 'Hourly Weather Wind Direction' dataset_abbr = 'HWD' model_name = 'Naive' context_length = 24 prediction_length = 24 ``` ## Define Error Metric The seasonal variant of the mean absolute scaled error (MASE) will be used to evaluate the forecasts. ``` def calc_sMASE(training_series, testing_series, prediction_series, seasonality=prediction_length): a = training_series.iloc[seasonality:].values b = training_series.iloc[:-seasonality].values if len(a) != 0: d = np.sum(np.abs(a-b)) / len(a) else: return 1 errors = np.abs(testing_series - prediction_series) return np.mean(errors) / d ``` ## Evaluate Seasonal Naive Model ``` results = df.copy() for i, col in enumerate(df.columns): results['pred%s' % str(i+1)] = results[col].shift(context_length) results.dropna(inplace=True) sMASEs = [] for i, col in enumerate(df.columns): sMASEs.append(calc_sMASE(results[col].iloc[-(context_length + prediction_length):-prediction_length], results[col].iloc[-prediction_length:], results['pred%s' % str(i+1)].iloc[-prediction_length:])) fig, ax = plt.subplots() ax.hist(sMASEs, bins=20) ax.set_title('Distributions of sMASEs for {} dataset'.format(dataset_name)) ax.set_xlabel('sMASE') ax.set_ylabel('Count'); sMASE = np.mean(sMASEs) print("Overall sMASE: {:.4f}".format(sMASE)) ``` Show some example forecasts. 
``` fig, ax = plt.subplots(5, 2, sharex=True) ax = ax.ravel() for col in range(1, 11): ax[col-1].plot(results.index[-prediction_length:], results['ts%s' % col].iloc[-prediction_length:], label='Actual', c='k', linestyle='--', linewidth=1) ax[col-1].plot(results.index[-prediction_length:], results['pred%s' % col].iloc[-prediction_length:], label='Predicted', c='b') fig.suptitle('{} Predictions'.format(dataset_name)) ax[0].legend(); ``` Store the predictions and accuracy score for the Seasonal Naive Approach models. ``` import pickle with open('{}-sMASE.pkl'.format(dataset_abbr), 'wb') as f: pickle.dump(sMASE, f) with open('../_results/{}/{}-results.pkl'.format(model_name, dataset_abbr), 'wb') as f: pickle.dump(results.iloc[-prediction_length:], f) ```
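As a sanity check, the seasonal MASE used above can be re-implemented in plain Python (no pandas) and verified on a tiny hand-computable example. The series below are made-up toy data, not the weather dataset:

```python
def smase(training, testing, prediction, seasonality):
    """Seasonal mean absolute scaled error (plain-Python sketch)."""
    # scale term: in-sample MAE of the seasonal naive forecast
    a = training[seasonality:]
    b = training[:-seasonality]
    d = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    # out-of-sample MAE, scaled by d
    errors = [abs(t - p) for t, p in zip(testing, prediction)]
    return (sum(errors) / len(errors)) / d

training = [1, 2, 3, 4, 5, 6]   # toy history
testing = [7, 8]                # toy actuals
prediction = training[-2:]      # seasonal naive forecast, seasonality = 2
print(smase(training, testing, prediction, seasonality=2))  # 1.0
```

Here every seasonal difference in the history is 2 and every forecast error is also 2, so the scaled error is exactly 1.0; values below 1 mean the model beats the in-sample seasonal naive baseline.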
## Getting Started [`Magma`](https://github.com/phanrahan/magma) is a hardware construction language written in `Python 3`. The central abstraction in `Magma` is a `Circuit`, which is analogous to a verilog module. A circuit is a set of functional units that are wired together. `Magma` is designed to work with [`Mantle`](https://github.com/phanrahan/mantle), a library of hardware building blocks including logic and arithmetic units, registers, memories, etc. The [`Loam`](https://github.com/phanrahan/loam) system builds upon the `Magma` `Circuit` abstraction to represent *parts* and *boards*. A board consists of a set of parts that are wired together. `Loam` makes it easy to set up a board such as the Lattice IceStick. ### Lattice IceStick In this tutorial, we will be using the Lattice IceStick. This breakout board contains an ICE40HX FPGA with 1K 4-input LUTs. The board has several useful peripherals including an FTDI USB interface with an integrated JTAG interface which is used to program the FPGA and a USART which is used to communicate with the host. The board also contains 5 LEDs, a PMOD interface, and 2 10-pin headers (J1 and J3). The 10-pin headers bring out 8 GPIO pins, as well as power and ground. This board is inexpensive ($25), can be plugged into the USB port on your laptop, and, best of all, can be programmed using an open source software toolchain. ![icestick](images/icestick.jpg) Additional information about the IceStick Board can be found in the [IceStick Programmers Guide](http://www.latticesemi.com/~/media/LatticeSemi/Documents/UserManuals/EI/icestickusermanual.pdf) ### Blink As a first example, let's write a `Magma` program that blinks an LED on the IceStick Board. First, we import `Magma` as the module `m`. Next, we import `Counter` from `Mantle`. Before doing the import we configure mantle to use the ICE40 as the target device. ``` import magma as m m.set_mantle_target("ice40") ``` The next step is to set up the IceStick board.
We import the class `IceStick` from `Loam`. We then create an instance of an `IceStick`. This board instance has member variables that store the configuration of all the parts on the board. The blink program will use the Clock and the LED D5. Turning *on* the Clock and the LED D5 sets up the build environment to use the associated ICE40 GPIO pins. ``` from loam.boards.icestick import IceStick # Create an instance of an IceStick board icestick = IceStick() # Turn on the Clock # The clock must be turned on because we are using a synchronous counter icestick.Clock.on() # Turn on the LED D5 icestick.D5.on(); ``` Now that the IceStick setup is done, we create a `main` program that runs on the Lattice ICE40 FPGA. This main program becomes the top level module. We create a simple circuit inside `main`. The circuit has a 22-bit counter wired to D5. The crystal connected to the ICE40 has a frequency of 12 MHz, so the counter will increment at that rate. Wiring the most-significant bit of the counter to D5 will cause the LED to blink roughly 3 times per second. `D5` is accessible via `main`. In a similar way, the output of the counter is accessible via `counter.O`, and since this is an array of bits we can access the MSB using Python's standard list indexing syntax. ``` from mantle import Counter N = 22 # Define the main Magma Circuit on the FPGA on the IceStick main = icestick.DefineMain() # Instance a 22-bit counter counter = Counter(N) # Wire bit 21 of the counter's output to D5. main.D5 <= counter.O[N-1] # End main m.EndDefine() ``` We then compile the program to verilog. This step also creates a PCF (physical constraints file). ``` m.compile('build/blink', main) ``` Now we run the open source tools for the Lattice ICE40. `yosys` synthesizes the input verilog file (`blink.v`) to produce an output netlist (`blink.blif`). `arachne-pnr` runs place and route and generates the bitstream as a text file.
`icepack` creates a binary bitstream file that can be downloaded to the FPGA. `iceprog` uploads the bitstream to the device. Once the device has been programmed, you should see the center, green LED blinking.

```
%%bash
cd build
yosys -q -p 'synth_ice40 -top main -blif blink.blif' blink.v
arachne-pnr -q -d 1k -o blink.txt -p blink.pcf blink.blif
icepack blink.txt blink.bin
#iceprog blink.bin
```

You can view the verilog file generated by `Magma`.

```
%cat build/blink.v
```

Notice that the top-level module contains two arguments (ports), `D5` and `CLKIN`. `D5` has been configured as an output, and `CLKIN` as an input. The mapping from these named arguments to pins is contained in the PCF (physical constraints file).

```
%cat build/blink.pcf
```

`D5` is connected to pin 95 and `CLKIN` is connected to pin 21.
github_jupyter
# Measuring monotonic relationships

By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards

Reference: DeFusco, Richard A. "Tests Concerning Correlation: The Spearman Rank Correlation Coefficient." Quantitative Investment Analysis. Hoboken, NJ: Wiley, 2007

Part of the Quantopian Lecture Series:

* [www.quantopian.com/lectures](https://www.quantopian.com/lectures)
* [github.com/quantopian/research_public](https://github.com/quantopian/research_public)

Notebook released under the Creative Commons Attribution 4.0 License. Please do not remove this attribution.

---

The Spearman Rank Correlation Coefficient allows us to determine whether or not two data series move together; that is, when one increases (decreases) the other also increases (decreases). This is more general than a linear relationship; for instance, $y = e^x$ is a monotonic function, but not a linear one. Therefore, in computing it we compare not the raw data but the ranks of the data.

This is useful when your data sets may be in different units, and therefore not linearly related (for example, the price of a square plot of land and its side length, since the price is more likely to be linear in the area). It's also suitable for data sets which do not satisfy the assumptions that other tests require, such as the observations being normally distributed as would be necessary for a t-test.

```
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import math

# Example of ranking data
l = [10, 9, 5, 7, 5]
print('Raw data: ', l)
print('Ranking: ', list(stats.rankdata(l, method='average')))
```

## Spearman Rank Correlation

### Intuition

The intuition is now that instead of looking at the relationship between the two variables, we look at the relationship between the ranks. This is robust to outliers and the scale of the data.
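As a quick aside (assuming `scipy` is installed, as in the imports above), the `method` argument of `rankdata` controls how ties are broken; `'average'` is what we use throughout this lecture:

```python
import scipy.stats as stats

l = [10, 9, 5, 7, 5]

# 'average' splits the tied ranks 1 and 2 into 1.5 each
print(list(stats.rankdata(l, method='average')))  # [5.0, 4.0, 1.5, 3.0, 1.5]

# 'min' instead gives every member of a tie the lowest tied rank
print(list(stats.rankdata(l, method='min')))      # [5, 4, 1, 3, 1]
```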
### Definition

The argument `method='average'` indicates that when we have a tie, we average the ranks that the numbers would occupy. For example, the two 5's above, which would take up ranks 1 and 2, each get assigned a rank of $1.5$.

To compute the Spearman rank correlation for two data sets $X$ and $Y$, each of size $n$, we use the formula

$$r_S = 1 - \frac{6 \sum_{i=1}^n d_i^2}{n(n^2 - 1)}$$

where $d_i$ is the difference between the ranks of the $i$th pair of observations, $\operatorname{rank}(X_i) - \operatorname{rank}(Y_i)$.

The result will always be between $-1$ and $1$. A positive value indicates a positive relationship between the variables, while a negative value indicates an inverse relationship. A value of 0 implies the absence of any monotonic relationship. This does not mean that there is no relationship; for instance, if $Y$ is equal to $X$ with a delay of 2, they are related simply and precisely, but their $r_S$ can be close to zero:

## Experiment

Let's see what happens if we draw $X$ from a poisson distribution (non-normal), and then set $Y = e^X + \epsilon$ where $\epsilon$ is drawn from a normal distribution. We'll take the Spearman rank and the correlation coefficient on this data and then run the entire experiment many times. Because $e^X$ produces many values that are far away from the rest, we can think of this as modeling 'outliers' in our data. Spearman rank compresses the outliers and does better at measuring correlation. Normal correlation is confused by the outliers and on average will measure less of a relationship than is actually there.
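To make the formula concrete, here is a small hand computation (a sketch, using the `numpy`/`scipy` imports above) checked against scipy's built-in `spearmanr`. Because $e^x$ is monotonic, the ranks of $X$ and $Y$ match exactly and $r_S = 1$:

```python
import numpy as np
import scipy.stats as stats

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.exp(X)  # monotonic in X, but far from linear

n = len(X)
d = stats.rankdata(X) - stats.rankdata(Y)            # rank differences d_i
r_s_manual = 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))
r_s_scipy = stats.spearmanr(X, Y)[0]
print(r_s_manual, r_s_scipy)  # both 1.0
```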
```
## Let's see an example of this
n = 100

def compare_correlation_and_spearman_rank(n, noise):
    X = np.random.poisson(size=n)
    Y = np.exp(X) + noise * np.random.normal(size=n)

    Xrank = stats.rankdata(X, method='average')
    Yrank = stats.rankdata(Y, method='average')

    diffs = Xrank - Yrank  # order doesn't matter since we'll be squaring these values
    r_s = 1 - 6*sum(diffs*diffs)/(n*(n**2 - 1))
    c_c = np.corrcoef(X, Y)[0, 1]

    return r_s, c_c

experiments = 1000
spearman_dist = np.ndarray(experiments)
correlation_dist = np.ndarray(experiments)

for i in range(experiments):
    r_s, c_c = compare_correlation_and_spearman_rank(n, 1.0)
    spearman_dist[i] = r_s
    correlation_dist[i] = c_c

print('Spearman Rank Coefficient: ' + str(np.mean(spearman_dist)))
# Compare to the regular correlation coefficient
print('Correlation coefficient: ' + str(np.mean(correlation_dist)))
```

Let's take a look at the distribution of measured correlation coefficients and compare the Spearman with the regular metric.

```
plt.hist(spearman_dist, bins=50, alpha=0.5)
plt.hist(correlation_dist, bins=50, alpha=0.5)
plt.legend(['Spearman Rank', 'Regular Correlation'])
plt.xlabel('Correlation Coefficient')
plt.ylabel('Frequency');
```

Now let's see how the Spearman rank and regular coefficients cope when we add more noise to the situation.
```
n = 100
noises = np.linspace(0, 3, 30)
experiments = 100
spearman = np.ndarray(len(noises))
correlation = np.ndarray(len(noises))

for i in range(len(noises)):
    # Run many experiments for each noise setting
    rank_coef = 0.0
    corr_coef = 0.0
    noise = noises[i]
    for j in range(experiments):
        r_s, c_c = compare_correlation_and_spearman_rank(n, noise)
        rank_coef += r_s
        corr_coef += c_c
    spearman[i] = rank_coef/experiments
    correlation[i] = corr_coef/experiments

plt.scatter(noises, spearman, color='r')
plt.scatter(noises, correlation)
plt.legend(['Spearman Rank', 'Regular Correlation'])
plt.xlabel('Amount of Noise')
plt.ylabel('Average Correlation Coefficient')
```

We can see that the Spearman rank correlation copes with the non-linear relationship much better at most levels of noise. Interestingly, at very high levels, it seems to do worse than regular correlation.

## Delay in correlation

Often you might have the case that one process affects another, but after a time lag. Now let's see what happens if we add the delay.

```
n = 100

X = np.random.rand(n)
Xrank = stats.rankdata(X, method='average')
# [1, 1] + X[:(n-2)] is X delayed by two steps; n-2 is the second-to-last element
Yrank = stats.rankdata([1, 1] + list(X[:(n-2)]), method='average')

diffs = Xrank - Yrank  # order doesn't matter since we'll be squaring these values
r_s = 1 - 6*sum(diffs*diffs)/(n*(n**2 - 1))
print(r_s)
```

Sure enough, the relationship is not detected. It is important when using both regular and Spearman correlation to check for lagged relationships by offsetting your data and testing for different offset values.

## Built-In Function

We can also use the `spearmanr` function in the `scipy.stats` library:

```
# Generate two random data sets
np.random.seed(161)
X = np.random.rand(10)
Y = np.random.rand(10)

r_s = stats.spearmanr(X, Y)
print('Spearman Rank Coefficient: ', r_s[0])
print('p-value: ', r_s[1])
```

We now have ourselves an $r_S$, but how do we interpret it? It's positive, so we know that the variables are not anticorrelated.
It's not very large, so we know they aren't perfectly positively correlated, but it's hard to say from a glance just how significant the correlation is. Luckily, `spearmanr` also computes the p-value for this coefficient and sample size for us. We can see that the p-value here is above 0.05; therefore, we cannot claim that $X$ and $Y$ are correlated.

## Real World Example: Mutual Fund Expense Ratio

Now that we've seen how Spearman rank correlation works, we'll quickly go through the process again with some real data. For instance, we may wonder whether the expense ratio of a mutual fund is indicative of its three-year Sharpe ratio. That is, does spending more money on administration, management, etc. lower the risk or increase the returns? Quantopian does not currently support mutual funds, so we will pull the data from Yahoo Finance. Our p-value cutoff will be the usual default of 0.05.

### Data Source

Thanks to [Matthew Madurski](https://github.com/dursk) for the data. To obtain the same data:

1. Download the csv from this link. https://gist.github.com/dursk/82eee65b7d1056b469ab
2. Upload it to the 'data' folder in your research account.

```
mutual_fund_data = local_csv('mutual_fund_data.csv')
expense = mutual_fund_data['Annual Expense Ratio'].values
sharpe = mutual_fund_data['Three Year Sharpe Ratio'].values

plt.scatter(expense, sharpe)
plt.xlabel('Expense Ratio')
plt.ylabel('Sharpe Ratio')

r_S = stats.spearmanr(expense, sharpe)
print('Spearman Rank Coefficient: ', r_S[0])
print('p-value: ', r_S[1])
```

Our p-value is below the cutoff, which means we reject the null hypothesis and conclude that the two are correlated. The negative coefficient indicates that there is a negative correlation, and that more expensive mutual funds have worse Sharpe ratios. However, there is some weird clustering in the data: it seems there are expensive groups with low Sharpe ratios, and a main group whose Sharpe ratio is unrelated to the expense.
Further analysis would be required to understand what's going on here.

## Real World Use Case: Evaluating a Ranking Model

### NOTE: [Factor Analysis](https://www.quantopian.com/lectures/factor-analysis) now covers this topic in much greater detail

Let's say that we have some way of ranking securities and that we'd like to test how well our ranking performs in practice. In this case our model just takes the mean daily return for the last month and ranks the stocks by that metric.

We hypothesize that this will be predictive of the mean returns over the next month. To test this we score the stocks based on a lookback window, then take the Spearman rank correlation of the score and the mean returns over the walk-forward month.

```
symbol_list = ['A', 'AA', 'AAC', 'AAL', 'AAMC', 'AAME', 'AAN', 'AAOI', 'AAON', 'AAP', 'AAPL', 'AAT', 'AAU', 'AAV', 'AAVL', 'AAWW', 'AB', 'ABAC', 'ABAX', 'ABB', 'ABBV', 'ABC', 'ABCB', 'ABCD', 'ABCO', 'ABCW', 'ABDC', 'ABEV', 'ABG', 'ABGB']

# Get the returns over the lookback window
start = '2014-12-01'
end = '2015-01-01'
historical_returns = get_pricing(symbol_list, fields='price', start_date=start, end_date=end).pct_change()[1:]

# Compute our stock score
scores = np.mean(historical_returns)
print('Our Scores\n')
print(scores)
print('\n')

start = '2015-01-01'
end = '2015-02-01'
walk_forward_returns = get_pricing(symbol_list, fields='price', start_date=start, end_date=end).pct_change()[1:]
walk_forward_returns = np.mean(walk_forward_returns)
print('The Walk Forward Returns\n')
print(walk_forward_returns)
print('\n')

plt.scatter(scores, walk_forward_returns)
plt.xlabel('Scores')
plt.ylabel('Walk Forward Returns')

r_s = stats.spearmanr(scores, walk_forward_returns)
print('Correlation Coefficient: ' + str(r_s[0]))
print('p-value: ' + str(r_s[1]))
```

The p-value is above our cutoff, so we fail to reject the null hypothesis; we cannot conclude that our ranking was any better than random.
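The accept/reject decisions used throughout these examples can be captured in a small helper (a sketch; `spearman_verdict` is our own hypothetical name, not a scipy function):

```python
import scipy.stats as stats

# Hypothetical helper: report the coefficient only when the p-value
# clears the chosen cutoff; otherwise we cannot reject "no correlation".
def spearman_verdict(x, y, cutoff=0.05):
    r, p = stats.spearmanr(x, y)
    return r if p < cutoff else None

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 4, 5, 8, 9, 11, 12, 20]  # strictly increasing with x
print(spearman_verdict(x, y))    # 1.0 -- perfectly monotonic, tiny p-value
```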
This is a really good check of any ranking system one devises for constructing a long-short equity portfolio. *This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

df = pd.read_csv('911.csv')
df.info()
df
df.head()

# Zip codes for 911 calls
df['zip'].value_counts()

# Top zip codes for 911 calls
df['zip'].value_counts().head(5)

# Townships for 911 calls
df['twp'].value_counts()

# Top 5 townships for 911 calls
df['twp'].value_counts().head(5)

# Titles in the title column
df['title'].unique()

# Number of unique titles in the title column
len(df['title'].unique())

# Number of unique titles in the title column
df['title'].nunique()
```

Adding new features: we use a custom lambda expression to add a new feature, `Reason`. *For example, if the title column value is EMS: BACK PAINS/INJURY, the Reason column value would be EMS.*

```
x = df['title'].iloc[0]
x.split(':')[0]

df['Reason'] = df['title'].apply(lambda title: title.split(':')[0])
df['Reason']

# Most common reason
df['Reason'].value_counts()

# Seaborn to create a countplot
sns.countplot(x='Reason', data=df)
sns.countplot(x='Reason', data=df, palette='viridis')

# Datatype of the timeStamp column
df.info()
type(df['timeStamp'].iloc[0])

# Converting them to datetime objects
df['timeStamp'] = pd.to_datetime(df['timeStamp'])
type(df['timeStamp'].iloc[0])

# Grab a single timestamp and call its attributes
time = df['timeStamp'].iloc[0]
time.hour
time.year
time.month
time.dayofweek

df['Hour'] = df['timeStamp'].apply(lambda time: time.hour)
df['Month'] = df['timeStamp'].apply(lambda time: time.month)
df['Day of Week'] = df['timeStamp'].apply(lambda time: time.dayofweek)
df.head()
```

** Notice how the Day of Week is an integer 0-6.
Use the .map() with this dictionary to map the actual string names to the day of the week: ** ``` dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'} df['Day of Week'] = df['Day of Week'].map(dmap) df.head() sns.countplot( x='Day of Week', data=df) sns.countplot( x='Day of Week', data=df, hue='Reason') sns.countplot( x='Day of Week', data=df, hue='Reason', palette ='viridis') sns.countplot(x='Month',data=df,hue='Reason',palette='viridis') # To relocate the legend plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) # It is missing some months! 9,10, and 11 are not there. byMonth = df.groupby('Month').count() byMonth.head() byMonth['twp'].plot() sns.lmplot(x='Month',y='twp',data=byMonth.reset_index()) ``` **Create a new column called 'Date' that contains the date from the timeStamp column. You'll need to use apply along with the .date() method. ** ``` df['Date']=df['timeStamp'].apply(lambda t: t.date()) df.groupby('Date').count()['twp'].plot() plt.tight_layout() df[df['Reason']=='Traffic'].groupby('Date').count()['twp'].plot() plt.title('Traffic') plt.tight_layout() df[df['Reason']=='Fire'].groupby('Date').count()['twp'].plot() plt.title('Fire') plt.tight_layout() df[df['Reason']=='EMS'].groupby('Date').count()['twp'].plot() plt.title('EMS') plt.tight_layout() #Creating heatmaps dayHour = df.groupby(by=['Day of Week','Hour']).count()['Reason'].unstack() dayHour.head() plt.figure(figsize=(12,6)) sns.heatmap(dayHour,cmap='coolwarm') sns.clustermap(dayHour,cmap='viridis') dayMonth = df.groupby(by=['Day of Week','Month']).count()['Reason'].unstack() dayMonth.head() plt.figure(figsize=(12,6)) sns.heatmap(dayMonth,cmap='seismic') sns.clustermap(dayMonth,cmap='viridis') ```
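As a side note (a sketch on a tiny synthetic frame, so it doesn't depend on the 911 data), the `.apply(lambda time: ...)` feature extraction used above has a vectorized equivalent via pandas' `.dt` accessor:

```python
import pandas as pd

tiny = pd.DataFrame({'timeStamp': ['2015-12-10 17:40:00', '2015-12-11 09:15:00']})
tiny['timeStamp'] = pd.to_datetime(tiny['timeStamp'])

dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}

# Vectorized versions of the .apply(lambda time: ...) columns
tiny['Hour'] = tiny['timeStamp'].dt.hour
tiny['Month'] = tiny['timeStamp'].dt.month
tiny['Day of Week'] = tiny['timeStamp'].dt.dayofweek.map(dmap)

print(tiny[['Hour', 'Month', 'Day of Week']])
```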
## Why automate your work flow, and how to approach the process

**Questions for students to consider:**

1) What happens when you get a new dataset that you need to analyze in the same way you analyzed a previous data set?
2) What processes do you do often? How do you implement these?
3) Do you have a clear workflow you could replicate?
4) Or even better, could you plug this new data set into your old workflow?

## Learning Objectives of Automation Module:

### Lesson 1 (10-15min)
- Employ best practices of naming a variable including: don't use existing function names, avoid periods in names, don't use numbers at the beginning of a variable name.

### Lesson 2 (10-15min)
- Define DRY and provide examples of how you would implement DRY in your code; Don't repeat yourself!
- Identify code that can be modularized following DRY and implement a modular workflow using functions.

### Lesson 3 (60 min)
- Know how to construct a function: variables, function name, syntax, documentation, return values
- Demonstrate use of a function within the notebook / code.
- Construct and compose function documentation that clearly defines inputs, output variables and behaviour.

### Lesson 4
- Organize a set of functions within a python (.py) script and use it in (import it into) a Jupyter notebook.

### Lesson 5
- Use asserts to test validity of function inputs and outputs

### Lesson 6
-

## Basic Overview of the suggested workflow using Socrative

- Use a Socrative quiz to collect answers from student activities (students can run their code in their notebooks, and post to Socrative). This will allow the instructor to see what solutions students came up with, and identify any places where misconceptions and confusion are coming up. Using Socrative quizzes also allows for a record of the student work to be analyzed after class to see how students are learning and where they are having troubles.
- Sharing of prepared Socrative quizzes designed to be used with the automation module can be done by URL links sent to each teacher, so they do not have to be remade.

## Review of good variable practices

**Learning Objective:** Employ best practices of naming a variable including: don't use existing function names, avoid periods in names, don't use numbers at the beginning of a variable name

## Types of variables:

- strings, integers, etc.

References: https://www.tutorialspoint.com/python3/python_variable_types.htm

## Naming conventions that should be followed

**Rules**

```
# write out three variables, assign a number, string, list
x = 'Asia'  # String
y = 1952    # an integer
z = 1.5     # a floating point number

cal_1 = y * z
print(cal_1)

# or
x, y = 'Asia', 'Africa'

w = x
w = x + x  # concatenating strings (combining strings)
print(w)

h = 'Africa'

list_1 = ['Asia', 'Africa', 'Europe']  # list
print(list_1)

# Questions for students:
# 1) what do you think will happen with this code?
x * z

# 2) what do you think will happen with this code?
list_1[0]

# 3) what do you think will happen with this code?
list_1[1:2]
```

## Indexing

Python indexing runs from 0 to the length of the list minus 1.

**Example:** list_1 = ['Asia', 'Africa', 'Europe']

Asia index = 0, Africa index = 1, Europe index = 2

```
list_1  # this is not a very descriptive and identifiable variable

countries = ['Asia', 'Africa', 'Europe']
```

## Scope of variables

global vs. local in functions

### Global variables

Global variables are available in the environment your script is working in. Every variable we have made at this point is a global variable.

### Local variables

Local variables will be useful to understand when we start using functions in the automation of our code. Local variables only exist in the function environment, not in the global environment where your linear workflow code runs.
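A minimal sketch of the difference, runnable as-is:

```python
# x here is a global variable; the x inside f() is local and shadows it
x = 'Asia'

def f():
    x = 'Africa'  # local: exists only while f() runs
    return x

print(f())  # Africa
print(x)    # Asia -- the global x was never changed
```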
## Other useful conventions with variables to follow

1) Set up variables at the beginning of your page, after importing libraries

2) Use variables instead of file names, or exact values or strings, so that if you need to change the value of something you don't have to search through all your code to make sure you made the change everywhere; simply change the value of the variable at the top.

-- This will also make your code more reproducible in the end.
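For instance (a small sketch; the file name below is hypothetical):

```python
# Convention 2 in practice: collect file names and tunable values at the
# top, then refer only to the variables below, so switching datasets
# means editing one line.
data_file = 'gapminder_data.csv'  # hypothetical input file
n_rows = 5                        # number of rows to preview

message = 'Previewing first {} rows of {}'.format(n_rows, data_file)
print(message)
```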
<h2> How to run this file </h2>

To run the code in each cell, use shift+enter or tinker with the Cell part of the menu bar above. You may come across run issues; look up online references or tinker and figure it out. Failure of the code is not a big deal, as you can always delete a file and start afresh.

<h2> About this block </h2>

This is a markdown cell; it was created by using the Cell menu above and selecting the markdown option. For more on the usage of Markdown script, look up the corresponding Wikipedia page.

The cell below defines two lines, shows the resulting plot and gives the intersection point if it is unique. Feel free to tinker with the code and rerun using shift+enter.

```
# Comments are given on the right side in blue after the # prompt.
# Please note that indentation is used for nested statements without using "end".
# If you have line equations of the form ax + by = c, bring them to y = mx + c form to use this code.

import numpy as np               # Python library for efficient computation
import matplotlib.pyplot as plt  # Plot library

x = np.arange(-10, 10, 0.5)  # defining the x range and step size

m1 = 2   # m1 is the slope for line 1
c1 = 3   # c1 is the intercept for line 1
m2 = 2   # m2 is the slope for line 2
c2 = -2  # c2 is the intercept for line 2

def y(x, m, c):  # The definition of the line
    return m*x + c

plt.plot(x, y(x, m1, c1), 'b', x, y(x, m2, c2), 'g')  # Plotting the two lines in two colors, blue 'b' and green 'g'
plt.show()  # After creating the plot object, it has to be shown.
```

Create a cell below using the Insert menu at the top. Declare the cell of type "markdown" by selecting "Cell menu -> Cell type" and write some really important information like "Computing is cool"

```
# Solving the two equations, checking for special cases
x0 = "not defined"
y0 = "not defined"

if (m1*m2) == 0:  # If at least one of the lines is horizontal.
    if m1 != 0:
        y0 = c2
        x0 = (c2 - c1)/m1
    elif m2 != 0:
        y0 = c1
        x0 = (c1 - c2)/m2
    elif c1 == c2:
        print("The lines are the same, zero slope")
        print("infinite number of solutions")
    else:
        print("parallel lines with zero slope, solution doesn't exist")
elif (m1-m2) != 0:
    x0 = (c2-c1)/(m1 - m2)
    y0 = m1*x0 + c1
elif (c1 == c2):
    print("The lines are the same")
    print("infinite number of solutions")
elif c1 != c2:
    print("parallel lines, solution doesn't exist")

(x0, y0)
```

Matrix form of the equation is

$$ \left(\begin{array}{cc} m1 & -1\\ m2 & -1 \end{array}\right) \left(\begin{array}{c} x \\ y \end{array}\right) = \left(\begin{array}{c} -c1 \\ -c2 \end{array}\right) $$

```
# Define two lines and draw them
m1p = 2
m2p = -1
c1p = 1
c2p = -2

plt.plot(x, y(x, m1p, c1p), 'pink', x, y(x, m2p, c2p), 'c')
plt.show()

# Repeat the exercise to draw three lines
# Write the code below
```

Write the matrix equation representing 3 intersecting lines. How many variables (unknowns) are there? How many equations are there? What does the solution mean geometrically? How many different types of solutions can there be? Insert a cell below and answer the questions.

<h2> Notation </h2>

We use an uparrow after a symbol to denote a column vector (like $x \uparrow$) and a horizontal arrow above the symbol (like $\vec{x}$) to denote a row vector.

<h2> The row and column pictures </h2>

Consider a matrix equation $ A x\uparrow = b \uparrow $ as follows,

$$ \left(\begin{array}{cc} a_{11} & a_{12}\\ a_{21} & a_{22} \end{array}\right) \left(\begin{array}{c} x \\ y \end{array}\right) = \left(\begin{array}{c} b_1 \\ b_2 \end{array}\right) $$

The question of finding solutions of the above matrix equation can be thought of as that of finding the intersection point(s) for a set of lines. This geometric picture is also known as the row picture of the matrix equation. The equations in the row picture are written as,

$$ a_{11} x + a_{12} y = b_1 \\ a_{21} x + a_{22} y = b_2.
$$

One can also interpret the matrix equation in terms of addition of vectors as follows,

$$ x \left(\begin{array}{c} a_{11} \\ a_{21} \end{array}\right) + y \left(\begin{array}{c} a_{12} \\ a_{22} \end{array}\right) = \left(\begin{array}{c} b_1 \\ b_2 \end{array}\right) $$

Here the question is to look for scaling factors $x$ and $y$ so that the vector sum of the two given scaled vectors equals the RHS. This geometric picture is known as the column picture. A detailed discussion and exercises can be found in "Linear Algebra and its Applications" (chapter 1) by Gilbert Strang.

The matrix equation can also be interpreted in a third scenario, where the matrix is thought of as a linear transformation and the question reduces to that of finding a vector $x\uparrow$ whose linear transformation gives the vector $b\uparrow$.

Write a three dimensional version of the matrix equation $ A x\uparrow = b \uparrow $ and the corresponding row and column picture representations below.
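As a cross-check of the row picture (a sketch assuming `numpy` is available), `np.linalg.solve` solves $A x\uparrow = b\uparrow$ directly whenever $A$ is invertible. The first system below is the pair of example lines drawn earlier ($y = 2x + 1$ and $y = -x - 2$):

```python
import numpy as np

# Two lines, written as m*x - y = -c
A2 = np.array([[ 2.0, -1.0],
               [-1.0, -1.0]])
b2 = np.array([-1.0, 2.0])
print(np.linalg.solve(A2, b2))  # the lines cross at (-1, -1)

# The same idea with three equations in three unknowns
A3 = np.array([[1.0, 1.0,  1.0],
               [0.0, 2.0,  5.0],
               [2.0, 5.0, -1.0]])
b3 = np.array([6.0, -4.0, 27.0])
print(np.linalg.solve(A3, b3))  # [5, 3, -2]
```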
```
import sys

if "google.colab" in sys.modules:
    branch = "master"  # change to the branch you want
    ! git clone --single-branch --branch $branch https://github.com/OpenMined/PySyft.git
    ! cd PySyft && ./scripts/colab.sh  # fixes some colab python issues
    sys.path.append("/content/PySyft/src")  # prevents needing restart

import syft as sy
```

## Join the Duet server that Data Owner 1 is connected to

```
duet1 = sy.join_duet(loopback=True)
duet1.store.pandas
```

## Join the Duet server that Data Owner 2 is connected to

```
duet2 = sy.join_duet(loopback=True)
duet2.store.pandas
```

## Linear regression

```
data1_ptr = duet1.store[0]
target1_ptr = duet1.store[1]
#data2_ptr = duet2.store[0]
#target2_ptr = duet2.store[1]

print(data1_ptr)
print(target1_ptr)
#print(data2_ptr)
#print(target2_ptr)
```

### Create Base Model

```
import torch

in_dim = 8
out_dim = 5

class SyNet(sy.Module):
    def __init__(self, torch_ref):
        super(SyNet, self).__init__(torch_ref=torch_ref)
        self.lin1 = self.torch_ref.nn.Linear(in_dim, 256)
        self.act1 = self.torch_ref.nn.ReLU()
        self.lin2 = self.torch_ref.nn.Linear(256, 64)
        self.act2 = self.torch_ref.nn.ReLU()
        self.lin3 = self.torch_ref.nn.Linear(64, out_dim)
        self.sm = self.torch_ref.nn.Softmax(dim=1)

    def forward(self, x):
        x = self.lin1(x)
        x = self.act1(x)
        x = self.lin2(x)
        x = self.act2(x)
        x = self.lin3(x)
        return x

    def inference(self, x):
        x = self.forward(x)
        x = self.sm(x)
        return x

combined_model = SyNet(torch)
```

### Training

```
def train(epochs, model, torch_ref, optim, data_ptr, target_ptr, criterion):
    losses = []
    for epoch in range(epochs):
        optim.zero_grad()
        output = model(data_ptr)
        loss = criterion(output, target_ptr)
        loss_item = loss.item()
        loss_value = loss_item.get(
            reason="To evaluate training progress",
            request_block=True,
            timeout_secs=5,
        )
        #if epoch % 5 == 0:
        print("Epoch", epoch, "loss", loss_value)
        losses.append(loss_value)
        loss.backward()
        optim.step()
    return losses
```

#### Send one copy of the model to each data owner or client and train remotely

```
import torch as th import numpy as np ``` Train on Data Owner 1 data ``` local_model1 = SyNet(torch) print(local_model1.parameters()) remote_model1 = local_model1.send(duet1) remote_torch1 = duet1.torch params = remote_model1.parameters() optim1 = remote_torch1.optim.SGD(params=params, lr=0.01) ``` Dummy target data ``` #target1_ptr = th.FloatTensor(np.array([5, 10, 15, 22, 30, 38]).reshape(-1, 1)) #target1_ptr print(remote_torch1) epochs= 20 criterion = remote_torch1.nn.CrossEntropyLoss() losses = train(epochs, remote_model1, remote_torch1, optim1, data1_ptr, target1_ptr, criterion) ``` Train on Data Owner 2 data ``` data2_ptr = duet2.store[0] target2_ptr = duet2.store[1] print(data2_ptr) print(target2_ptr) local_model2 = SyNet(torch) print(local_model2.parameters()) remote_model2 = local_model2.send(duet2) remote_torch2 = duet2.torch params = remote_model2.parameters() optim2 = remote_torch2.optim.SGD(params=params, lr=0.01) ``` Dummy Target data ``` #target2_ptr = th.FloatTensor(np.array([35, 40, 45, 55, 60]).reshape(-1, 1)) #target2_ptr epochs = 20 criterion = remote_torch2.nn.CrossEntropyLoss() losses = train(epochs, remote_model2, remote_torch2, optim2, data2_ptr, target2_ptr, criterion) ``` ### Averaging Model Updates Ideally, there will be a coordinator server who will get the model updates from different clients and make an aggregation. For the case of simplicity, in this example we will make THIS server the coordinator. ``` from collections import OrderedDict ## Little sanity check! 
param1 = remote_model1.parameters().get(request_block=True)
param2 = remote_model2.parameters().get(request_block=True)

print("Local model1 parameters:")
print(local_model1.parameters())
print("Remote model1 parameters:")
print(param1)
print()
print("Local model2 parameters:")
print(local_model2.parameters())
print("Remote model2 parameters:")
print(param2)

remote_model1_updates = remote_model1.get(
    request_block=True
).state_dict()
print(remote_model1_updates)

remote_model2_updates = remote_model2.get(
    request_block=True
).state_dict()
print(remote_model2_updates)

# Average the two state dicts key by key. Note that SyNet's layers are
# named lin1/lin2/lin3, so we iterate over the keys rather than
# hard-coding a name like "linear.weight".
avg_updates = OrderedDict()
for key in remote_model1_updates:
    avg_updates[key] = (
        remote_model1_updates[key] + remote_model2_updates[key]
    ) / 2
print(avg_updates)
```

### Load aggregated weights

```
combined_model.load_state_dict(avg_updates)
del avg_updates

test_data = th.FloatTensor(np.array([17, 25, 32, 50, 80]).reshape(-1, 1))
test_target = th.FloatTensor(np.array([12, 15, 20, 30, 50]).reshape(-1, 1))

preds = []
with torch.no_grad():
    for i in range(len(test_data)):
        sample = test_data[i]
        y_hat = combined_model(sample)
        print(f"Prediction: {y_hat.item()} Ground Truth: {test_target[i].item()}")
        preds.append(y_hat)
```

## Comparison to classical linear regression on centralised data

```
import torch
import numpy as np

in_dim = 1
out_dim = 1

class ClassicalLR(torch.nn.Module):
    def __init__(self, torch):
        super(ClassicalLR, self).__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x):
        x = self.linear(x)
        return x

classical_model = ClassicalLR(torch)

data = torch.FloatTensor(
    np.array([5, 15, 25, 35, 45, 55, 60, 65, 75, 85, 95]).reshape(-1, 1)
)
target = torch.FloatTensor(
    np.array([5, 10, 15, 22, 30, 38, 35, 40, 45, 55, 60]).reshape(-1, 1)
)

def classic_train(epochs, model, torch, optim, data, target, criterion):
    losses = []
    for i in range(epochs):
        optim.zero_grad()
output = model(data) loss = criterion(output, target) loss_item = loss.item() if i % 10 == 0: print("Epoch", i, "loss", loss_item) losses.append(loss_item) loss.backward() optim.step() return losses params = classical_model.parameters() optim = torch.optim.SGD(params=params, lr=0.01) criterion = torch.nn.MSELoss() epochs = 20 losses = classic_train( epochs, classical_model, torch, optim, data, target, criterion ) test_data = th.FloatTensor(np.array([17, 25, 32, 50, 80]).reshape(-1, 1)) test_target = th.FloatTensor(np.array([12, 15, 20, 30, 50]).reshape(-1, 1)) preds = [] with torch.no_grad(): for i in range(len(test_data)): sample = test_data[i] y_hat = classical_model(sample) print(f"Prediction: {y_hat.item()} Ground Truth: {test_target[i].item()}") preds.append(y_hat) ```
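The two-client averaging above generalizes to any number of participants; this key-by-key mean is the core of federated averaging. A sketch using `numpy` arrays as stand-ins for the torch tensors in a `state_dict` (real state dicts average the same way, since tensors support `+` and `/`):

```python
import numpy as np
from collections import OrderedDict

# Average a list of state dicts key by key
def average_state_dicts(state_dicts):
    avg = OrderedDict()
    for key in state_dicts[0]:
        avg[key] = sum(sd[key] for sd in state_dicts) / len(state_dicts)
    return avg

sd1 = OrderedDict([('lin1.weight', np.array([2.0, 4.0]))])
sd2 = OrderedDict([('lin1.weight', np.array([4.0, 8.0]))])

avg = average_state_dicts([sd1, sd2])
print(avg['lin1.weight'])  # [3. 6.]
```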
# Reference:

Implemented: https://towardsdatascience.com/detection-of-price-support-and-resistance-levels-in-python-baedc44c34c9

Alternative: https://medium.com/@judopro/using-machine-learning-to-programmatically-determine-stock-support-and-resistance-levels-9bb70777cf8e

```
import pandas as pd
import numpy as np
import yfinance
from mplfinance.original_flavor import candlestick_ohlc
import matplotlib.dates as mpl_dates
import matplotlib.pyplot as plt

plt.rcParams['figure.figsize'] = [12, 7]
plt.rc('font', size=14)

# Download S&P 500 daily data
ticker = yfinance.Ticker('SPY')
df = ticker.history(interval="1d", start="2020-01-01", end="2021-02-15")

df['Date'] = pd.to_datetime(df.index)
df['Date'] = df['Date'].apply(mpl_dates.date2num)
df = df.loc[:, ['Date', 'Open', 'High', 'Low', 'Close']]
df.head()

# Two functions that identify the 4-candle fractals
def isSupport(df, i):
    support = (df['Low'][i] < df['Low'][i-1]
               and df['Low'][i] < df['Low'][i+1]
               and df['Low'][i+1] < df['Low'][i+2]
               and df['Low'][i-1] < df['Low'][i-2])
    return support

def isResistance(df, i):
    resistance = (df['High'][i] > df['High'][i-1]
                  and df['High'][i] > df['High'][i+1]
                  and df['High'][i+1] > df['High'][i+2]
                  and df['High'][i-1] > df['High'][i-2])
    return resistance

# Create a list that will contain the levels we find. Each level is a tuple
# whose first element is the index of the signal candle and the second
# element is the price value.
levels = [] for i in range(2,df.shape[0]-2): if isSupport(df,i): levels.append((i,df['Low'][i])) elif isResistance(df,i): levels.append((i,df['High'][i])) # Define a function that plots price and key levels together def plot_all(): fig, ax = plt.subplots() candlestick_ohlc(ax,df.values,width=0.6, \ colorup='green', colordown='red', alpha=0.8) date_format = mpl_dates.DateFormatter('%d %b %Y') ax.xaxis.set_major_formatter(date_format) fig.autofmt_xdate() fig.tight_layout() for level in levels: plt.hlines(level[1],xmin=df['Date'][level[0]],\ xmax=max(df['Date']),colors='blue') fig.show() # Plot and see plot_all() ``` We have been able to detect the major rejection levels, but there is still some noise. Some levels sit on top of others even though they are essentially the same level. We can clean up this noise by modifying the function that detects key levels: if a level is near a previously found one, it will be discarded. We must then decide what "near" means. We can say that a level is near another one if their distance is less than the average candle size in our chart (i.e. the average difference between the high and low prices of a candle), which gives us a rough estimate of volatility. ``` s = np.mean(df['High'] - df['Low']) # Define a function that, given a price value, returns False if it is near some previously discovered key level. def isFarFromLevel(l): return np.sum([abs(l - x[1]) < s for x in levels]) == 0 # compare against the stored price, not the whole (index, price) tuple # Scan the price history looking for key levels using this function as a filter. levels = [] for i in range(2,df.shape[0]-2): if isSupport(df,i): l = df['Low'][i] if isFarFromLevel(l): levels.append((i,l)) elif isResistance(df,i): l = df['High'][i] if isFarFromLevel(l): levels.append((i,l)) plot_all() ```
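The "keep a level only if it is far from every level already found" idea can be shown in a self-contained sketch, on made-up price levels (the numbers below are illustrative, not market data):

```python
def filter_levels(candidates, min_dist):
    """Keep a candidate level only if it is at least min_dist away from every kept level."""
    kept = []
    for level in candidates:
        if all(abs(level - k) >= min_dist for k in kept):
            kept.append(level)
    return kept

# With min_dist=1.0 the near-duplicates 100.2 and 105.4 are discarded
print(filter_levels([100.0, 100.2, 105.0, 105.4, 110.0], 1.0))  # [100.0, 105.0, 110.0]
```

Note that the result depends on scan order: the first level found in a given zone wins, which matches the chronological scan over the price history above.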
---
``` import math import torch as t import torch.nn as nn import torch.nn.functional as F import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline # activation should be a function such as F.relu or F.tanh # this thin wrapper adds an optional activation on top of nn.Linear class Layer(nn.Module): def __init__(self, D_in, D_out, activation): super(Layer,self).__init__() self.linear = nn.Linear(D_in, D_out) self.activation = activation #F.relu def forward(self, x): if self.activation is None: return self.linear(x) else: return self.activation(self.linear(x)) #Test code example #l1 = Layer(3,5, F.relu) #l1(t.randn(10,3)) #or l1.forward(t.randn(10,3)) # KL divergence for encoder's cost # mu.shape[1] : the dimension of mu EPS = 0.001 def KL_mvn(mu, var): return (mu.shape[1] + t.sum(t.log(var), dim=1) - t.sum(mu**2, dim=1) - t.sum(var, dim=1)) / 2.0 # For decoder's cost def log_diag_mvn(mu, var): def f(x): k = mu.shape[1] log_p = (-k / 2.0) * math.log(2 * math.pi) - 0.5 * t.sum(t.log(var), dim=1) - t.sum(0.5 * (1.0 / var) * (x - mu) * (x - mu), dim=1) return log_p return f class Encoder(nn.Module): def __init__(self, layer_sizes, activations): super(Encoder, self).__init__() # add layers to the network; nn.ModuleList registers their parameters with the optimizer self.layers = nn.ModuleList() for n_input, n_output, activation in zip(layer_sizes[:-2], layer_sizes[1:-1], activations[1:-1]): self.layers.append(Layer(n_input, n_output, activation)) #MLP Gaussian encoder self.mu_layer = Layer(layer_sizes[-2], layer_sizes[-1], None) self.logvar_layer = Layer(layer_sizes[-2], layer_sizes[-1], None) def forward(self, x, eps): # two outputs: the latent sample and the KL cost inp = x for layer in self.layers: x = layer.forward(x) mu = self.mu_layer.forward(x) var = t.exp(self.logvar_layer.forward(x)) sigma = t.sqrt(var) # reparametrization trick self.output = mu + sigma * eps self.cost = -t.sum(KL_mvn(mu, var)) return self.output, self.cost class Decoder(nn.Module): def __init__(self, layer_sizes, activations): super(Decoder, self).__init__() self.layers = nn.ModuleList() for
n_input, n_output, activation in zip(layer_sizes[:-2], layer_sizes[1:-1], activations[1:-1]): self.layers.append(Layer(n_input, n_output, activation)) self.mu_layer = Layer(layer_sizes[-2], layer_sizes[-1], None) self.logvar_layer = Layer(layer_sizes[-2], layer_sizes[-1], None) def forward(self, x, y): inp = x for layer in self.layers: x = layer.forward(x) mu = self.mu_layer.forward(x) var = t.exp(self.logvar_layer.forward(x)) sigma = t.sqrt(var) # decoder cost function self.mu = mu self.var = var self.output = mu self.cost = -t.sum(log_diag_mvn(self.output , var)(y)) return self.output, self.cost class VAE(nn.Module): def __init__(self, enc_layer_sizes, dec_layer_sizes, enc_activations, dec_activations): super(VAE, self).__init__() self.enc_mlp = Encoder(enc_layer_sizes, enc_activations) self.dec_mlp = Decoder(dec_layer_sizes, dec_activations) def forward(self, x, eps): e_cost = self.enc_mlp.forward(x, eps)[1] d_cost = self.dec_mlp.forward(self.enc_mlp.forward(x, eps)[0], x)[1] return e_cost + d_cost ``` Test the VAE architecture with a simple example: a curve with Gaussian noise ``` t.manual_seed(10) N = 1000 X1 = t.rand(N, requires_grad = True) X = t.transpose(t.stack((X1, -1.0*t.sqrt(0.25 - (X1-0.5)**2) +0.6 + 0.1*t.rand(N)), dim = 0 ), 0, 1) plt.figure(figsize = (8,4)) #plt.plot(X[:, 0].data.numpy(), X[:, 1].data.numpy()) plt.scatter(X[:, 0].data.numpy(), X[:, 1].data.numpy(), linewidths =.3, s=3, cmap=plt.cm.cool) plt.axis([0, 1, 0, 2]) plt.show() #X.shape[0] # declare the model enc_layer_sizes = [X.shape[1], 10, 10, 10, 1] enc_activations = [None, F.tanh, F.tanh, None, None] dec_layer_sizes = [enc_layer_sizes[-1], 10, 10, 10, X.shape[1]] dec_activations = [None, None, F.tanh, F.tanh, None] model = VAE(enc_layer_sizes, dec_layer_sizes, enc_activations, dec_activations) # parameters lr = 0.001 batch_size = 100 epochs = 1000 num_batch = X.shape[0]/ batch_size # optimizer optimizer = t.optim.Adam(model.parameters(), lr=lr) # training loop for i in 
range(int(epochs * num_batch)): k = i % num_batch x = X[int(k * batch_size):int((k+1) * batch_size), :] eps = t.randn(x.shape[0], enc_layer_sizes[-1], requires_grad = True) # standard-normal noise for the reparametrization trick # loss function loss = model.forward(x, eps) # backpropagation optimizer.zero_grad() loss.backward(retain_graph=True) optimizer.step() if i%200 ==0: print(loss.data) # reconstruct the data with the trained model eps = t.randn(X.shape[0], enc_layer_sizes[-1], requires_grad = True) Z = model.enc_mlp(X, eps)[0] X_projected = model.dec_mlp(Z, X)[0] plt.figure(figsize = (8,4)) plt.scatter(X[:, 0].data.numpy(), X[:, 1].data.numpy(), c='blue', lw=.3, s=3) plt.scatter(X_projected[:, 0].data.numpy(), X_projected[:, 1].data.numpy(), c='red', lw=.3, s=3) plt.axis([0, 1, 0, 1]) plt.show() ```
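As a sanity check on the encoder cost: for a diagonal Gaussian, KL( N(mu, diag(var)) || N(0, I) ) has the closed form 0.5 * sum(mu^2 + var - log(var) - 1), and `KL_mvn` above returns exactly the negative of this quantity. A stdlib sketch of the closed form:

```python
import math

def kl_diag_gaussian(mu, var):
    """KL( N(mu, diag(var)) || N(0, I) ): 0.5 * sum(mu^2 + var - log(var) - 1)."""
    return 0.5 * sum(m * m + v - math.log(v) - 1.0 for m, v in zip(mu, var))

# When the posterior equals the prior (mu=0, var=1) the divergence vanishes
print(kl_diag_gaussian([0.0, 0.0], [1.0, 1.0]))  # 0.0
# Shifting the mean gives a positive penalty
print(kl_diag_gaussian([1.0], [1.0]))  # 0.5
```

Minimizing the total loss therefore pushes the encoder's (mu, var) toward the standard-normal prior while the decoder term pulls toward good reconstructions.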
---
# Narrative ## GOALS - Provide a code narrative in this project including: - a map of the South King County region, - updated tables in comparison with the OY in the Road Map Project Region report, and - final visualizations for the data. ## Detailed Steps Import the necessary packages. ``` %matplotlib inline import matplotlib.pyplot as plt import geopandas as gpd import pandas as pd import numpy as np import matplotlib.patches as mpatches import functions_used as func from sqlalchemy import create_engine pd.options.display.float_format = '{:,.1f}'.format ``` ## List of PUMAs We begin with a list of PUMAs. Here, the PUMAs listed are from South King County. Create a pointer to the database. ``` engine = create_engine("postgresql:///opportunity_youth") ``` List the PUMAs. ``` pd.options.display.max_rows #force rightmost column to display wider pd.set_option('display.max_colwidth', -1) puma_names_list = pd.read_sql(sql="SELECT * FROM puma_names_finder0;", con=engine) puma_names_df = pd.DataFrame(puma_names_list).style.hide_index() puma_names_df ``` More information about the tools and process can be found [here](data_reduction_with_psql.ipynb). ## Map Next, we create a map for PUMAs 11610 - 11615. ``` # Load the data wa_df = gpd.read_file('/Users/karenwarmbein/ds/lectures/opportunity_youth/data/raw/tl_2017_53_puma10.shp') ``` Filter and sort the values in the dataframe for King County. Apply a function that colors the graph. ``` wa_df['PUMACE10'] = wa_df['PUMACE10'].astype(int) wa_df['color'] = wa_df['PUMACE10'].map(func.filtering_func) wa_df.sort_values(by = ['PUMACE10']); new_df = wa_df[(wa_df['PUMACE10']>= 11601) & (wa_df['PUMACE10']<= 11616)] ``` Plot the map.
``` f, ax = plt.subplots(nrows = 1, ncols = 2, figsize = (15,6)) ax[0] = wa_df.plot(ax=ax[0], color = wa_df['color'], edgecolor = '#444444') ax[0].set_axis_off() ax[1] = new_df.plot(ax=ax[1], color = new_df['color'], edgecolor = '#444444') ax[1].set_axis_off() red_patch = mpatches.Patch(color='red', label='South King County') green_patch = mpatches.Patch(color='green', label='North King County & Seattle') blue_patch = mpatches.Patch(color='white', label='Outside King County') ax[0].title.set_text("All of Washington") ax[1].title.set_text("King County") ax[0].legend(handles=[red_patch, green_patch, blue_patch], loc = 4, prop = {'size': 10}, edgecolor = '#444444', facecolor = '#bbbbbb'); ``` ## Total number of OY per PUMA in South King county Query the database to find the total number of OY per puma in South King County. ``` puma_oy_totals = pd.read_sql(sql="SELECT * FROM OY_by_puma0;", con=engine) puma_oy_totals_df = pd.DataFrame(puma_oy_totals).style.hide_index() puma_oy_totals_df ``` ## Total number of OY in South King County ``` print('In South King county there are ' + str(puma_oy_totals['sum'].sum()) + ' persons we can identify as OY.') ``` ## Creating the tables We start updating the tables in the Road Map Project Region report with the 2017 data. Read in the data. ``` df_oy = pd.read_sql(sql="SELECT * FROM table_final_query_1", con=engine) df_oy ``` Pivot the table of raw data and rename the indices. ``` df_oy_new = df_oy.pivot(index='pop', columns='age_group', values=['est','total','pct']) df_oy_new df_oy_new.rename(index={'oy_no':'Not an Opportunity Youth', 'oy_yes':'Opportunity Youth','working w/o a diploma':'Working without a diploma'}) ``` We used the following queries and python code for updating the second table. Extra functions are located in the functions_used.py file. 
``` no_degree = """ SELECT pwgtp, agep, schl FROM pums_2017 WHERE (puma BETWEEN '11610' AND '11615') AND (agep BETWEEN 16 AND 24) AND (esr = '3' OR esr = '6') AND (schl <= '15') AND sch = '1' ;""" some_college = """ SELECT pwgtp, agep, schl FROM pums_2017 WHERE (puma BETWEEN '11610' AND '11615') AND (agep BETWEEN 16 AND 24) AND (esr = '3' OR esr = '6') AND (schl = '18' OR schl = '19') AND sch = '1' ; """ hs_diploma = """ SELECT pwgtp, agep, schl FROM pums_2017 WHERE (puma BETWEEN '11610' AND '11615') AND (agep BETWEEN 16 AND 24) AND (esr = '3' OR esr = '6') AND (schl = '16' OR schl = '17') AND sch = '1' ;""" college_deg = """ SELECT pwgtp, agep, schl FROM pums_2017 WHERE (puma BETWEEN '11610' AND '11615') AND (agep BETWEEN 16 AND 24) AND (esr = '3' OR esr = '6') AND (schl BETWEEN '20' AND '24') AND sch = '1' ; """ total_oy = """ SELECT pwgtp, agep, schl FROM pums_2017 WHERE (puma BETWEEN '11610' AND '11615') AND (agep BETWEEN 16 AND 24) AND (esr = '3' OR esr = '6') AND sch = '1' ; """ df_total_oy = pd.read_sql(sql = total_oy, con = engine) #data frame for individuals with highschool degree or GED df_hs_ged = pd.read_sql(sql = hs_diploma, con = engine) #data frame for individuals with highschool degree or GED df_no_degree = pd.read_sql(sql = no_degree, con = engine) #data frame for individuals with no degree df_some_college = pd.read_sql(sql = some_college, con = engine) #data frame for individuals with some college experience df_col_deg = pd.read_sql(sql = college_deg, con = engine) #data frame for individuals with an AA degree or higher tri_sected1 = func.trisect_ages(df_col_deg) tri_sected2 = func.trisect_ages(df_no_degree) tri_sected5 = func.trisect_ages(df_total_oy) second_array = func.form_another_2d_array([df_total_oy, df_no_degree, df_hs_ged, df_some_college, df_col_deg]) index_names = ['Total Population', 'No HS Degree or GED', 'High School Degree/GED','Some College', 'AA or higher'] column_names = ['16-18 total', '19-21 total', '22-24 total', '16-24 
total'] second_df = func.create_df(second_array, column_names, index_names) reorganized_list = ['16-18 percentage','16-18 total', '19-21 percentage', '19-21 total', '22-24 percentage', '22-24 total', '16-24 percentage', '16-24 total', ] second_df = second_df.reindex(columns = reorganized_list) func.add_percentages_total(second_df) second_df ``` ## Creating visualizations Start by coding the first graph. ``` # need a percentages dataframe df_oy_pct = df_oy_new['pct'] #Data #green oy_no #orange oy_yes #blue working w/o diploma r = np.arange(2) raw_data_1 = {'greenBars': [93, df_oy_pct.loc['oy_no', '16-18']], 'orangeBars': [6, df_oy_pct.loc['oy_yes', '16-18']], 'blueBars': [1, df_oy_pct.loc['working w/o a diploma','16-18']] } raw_data_2 = {'greenBars': [78, df_oy_pct.loc['oy_no', '19-21']], 'orangeBars': [17, df_oy_pct.loc['oy_yes', '19-21']], 'blueBars': [5, df_oy_pct.loc['working w/o a diploma','19-21']] } raw_data_3 = {'greenBars': [78.5, df_oy_pct.loc['oy_no', '22-24']], 'orangeBars': [16, df_oy_pct.loc['oy_yes', '22-24']], 'blueBars': [5.5, df_oy_pct.loc['working w/o a diploma','22-24']] } raw_data = [raw_data_1, raw_data_2, raw_data_3] fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(16,6)) ax[0], ax[1], ax[2] = ax.flatten() t = ['OY per Year Comparison: 16-18', 'OY per Year Comparison: 19-21', 'OY per Year Comparison: 22-24']; for data in range(3): df = pd.DataFrame(raw_data[data]) # From raw value to percentage totals = [i+j+k for i,j,k in zip(df['greenBars'], df['orangeBars'], df['blueBars'])] greenBars = [i / j * 100 for i,j in zip(df['greenBars'], totals)] orangeBars = [i / j * 100 for i,j in zip(df['orangeBars'], totals)] blueBars = [i / j * 100 for i,j in zip(df['blueBars'], totals)] # plot barWidth = 0.85 names = ('2016','2017') # Create green Bars p1 = ax[data].bar(r, greenBars, color='#b5ffb9', edgecolor='#b5ffb9', width=barWidth, ) # Create orange Bars p2 = ax[data].bar(r, orangeBars, bottom=greenBars, color='#f9bc86', edgecolor='#f9bc86',
width=barWidth) # Create blue Bars p3 = ax[data].bar(r, blueBars, bottom=[i+j for i,j in zip(greenBars, orangeBars)], color='#a3acff', edgecolor='#a3acff', width=barWidth) # Custom x axis ax[data].set_xticks([0,1]) ax[data].set_xticklabels(['2016','2017']) ax[data].set_ylabel('Percent') ax[data].set_title(t[data]) # Show graphic ax[0].legend((p1[0], p2[0], p3[0]), ('Not Opportunity Youth', 'Opportunity Youth', 'Working without a diploma'), loc='lower left') plt.show() plt.savefig('oy_per_year.png') ``` ![image.png](attachment:image.png) ## Conclusion: although the number of OY has decreased in our report, the percent ratios are similar from year to year. ``` #Data #green oy_no #orange oy_yes #blue working w/o diploma r = np.arange(2) raw_data_1 = {'greenBars': [57, second_df.loc['No HS Degree or GED', '16-18 percentage']], 'orangeBars': [35, second_df.loc['High School Degree/GED', '16-18 percentage']], 'blueBars': [6, second_df.loc['Some College','16-18 percentage']], 'redBars': [1, second_df.loc['AA or higher','16-18 percentage']] } raw_data_2 = {'greenBars': [28, second_df.loc['No HS Degree or GED', '19-21 percentage']], 'orangeBars': [46, second_df.loc['High School Degree/GED', '19-21 percentage']], 'blueBars': [23, second_df.loc['Some College','19-21 percentage']], 'redBars': [3, second_df.loc['AA or higher','19-21 percentage']] } raw_data_3 = {'greenBars': [23, second_df.loc['No HS Degree or GED', '22-24 percentage']], 'orangeBars': [35, second_df.loc['High School Degree/GED', '22-24 percentage']], 'blueBars': [20, second_df.loc['Some College','22-24 percentage']], 'redBars': [22, second_df.loc['AA or higher','22-24 percentage']] } raw_data = [raw_data_1, raw_data_2, raw_data_3] fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(16,6)) ax[0], ax[1], ax[2] = ax.flatten() t = ['OY per Year Comparison: 16-18', 'OY per Year Comparison: 19-21', 'OY per Year Comparison: 22-24']; for data in range(3): df = pd.DataFrame(raw_data[data]) # # # From raw value to 
percentage totals = [i+j+k+l for i,j,k,l in zip(df['greenBars'], df['orangeBars'], df['blueBars'], df['redBars'])] greenBars = [i / j * 100 for i,j in zip(df['greenBars'], totals)] orangeBars = [i / j * 100 for i,j in zip(df['orangeBars'], totals)] blueBars = [i / j * 100 for i,j in zip(df['blueBars'], totals)] redBars = [i / j * 100 for i,j in zip(df['redBars'], totals)] # plot barWidth = 0.85 names = ('2016','2017') # Create green Bars p1 = ax[data].bar(r, greenBars, color='#b5ffb9', edgecolor='#b5ffb9', width=barWidth ) # Create orange Bars p2 = ax[data].bar(r, orangeBars, bottom=greenBars, color='#f9bc86', edgecolor='#f9bc86', width=barWidth) # Create blue Bars p3 = ax[data].bar(r, blueBars, bottom=[i+j for i,j in zip(greenBars, orangeBars)], color='#a3acff', edgecolor='#a3acff', width=barWidth) # Create red Bars p4 = ax[data].bar(r, redBars, bottom=[i+j+k for i,j,k in zip(greenBars, orangeBars, blueBars)], color='#ffcccb', edgecolor='#ffcccb', width=barWidth) # Custom x axis ax[data].set_xticks([0,1]) # ax[data].set_yrange(0,100,10) ax[data].set_xticklabels(['2016','2017']) ax[data].set_ylabel('Percent') ax[data].set_title(t[data]) # Show graphic ax[0].legend((p1[0], p2[0], p3[0], p4[0]), ('No HS Degree or GED', 'High School Degree/GED', 'Some College', 'AA or higher'), loc='lower left') plt.show() ``` The supporting notebook is [here](https://github.com/Lucaswb/project1_flatiron/blob/master/second%20table%20update%20from%202016%20survey.ipynb). ![image.png](attachment:image.png) ## Conclusion 1. While the total number of opportunity youth has remained more or less constant as a percentage of the total youth population in South King County, the proportions who have college degrees/experience and those with high school degrees have radically shifted across a majority of the age groups. Since the number of opportunity youth has remained more or less constant, these changes indicate a significant shift in the respective groups.
Unfortunately, without further analysis, we don't know if this is because there is an overall decrease in the number of individuals who have college degrees, or because more of them are finding meaningful work after receiving degrees. The graphs support both of these conclusions, which would suggest two diametrically opposite outcomes from the project roadmap. Unfortunately, we were not able to further investigate this discrepancy.
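The "from raw value to percentage" step used in both plotting loops above is a plain normalization of each column's segments; sketched on its own (the counts below are illustrative, not survey data):

```python
def to_percentages(raw):
    """Normalize raw bar-segment counts so each column of a stacked chart sums to 100."""
    total = sum(raw)
    return [100.0 * value / total for value in raw]

shares = to_percentages([57, 35, 6, 2])  # hypothetical segment counts for one column
print(shares)       # [57.0, 35.0, 6.0, 2.0]
print(sum(shares))  # 100.0
```

Each stacked bar then plots these shares with `bottom=` set to the running sum of the segments below it, exactly as in the loops above.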
---
``` import torch import torchvision import matplotlib.pyplot as plt # import torch.nn.functional as F import glob import scipy.io as sio import matplotlib.pyplot as plt from torchvision import datasets, transforms import torchvision.transforms as transforms from PIL import Image import vgg import glob import numpy as np import REGroup device = 'cuda' PATH = './vgg19_cifar10_trained_model.pth.tar' model = vgg.__dict__['vgg19']() model.features = torch.nn.DataParallel(model.features) checkpoint = torch.load(PATH) model.load_state_dict(checkpoint['state_dict']) model = model.to(device) model.eval() normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) transform=transforms.Compose([ transforms.ToTensor(), normalize, ]) gc_path = "./layerwise_generative_classifiers/" RG_model = REGroup.REGroup(eval_model=model,gc_path=gc_path,device=device) # SoftMax vs REGroup Clean Sample Accuracy softmax_correct_pred = 0 REGroup_correct_pred = 0 tot_samples = 0 model.eval() for cid in range(10): samples = glob.glob('./cifar10_test_set/'+str(cid)+'/*.mat') tot_samples = tot_samples+len(samples) for i in range(len(samples)): print("Evaluating Class:{} samples:{}".format(cid,i)) data = sio.loadmat(samples[i]) img = torch.tensor(data['img']).permute(1,2,0).numpy() input = Image.fromarray(img) input = transform(input).unsqueeze(0).to(device) label = torch.tensor(data['label'])[0][0].to(device) #---------SoftMax Prediction-------------- with torch.no_grad(): output = model(input) smax = torch.softmax(output,dim=1) pred_class = torch.argmax(smax) if pred_class == label: softmax_correct_pred = softmax_correct_pred+1 #------------------------------------------ #--------REGroup Prediction---------------- REGroup_ranked_predictions=RG_model.get_robust_predictions(img=input) Regroup_pred_class = REGroup_ranked_predictions[0] #Top-1 Class if Regroup_pred_class == label: REGroup_correct_pred = REGroup_correct_pred+1 #------------------------------------------ 
SoftMax_CIFAR10_Test_Clean_Samples_ACC = (softmax_correct_pred/tot_samples)*100 REGroup_CIFAR10_Test_Clean_Samples_ACC = (REGroup_correct_pred/tot_samples)*100 print('Clean Samples Accuracy. PreTrained+SoftMax:{}%, PreTrained+REGroup:{}%'.format(SoftMax_CIFAR10_Test_Clean_Samples_ACC, REGroup_CIFAR10_Test_Clean_Samples_ACC )) # SoftMax vs REGroup Accuracy (CIFAR10 Test Set) on Adversarial Samples generated using PGD with L-Infinity perturbation Metric #------------------------------------------------------ # We have pre-generated the PGD examples with the L-infinity metric # If you would like to generate them on your own, please check the next cell. data = sio.loadmat('cifar10_vgg19_pgd_examples.mat') pgd_linf_cifar10_samples = data['adv_ex'] all_effective_dist = data['all_effective_dist'] gt_label = sio.loadmat('lbl.mat')['lbl'] sam_id=0 adv_softmax_correct_pred = 0 adv_REGroup_correct_pred = 0 tot_samples = 0 all_effective_dist=[] model.eval() for sid in range(0,10000,1): print('Evaluating sample {}'.format(sid)) adv_img = torch.tensor(pgd_linf_cifar10_samples[sid,:,:,:].astype(np.float32)).unsqueeze(0) # Adversarial images (adv_img) are already normalized using mean and std, DO NOT APPLY normalization again label = gt_label[0,sid] #---------Adversarial Sample SoftMax Prediction-------------- with torch.no_grad(): adv_output = model(adv_img) adv_smax = torch.softmax(adv_output,dim=1) adv_pred_class = torch.argmax(adv_smax) if adv_pred_class == label: adv_softmax_correct_pred = adv_softmax_correct_pred+1 #--------Adversarial Sample REGroup Prediction---------------- adv_REGroup_ranked_predictions=RG_model.get_robust_predictions(img=adv_img) adv_Regroup_pred_class = adv_REGroup_ranked_predictions[0] #Top-1 Class if adv_Regroup_pred_class == label: adv_REGroup_correct_pred = adv_REGroup_correct_pred+1 #------------------------------------------ SoftMax_CIFAR10_Test_Adv_Samples_ACC = (adv_softmax_correct_pred/10000.0)*100 REGroup_CIFAR10_Test_Adv_Samples_ACC =
(adv_REGroup_correct_pred/10000.0)*100 print('CIFAR10 Test set average PGD Linfinity perturbation epsilon {}/256'.format(np.round(np.mean(data['all_effective_dist'])))) print('Adversarial Samples Accuracy. PreTrained+SoftMax:{}%, PreTrained+REGroup:{}%'.format(SoftMax_CIFAR10_Test_Adv_Samples_ACC, REGroup_CIFAR10_Test_Adv_Samples_ACC )) # This will generate PGD adversarial examples with L-infinity perturbation metric # THIS WOULD BE TIME CONSUMING (Generating PGD adversarial examples of all 10000 CIFAR10 test samples) # SoftMax vs REGroup Accuracy (CIFAR10 Test Set) on Adversarial Samples generated using PGD with L-Infinity perturbation Metric pgd_linf_cifar10_samples = np.zeros((10000,3,32,32),np.float) sam_id=0 adv_softmax_correct_pred = 0 adv_REGroup_correct_pred = 0 tot_samples = 0 all_effective_dist=[] model.eval() for cid in range(0,10,1): samples = glob.glob('./split_cifar10_data/'+split+'/'+str(cid)+'/*.mat') tot_samples = tot_samples+len(samples) for i in range(0,len(samples),1): print("Generating and Evaluating PGD Adv Examples of Class:{} samples:{}".format(cid,i)) data = sio.loadmat(samples[i]) img = torch.tensor(data['img']).permute(1,2,0).numpy() input = Image.fromarray(img) input = transform(input).unsqueeze(0).to(device) image = data['img'].astype(np.float32)#torch.tensor(data['img']).permute(1,2,0).numpy() image = image / 255. 
label = torch.tensor(data['label'])[0][0].to(device) #---------Clean Sample SoftMax Prediction-------------- with torch.no_grad(): output = model(input) smax = torch.softmax(output,dim=1) pred_class = torch.argmax(smax) # Generate adversarial example only if originally CORRECTLY classified by softmax if pred_class == label: adv_img, eff_adv_dist=RG_model.get_foolbox_pgd_linf_adversary(image,label.item(),epsilon=2.0) all_effective_dist.append(eff_adv_dist) adv_img = adv_img.unsqueeze(0) print(adv_img.shape) else: adv_img = input.clone() #------------------------------------------ pgd_linf_cifar10_samples[sam_id,:,:,:] = adv_img[0,:,:,:].detach().cpu().numpy() sam_id=sam_id+1 #---------Adversarial Sample SoftMax Prediction-------------- with torch.no_grad(): adv_output = model(adv_img) adv_smax = torch.softmax(adv_output,dim=1) adv_pred_class = torch.argmax(adv_smax) if adv_pred_class == label: adv_softmax_correct_pred = adv_softmax_correct_pred+1 #--------REGroup Prediction---------------- adv_REGroup_ranked_predictions=RG_model.get_robust_predictions(img=adv_img) adv_Regroup_pred_class = adv_REGroup_ranked_predictions[0] #Top-1 Class if adv_Regroup_pred_class == label: adv_REGroup_correct_pred = adv_REGroup_correct_pred+1 #------------------------------------------ SoftMax_CIFAR10_Test_Adv_Samples_ACC = (adv_softmax_correct_pred/tot_samples)*100 REGroup_CIFAR10_Test_Adv_Samples_ACC = (adv_REGroup_correct_pred/tot_samples)*100 print('CIFAR10 Test set average PGD Linfinity perturbation epsilon {}/256'.format(np.round(np.mean(all_effective_dist)))) print('Adversarial Samples Accuracy. PreTrained+SoftMax:{}%, PreTrained+REGroup:{}%'.format(SoftMax_CIFAR10_Test_Adv_Samples_ACC, REGroup_CIFAR10_Test_Adv_Samples_ACC )) ```
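The PGD attack used above (via foolbox) repeatedly takes a signed-gradient step and projects back into an epsilon-ball in the L-infinity norm. A one-dimensional toy sketch of that step, with a hand-written gradient in place of a real model (purely illustrative, not the foolbox implementation):

```python
def pgd_linf_step(x, x0, grad, alpha, eps):
    """One PGD step: move along the gradient sign, then clip back into [x0 - eps, x0 + eps]."""
    sign = 1.0 if grad > 0 else -1.0
    x = x + alpha * sign                    # ascend the loss
    return max(x0 - eps, min(x0 + eps, x))  # project onto the L-infinity ball

# Toy loss (x - 3)^2 around a clean point x0 = 0; its gradient is 2 * (x - 3)
x0 = 0.0
x = x0
for _ in range(10):
    x = pgd_linf_step(x, x0, grad=2.0 * (x - 3.0), alpha=0.5, eps=1.0)
print(x)  # -1.0: the iterate is clipped to the edge of the epsilon-ball
```

On images the same clipping is applied per pixel, which is why the notebook reports an effective L-infinity distance per adversarial sample.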
---
# 2018-10-02 - Scraping: retrieving an image from LeMonde The following notebook retrieves the content of a page of the newspaper [Le Monde](https://www.lemonde.fr), extracts the image URLs with a regular expression, then downloads the images and stores them in a directory. The notebook then extracts the images of a single public figure. First step: automatically retrieve the content of a page. ``` %matplotlib inline import urllib.request as ulib def get_html(source): with ulib.urlopen(source) as u: return u.read() page = get_html("https://www.lemonde.fr") page[:500] ``` A few experiments with encodings. The standard on the internet is the [utf-8](https://fr.wikipedia.org/wiki/UTF-8) encoding. A single byte can only represent 256 distinct characters, which is not enough for some languages. An encoding is a set of codes that represent complex characters as a sequence of bytes. ``` ch = "é" ch ch.encode("utf-8") ``` Encoding with one encoding and decoding with a different one is unlikely to work. ``` try: ch.encode("utf-8").decode("ascii") except UnicodeDecodeError as e: print(e) ``` Back to the Le Monde page. ``` page2 = page.decode("utf-8") page2[:500] len(page) ``` We look for [JPEG](https://fr.wikipedia.org/wiki/JPEG) image URLs, something like ``<img ... src="...jpg" ... />``. ``` import re reg = re.compile('src="(.*?[.]jpg)"') images = reg.findall(page2) images[:5] ``` ``` import os if not os.path.exists("images/lemonde2"): os.makedirs("images/lemonde2") ``` And we save the images. ``` for i, img in enumerate(images): nom = img.split("/")[-1] dest = os.path.join("images/lemonde2", nom) if os.path.exists(dest): continue try: contenu = get_html(img) except Exception as e: print(e) continue print(f"{i+1}/{len(images)}: {nom}") with open(dest, "wb") as f: f.write(contenu) ``` Let's display a few images.
``` from mlinsights.plotting import plot_gallery_images import numpy from datetime import datetime now = datetime.now().strftime("%d/%m/%Y") fold = "images/lemonde2" imgs = [os.path.join(fold, img) for img in os.listdir(fold)] choices = numpy.random.choice(len(imgs), 12) random_set = [imgs[i] for i in choices] ax = plot_gallery_images(random_set) ax[0, 1].set_title("12 images aléatoires du Monde du %s" % now); ``` We now focus on a single public figure by using the image caption (the `alt` attribute). A regular expression retrieves all the images together with their captions. ``` import re reg = re.compile('<img src=\\"([^>]+?[.]jpg)\\" alt=\\"([^>]*?)\\">', re.IGNORECASE) len(page2) imgs = reg.findall(page2) len(imgs), imgs[:3] if not os.path.exists("images/personne"): os.makedirs("images/personne") with open(os.path.join("images/personne", "legend.txt"), "w") as fl: for img, alt in imgs: if alt == "": continue nom = img.split("/")[-1] dest = os.path.join("images/personne", nom) if os.path.exists(dest): continue try: contenu = get_html(img) except Exception as e: print(e) continue print(nom) with open(dest, "wb") as f: f.write(contenu) fl.write("{0};{1}\n".format(nom, alt)) now = datetime.now().strftime("%d/%m/%Y") fold = "images/personne" imgs = [os.path.join(fold, img) for img in os.listdir(fold) if ".jpg" in img] ax = plot_gallery_images(imgs) if len(ax) > 0: # If some images were found. if len(ax.shape) == 2: ax[0, 0].set_title("Image du Monde %s" % now) else: ax[0].set_title("Image du Monde %s" % now) ax; ```
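The caption-capturing pattern can be tried on a small hand-written HTML snippet to see exactly what it returns (the URLs below are made up, not fetched from lemonde.fr):

```python
import re

# Made-up fragment, not fetched from lemonde.fr
html = (
    '<img src="https://img.lemde.fr/a.jpg" alt="une personnalité">'
    '<img src="https://img.lemde.fr/b.jpg" alt="">'
)

# Same pattern as above: capture the .jpg url and its caption
reg = re.compile('<img src="([^>]+?[.]jpg)" alt="([^>]*?)">', re.IGNORECASE)
print(reg.findall(html))
# [('https://img.lemde.fr/a.jpg', 'une personnalité'), ('https://img.lemde.fr/b.jpg', '')]
```

The simpler `src="(.*?[.]jpg)"` pattern from the first cell also matches here, but because `.` matches quote characters too, its lazy capture can spill across attributes when a non-jpg `src` sits between two jpg ones; a character class such as `[^"]*` is the more defensive choice.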
---
# Parcels Tutorial Welcome to a quick tutorial on Parcels. This is meant to get you started with the code, and give you a flavour of some of the key features of Parcels. In this tutorial, we will first cover how to run a set of particles [from a very simple idealised field](#Running-particles-in-an-idealised-field). We will show how easy it is to run particles in [time-backward mode](#Running-particles-in-backward-time). Then, we will show how to [add custom behaviour](#Adding-a-custom-behaviour-kernel) to the particles. Then we will show how to [run particles in a set of NetCDF files from external data](#Reading-in-data-from-arbritrary-NetCDF-files). Then we will show how to use particles to [sample a field](#Sampling-a-Field-with-Particles) such as temperature or sea surface height. And finally, we will show how to [write a kernel that tracks the distance travelled by the particles](#A-second-example-kernel:-calculating-distance-travelled). Let's start with importing the relevant modules. The key ones are all in the `parcels` package. ``` %matplotlib inline from parcels import FieldSet, ParticleSet, Variable, JITParticle, AdvectionRK4, plotTrajectoriesFile import numpy as np import math from datetime import timedelta from operator import attrgetter ``` ## Running particles in an idealised field The first step to running particles with Parcels is to define a `FieldSet` object, which is simply a collection of hydrodynamic fields. In this first case, we use a simple flow of two idealised moving eddies. That field is saved in NetCDF format in the directory `examples/MovingEddies_data`. Since we know that the files are in what's called Parcels FieldSet format, we can call these files using the function `FieldSet.from_parcels()`. ``` fieldset = FieldSet.from_parcels("MovingEddies_data/moving_eddies") ``` The `fieldset` can then be visualised with the `show()` function. 
To show the zonal velocity (`U`), give the following command ``` fieldset.U.show() ``` The next step is to define a `ParticleSet`. In this case, we start 2 particles at locations (330km, 100km) and (330km, 280km) using the `from_list` constructor method; the particles are advected on the `fieldset` we defined above. Note that we use `JITParticle` as `pclass`, because we will be executing the advection in JIT (Just-In-Time) mode. The alternative is to run in `scipy` mode, in which case `pclass` is `ScipyParticle`. ``` pset = ParticleSet.from_list(fieldset=fieldset, # the fields on which the particles are advected pclass=JITParticle, # the type of particles (JITParticle or ScipyParticle) lon=[3.3e5, 3.3e5], # a vector of release longitudes lat=[1e5, 2.8e5]) # a vector of release latitudes ``` Print the `ParticleSet` to see where they start ``` print(pset) ``` This output shows for each particle the (longitude, latitude, depth, time). Note that in this case the time is `not_yet_set`; that is because we didn't specify a `time` when we defined the `pset`. To plot the positions of these particles on the zonal velocity, use the following command ``` pset.show(field=fieldset.U) ``` The final step is to run (or 'execute') the `ParticleSet`. We run the particles using the `AdvectionRK4` kernel, which is a 4th-order Runge-Kutta implementation that comes with Parcels. We run the particles for 6 days (using the `timedelta` function from `datetime`), at an RK4 timestep of 5 minutes. We store the trajectory information at an interval of 1 hour in a file called `EddyParticles.nc`. Because `time` was `not_yet_set`, the particles will be advected from the first date available in the `fieldset`, which is the default behaviour.
```
output_file = pset.ParticleFile(name="EddyParticles.nc", outputdt=timedelta(hours=1))  # the file name and the time step of the outputs
pset.execute(AdvectionRK4,                # the kernel (which defines how particles move)
             runtime=timedelta(days=6),   # the total length of the run
             dt=timedelta(minutes=5),     # the timestep of the kernel
             output_file=output_file)
```

The code should have run, which can be confirmed by printing and plotting the `ParticleSet` again

```
print(pset)
pset.show(field=fieldset.U)
```

Note that both the particles (the black dots) and the `U` field have moved in the plot above. Also, the `time` of the particles is now 518400 seconds, which is 6 days.

The trajectory information of the particles can be written to the `EddyParticles.nc` file by using the `.export()` method on the output file. The trajectory can then be quickly plotted using the `plotTrajectoriesFile` function.

```
output_file.export()
plotTrajectoriesFile('EddyParticles.nc');
```

The `plotTrajectoriesFile` function can also be used to show the trajectories as an animation, by specifying that it has to run in `movie2d_notebook` mode. If we pass this to our function above, we can watch the particles go!

```
plotTrajectoriesFile('EddyParticles.nc', mode='movie2d_notebook')
```

The `plotTrajectoriesFile` can also be used to display 2-dimensional histograms (`mode=hist2d`) of the number of particle observations per bin. Use the `bins` argument to control the number of bins in the longitude and latitude direction. See also the [matplotlib.hist2d](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist2d.html) page.

```
plotTrajectoriesFile('EddyParticles.nc', mode='hist2d', bins=[30, 20]);
```

Now one of the neat features of Parcels is that the particles can be plotted as a movie *during execution*, which is great for debugging. To rerun the particles while plotting them on top of the zonal velocity field (`fieldset.U`), first reinitialise the `ParticleSet` and then re-execute.
However, now rather than saving the output to a file, display a movie using the `moviedt` display frequency, in this case with the zonal velocity `fieldset.U` as background

```
# THIS DOES NOT WORK IN THIS IPYTHON NOTEBOOK, BECAUSE OF THE INLINE PLOTTING.
# THE 'SHOW_MOVIE' KEYWORD WILL WORK ON MOST MACHINES, THOUGH
# pset = ParticleSet(fieldset=fieldset, size=2, pclass=JITParticle, lon=[3.3e5, 3.3e5], lat=[1e5, 2.8e5])
# pset.execute(AdvectionRK4,
#              runtime=timedelta(days=6),
#              dt=timedelta(minutes=5),
#              moviedt=timedelta(hours=1),
#              movie_background_field=fieldset.U)
```

## Running particles in backward time

Running particles in backward time is extremely simple: just provide a `dt` < 0.

```
output_file = pset.ParticleFile(name="EddyParticles_Bwd.nc", outputdt=timedelta(hours=1))  # the file name and the time step of the outputs
pset.execute(AdvectionRK4,
             dt=-timedelta(minutes=5),   # negative timestep for backward run
             runtime=timedelta(days=6),  # the run time
             output_file=output_file)
```

Now print the particles again, and see that they (except for some round-off errors) returned to their original position

```
print(pset)
pset.show(field=fieldset.U)
```

## Adding a custom behaviour kernel

A key feature of Parcels is the ability to quickly create very simple kernels, and add them to the execution. Kernels are little snippets of code that are run during execution of the particles. In this example, we'll create a simple kernel where particles obtain an extra 2 m/s westward velocity after 1 day. Of course, this is not a very realistic scenario, but it nicely illustrates the power of custom kernels.

```
def WestVel(particle, fieldset, time):
    if time > 86400:
        uvel = -2.
        particle.lon += uvel * particle.dt
```

Now reset the `ParticleSet` again, and re-execute. Note that we have now changed `kernel` to be `AdvectionRK4 + k_WestVel`, where `k_WestVel` is the `WestVel` function as defined above cast into a `Kernel` object (via the `pset.Kernel` call).
```
pset = ParticleSet.from_list(fieldset=fieldset, pclass=JITParticle, lon=[3.3e5, 3.3e5], lat=[1e5, 2.8e5])
k_WestVel = pset.Kernel(WestVel)  # casting the WestVel function to a kernel object
output_file = pset.ParticleFile(name="EddyParticles_WestVel.nc", outputdt=timedelta(hours=1))
pset.execute(AdvectionRK4 + k_WestVel,  # simply add kernels using the + operator
             runtime=timedelta(days=2),
             dt=timedelta(minutes=5),
             output_file=output_file)
```

And now plot this new trajectory file

```
output_file.export()
plotTrajectoriesFile('EddyParticles_WestVel.nc');
```

## Reading in data from arbitrary NetCDF files

In most cases, you will want to advect particles within pre-computed velocity fields. If these velocity fields are stored in NetCDF format, it is fairly easy to load them with the `FieldSet.from_netcdf()` function. The `examples` directory contains a set of [GlobCurrent](http://globcurrent.ifremer.fr/products-data/products-overview) files of the region around South Africa.

First, define the names of the files containing the zonal (U) and meridional (V) velocities.
You can use wildcards (`*`), and the filenames for U and V can be the same (as in this case)

```
filenames = {'U': "GlobCurrent_example_data/20*.nc",
             'V': "GlobCurrent_example_data/20*.nc"}
```

Then, define a dictionary of the variables (`U` and `V`) and dimensions (`lon`, `lat` and `time`; note that in this case there is no `depth` because the GlobCurrent data is only for the surface of the ocean)

```
variables = {'U': 'eastward_eulerian_current_velocity',
             'V': 'northward_eulerian_current_velocity'}
dimensions = {'lat': 'lat',
              'lon': 'lon',
              'time': 'time'}
```

Finally, read in the fieldset using the `FieldSet.from_netcdf` function with the above-defined `filenames`, `variables` and `dimensions`

```
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
```

Now define a `ParticleSet`, in this case with 5 particles starting on a line between (28E, 33S) and (30E, 33S) using the `ParticleSet.from_line` constructor method

```
pset = ParticleSet.from_line(fieldset=fieldset, pclass=JITParticle,
                             size=5,            # releasing 5 particles
                             start=(28, -33),   # releasing on a line: the start longitude and latitude
                             finish=(30, -33))  # releasing on a line: the end longitude and latitude
```

And finally execute the `ParticleSet` for 10 days using 4th-order Runge-Kutta

```
output_file = pset.ParticleFile(name="GlobCurrentParticles.nc", outputdt=timedelta(hours=6))
pset.execute(AdvectionRK4,
             runtime=timedelta(days=10),
             dt=timedelta(minutes=5),
             output_file=output_file)
```

Now visualise this simulation using the `plotParticles` script again. Note that you can plot the particles on top of one of the velocity fields using the `tracerfile`, `tracerfield`, etc. keywords.
```
output_file.export()
plotTrajectoriesFile('GlobCurrentParticles.nc',
                     tracerfile='GlobCurrent_example_data/20020101000000-GLOBCURRENT-L4-CUReul_hs-ALT_SUM-v02.0-fv01.0.nc',
                     tracerlon='lon',
                     tracerlat='lat',
                     tracerfield='eastward_eulerian_current_velocity');
```

## Sampling a Field with Particles

One typical use case of particle simulations is to sample a Field (such as temperature, vorticity or sea surface height) along a particle trajectory. In Parcels, this is very easy to do, with a custom Kernel.

Let's read in another example, the flow around a Peninsula (see [Fig 2.2.3 in this document](http://archimer.ifremer.fr/doc/00157/26792/24888.pdf)), and this time also load the Pressure (`P`) field, using `extra_fields={'P': 'P'}`. Note that, because this flow does not depend on time, we need to set `allow_time_extrapolation=True` when reading in the fieldset.

```
fieldset = FieldSet.from_parcels("Peninsula_data/peninsula", extra_fields={'P': 'P'}, allow_time_extrapolation=True)
```

Now define a new `Particle` class that has an extra `Variable`: the pressure. We initialise this by sampling the `fieldset.P` field.

```
class SampleParticle(JITParticle):         # Define a new particle class
    p = Variable('p', initial=fieldset.P)  # Variable 'p' initialised by sampling the pressure
```

Now define a `ParticleSet` using the `from_line` method also used above in the GlobCurrent data. Plot the `pset` and print their pressure values `p`

```
pset = ParticleSet.from_line(fieldset=fieldset, pclass=SampleParticle,
                             start=(3000, 3000), finish=(3000, 46000), size=5, time=0)
pset.show(field='vector')
print('p values before execution:', [p.p for p in pset])
```

Now create a custom function that samples the `fieldset.P` field at the particle location. Cast this function to a `Kernel`.
```
def SampleP(particle, fieldset, time):  # Custom function that samples fieldset.P at particle location
    particle.p = fieldset.P[time, particle.depth, particle.lat, particle.lon]

k_sample = pset.Kernel(SampleP)  # Casting the SampleP function to a kernel.
```

Finally, execute the `pset` with a combination of the `AdvectionRK4` and `SampleP` kernels, plot the `pset` and print their new pressure values `p`

```
pset.execute(AdvectionRK4 + k_sample,  # Add kernels using the + operator.
             runtime=timedelta(hours=20),
             dt=timedelta(minutes=5))
pset.show(field=fieldset.P, show_time=0)
print('p values after execution:', [p.p for p in pset])
```

And see that these pressure values `p` are (within roundoff errors) the same as the pressure values before the execution of the kernels. The particles thus stay on isobars!

## Calculating distance travelled

As a second example of what custom kernels can do, we will now show how to create a kernel that logs the total distance that particles have travelled. First, we need to create a new `Particle` class that includes three extra variables. The `distance` variable will be written to output, but the auxiliary variables `prev_lon` and `prev_lat` won't be written to output (this can be controlled using the `to_write` keyword)

```
class DistParticle(JITParticle):  # Define a new particle class that contains three extra variables
    distance = Variable('distance', initial=0., dtype=np.float32)  # the distance travelled
    prev_lon = Variable('prev_lon', dtype=np.float32, to_write=False,
                        initial=attrgetter('lon'))  # the previous longitude
    prev_lat = Variable('prev_lat', dtype=np.float32, to_write=False,
                        initial=attrgetter('lat'))  # the previous latitude
```

Now define a new function `TotalDistance` that calculates the sum of Euclidean distances between the old and new locations in each RK4 step

```
def TotalDistance(particle, fieldset, time):
    # Calculate the distance in latitudinal direction (using 1.11e2 kilometers per degree latitude)
    lat_dist = (particle.lat - particle.prev_lat) * 1.11e2
    # Calculate the distance in longitudinal direction, using cosine(latitude) - spherical earth
    lon_dist = (particle.lon - particle.prev_lon) * 1.11e2 * math.cos(particle.lat * math.pi / 180)
    # Calculate the total Euclidean distance travelled by the particle
    particle.distance += math.sqrt(math.pow(lon_dist, 2) + math.pow(lat_dist, 2))

    particle.prev_lon = particle.lon  # Set the stored values for the next iteration
    particle.prev_lat = particle.lat
```

*Note:* here it is assumed that the latitude and longitude are measured in degrees North and East, respectively. However, some datasets (e.g. the `MovingEddies` used above) give them measured in (kilo)meters, in which case we must *not* include the factor `1.11e2`.

We will run the `TotalDistance` function on a `ParticleSet` containing the five particles within the `GlobCurrent` fieldset from above. Note that `pclass=DistParticle` in this case.

```
filenames = {'U': "GlobCurrent_example_data/20*.nc",
             'V': "GlobCurrent_example_data/20*.nc"}
variables = {'U': 'eastward_eulerian_current_velocity',
             'V': 'northward_eulerian_current_velocity'}
dimensions = {'lat': 'lat',
              'lon': 'lon',
              'time': 'time'}
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
pset = ParticleSet.from_line(fieldset=fieldset,
                             pclass=DistParticle,
                             size=5,
                             start=(28, -33),
                             finish=(30, -33))
```

Again define a new kernel to include the function written above and execute the `ParticleSet`.

```
k_dist = pset.Kernel(TotalDistance)  # Casting the TotalDistance function to a kernel.
pset.execute(AdvectionRK4 + k_dist,  # Add kernels using the + operator.
             runtime=timedelta(days=6),
             dt=timedelta(minutes=5),
             output_file=pset.ParticleFile(name="GlobCurrentParticles_Dist.nc",
                                           outputdt=timedelta(hours=1)))
```

And finally print the distance in km that each particle has travelled (note that this is also stored in the `GlobCurrentParticles_Dist.nc` file)

```
print([p.distance for p in pset])  # the distances in km travelled by the particles
```
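The per-step distance computed by `TotalDistance` is the equirectangular approximation: roughly 111 km per degree of latitude, with the longitudinal distance scaled by the cosine of latitude. As a sanity check it can be evaluated with plain Python, outside of Parcels — a standalone sketch, not part of the tutorial code:

```python
import math

def step_distance_km(lon0, lat0, lon1, lat1):
    """Equirectangular approximation, mirroring the TotalDistance kernel
    (1.11e2 km per degree, spherical earth)."""
    lat_dist = (lat1 - lat0) * 1.11e2
    lon_dist = (lon1 - lon0) * 1.11e2 * math.cos(lat1 * math.pi / 180)
    return math.sqrt(lon_dist**2 + lat_dist**2)

# One degree of latitude is 111 km everywhere:
print(step_distance_km(28.0, -33.0, 28.0, -32.0))  # 111.0
# One degree of longitude shrinks with cos(latitude):
print(step_distance_km(28.0, -33.0, 29.0, -33.0))  # ~93.1
```

For the short RK4 steps used here this approximation is fine; over long distances a great-circle (haversine) formula would be more accurate.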
```
import pandas as pd
import numpy as np
import openml
import os
```

# Download OpenML data

```
dataset_ids = [61]
for dataset_id in dataset_ids:
    print('Get dataset id', dataset_id)
    dataset = openml.datasets.get_dataset(dataset_id)
    X, y, categorical_indicator, attribute_names = dataset.get_data(dataset_format='dataframe',
                                                                    target=dataset.default_target_attribute)
    if len(np.unique(y)) != 2:
        print('Not binary classification')
        #continue
    vals = {}
    for i, name in enumerate(attribute_names):
        vals[name] = X[name]
    vals['target'] = y
    df = pd.DataFrame(vals)
    df.to_csv('./data/{0}.csv'.format(dataset_id), index=False)
    print('Dataset {} saved successfully'.format(dataset_id))
```

# Train a machine learning model

```
from sklearn.model_selection import train_test_split

# NOTE: We are using OpenML dataset 61: https://www.openml.org/d/61
dataset = openml.datasets.get_dataset(61)
X, y, categorical_indicator, attribute_names = dataset.get_data(
    dataset_format='array',
    target=dataset.default_target_attribute
)
dataset.name

# split the data into train & test sets
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.30)

from sklearn.svm import SVC
model = SVC(C=100)
model.fit(x_train, y_train)

# predictions from the trained model
pred = model.predict(x_test)

# model evaluation
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, pred))
print(classification_report(y_test, pred))
print('Accuracy of SVM classifier on test set: {:.2f}'.format(model.score(x_test, y_test)))
```

# Using GridSearch

```
from sklearn.model_selection import GridSearchCV

# defining the parameter range
param_grid = {'C': [0.1, 1, 10, 100, 1000],
              'gamma': [1, 0.1, 0.01, 0.001, 0.0001],
              'kernel': ['rbf']}

grid = GridSearchCV(SVC(), param_grid, refit=True, verbose=3, cv=5)

# fitting the model for grid search
grid.fit(x_train, y_train)
grid_predictions = grid.predict(x_test)

# print classification report
print(classification_report(y_test, grid_predictions))
print('Accuracy of SVM classifier on test set: {:.2f}'.format(grid.score(x_test, y_test)))

# print best parameter after tuning
print(grid.best_params_)

# print how our model looks after hyper-parameter tuning
print(grid.best_estimator_)

# print the tuned parameters and score
print("Tuned SVM Parameters: {}".format(grid.best_params_))
print("Best score is {}".format(grid.best_score_))
```

# Pipeline

```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier

# Construct SVM pipeline
pipe_svm = Pipeline([('svm', SVC(C=1000, break_ties=False, cache_size=200, class_weight=None,
                                 coef0=0.0, decision_function_shape='ovr', degree=3, gamma=0.001,
                                 kernel='rbf', max_iter=-1, probability=False, random_state=None,
                                 shrinking=True, tol=0.001, verbose=False))])

# Construct Random Forest pipeline
num_trees = 100
max_features = 1
pipe_rf = Pipeline([('ss4', StandardScaler()),
                    ('rf', RandomForestClassifier(n_estimators=num_trees, max_features=max_features))])

# Construct Decision Tree pipeline
pipe_dt = Pipeline([('ss3', StandardScaler()),
                    ('dt', tree.DecisionTreeClassifier(random_state=42))])

pipe_svm.fit(x_train, y_train)
print('Accuracy of SVM classifier on test set: {:.2f}'.format(pipe_svm.score(x_test, y_test)))

pipe_dt.fit(x_train, y_train)
print('Accuracy of DT classifier on test set: {:.2f}'.format(pipe_dt.score(x_test, y_test)))

pipe_rf.fit(x_train, y_train)
print('Accuracy of RF classifier on test set: {:.2f}'.format(pipe_rf.score(x_test, y_test)))

pipe_dic = {0: 'Decision Tree', 1: 'Random Forest', 2: 'Support Vector Machines'}
pipelines = [pipe_dt, pipe_rf, pipe_svm]

for pipe in pipelines:
    pipe.fit(x_train, y_train)

for idx, val in enumerate(pipelines):
    print('%s pipeline test accuracy: %.2f' % (pipe_dic[idx], val.score(x_test, y_test)))

best_accuracy = 0
best_classifier = 0
best_pipeline = ''
for idx, pipe in enumerate(pipelines):
    if pipe.score(x_test, y_test) > best_accuracy:
        best_accuracy = pipe.score(x_test, y_test)
        best_pipeline = pipe
        best_classifier = idx
print('%s is the classifier with the best accuracy of %.2f' % (pipe_dic[best_classifier], best_accuracy))
```
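As a side note, the best-pipeline bookkeeping in that last loop can be collapsed into a single `max` call with a key function. A minimal sketch, with dummy objects standing in for the fitted pipelines (the names and accuracy values here are made up for illustration):

```python
# Dummy stand-ins for fitted pipelines: objects exposing sklearn's .score(X, y) API.
class DummyPipe:
    def __init__(self, name, acc):
        self.name, self.acc = name, acc

    def score(self, x_test, y_test):
        return self.acc

pipelines = [DummyPipe('Decision Tree', 0.93),
             DummyPipe('Random Forest', 0.96),
             DummyPipe('Support Vector Machines', 0.95)]

# max() with a key function replaces the manual best_accuracy / best_pipeline bookkeeping,
# and calls .score() only once per pipeline.
best = max(pipelines, key=lambda p: p.score(None, None))
print('%s has the best accuracy of %.2f' % (best.name, best.score(None, None)))
```

With real fitted pipelines, pass the actual `x_test, y_test` into `score` inside the key function.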
IN DEVELOPMENT

# Part 2: Training an RBM *with* a phase

## Getting Started

The following imports are needed to run this tutorial.

```
from rbm_tutorial import RBM_Module, ComplexRBM
import torch
import cplx
import unitary_library
import numpy as np
import csv
```

*rbm_tutorial.py* contains the child class **ComplexRBM** that inherits properties and functions from the parent class **RBM_Module**. PyTorch (torch) is used as a replacement for doing some algebra that would normally be done with numpy. PyTorch also allows one to take advantage of GPU acceleration, among many other things. Don't worry if you don't have a GPU on your machine; the tutorial will run in no time on a CPU. One downside of PyTorch is that it currently does not have complex number support, so we have written our own complex algebra library (cplx.py). For more information on this library's contents, please refer to [here](../cplx.rst). We hope that PyTorch will implement complex numbers soon!

*unitary_library* is a package that will create a dictionary of the unitaries needed in order to train a ComplexRBM object (more on this later).

## Training

Let's go through training a complex wavefunction. To evaluate how the RBM is training, we will compute the fidelity between the true wavefunction of the system and the wavefunction the RBM reconstructs. We first need to load our training data and the true wavefunction of this system. We also need the corresponding file that records the basis in which each site was measured.

The dummy dataset we will train our RBM on is a two-qubit system whose wavefunction is $\psi = \frac{1}{2}\vert +,+\rangle - \frac{1}{2}\vert +,-\rangle + \frac{i}{2}\vert -,+\rangle - \frac{i}{2}\vert -,-\rangle$, where $+$ and $-$ represent spin-up and spin-down, respectively.
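For reference, those four complex amplitudes can be written out directly with NumPy — a standalone sketch to check normalisation, separate from the `2qubits_psi.txt` file the tutorial actually loads:

```python
import numpy as np

# Amplitudes of |+,+>, |+,->, |-,+>, |-,-> in the order given above
psi = np.array([0.5, -0.5, 0.5j, -0.5j])

# The state is normalised: <psi|psi> = 1
print(np.vdot(psi, psi).real)  # 1.0
```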
```
train_set2 = np.loadtxt('2qubits_train_samples.txt', dtype='float32')
psi_file = np.loadtxt('2qubits_psi.txt')
true_psi2 = torch.tensor([psi_file[:, 0], psi_file[:, 1]], dtype=torch.double)
bases = np.loadtxt('2qubits_train_bases.txt', dtype=str)
```

The following arguments are required to construct a **ComplexRBM** object.

1. **A dictionary containing 2x2 unitaries, unitaries**. We will create this dictionary in the next block with the help of the module we imported called *unitary_library*.
2. **The number of visible units, num_visible**. This is 2 for the case of our dataset.
3. **The number of hidden units in the amplitude hidden layer of the RBM, num_hidden_amp**. It's recommended that the number of hidden units stay equal to the number of visible units (2 in the case of our dummy dataset).
4. **The number of hidden units in the phase hidden layer of the RBM, num_hidden_phase**. It's recommended that the number of hidden units stay equal to the number of visible units (2 in the case of our dummy dataset).

```
unitaries = unitary_library.create_dict()
'''If you would like to add your own quantum gates from your experiment to "unitaries", do:

unitaries = unitary_library.create_dict(name='your_name',
                                        unitary=torch.tensor([[real part], [imaginary part]], dtype=torch.double))

For example:

unitaries = unitary_library.create_dict(name='qucumber',
                                        unitary=torch.tensor([ [[1.,0.],[0.,1.]], [[0.,0.],[0.,0.]] ], dtype=torch.double))

By default, unitary_library.create_dict() contains the hadamard and K gates with keys X and Y, respectively.'''

num_visible = train_set2.shape[-1]       # 2
num_hidden_amp = train_set2.shape[-1]    # 2
num_hidden_phase = train_set2.shape[-1]  # 2
```

A **ComplexRBM** object has a function called *fit* that performs the training. *fit* takes the following arguments.

1. ***train_set***. Needed for selecting mini batches of the data.
2. ***bases***. Needed for calculating gradients (performing the correct rotations).
3. ***true_psi***.
Only needed here to compute the fidelity.
4. **The number of epochs, *epochs***. The number of training cycles that will be performed. 15 should be fine.
5. **The mini batch size, *batch_size***. The number of data points that each mini batch will contain. We'll go with 10.
6. **The number of contrastive divergence steps, *k***. One contrastive divergence step seems to be good enough in most cases.
7. **The learning rate, *lr***. We will use a learning rate of 0.01 here.
8. **How often you would like the program to update you during training, *log_every***. The program will print out the fidelity every *log_every* epochs (5 here).

```
epochs = 15
batch_size = 10
k = 1
lr = 0.01
log_every = 5

rbm_complex = ComplexRBM(num_visible, num_hidden_amp, num_hidden_phase)
rbm_complex.fit(train_set2, bases, true_psi2, unitaries, epochs, batch_size, k, lr, log_every)
```

### After Training

After training your RBM, the *fit* function will have saved your trained weights and biases for the amplitude and the phase. Now, you have the option to generate new data from the trained RBM. The *rbm_complex* object has a *sample* function that takes the following arguments.

1. The number of samples you wish to generate, *num_samples*.
2. The number of contrastive divergence steps performed to generate the samples, *k*.

```
num_samples = 100
k = 10
samples = rbm_complex.sample(num_samples, k)
```

You will now find the *generated_samples_complexRBM.pkl* file in your directory that contains your new samples.
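For pure states, the fidelity used above to monitor training is just the squared overlap $|\langle\psi|\phi\rangle|^2$ between two normalised state vectors. A minimal NumPy sketch, independent of the RBM classes (the example vectors are illustrative):

```python
import numpy as np

def fidelity(psi, phi):
    """Squared overlap |<psi|phi>|^2 of two normalised state vectors."""
    return abs(np.vdot(psi, phi)) ** 2

psi = np.array([0.5, -0.5, 0.5j, -0.5j])           # an example two-qubit state
print(fidelity(psi, psi))                          # identical states give 1.0
print(fidelity(psi, np.array([1., 0., 0., 0.])))   # overlap with |+,+> gives 0.25
```

During training, the reconstructed wavefunction approaches the true one, so this quantity climbs towards 1.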
<!-- dom:TITLE: Demo - Working with Functions -->
# Demo - Working with Functions
<!-- dom:AUTHOR: Mikael Mortensen Email:mikaem@math.uio.no at Department of Mathematics, University of Oslo. -->
<!-- Author: -->
**Mikael Mortensen** (email: `mikaem@math.uio.no`), Department of Mathematics, University of Oslo.

Date: **August 7, 2020**

**Summary.** This is a demonstration of how the Python module [shenfun](https://github.com/spectralDNS/shenfun) can be used to work with global spectral functions in one and several dimensions.

## Construction

A global spectral function $u(x)$ is represented on a one-dimensional domain (a line) as

$$
u(x) = \sum_{k=0}^{N-1} \hat{u}_k \psi_k(x)
$$

where $\psi_k(x)$ is the $k$'th basis function and $x$ is a position inside the domain. $\{\hat{u}_k\}_{k=0}^{N-1}$ are the expansion coefficients for the series, often referred to as the degrees of freedom. There is one degree of freedom per basis function. We can use any number of basis functions, and the span of the chosen basis is then a function space. Also part of the function space is the domain, which is specified when a function space is created.

To create a function space $T=\text{span}\{T_k\}_{k=0}^{N-1}$ for the first N Chebyshev polynomials of the first kind on the default domain $[-1, 1]$, do

```
from shenfun import *
N = 8
T = FunctionSpace(N, 'Chebyshev', domain=(-1, 1))
```

The function $u(x)$ can now be created with all N coefficients equal to zero as

```
u = Function(T)
```

When using Chebyshev polynomials the computational domain is always $[-1, 1]$. However, we can still use a different physical domain, like

```
T = FunctionSpace(N, 'Chebyshev', domain=(0, 1))
```

and under the hood shenfun will then map this domain to the reference domain through

$$
u(x) = \sum_{k=0}^{N-1} \hat{u}_k \psi_k(2(x-0.5))
$$

## Approximating analytical functions

The `u` function above was created with only zero valued coefficients, which is the default.
Alternatively, a [Function](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.Function) may be initialized using a constant value

```
T = FunctionSpace(N, 'Chebyshev', domain=(-1, 1))
u = Function(T, val=1)
```

but that is not very useful. A third method to initialize a [Function](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.Function) is to interpolate using an analytical Sympy function.

```
import sympy as sp
x = sp.Symbol('x', real=True)
u = Function(T, buffer=4*x**3-3*x)
print(u)
```

Here the analytical Sympy function will first be evaluated on the entire quadrature mesh of the `T` function space, and then forward transformed to get the coefficients. This corresponds to a projection to `T`. The projection is: Find $u_h \in T$, such that

$$
(u_h - u, v)_w = 0 \quad \forall v \in T,
$$

where $v \in \{T_j\}_{j=0}^{N-1}$ is a test function, $u_h=\sum_{k=0}^{N-1} \hat{u}_k T_k$ is a trial function and the notation $(\cdot, \cdot)_w$ represents a weighted inner product. In this projection $u_h$ is the [Function](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.Function), $u$ is the sympy function and we use sympy to exactly evaluate $u$ on all quadrature points $\{x_j\}_{j=0}^{N-1}$. With quadrature we then have

$$
(u, v)_w = \sum_{j\in\mathcal{I}^N} u(x_j) v(x_j) w_j \quad \forall v \in T,
$$

where $\mathcal{I}^N = (0, 1, \ldots, N-1)$ and $\{w_j\}_{j\in \mathcal{I}^N}$ are the quadrature weights. The left hand side of the projection is

$$
(u_h, v)_w = \sum_{j\in\mathcal{I}^N} u_h(x_j) v(x_j) w_j \quad \forall v \in T.
$$

A linear system of equations arises when inserting for the basis functions

$$
\left(u, T_i\right)_w = \tilde{u}_i \quad \forall i \in \mathcal{I}^N,
$$

and

$$
\begin{align*}
\left(u_h, T_i \right)_w &= (\sum_{k\in \mathcal{I}^N} \hat{u}_k T_k , T_i)_w \\
&= \sum_{k\in \mathcal{I}^N} \left( T_k, T_i\right)_w \hat{u}_k
\end{align*}
$$

with the mass matrix

$$
a_{ik} = \left( T_k, T_i\right)_w \quad \forall (i, k) \in \mathcal{I}^N \times \mathcal{I}^N,
$$

we can now solve to get the unknown expansion coefficients. In matrix notation

$$
\hat{u} = A^{-1} \tilde{u},
$$

where $\hat{u}=\{\hat{u}_i\}_{i\in \mathcal{I}^N}$, $\tilde{u}=\{\tilde{u}_i\}_{i \in \mathcal{I}^N}$ and $A=\{a_{ki}\}_{(i,k) \in \mathcal{I}^N \times \mathcal{I}^N}$.

## Adaptive function size

The number of basis functions can also be left open during creation of the function space, through

```
T = FunctionSpace(0, 'Chebyshev', domain=(-1, 1))
```

This is useful if you want to approximate a function and are uncertain how many basis functions are required. For example, you may want to approximate the function $\cos(20 x)$. You can then find the required [Function](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.Function) using

```
u = Function(T, buffer=sp.cos(20*x))
print(len(u))
```

We see that $N=45$ is required to resolve this function. This agrees well with what is reported also by [Chebfun](https://www.chebfun.org/docs/guide/guide01.html). Note that in this process a new [FunctionSpace()](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.FunctionSpace) has been created under the hood.
The function space of `u` can be extracted using

```
Tu = u.function_space()
print(Tu.N)
```

To further show that shenfun is compatible with Chebfun we can also approximate the Bessel function

```
T1 = FunctionSpace(0, 'Chebyshev', domain=(0, 100))
u = Function(T1, buffer=sp.besselj(0, x))
print(len(u))
```

which gives 83 basis functions, in close agreement with Chebfun (89). The difference lies only in the cut-off criteria. We cut frequencies with a relative tolerance of 1e-12 by default, but if we make this criterion a little bit stricter, then we will also arrive at a slightly higher number:

```
u = Function(T1, buffer=sp.besselj(0, x), reltol=1e-14)
print(len(u))
```

Plotting the function on its quadrature points looks a bit ragged, though:

```
%matplotlib inline
import matplotlib.pyplot as plt
Tu = u.function_space()
plt.plot(Tu.mesh(), u.backward())
```

To improve the quality of this plot we can instead evaluate the function on more points

```
xj = np.linspace(0, 100, 1000)
plt.plot(xj, u(xj))
```

Alternatively, we can refine the function, which simply pads zeros to $\hat{u}$

```
up = u.refine(200)
Tp = up.function_space()
plt.plot(Tp.mesh(), up.backward())
```

The padded expansion coefficients are now given as

```
print(up)
```

## More features

Since we have used a regular Chebyshev basis above, there are many more features that could be explored simply by going through [Numpy's Chebyshev module](https://numpy.org/doc/stable/reference/routines.polynomials.chebyshev.html). For example, we can create a Chebyshev series like

```
import numpy.polynomial.chebyshev as cheb
c = cheb.Chebyshev(u, domain=(0, 100))
```

The Chebyshev series in Numpy has a wide range of possibilities, see [here](https://numpy.org/doc/stable/reference/generated/numpy.polynomial.chebyshev.Chebyshev.html#numpy.polynomial.chebyshev.Chebyshev). However, we may also work directly with the Chebyshev coefficients already in `u`.
To find the roots of the polynomial that approximates the Bessel function on the domain $[0, 100]$, we can do

```
z = Tu.map_true_domain(cheb.chebroots(u))
```

Note that the roots are found on the reference domain $[-1, 1]$ and as such we need to move the result to the physical domain using `map_true_domain`. The resulting roots `z` include complex values, so to extract the real roots we need to filter a little bit

```
z2 = z[np.where((z.imag == 0)*(z.real > 0)*(z.real < 100))].real
print(z2[:5])
```

Here `np.where` returns the indices where the condition is true. The condition is that the imaginary part is zero, whereas the real part is within the true domain $[0, 100]$.

**Notice.** Using directly `cheb.chebroots(c)` does not seem to work (even though the series has been generated with the non-standard domain) because Numpy only looks for roots in the reference domain $[-1, 1]$.

We could also use a function space with boundary conditions built in, like

```
Td = FunctionSpace(0, 'C', bc=(sp.besselj(0, 0), sp.besselj(0, 100)), domain=(0, 100))
ud = Function(Td, buffer=sp.besselj(0, x))
print(len(ud))
```

As we can see, this leads to a function space of dimension very similar to the orthogonal space. The major advantages of working with a space with boundary conditions built in only come to life when solving differential equations. As long as we are only interested in approximating functions, we may just as well stick to the orthogonal spaces. This goes for Legendre as well as Chebyshev.

## Multidimensional functions

Multidimensional tensor product spaces are created by taking the tensor products of one-dimensional function spaces. For example

```
C0 = FunctionSpace(20, 'C')
C1 = FunctionSpace(20, 'C')
T = TensorProductSpace(comm, (C0, C1))
u = Function(T)
```

Here $\text{T} = \text{C0} \otimes \text{C1}$, the basis function is $T_i(x) T_j(y)$ and the Function `u` is

$$
u(x, y) = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \hat{u}_{ij} T_i(x) T_j(y).
$$

The multidimensional Functions work more or less exactly like for the 1D case. We can here interpolate 2D Sympy functions

```
y = sp.Symbol('y', real=True)
u = Function(T, buffer=sp.cos(10*x)*sp.cos(10*y))
X = T.local_mesh(True)
plt.contourf(X[0], X[1], u.backward())
```

Like for 1D, the coefficients are computed through projection, where the exact function is evaluated on all quadrature points in the mesh. The Cartesian mesh represents the quadrature points of the two function spaces, and can be visualized as follows

```
X = T.mesh()
for xj in X[0]:
    for yj in X[1]:
        plt.plot((xj, xj), (X[1][0, 0], X[1][0, -1]), 'k')
        plt.plot((X[0][0], X[0][-1]), (yj, yj), 'k')
```

We may alternatively plot on a uniform mesh

```
X = T.local_mesh(broadcast=True, uniform=True)
plt.contourf(X[0], X[1], u.backward(kind='uniform'))
```

## Curvilinear coordinates

With shenfun it is possible to use curvilinear coordinates, and not necessarily with orthogonal basis vectors. With curvilinear coordinates the computational coordinates are always straight lines, rectangles and cubes. But the physical coordinates can be very complex.

Consider the unit disc with polar coordinates. Here the position vector $\mathbf{r}$ is given by

$$
\mathbf{r} = r\cos \theta \mathbf{i} + r\sin \theta \mathbf{j}
$$

The physical domain is $\Omega = \{(x, y): x^2 + y^2 < 1\}$, whereas the computational domain is the Cartesian product $D = \{(r, \theta) \in [0, 1] \times [0, 2 \pi]\}$. We create this domain in shenfun through

```
r, theta = psi = sp.symbols('x,y', real=True, positive=True)
rv = (r*sp.cos(theta), r*sp.sin(theta))
B0 = FunctionSpace(20, 'C', domain=(0, 1))
F0 = FunctionSpace(20, 'F')
T = TensorProductSpace(comm, (B0, F0), coordinates=(psi, rv))
```

Note that we are using a Fourier space for the azimuthal direction, since the solution here needs to be periodic.
We can now create functions on the space using an analytical function in computational coordinates ``` u = Function(T, buffer=(1-r)*r*sp.sin(sp.cos(theta))) ``` However, when this is plotted it may not be what you expect ``` X = T.local_mesh(True) plt.contourf(X[0], X[1], u.backward(), 100) ``` We see that the function has been plotted in computational coordinates, and not on the disc, as you probably expected. To plot on the disc we need the physical mesh, not the computational one ``` X = T.local_cartesian_mesh() plt.contourf(X[0], X[1], u.backward(), 100) ``` **Notice.** The periodic plot does not wrap all the way around the circle. This is not wrong; we have simply not used the same point twice, but it does not look very good. To overcome this problem we can wrap the grid all the way around and re-plot. ``` up = u.backward() xp, yp, up = wrap_periodic([X[0], X[1], up], axes=[1]) plt.contourf(xp, yp, up, 100) ``` ## Adaptive functions in multiple dimensions If you want to find a good resolution for a function in multiple dimensions, the procedure is exactly like in 1D. First create function spaces with 0 quadrature points, and then call [Function](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.Function) ``` B0 = FunctionSpace(0, 'C', domain=(0, 1)) F0 = FunctionSpace(0, 'F') T = TensorProductSpace(comm, (B0, F0), coordinates=(psi, rv)) u = Function(T, buffer=((1-r)*r)**2*sp.sin(sp.cos(theta))) print(u.shape) ``` The algorithm used to find the approximation in multiple dimensions simply treats the problem one direction at a time. So in this case we would first find a space in the first direction by using a function ` ~ ((1-r)*r)**2`, and then along the second using a function ` ~ sp.sin(sp.cos(theta))`.
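The direction-by-direction idea can be sketched with plain numpy (an illustration of the principle, not shenfun's actual implementation): for a separable function, fit each 1D factor on its own and raise the degree until the error drops below a tolerance.

```python
import numpy as np
from numpy.polynomial import Chebyshev

def adaptive_degree(f, domain, tol=1e-10, max_deg=100):
    """Smallest Chebyshev degree approximating f on domain to within tol."""
    x = np.linspace(domain[0], domain[1], 1000)
    y = f(x)
    for deg in range(1, max_deg):
        approx = Chebyshev.fit(x, y, deg, domain=list(domain))
        if np.max(np.abs(approx(x) - y)) < tol:
            return deg
    return max_deg

# The two factors of the separable function used above.
g = lambda r: ((1 - r) * r) ** 2   # radial factor: an exact degree-4 polynomial
h = lambda t: np.sin(np.cos(t))    # azimuthal factor: smooth and periodic

print(adaptive_degree(g, (0.0, 1.0)))  # 4
print(adaptive_degree(h, (0.0, 2 * np.pi)))
```

The radial factor is resolved exactly at degree 4, while the azimuthal factor needs a higher degree; shenfun arrives at per-direction resolutions in the same spirit.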
[View in Colaboratory](https://colab.research.google.com/github/adowaconan/Deep_learning_fMRI/blob/master/3_1_some_concepts_of_CNN.ipynb) # Reference ## [How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native](https://medium.com/@timanglade/how-hbos-silicon-valley-built-not-hotdog-with-mobile-tensorflow-keras-react-native-ef03260747f3) ## [Convolutional Neural Networks - Basics](https://mlnotebook.github.io/post/CNN1/) ## [reference within reference](https://github.com/rcmalli/keras-squeezenet/blob/master/examples/example_keras_squeezenet.ipynb) ## [Understanding Activation Functions in Neural Networks](https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0) ## [Activation Functions: Neural Networks](https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6) # Kernels ## some matrices that can only do addition and multiplication ## feature extractors ## filter ``` from IPython.display import Image Image(url='https://mlnotebook.github.io/img/CNN/convSobel.gif') ``` The kernel shown above has size (3,3) and a stride of (1,1), with no activation applied # Activation functions ``` print('step function') Image(url='https://cdn-images-1.medium.com/max/800/0*8U8_aa9hMsGmzMY2.') print('linear function') Image(url='https://cdn-images-1.medium.com/max/800/1*tldIgyDQWqm-sMwP7m3Bww.png') print('sigmoid function') Image(url='https://cdn-images-1.medium.com/max/800/0*5euYS7InCmDP08ir.') print('tanh function') Image(url='https://cdn-images-1.medium.com/max/800/0*YJ27cYXmTAUFZc9Z.') print('rectified function') Image(url='https://cdn-images-1.medium.com/max/800/0*vGJq0cIuvTB9dvf5.') print('Leaky rectified function') Image(url='https://cdn-images-1.medium.com/max/800/1*A_Bzn0CjUgOXtPCJKnKLqA.jpeg') ``` Softmax is different...
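The images above only show the curves; as a minimal sketch (toy implementations of my own, not from any framework), the same activations can be written in a few lines of numpy. Softmax stands apart because it normalizes a whole vector rather than acting elementwise.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # "Rectified" linear unit: zero out the negative part.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky variant: a small slope instead of zero for negative inputs.
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    # Normalizes a whole vector so the outputs sum to 1 and can be
    # read as class probabilities.
    e = np.exp(x - np.max(x))
    return e / e.sum()

z = np.array([2.0, 1.0, -1.0])
print(relu(z))           # [2. 1. 0.]
print(sigmoid(0.0))      # 0.5
print(softmax(z).sum())  # sums to 1 (up to rounding)
```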
# pooling ## [What is wrong with Convolutional neural networks ?](https://towardsdatascience.com/what-is-wrong-with-convolutional-neural-networks-75c2ba8fbd6f) ``` Image(url='https://cdn-images-1.medium.com/max/800/1*lbUtgiANqLoO1GFSc9pHTg.gif') Image(url='https://cdn-images-1.medium.com/max/800/1*wsf4tsOH77T1lpylPUIhbA.png') ``` # Dropout ## [Dropout in (Deep) Machine learning](https://medium.com/@amarbudhiraja/https-medium-com-amarbudhiraja-learning-less-to-learn-better-dropout-in-deep-machine-learning-74334da4bfc5) ``` Image(url='https://cdn-images-1.medium.com/max/800/1*iWQzxhVlvadk6VAJjsgXgg.png') ``` # Batch Normalization ## [Batch normalization in Neural Networks](https://towardsdatascience.com/batch-normalization-in-neural-networks-1ac91516821c) ## [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/abs/1502.03167) ### Batch normalization reduces the amount by which the hidden unit values shift around (covariate shift) ### batch normalization allows each layer of a network to learn by itself a little bit more independently of other layers #### VGG nets don't have batch normalization layers, but they still work well. They don't have them because batch normalization had not been invented yet. #### Pytorch VGG net is trained with batch normalization while tensorflow VGG net is not. # Here is why we don't want to train these deep neural nets from scratch # Let's compare these models: 1. Xception net 2. VGG19 3. ResNet50 4. Inception V3 5. InceptionResNet V2 6. MobileNet 7. DenseNet 8.
NASNet ``` import pandas as pd df = {} df['Model']=['Xception','VGG16','VGG19','ResNet50','InceptionV3','InceptionResNetV2', 'MobileNet','DenseNet121','DenseNet169','DenseNet201'] df['Size']=[88,528,549,99,92,215,17,33,57,80] df['Top1 Accuracy']=[.79,.715,.727,.759,.788,.804,.665,.745,.759,.77] df['Top5 Accuracy']=[.945,.901,.910,.929,.944,.953,.871,.918,.928,.933] df['Parameters']=[22910480,138357544,143667240,25636712,23851784,55873736,4253864, 8062504,14307880,20242984] df['Depth']=[126,23,26,168,159,572,88,121,169,201] df['min input size']=['150x150','48x48','48x48','197x197','139x139','139x139', '32x32','?','?','?'] df = pd.DataFrame(df) df[['Model','Size','Top1 Accuracy','Top5 Accuracy','Parameters','Depth','min input size']] ``` # With transfer learning, building a model on top of VGG19, we only need to train 1026 parameters with 128x128x3 (number of features = 49152) pixel values of each image ``` from keras.applications import VGG19 model_vgg19 = VGG19(include_top=False, # do not include the classifier weights='imagenet', # get the pretrained weights input_tensor=None, # don't know what this is input_shape=(128,128,3), # decide the input shape pooling='avg', # use global average for the pooling classes=1000)# doesn't matter from keras.models import Model from keras.layers import Dense,Dropout from keras import optimizers,metrics,losses for i,layer in enumerate(model_vgg19.layers[:-2]): layer.trainable = False Encoder = model_vgg19.output Encoder = Dropout(0.5)(Encoder) output = Dense(2, activation='softmax')(Encoder) model = Model(model_vgg19.input,output) model.compile(optimizer=optimizers.adam(), loss=losses.binary_crossentropy, metrics=['acc']) model.summary() ``` # Image augmentation ## flip the images ## stretch the images ## rotate the images ## shear the images ## take out some pixels ## and so on ... ``` Image(url='https://cdn-images-1.medium.com/max/800/1*RVV70qYkWJ1Uw8hvALjV4A.png') ```
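A few of the augmentations listed above can be sketched with plain numpy indexing (real pipelines, e.g. Keras' `ImageDataGenerator`, add shears, shifts, and on-the-fly generation):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))  # a toy 4x4 RGB "image"

flip_lr = img[:, ::-1, :]                 # horizontal flip
flip_ud = img[::-1, :, :]                 # vertical flip
rot90 = np.rot90(img, k=1, axes=(0, 1))   # 90-degree rotation

# "Take out some pixels": zero a 2x2 patch (a crude cutout).
cut = img.copy()
cut[1:3, 1:3, :] = 0.0

# Flipping twice restores the original image.
print(np.array_equal(flip_lr[:, ::-1, :], img))  # True
```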
# Bite Size Bayes Copyright 2020 Allen B. Downey License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) ## Review [In the previous notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/04_dice.ipynb) I presented Theorem 4, which is a way to compute the probability of a disjunction (`or` operation) using the probability of a conjunction (`and` operation). $P(A ~or~ B) = P(A) + P(B) - P(A ~and~ B)$ Then we used it to show that the sum of the unnormalized posteriors is the total probability of the data, which is why the Bayes table works. We saw several examples involving dice, and I used them to show how prediction and inference are related: the Bayes table actually solves the prediction problem on the way to solving the inference problem. ## Bayesville In this notebook we'll consider a famous example where Bayes's Theorem brings clarity to a confusing topic: medical testing. Joe Blitzstein explains the scenario in this video from Stat110x at HarvardX. (You might have to run the following cell to see the video.) ``` from IPython.display import YouTubeVideo YouTubeVideo('otdaJPVQIgg') ``` I'll paraphrase the problem posed in the video: > In Bayesville, 1% of the population has an undiagnosed medical condition. Jimmy gets tested for the condition and the test comes back positive; that is, the test says Jimmy has the condition. > > The test is 95% accurate, which means > > * If you give the test to someone with the condition, the probability is 95% that the test will be positive, and > > * If you give it to someone who does not have the condition, the probability is 95% that the test will be negative. > > What is the probability that Jimmy actually has the condition? Because the test is 95% accurate, it is tempting to say that the probability is 95% that the test is correct and Jimmy has the condition. But that is wrong.
Or maybe I should say it's the right answer to a different question. 95% is the probability of a positive test, given a patient with the condition. But that's not what the question asked, or what Jimmy wants to know. To Jimmy, the important question is the probability he has the condition, given a positive test. As we have seen, and as Joe explains in the video: $P(A|B) ≠ P(B|A)$ We can use a Bayes table to answer Jimmy's question correctly. ## Bayes table I'll use two strings to represent the hypotheses: `condition` and `no condition`. The prior for `condition` is the probability a random citizen of Bayesville has the condition, which is 1%. The prior for `no condition` is the probability that a random citizen does not have the disease, which is 99%. Let's put those values into a Bayes table. ``` import pandas as pd table = pd.DataFrame(index=['condition', 'no condition']) table['prior'] = 0.01, 0.99 table ``` The data is the positive test, so the likelihoods are: * The probability of a correct positive test, given the condition, which is 95%. * The probability of an incorrect positive test, given no condition, which is 5%. ``` table['likelihood'] = 0.95, 0.05 table ``` Once we have priors and likelihoods, the remaining steps are always the same. We compute the unnormalized posteriors: ``` table['unnorm'] = table['prior'] * table['likelihood'] table ``` And the total probability of the data. ``` prob_data = table['unnorm'].sum() prob_data ``` Then divide through to get the normalized posteriors. ``` table['posterior'] = table['unnorm'] / prob_data table ``` The posterior for `condition` is substantially higher than the prior, so the positive test is evidence in favor of `condition`. But the prior is small and the evidence is not strong enough to overcome it; despite the positive test, the probability that Jimmy has the condition is only about 16%. Many people find this result surprising and some insist that the probability is 95% that Jimmy has the condition. 
The mistake they are making is called the [base rate fallacy](https://en.wikipedia.org/wiki/Base_rate_fallacy) because it ignores the "base rate" of the condition, which is the prior. ## Put a function on it At this point you might be sick of seeing the same six lines of code over and over, so let's put them in a function where you will never have to see them again. ``` def make_bayes_table(hypos, prior, likelihood): """Make a Bayes table. hypos: sequence of hypotheses prior: prior probabilities likelihood: sequence of likelihoods returns: DataFrame """ table = pd.DataFrame(index=hypos) table['prior'] = prior table['likelihood'] = likelihood table['unnorm'] = table['prior'] * table['likelihood'] prob_data = table['unnorm'].sum() table['posterior'] = table['unnorm'] / prob_data return table ``` This function takes three parameters: * `hypos`, which should be a sequence of hypotheses. You can use almost any type to represent the hypotheses, including string, `int`, and `float`. * `prior`, which is a sequence of prior probabilities, $P(H)$ for each $H$. * `likelihood`, which is a sequence of likelihoods, $P(D|H)$ for each $H$. All three sequences should be the same length. Here's a solution to the previous problem using `make_bayes_table`: ``` hypos = ['condition', 'no condition'] prior = 0.01, 0.99 likelihood = 0.95, 0.05 make_bayes_table(hypos, prior, likelihood) ``` **Exercise:** Suppose we take the same test to another town, called Sickville, where the base rate of the disease is 10%, substantially higher than in Bayesville. If a citizen of Sickville tests positive, what is the probability that they have the condition? Use `make_bayes_table` to compute the result. ``` # Solution goes here ``` With a higher base rate, the posterior probability is substantially higher. **Exercise:** Suppose we go back to Bayesville, where the base rate is 1%, with a new test that is 99.5% accurate.
If a citizen of Bayesville tests positive with the new test, what is the probability they have the condition? Use `make_bayes_table` to compute the result. ``` # Solution goes here ``` With an accuracy of 99.5%, the positive test provides stronger evidence, so it is able to overcome the small prior. ## The Elvis problem Here's a problem from [*Bayesian Data Analysis*](http://www.stat.columbia.edu/~gelman/book/): > Elvis Presley had a twin brother (who died at birth). What is the probability that Elvis was an identical twin? For background information, I used data from the U.S. Census Bureau, [Birth, Stillbirth, and Infant Mortality Statistics for the Continental United States, the Territory of Hawaii, the Virgin Islands 1935](https://www.cdc.gov/nchs/data/vsushistorical/birthstat_1935.pdf) to estimate that in 1935, about 1/3 of twins were identical. **Exercise:** Use this base rate and a Bayes table to compute the probability that Elvis was an identical twin. Hint: Because identical twins have the same genes, they are almost always the same sex. ``` # Solution goes here ``` ## Summary In this notebook, we used Bayes's Theorem, in the form of a Bayes table, to solve an important problem: interpreting the result of a medical test correctly. Many people, including many doctors, get this problem wrong, with bad consequences for patients. Now that you know about the "base rate fallacy", you will see that it appears in many other domains, not just medicine. Finally, I presented the Elvis problem, which I hope is a fun way to apply what you have learned so far. If you like the Elvis problem, you might enjoy [this notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/elvis.ipynb) where I dig into it a little deeper. 
[In the next notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/06_pmf.ipynb) I'll introduce the probability mass function (PMF) and we'll use it to solve new versions of the cookie problem and the dice problem.
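As a closing sanity check, the Bayesville posterior computed with the Bayes table can be reproduced with Bayes's rule arithmetic alone, $P(C|+) = P(+|C)\,P(C)/P(+)$:

```python
p_cond = 0.01                # prior: base rate in Bayesville
p_pos_given_cond = 0.95      # probability of a positive test given the condition
p_pos_given_no = 0.05        # probability of a (false) positive given no condition

# Total probability of a positive test (the normalizing constant).
p_pos = p_pos_given_cond * p_cond + p_pos_given_no * (1 - p_cond)

posterior = p_pos_given_cond * p_cond / p_pos
print(round(posterior, 3))  # 0.161
```

This matches the roughly 16% posterior from the table above.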
# Verify predictions ``` import pandas as pd import os s3_val_pred_ensemble_file_f1="s3://aegovan-data/pubmed_asbtract/predictions_valtest_ppimulticlass-bert-f1-2021-05-10-9_2021052215/val_multiclass.json.json" s3_test_pred_ensemble_file_f1="s3://aegovan-data/pubmed_asbtract/predictions_valtest_ppimulticlass-bert-f1-2021-05-10-9_2021052215/test_multiclass.json.json" s3_train_pred_ensemble_file_f1='s3://aegovan-data/pubmed_asbtract/predictions_valtest_ppimulticlass-bert-f1-2021-05-10-9_2021052910/train_multiclass.json.json' s3_val_pred_ensemble_file_loss ="s3://aegovan-data/pubmed_asbtract/predictions_valtest_ppimulticlass-bert-2021-05-08-17-10_2021050918/val_multiclass.json.json" s3_test_pred_ensemble_file_loss = "s3://aegovan-data/pubmed_asbtract/predictions_valtest_ppimulticlass-bert-2021-05-08-17-10_2021050918/test_multiclass.json.json" #label_mapper = PpiMulticlassLabelMapper() s3_prefix = "s3://aegovan-data/verify_ensemble_prediction_data" s3_mainifests_prefix = "{}/mainifests".format(s3_prefix) std_threshold = 0.15 s3_data_file = s3_test_pred_ensemble_file_f1 local_temp = "temp" local_temp_pred_dir = os.path.join( local_temp, "pred_results") local_temp_wk_dir = os.path.join( local_temp, "wk") !rm -rf $local_temp !mkdir -p $local_temp_pred_dir !mkdir -p $local_temp_wk_dir import boto3 import glob from multiprocessing.dummy import Pool as ThreadPool import argparse import datetime import os def upload_file(localpath, s3path): """ Uploads a file to s3 :param localpath: The local path :param s3path: The s3 path in format s3://mybucket/mydir/mysample.txt """ bucket, key = get_bucketname_key(s3path) if key.endswith("/"): key = "{}{}".format(key, os.path.basename(localpath)) s3 = boto3.client('s3') s3.upload_file(localpath, bucket, key) def get_bucketname_key(uripath): assert uripath.startswith("s3://") path_without_scheme = uripath[5:] bucket_end_index = path_without_scheme.find("/") bucket_name = path_without_scheme key = "/" if bucket_end_index > -1: bucket_name 
= path_without_scheme[0:bucket_end_index] key = path_without_scheme[bucket_end_index + 1:] return bucket_name, key def download_file(s3path, local_dir): bucket, key = get_bucketname_key(s3path) s3 = boto3.client('s3') local_file = os.path.join(local_dir, s3path.split("/")[-1]) s3.download_file(bucket, key, local_file) def download_object(s3path): bucket, key = get_bucketname_key(s3path) s3 = boto3.client('s3') s3_response_object = s3.get_object(Bucket=bucket, Key=key) object_content = s3_response_object['Body'].read() return len(object_content) def list_files(s3path_prefix): assert s3path_prefix.startswith("s3://") bucket, key = get_bucketname_key(s3path_prefix) s3 = boto3.resource('s3') bucket = s3.Bucket(name=bucket) return ( (o.bucket_name, o.key) for o in bucket.objects.filter(Prefix=key)) def upload_files(local_dir, s3_prefix, num_threads=20): input_tuples = ( (f, s3_prefix) for f in glob.glob("{}/*".format(local_dir))) with ThreadPool(num_threads) as pool: pool.starmap(upload_file, input_tuples) def download_files(s3_prefix, local_dir, num_threads=20): input_tuples = ( ("s3://{}/{}".format(s3_bucket,s3_key), local_dir) for s3_bucket, s3_key in list_files(s3_prefix)) with ThreadPool(num_threads) as pool: results = pool.starmap(download_file, input_tuples) def download_objects(s3_prefix, num_threads=20): s3_files = ( "s3://{}/{}".format(s3_bucket,s3_key) for s3_bucket, s3_key in list_files(s3_prefix)) with ThreadPool(num_threads) as pool: results = pool.map(download_object, s3_files) return sum(results)/1024 def get_directory_size(start_path): total_size = 0 for dirpath, dirnames, filenames in os.walk(start_path): for f in filenames: fp = os.path.join(dirpath, f) # skip if it is symbolic link if not os.path.islink(fp): total_size += os.path.getsize(fp) return total_size def get_s3file_size(bucket, key): s3 = boto3.client('s3') response = s3.head_object(Bucket=bucket, Key=key) size = response['ContentLength'] return size def download_files_min_files(s3_prefix,
local_dir, min_file_size=310, num_threads=20): input_tuples = ( ("s3://{}/{}".format(s3_bucket,s3_key), local_dir) for s3_bucket, s3_key in list_files(s3_prefix) if get_s3file_size(s3_bucket, s3_key) > min_file_size ) with ThreadPool(num_threads) as pool: results = pool.starmap(download_file, input_tuples) %%time download_file(s3_data_file, local_temp_pred_dir) data_full_df = pd.read_json(os.path.join(local_temp_pred_dir, s3_data_file.split("/")[-1])) data_full_df.head(n=3) def missing_uniprot(x): uniprots =[] for i in list(x["gene_to_uniprot_map"].values()): uniprots.extend(i) return x["participant1Id"] not in uniprots or x["participant2Id"] not in uniprots def gene_id_map(x): gene_to_uniprot_map = x["gene_to_uniprot_map"] for n, uniprots in x["gene_to_uniprot_map"].items(): if x["participant1Id"] in uniprots : gene_to_uniprot_map[n]= x["participant1Id"] if x["participant2Id"] in uniprots: gene_to_uniprot_map[n]= x["participant2Id"] return gene_to_uniprot_map data_full_df["missing_uniprot"] = data_full_df.apply(missing_uniprot, axis=1) missing_uniprots = data_full_df.query( "missing_uniprot == True").groupby("class")["class"].count() assert len(missing_uniprots)==0 self_relation = data_full_df.query("participant1Id == participant2Id").groupby("class")["class"].count() assert len(self_relation)==0 ``` ## Filter threshold ``` high_quality_preds_df = data_full_df.query(f"confidence_std <= {std_threshold} and prediction != 'other'") high_quality_preds_df.shape high_quality_preds_df high_quality_preds_df[["pubmedId","participant1Name","participant1Id" , "participant2Name", "participant2Id", "prediction", 'class']] high_quality_preds_df.head() import json import tempfile def create_manifest_file(df, s3_output_manifest_file): items = df.to_dict(orient='records' ) temp_file = os.path.join( tempfile.mkdtemp(), s3_output_manifest_file.split("/")[-1]) with open(temp_file , "w") as f: for item in items: # Write without new lines item_m = {} item_m["source"] = json.dumps(item) 
f.write(json.dumps(item_m).replace("\n", "\t")) f.write("\n") upload_file(temp_file, s3_output_manifest_file) "{}/".format(s3_mainifests_prefix.rstrip("/")) manifest_file = "{}/{}".format( s3_mainifests_prefix.rstrip("/"), s3_data_file.split("/")[-1]) create_manifest_file( high_quality_preds_df, manifest_file) s3_manifests = [manifest_file] s3_manifests ``` ### Create sagemaker ground truth labelling job ``` import boto3 import sagemaker from datetime import datetime def create_groundtruth_labelling_job(s3_manifest, s3_gt_output, s3_template, pre_lambda, post_lambda, role, workforce_name, job_name, label_attribute_name="prediction", workforce_type= "private-crowd" ): client = boto3.client('sagemaker') sagemaker_session = sagemaker.Session() account_id = boto3.client('sts').get_caller_identity().get('Account') region = boto3.session.Session().region_name workforce_arn = "arn:aws:sagemaker:{}:{}:workteam/{}/{}".format(region, account_id, workforce_type, workforce_name) role_arn = "arn:aws:iam::{}:role/{}".format( account_id, role) pre_lambda_arn = "arn:aws:lambda:{}:{}:function:{}".format(region, account_id, pre_lambda) post_lambda_arn = "arn:aws:lambda:{}:{}:function:{}".format(region, account_id, post_lambda) num_workers_per_object = 1 task_time_limit_sec = 60 * 60 * 5 task_availablity_sec =60 * 60 * 24 * 10 job = client.create_labeling_job(LabelingJobName=job_name ,LabelAttributeName = label_attribute_name ,InputConfig = { "DataSource": { 'S3DataSource': { 'ManifestS3Uri': s3_manifest } } } ,OutputConfig={ 'S3OutputPath': s3_gt_output } , RoleArn = role_arn , HumanTaskConfig={ 'WorkteamArn': workforce_arn, 'UiConfig': { 'UiTemplateS3Uri': s3_template }, 'PreHumanTaskLambdaArn': pre_lambda_arn, 'TaskKeywords': [ 'PPI', ], 'TaskTitle': 'Verify PPI extraction for protein {}'.format(s3_manifest.split("/")[-1]), 'TaskDescription': 'Verifies PPi extraction', 'NumberOfHumanWorkersPerDataObject': num_workers_per_object, 'TaskTimeLimitInSeconds': task_time_limit_sec, 
'TaskAvailabilityLifetimeInSeconds': task_availablity_sec, 'MaxConcurrentTaskCount': 10, 'AnnotationConsolidationConfig': { 'AnnotationConsolidationLambdaArn': post_lambda_arn } } ) return job def create_groundtruth_labelling_multiple_jobs(lst_s3_manifests, s3_gt_output, s3_template, pre_lambda, post_lambda, role, workforce_name, job_prefix ="ppi", label_attribute_name="class"): job_prefix = "{}-{}".format(job_prefix , datetime.now().strftime("%Y%m%d%H%M%S")) for s3_manifest in lst_s3_manifests: job_name = "{}-{}".format( job_prefix, s3_manifest.split("/")[-1].split("_")[-1].split(".")[0]) create_groundtruth_labelling_job(s3_manifest, s3_gt_output, s3_template, pre_lambda, post_lambda, role, workforce_name, job_name) import urllib.request def download_template(template_url): with urllib.request.urlopen(template_url) as f: html = f.read().decode('utf-8') with open("template.html", "w") as f: f.write(html) download_template('http://raw.githubusercontent.com/elangovana/ppi-sagemaker-groundtruth-verification/main/src/template/template.html') s3_prefix.rstrip("/") role_name = "service-role/AmazonSageMaker-ExecutionRole-20210104T161547" pre_lambda="Sagemaker-ppipreprocessing" post_lambda="sagemaker-ppipostprocessing" s3_gt_output = "{}/gt_output/".format(s3_prefix.rstrip("/")) workforce_name = "ppi-team" s3_template_file = "{}/template.html".format(s3_prefix.rstrip("/")) upload_file("template.html", s3_template_file ) create_groundtruth_labelling_multiple_jobs (s3_manifests, s3_gt_output, s3_template_file, pre_lambda, post_lambda, role_name, workforce_name) ```
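The manifest format produced by `create_manifest_file` above is one JSON object per line, with the record serialized under `"source"`; a small round-trip sketch (with made-up records standing in for the prediction rows) shows the structure:

```python
import json

# Hypothetical records standing in for the high-quality prediction rows.
records = [
    {"pubmedId": "123", "participant1Id": "P01", "prediction": "phosphorylation"},
    {"pubmedId": "456", "participant1Id": "P02", "prediction": "other"},
]

lines = []
for item in records:
    # Newlines inside a record would break the one-object-per-line format,
    # so they are replaced with tabs, mirroring create_manifest_file.
    lines.append(json.dumps({"source": json.dumps(item)}).replace("\n", "\t"))

manifest = "\n".join(lines)

# Reading the manifest back recovers the original records.
parsed = [json.loads(json.loads(line)["source"]) for line in manifest.splitlines()]
print(parsed == records)  # True
```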
# Nutria In this Notebook we'll consider the population growth of the Nutria species. The data has been taken from .. . We'll begin by importing the data and visualizing it. ``` import pandas as pd from pyfilter import __version__ print(__version__) data = pd.read_csv("nutria.txt", sep='\t').iloc[:, 0].rename("nutria") data.plot(figsize=(16, 9), title="Nutria population") ``` Next, we'll specify the model to use for inference. We'll use the flexible Allee model, found in .. . However, instead of considering the actual population, we'll use the logarithm. ``` from pyfilter.timeseries import LinearGaussianObservations, AffineProcess from torch.distributions import Normal, Gamma, TransformedDistribution, AffineTransform, PowerTransform import torch from pyfilter.distributions import Prior, DistributionWrapper def f(x, a, b, c, d): exped = x.values.exp() return x.values + a + b * exped + c * exped ** 2 def g(x, a, b, c, d): return d.sqrt() def build_invgamma(concentration, rate, power, **kwargs): return TransformedDistribution(Gamma(concentration, rate, **kwargs), PowerTransform(power)) alpha = data.shape[0] / 2 beta = 2 * (alpha - 1) / 10 invgamma_prior = Prior( build_invgamma, concentration=alpha, rate=beta, power=-1.0 ) norm_prior = Prior(Normal, loc=0.0, scale=1.0) h_priors = norm_prior, norm_prior, norm_prior, invgamma_prior dist = DistributionWrapper(Normal, loc=0.0, scale=1.0) hidden = AffineProcess((f, g), h_priors, dist, dist) model = LinearGaussianObservations(hidden, 1., invgamma_prior) ``` Next, we'll use SMC2 together with APF to perform inference on the logged dataset. ``` from pyfilter.inference.sequential import SMC2 from pyfilter.filters.particle import APF, proposals as p import numpy as np logged_data = torch.from_numpy(data.values).float().log() algs = list() for i in range(2): filt = APF(model.copy(), 250) alg = SMC2(filt, 1000, n_steps=5).cuda() state = alg.fit(logged_data) algs.append((state, alg)) ``` Next, let's visualize the filtered means of the state.
``` import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(16, 9)) data.plot(ax=ax) for state, _ in algs: ax.plot(state.filter_state.filter_means.mean(dim=1)[1:].exp().cpu().numpy(), label="Filtered") ax.legend() ``` Next, let's visualize the posterior distributions of the parameters. ``` import pandas as pd from arviz import plot_posterior fig, ax = plt.subplots(5, figsize=(16, 9)) colors = ["gray", "salmon"] names = "a, b, c, d, \sigma".split(", ") for j, (state, alg) in enumerate(algs): w = state.normalized_weights() for i, param in enumerate(alg.filter.ssm.parameters()): plot_posterior(param.squeeze().cpu().numpy(), ax=ax[i], color=colors[j], point_estimate=None, hdi_prob='hide') ax[i].set_title(f"${names[i]}$") plt.tight_layout() ```
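The inverse-gamma prior used for the variances above is built by applying a power transform with exponent $-1$ to a Gamma distribution; a quick numpy check of that construction (independent of pyfilter and torch):

```python
import numpy as np

# If X ~ Gamma(shape=a, rate=b), then 1/X is inverse-gamma distributed
# with mean b / (a - 1) for a > 1.
rng = np.random.default_rng(42)
a, b = 5.0, 2.0

# numpy parameterizes Gamma with scale = 1 / rate.
x = rng.gamma(shape=a, scale=1.0 / b, size=1_000_000)
inv_x = 1.0 / x

print(inv_x.mean())  # close to b / (a - 1) = 0.5
```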
``` import rasterio as rio import numpy as np import matplotlib.pyplot as plt from IPython import display from matplotlib_scalebar.scalebar import ScaleBar ``` # Sensitivity of ASO Snow cover mask to snow depth threshold David Shean May 2, 2020 _(modified by Tony Cannistra, Jan 30, 2021)_ **Purpose**: Examine the choice of snow depth threshold used to binarize 3 m ASO snow depth data. ``` aso_sd_fn = '/Volumes/wrangell-st-elias/research/planet/ASO_3M_SD_USCATE_20180528.tif' aso_sd_ds = rio.open(aso_sd_fn) aso_sd = aso_sd_ds.read(1, masked=True) def imshow_stretch(ax,a,clim=None,perc=(2,98),sym=False,cmap='inferno',dx=aso_sd_ds.res[0],cbar=True): if sym: cmap = 'RdBu' if clim is None: vmin,vmax = np.percentile(a.compressed(),perc) #vmin,vmax = np.percentile(a,perc) if sym: vmax = np.max(np.abs([vmin,vmax])) vmin = -vmax clim = (vmin, vmax) m = ax.imshow(a, vmin=clim[0], vmax=clim[1], cmap=cmap, interpolation='None') ax.add_artist(ScaleBar(dx)) cbar_obj=None if cbar: cbar_obj = plt.colorbar(m, ax=ax) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.set_facecolor('0.5') return clim, cbar_obj f, ax = plt.subplots(figsize=(10,10)) clim, cbar = imshow_stretch(ax, aso_sd) plt.title("ASO 3m Snow Depth") cbar.set_label("Snow Depth (m)") ``` ## General Statistics ``` display.Markdown(f"**Number of snow depth pixels:** {aso_sd.count():.1E}") display.Markdown(f"**Maximum snow depth**: {aso_sd.max():.2f}m") # 1 cm snow depth bins from 1 cm to 300 cm bins = np.arange(0.01,3.01,0.01) f,ax = plt.subplots(dpi=150) ax.hist(aso_sd.compressed(), bins=bins) ax.set_xlabel('Snow Depth (m)') ax.set_ylabel('Bin count (px)') plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) ``` ## Examine the effect of multiple thresholds on binary snow pixel assignment ``` # Possible thresholds # 1 cm to 20 cm in 1 cm increments sd_thresh_list = np.arange(0.01, 0.21, 0.01) sd_thresh_list # count the number of pixels >= each threshold snow_mask_count_list = [] for sd_thresh
in sd_thresh_list: print(sd_thresh) snow_mask = aso_sd >= sd_thresh snow_mask_count = snow_mask.sum() snow_mask_count_list.append(snow_mask_count) f, ax = plt.subplots(dpi=150) ax.plot(sd_thresh_list*100, snow_mask_count_list) ax.set_ylabel('Snowcover pixel count') ax.set_xlabel('Snow Depth Threshold (cm)') ax.set_title("Effect of SD Thresh on Pixel Count") ax.axvline(10.0, linestyle=':', linewidth=1, color='red', label='10 cm Threshold') plt.legend() snow_mask_area_list = (np.array(snow_mask_count_list) * aso_sd_ds.res[0] * aso_sd_ds.res[1])/1E6 f, ax = plt.subplots(dpi=150) ax.plot(sd_thresh_list*100, snow_mask_area_list, linewidth=1, color='grey') ax.set_ylabel('Snowcover Area (km$^2$)') ax.set_xlabel('Snow Depth Threshold (cm)') ax.set_title("Effect of SD Threshold on SCA") #10cm vs 8cm cm10 = 0.10 cm8 =0.08 cm10_sca = snow_mask_area_list[np.where(np.isclose(sd_thresh_list,cm10))][0] cm8_sca = snow_mask_area_list[np.where(np.isclose(sd_thresh_list,cm8))][0] ax.hlines(cm8_sca, cm8*100, cm10*100, linestyle='--', linewidth=1) ax.vlines(cm10*100, cm8_sca, cm10_sca, linestyle='--', linewidth=1, label='Differences') bottom = min(snow_mask_area_list) ax.vlines(cm10*100, bottom, cm10_sca, linestyle=':', linewidth=1, color='red', label=f'{cm10*100:0.0f} cm Threshold') ax.vlines(cm8*100, bottom, cm8_sca, linestyle=':', linewidth=1, color='green', label=f'{cm8*100:0.0f} cm Threshold') ax.annotate(f"{(cm8_sca - cm10_sca):.2f} km$^2$ Difference", (cm10*100, cm10_sca + (cm8_sca - cm10_sca)/2), (cm10*100 + 2.2, cm10_sca + (cm8_sca - cm10_sca)/2), xycoords='data', ha="left", va="center", size=10, arrowprops=dict(arrowstyle='-[', shrinkA=5, shrinkB=5, fc="k", ec="k", ), bbox=dict(boxstyle="square", fc="w")) ax.annotate('ASO_3M_SD_USCATE_20180528.tif', (0.02, 0.02), xycoords='axes fraction', color='grey', size=6) plt.legend() ``` ## Visual + Quantitative Comparison of Specific Thresholds 1cm, 10cm and 20cm. 
``` snow_mask_01cm = aso_sd >= 0.01 snow_mask_08cm = aso_sd >= 0.08 snow_mask_10cm = aso_sd >= 0.10 snow_mask_20cm = aso_sd >= 0.20 f, axa = plt.subplots(1,3, figsize=(14,8)) imshow_stretch(axa[0], snow_mask_01cm, cbar=False) imshow_stretch(axa[1], snow_mask_10cm, cbar=False) imshow_stretch(axa[2], snow_mask_20cm, cbar=False) axa[0].set_title('SD Thresh %0.2f m' % 0.01); axa[1].set_title('SD Thresh %0.2f m' % 0.10); axa[2].set_title('SD Thresh %0.2f m' % 0.20); def snowmask_comparison(a,b): #All valid pixels a_all_count = a.count() print(a_all_count, "all pixel count in a") #All valid snow pixels in a a_snow_count = a.sum() print(a_snow_count, "snow pixel count in a") a_snow_count_perc = 100*a_snow_count/a_all_count print("%0.2f%% snow pixel count in a" % a_snow_count_perc) #All valid snow pixels in b b_snow_count = b.sum() print(b_snow_count, "snow pixel count in b") b_snow_count_perc = 100*b_snow_count/a_all_count print("%0.2f%% snow pixel count in b" % b_snow_count_perc) ab_snow_count_diff = np.abs(a_snow_count - b_snow_count) print(ab_snow_count_diff, "snow pixel count difference between a and b") ab_snow_count_diff_perc = 100*ab_snow_count_diff/np.mean([a_snow_count,b_snow_count]) #ab_snow_count_diff_perc = 100*ab_snow_count_diff/a_snow_count print("%0.2f%% snow pixel count percent difference between a and b" % ab_snow_count_diff_perc) #Boolean disagreement for snow ab_snow_disagree = ~(a == b) #Count of snow pixels that disagree #print(ab_snow_disagree.sum()) return ab_snow_disagree ``` ### Percentage Difference in SCA Between Specific Thresholds **1cm and 10cm**: ``` snow_mask_01cm_10cm = snowmask_comparison(snow_mask_01cm, snow_mask_10cm) display.Markdown(f"_Area Difference: {(1982066 * 3.0 * 3.0) / 1E6:.2f} km$^2$_") ``` **8 cm and 10 cm** ``` snow_mask_08cm_10cm = snowmask_comparison(snow_mask_08cm, snow_mask_10cm) display.Markdown(f"_Area Difference: {(491801 * 3.0 * 3.0) / 1E6:.2f} km$^2$_") ``` **10 cm and 20cm** ``` snow_mask_10cm_20cm =
snowmask_comparison(snow_mask_10cm, snow_mask_20cm) display.Markdown(f"_Area Difference: {(2678151 * 3.0 * 3.0) / 1E6:.2f} km$^2$_") ``` **1 cm and 20 cm** ``` snow_mask_01cm_20cm = snowmask_comparison(snow_mask_01cm, snow_mask_20cm) display.Markdown(f"_Area Difference: {(4660217 * 3.0 * 3.0) / 1E6:.2f} km$^2$_") ``` ### Visualization of SCA differences with 3 thresholds ``` f,axa = plt.subplots(1,3, figsize=(16,10), sharex=True, sharey=True) imshow_stretch(axa[0], snow_mask_01cm_10cm, clim=(0,1), cbar=False) axa[0].set_title('1 cm vs. 10 cm') imshow_stretch(axa[1], snow_mask_10cm_20cm, clim=(0,1), cbar=False) axa[1].set_title('10 cm vs. 20 cm') imshow_stretch(axa[2], snow_mask_01cm_20cm, clim=(0,1), cbar=False) axa[2].set_title('1 cm vs. 20 cm') ``` ## Analysis We assessed a broad range of thresholds to determine the sensitivity of ASO-derived snow covered area to choice of threshold. [Raleigh and Small, 2017][rs] suggest a range of vertical accuracy of lidar-based snow depth measurements between 2-30 cm, so we chose to evaluate a subset ranging from 1 cm to 20 cm. We observed a **9.28% difference in SCA** across the widest range of thresholds (i.e. comparing 1 cm to 20 cm). This amounts to a difference of $4660217$ pixels, which represents $41.9\ \mathrm{km^2}$. We also evaluated several specific thresholds and their relationships to one another. [Painter et al., 2016][painter16] suggest an 8 cm vertical accuracy for Airborne Snow Observatory-derived snow depth measurements, based on an assessment of open (e.g. unforested) terrain without topographic complexity. [Currier et al., 2019][currier], when comparing ALS snow depth to both terrestrial lidar and ground-based snow probe surveys at open and forested sites, observed a range of vertical RMSD values between 8 cm and 16 cm. Our assessment of this literature motivated the comparison of **1 cm, 8 cm, 10 cm, and 20 cm** thresholds.
Focusing our attention on the center of the known vertical accuracy range, we found small differences in SCA between 8 cm and 10 cm (**0.97% SCA difference, 4.43 km$^2$**). Taking into account the homogeneous nature of the Tuolumne watershed studied here, particularly with regard to topographic complexity and forested regions, we believe 10 cm is the threshold value that accounts for the sources of uncertainty present in this watershed while still being representative of our current understanding of ALS vertical accuracy. [rs]: https://doi.org/10.1002/2016GL071999 [painter16]: https://doi.org/10.1016/j.rse.2016.06.018 [currier]: https://doi.org/10.1029/2018WR024533
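As a quick sanity check of the areas quoted above, each figure follows directly from a disagreeing-pixel count and the 3 m grid spacing of the ASO snow depth product (a minimal sketch; the pixel counts are the ones reported by `snowmask_comparison` in the comparisons above):

```python
# Convert disagreeing-pixel counts to area, assuming 3 m x 3 m pixels
# (the resolution of the ASO 3 m snow depth product used above).
PIXEL_AREA_M2 = 3.0 * 3.0

def pixels_to_km2(n_pixels, pixel_area_m2=PIXEL_AREA_M2):
    """Area in km^2 covered by n_pixels square pixels."""
    return n_pixels * pixel_area_m2 / 1e6

# Pixel-count differences reported in the threshold comparisons above
for label, n in [("1 cm vs 10 cm", 1982066),
                 ("8 cm vs 10 cm", 491801),
                 ("10 cm vs 20 cm", 2678151),
                 ("1 cm vs 20 cm", 4660217)]:
    print(f"{label}: {pixels_to_km2(n):.2f} km^2")
```

The 1 cm vs 20 cm count reproduces the 41.9 km$^2$ figure, and the 8 cm vs 10 cm count reproduces the 4.43 km$^2$ figure.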
github_jupyter
sbpy.data.Ephem Example Notebooks ================================= [Ephem](https://sbpy.readthedocs.io/en/latest/api/sbpy.data.Ephem.html#sbpy.data.Ephem) provides functionality to query, calculate, manipulate, and store ephemerides and observational information. Querying Asteroid Ephemerides from JPL Horizons --------------------------------------- Query ephemerides for three asteroids in a given date range as observed from Maunakea (observatory code 568) using [sbpy.data.Ephem.from_horizons](https://sbpy.readthedocs.io/en/latest/api/sbpy.data.Ephem.html#sbpy.data.Ephem.from_horizons). This function uses [astroquery.jplhorizons](https://astroquery.readthedocs.io/en/latest/jplhorizons/jplhorizons.html) in the background and creates Ephem objects from the query results: ``` from sbpy.data import Ephem import astropy.units as u from astropy.time import Time targets = ['2100', '2018 RC1', 'Ganymed'] eph = Ephem.from_horizons(targets, location=568, epochs={'start': Time('2018-01-01'), 'stop': Time('2018-02-01'), 'step':1*u.hour}) eph.table ``` The resulting `Ephem` object provides the same functionality as all other `sbpy.data` objects; for instance, you can easily convert values to other units: ``` import numpy as np import astropy.units as u eph['RA'][0].to('arcsec') ``` Querying Satellite Ephemerides from JPL Horizons (and other ambiguous queries) ------------------------------------------------ Query ephemerides for satellite `'GOES-1'` for a number of epochs as observed from Magdalena Ridge Observatory (observatory code `'H01'`): ``` eph = Ephem.from_horizons('GOES-1', epochs=Time([2453452.123245, 2453453.34342, 2453454.342342], format='jd')) ``` This function call raises a `QueryError`. The reason for this error is that astroquery.jplhorizons assumes that the provided target string is a small body. However, GOES-1 is an artificial satellite.
As suggested in the provided error message, we need to specify an `id_type` that is different from the default value (`'smallbody'`). Since sbpy.data.Ephem.from_horizons builds upon astroquery.jplhorizons, we can provide optional parameters for the latter directly to the former. According to the [JPL Horizons documentation](https://ssd.jpl.nasa.gov/?horizons_doc#selection), spacecraft are grouped together with major bodies, so let's try: ``` eph = Ephem.from_horizons('GOES-1', id_type='majorbody', epochs=Time([2453452.123245, 2453453.34342, 2453454.342342], format='jd')) ``` Another error. This time, the error refers to the target name being ambiguous and provides a list of objects in the database that match the search string. We pick `GOES-1` and use the unique id provided for this object (note that in this case we have to use `id_type='id'`): ``` eph = Ephem.from_horizons('-108366', id_type='id', epochs=Time([2453452.123245, 2453453.34342, 2453454.342342], format='jd')) eph.table ``` Using Ephem objects ------------------- ... please refer to [this notebook](https://github.com/NASA-Planetary-Science/sbpy-tutorial/blob/master/notebooks/data/General_concepts.ipynb) for some examples. Computing Ephemerides with OpenOrb ------------------------------- `sbpy.data.Ephem.from_oo` provides a way to compute ephemerides from an [sbpy.data.Orbit](https://sbpy.readthedocs.io/en/latest/api/sbpy.data.Orbit.html#sbpy.data.Orbit) object. This function requires [pyoorb](https://github.com/oorb/oorb/tree/master/python) to be installed.
However, you can use the following code snippet on your computer locally to compute ephemerides for a number of orbits that were obtained from the JPL Horizons system as observed from the Discovery Channel Telescope (observatory code `'G37'`): ``` from sbpy.data import Ephem, Orbit from astropy.time import Time from numpy import arange orbits = Orbit.from_horizons(['12893', '3552', '2018 RC3']) epochs = Time(Time.now().jd + arange(0, 3, 1/24), format='jd') eph = Ephem.from_oo(orbits, epochs, location='G37') eph.table ``` We can compare the calculated ephemerides to ephemerides provided by JPL Horizons to check their accuracy by calculating the maximum absolute residuals in RA, Dec, and heliocentric distance (r): ``` import numpy as np eph_horizons = Ephem.from_horizons(['12893', '3552', '2018 RC3'], epochs=epochs, location='G37') print('RA max residual:', np.max(np.fabs(eph['RA']-eph_horizons['RA']).to('arcsec'))) print('DEC max residual', np.max(np.fabs(eph['DEC']-eph_horizons['DEC']).to('arcsec'))) print('r max residual:', np.max(np.fabs(eph['r']-eph_horizons['r']).to('km'))) ``` Querying Ephemerides from the Minor Planet Center and Miriade ====================== Ephemerides from the [Minor Planet Center](http://minorplanetcenter.net) can be queried using the function [sbpy.data.Ephem.from_mpc()](https://sbpy.readthedocs.io/en/latest/api/sbpy.data.Ephem.html#sbpy.data.Ephem.from_mpc), which uses [astroquery.mpc](https://astroquery.readthedocs.io/en/latest/mpc/mpc.html) under the hood and uses the same syntax as [sbpy.data.Ephem.from_horizons](https://sbpy.readthedocs.io/en/latest/api/sbpy.data.Ephem.html#sbpy.data.Ephem.from_horizons).
Similarly, [sbpy.data.Ephem.from_miriade](https://sbpy.readthedocs.io/en/latest/api/sbpy.data.Ephem.html#sbpy.data.Ephem.from_miriade) can be used to query ephemerides generated by the [Miriade system at IMCCE](http://vo.imcce.fr/webservices/miriade/), which uses [astroquery.imcce](https://astroquery.readthedocs.io/en/latest/imcce/imcce.html) under the hood. The following example compares ephemerides for asteroid Ceres for a given date and observer location from JPL Horizons, the Minor Planet Center, and Miriade: ``` from sbpy.data import Ephem from astropy.time import Time import numpy as np epoch = Time(2451200.5, format='jd') ceres_horizons = Ephem.from_horizons('Ceres', epochs=epoch, location=500) ceres_mpc = Ephem.from_mpc('Ceres', epochs=epoch, location=500) ceres_miriade = Ephem.from_miriade('Ceres', epochs=epoch, location=500) # RA print('Horizons', ceres_horizons['RA']) print('MPC', ceres_mpc['RA']) print('Miriade', ceres_miriade['RA']) # Dec print('Horizons', ceres_horizons['DEC']) print('MPC', ceres_mpc['DEC']) print('Miriade', ceres_miriade['DEC']) # delta print('Horizons', ceres_horizons['delta']) print('MPC', ceres_mpc['delta']) print('Miriade', ceres_miriade['delta']) ```
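To turn the printed RA/Dec pairs into a single agreement metric, one can compute the on-sky angular separation between two solutions. A minimal numpy-only sketch (the RA/Dec values below are placeholders, not actual query results; `astropy.coordinates.SkyCoord.separation` does the same job with proper unit handling):

```python
import numpy as np

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two RA/Dec positions
    (all inputs in degrees), using the haversine formula."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    dra, ddec = ra2 - ra1, dec2 - dec1
    a = np.sin(ddec / 2)**2 + np.cos(dec1) * np.cos(dec2) * np.sin(dra / 2)**2
    return np.degrees(2 * np.arcsin(np.sqrt(a)))

# Placeholder positions standing in for two ephemeris solutions
sep = angular_separation_deg(188.70, 9.09, 188.70 + 0.001, 9.09)
print(f"separation: {sep * 3600:.2f} arcsec")
```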
## Introduction This is a quick and dirty notebook to detect VDSL interference in amateur radio bands. The DSP very closely follows Dr Martin Sach's fantastic article on [VDSL2 Detection](http://rsgb.org/main/files/2018/10/VDSL-Radiation-and-its-Signal-Charecterisation.pdf). The purpose of this notebook is not to replace Lelantos but to offer a way for folks to gain a deeper understanding of the DSP approach and to tinker with the parameters to see their effects. Suggestions, pull requests, critiques etc are very welcome. Happy VDSL hunting. ## Dependencies This notebook is dependent on siglib and matplotlib. Installing siglib will pick up numpy. ``` !pip install git+https://github.com/kyjohnso/siglib !pip install -U matplotlib import numpy as np import wave import matplotlib.pyplot as plt import siglib as sl ``` ## Read IQ File This notebook is dependent on getting your IQ file into a 1D numpy.ndarray named x. I grabbed the example file [here](https://rsgb.services/public/software/lelantos/Example_11to13MHz.wav) but you can get data from a variety of SDRs and even transceivers that support IQ. If you do get your data from another source then you probably will have to modify the cell below. This boils down to: do the work to get an ndarray named "x" and also set "samplerate". ``` filename = './Example_11to13MHz.wav' with wave.open(filename,'rb') as f: samplerate = f.getframerate() x = f.readframes(f.getnframes()) x = np.frombuffer(x, dtype=np.int16) x = np.reshape(x,(f.getnframes(),2)).dot([1,1j]) ``` Let's just take a look at the spectrum of this file to make sure we read the samples correctly. ``` X = 10*np.log10(np.abs(np.fft.fftshift(np.fft.fft(x)))) %matplotlib notebook freq = np.arange(-samplerate/2,samplerate/2,samplerate/X.shape[-1])/1e3 plt.plot(freq,X) plt.xlabel("frequency (kHz)") plt.ylabel("power (dB)") ``` In the above plot, if you zoom into about 0-50 kHz IF you should see that the noise floor is noticeably lower than the surrounding spectrum.
This is the VDSL guard band and this feature alone is enough to confidently detect and measure the level of VDSL interference. Per the article though, we will do the correlation to detect the signal structure itself. ## Filtering Alright, so before we do the delay multiply detection of the cyclic extension in VDSL2 (that last statement should cause you to go click on the referenced article at the top to figure out what the heck it is I am referring to), we need to filter out the high power, narrow band signals so that they don't dominate the relatively short correlations. The article doesn't specify exactly how the narrow band signals are identified, so I chose the opening morphological function to find the noise floor. Any signals > 6 dB over this level are identified as interferers. ``` freq_bin = 250 notch_thrsh = 6 len_fft = int(samplerate/freq_bin) window = sl.hamming(len_fft) x_f = sl.frame(x,len_fft,int(len_fft/2),pad=False) x_w = x_f*window psd = np.fft.fft(x_w) psd = np.fft.fftshift(np.mean(np.abs(psd),axis=0)) psd = 10*np.log10(psd) ntaps = 64 psd_o = sl.opening(psd,ntaps) psd_c = sl.closing(psd,ntaps) ``` Now let's plot the spectrum, the opening function, and 6 dB above that. ``` %matplotlib notebook freq = np.arange(-samplerate/2,samplerate/2,samplerate/psd.shape[-1])/1e3 plt.plot(freq,psd) plt.plot(freq,psd_o) plt.plot(freq,psd_o+6) plt.xlabel("frequency (kHz)") plt.ylabel("power (dB)") plt.legend(["power spectral density","noise floor (opening function)","noise floor + 6 dB"]) ``` Now we have a threshold that we can use to get the indices for a notch filter. I am sure there is a better way to form the filter, but setting the interferer index values to 0, all others to 1, and then IFFTing will give us a crude time domain filter. ``` H = (psd < (psd_o + notch_thrsh))*1.0 h = np.fft.fftshift(np.fft.ifft(np.fft.fftshift(H))) ``` Now we need to filter the time domain signal through this filter.
To do this you could use scipy.signal.convolve(x,h), but seeing as how the goal here is to gain an understanding of the underlying DSP (and since I have an aesthetic preference for numpy only functions), I decided to implement the overlap save correlation in the siglib library. It is pretty low level and so you have to pad and transform the filter, compute how much the overlap should be, and then pass those to the function. Fortunately it is all in the cell below. ``` overlap = h.shape[-1] - 1 len_fft = int(2**(np.ceil(np.log2(8 * overlap)))) H = np.fft.fft(np.concatenate([h,np.zeros(len_fft-h.shape[-1])])) step = H.shape[-1] - overlap x_filt = sl.overlapsave(x,H,step).flatten() ``` Cool, now let's plot the original and the filtered spectra to make sure our filter worked. ``` %matplotlib notebook freq = np.arange(-samplerate/2,samplerate/2,samplerate/x.shape[-1])/1e3 freq_filt = np.arange(-samplerate/2,samplerate/2,samplerate/x_filt.shape[-1])/1e3 plt.plot(freq,10*np.log10(np.abs(np.fft.fftshift(np.fft.fft(x))))) plt.plot(freq_filt,10*np.log10(np.abs(np.fft.fftshift(np.fft.fft(x_filt))))) plt.xlabel("frequency (kHz)") plt.ylabel("power (dB)") plt.legend(["Original PSD","Filtered PSD"]) ``` Now we can perform the shift by the un-extended symbol length of 231 us. Again, there are scipy.signal ways to do this but then we wouldn't be able to understand the ins and outs of sinc interpolation. ``` ntaps = 5 symbol_period_sec = 231.884e-6 symbol_period_samp = samplerate*symbol_period_sec ext_symbol_period_sec = 250e-6 ext_symbol_period_samp = samplerate*ext_symbol_period_sec ce_len_sec = 18.116e-6 ce_len_samp = int(samplerate*ce_len_sec) shift_idx = np.arange(symbol_period_samp,x.shape[-1]-np.floor(ntaps/2)-1,1) x_shift = sl.resample(x_filt,shift_idx,ntaps) ``` Now let's take a look at the shifted samples to make sure we know how to do sinc interpolation. Zoom in so you can see a handful of individual samples below and confirm that it looks about right.
``` %matplotlib notebook plt.plot(np.real(x_filt)) plt.plot(shift_idx,np.real(x_shift)) ``` Now we can perform the correlation; the frame_length in the cell below changes how many samples to integrate in the correlation, which might be fun for you to tinker with. ``` x_xs = x_filt[:x_shift.shape[-1]]*np.conj(x_shift) x_cor = np.sum(sl.frame(x_xs, frame_length=int(ce_len_samp), frame_step=1,pad=False),axis=-1) ``` Here we are, we have filtered, we have correlated, now we can plot the correlation in all its glory and see those peaks and ... ``` %matplotlib notebook plt.plot(np.abs(x_cor)) ``` Wa, wa...... Reading further in the article, we need to integrate (incoherently) multiple signals to get the peaks to stand out. Furthermore, the article describes pretty well the effect of a SDR clock that drifts and a method to correct it. To do this we will look over -50 PPM to +50 PPM in 5 PPM increments. This can be implemented by slightly shifting the samples that are summed. ``` svi_cor = [] err_vec = np.arange(-50e-6,55e-6,5e-6) for samplerate_error in err_vec: svi_idx = np.asarray(np.round(np.arange(0,x_cor.shape[-1],1+samplerate_error)),dtype=np.int32) svi_idx = sl.frame(svi_idx,1000,1000,pad=False) svi_cor.append(np.sum(np.abs(x_cor[svi_idx]),axis=0)) svi_cor = np.array(svi_cor) err_vec[np.argmax(np.max(svi_cor,axis=-1))] ``` Here we see that the maximum correlation happens at ~10 PPM offset; if we plot the integrated correlation for all of these oscillator errors, it makes for a pretty cool visualization. ``` %matplotlib notebook _ = plt.plot(svi_cor.T) ``` And there we have it, I still need to add the SNR value of the VDSL signals, but those peaks indicate that we are well on our way to building this analysis notebook out into something pretty cool.
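As a rough step toward the missing SNR figure, one could compare the best correlation row's peak to its noise floor. This is a sketch using synthetic data in place of `svi_cor`; the peak-over-median definition here is my assumption, not the article's:

```python
import numpy as np

def peak_snr_db(cor_row):
    """Crude SNR estimate: correlation peak over the median level, in dB.
    The median is a robust stand-in for the noise floor."""
    cor_row = np.abs(np.asarray(cor_row))
    return 10 * np.log10(cor_row.max() / np.median(cor_row))

# Synthetic correlation row: flat noise floor of 1.0 with one peak of 100.0
row = np.ones(1000)
row[500] = 100.0
print(f"peak SNR: {peak_snr_db(row):.1f} dB")
```

Applied to the real data, this would be something like `peak_snr_db(svi_cor[np.argmax(np.max(svi_cor, axis=-1))])`.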
#DV360 Report Create a DV360 report. #License Copyright 2020 Google LLC, Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. #Disclaimer This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team. This code was generated (see starthinker/scripts for possible source): - **Command**: "python starthinker_ui/manage.py colab" - **Command**: "python starthinker/tools/colab.py [JSON RECIPE]" #1. Install Dependencies First install the libraries needed to execute recipes; this only needs to be done once, then click play. ``` !pip install git+https://github.com/google/starthinker ``` #2. Set Configuration This code is required to initialize the project. Fill in required fields and press play. 1. If the recipe uses a Google Cloud Project: - Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md). 1. If the recipe has **auth** set to **user**: - If you have user credentials: - Set the configuration **user** value to your user credentials JSON. - If you DO NOT have user credentials: - Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md). 1.
If the recipe has **auth** set to **service**: - Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md). ``` from starthinker.util.configuration import Configuration CONFIG = Configuration( project="", client={}, service={}, user="/content/user.json", verbose=True ) ``` #3. Enter DV360 Report Recipe Parameters 1. Reference field values from the <a href='https://developers.google.com/bid-manager/v1/reports'>DV360 API</a> to build a report. 1. Copy and paste the JSON definition of a report, <a href='https://github.com/google/starthinker/blob/master/tests/scripts/dbm_to_bigquery.json#L9-L40' target='_blank'>sample for reference</a>. 1. The report is only created; a separate script is required to move the data. 1. To reset a report, delete it from DV360 reporting. Modify the values below for your use case; this can be done multiple times, then click play. ``` FIELDS = { 'auth_read': 'user', # Credentials used for reading data. 'report': '{}', # Report body and filters. 'delete': False, # If report exists, delete it before creating a new one. } print("Parameters Set To: %s" % FIELDS) ``` #4. Execute DV360 Report This does NOT need to be modified unless you are changing the recipe, click play. ``` from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields TASKS = [ { 'dbm': { 'auth': 'user', 'report': {'field': {'name': 'report', 'kind': 'json', 'order': 1, 'default': '{}', 'description': 'Report body and filters.'}}, 'delete': {'field': {'name': 'delete', 'kind': 'boolean', 'order': 2, 'default': False, 'description': 'If report exists, delete it before creating a new one.'}} } } ] json_set_fields(TASKS, FIELDS) execute(CONFIG, TASKS, force=True) ```
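For intuition about what happens between the two cells above: `json_set_fields` walks the task tree and substitutes each `{'field': {...}}` placeholder with the matching value from `FIELDS`, falling back to the field's default. The following is a simplified re-implementation to illustrate the idea, not StarThinker's actual code:

```python
def set_fields(node, values):
    """Recursively replace {'field': {...}} placeholders with concrete values."""
    if isinstance(node, dict):
        for key, val in node.items():
            if isinstance(val, dict) and set(val.keys()) == {'field'}:
                field = val['field']
                # Use the user-supplied value if present, else the default
                node[key] = values.get(field['name'], field.get('default'))
            else:
                set_fields(val, values)
    elif isinstance(node, list):
        for item in node:
            set_fields(item, values)

tasks = [{'dbm': {
    'auth': 'user',
    'report': {'field': {'name': 'report', 'kind': 'json', 'default': '{}'}},
    'delete': {'field': {'name': 'delete', 'kind': 'boolean', 'default': False}},
}}]
set_fields(tasks, {'report': '{"params": {}}', 'delete': True})
print(tasks)
```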
# Collect NISMOD2 results for NIC resilience - demand scenarios - water demand - energy demand - transport OD matrix, trip distribution, energy consumption ``` import glob import os import re from datetime import datetime, timedelta import pandas import geopandas from pandas.api.types import CategoricalDtype from tqdm.notebook import tqdm ``` ## Water demand ``` water_demand_files = glob.glob("../results/nic_w*/water_demand/decision_0/*.csv") dfs = [] for fn in water_demand_files: demand_scenario = re.search("__(\w+)", fn).group(1) year = re.search("2\d+", fn).group(0) df = pandas.read_csv(fn, dtype={ 'water_resource_zones': 'category' }) df['timestep'] = int(year) df.timestep = df.timestep.astype('int16') df['demand_scenario'] = demand_scenario df.demand_scenario = df.demand_scenario.astype(CategoricalDtype(['BL', 'FP'])) dfs.append(df) water_demand = pandas.concat(dfs) del dfs water_demand.head() water_demand.dtypes water_demand.to_parquet('nic_water_demand.parquet') ``` ## Energy demand ``` energy_demand_files = glob.glob("../results/nic_ed_unconstrained/energy_demand_unconstrained/decision_0/*2050.parquet") dfs = [] for n, fn in enumerate(tqdm(energy_demand_files)): output = re.search("output_(\w+)_timestep", fn).group(1) year = re.search("2\d+", fn).group(0) sector = re.match("[^_]*", output).group(0) service = output.replace(sector + "_", "") fuel = re.match("hydrogen|oil|solid_fuel|gas|electricity|biomass|heat", service).group(0) df = pandas.read_parquet( fn ).rename(columns={ output: 'energy_demand' }) df['fuel'] = fuel df['sector'] = sector dfs.append(df) energy_demand = pandas.concat(dfs) del dfs energy_demand.head() ed_heat_elec = energy_demand[energy_demand.fuel.isin(('heat', 'electricity'))] \ .groupby(['fuel', 'lad_uk_2016', 'hourly']) \ .sum() \ .reset_index() ed_heat_elec # set date values ed_heat_elec['date'] = ed_heat_elec.hourly.apply(lambda h: datetime(2050, 1, 1) + timedelta(hours=h-1)) ed_heat_elec = ed_heat_elec.set_index('date') ed_heat_elec 
# national dated ed_national = ed_heat_elec \ .groupby('hourly') \ .sum() \ .reset_index() ed_national['date'] = ed_national.hourly.apply(lambda h: datetime(2050, 1, 1) + timedelta(hours=h-1)) ed_national = ed_national.set_index('date') ed_national # find max demand day daily = ed_national.drop(columns=['hourly']).resample('D').sum() daily.loc[daily.energy_demand.idxmax()] # find max demand hour ed_national.loc[ed_national.energy_demand.idxmax()] # select from max day max_day = ed_heat_elec.loc['2050-01-20'] max_day max_day \ .groupby(['fuel', 'hourly']) \ .sum() \ .reset_index() \ .pivot(columns='fuel', index='hourly') \ .plot() max_day.to_parquet('nic_energy_demand_heat_electricity_2050_max_day.parquet') ed_heat_elec.to_parquet('nic_energy_demand_heat_electricity_2050.parquet') ``` ## Transport energy ``` def hours_to_int(h): """Convert from string-named hours to 24-hour clock integers """ lu = { 'MIDNIGHT': 0, 'ONEAM': 1, 'TWOAM': 2, 'THREEAM': 3, 'FOURAM': 4, 'FIVEAM': 5, 'SIXAM': 6, 'SEVENAM': 7, 'EIGHTAM': 8, 'NINEAM': 9, 'TENAM': 10, 'ELEVENAM': 11, 'NOON': 12, 'ONEPM': 13, 'TWOPM': 14, 'THREEPM': 15, 'FOURPM': 16, 'FIVEPM': 17, 'SIXPM': 18, 'SEVENPM': 19, 'EIGHTPM': 20, 'NINEPM': 21, 'TENPM': 22, 'ELEVENPM': 23, } return lu[h] ev_paths = glob.glob("../results/nic_ed_tr/transport/decision_0/*vehicle*") dfs = [] for fn in ev_paths: output = re.search("output_(\w+)_timestep", fn).group(1) year = re.search("2\d+", fn).group(0) df = pandas.read_parquet(fn).rename(columns={ output: 'value' }) df['timestep'] = int(year) df['key'] = output dfs.append(df) ev_demand = pandas.concat(dfs) \ .reset_index() del dfs ev_demand.annual_day_hours = ev_demand.annual_day_hours.apply(hours_to_int) ev_demand = ev_demand \ .pivot_table( index=['timestep', 'lad_gb_2016', 'annual_day_hours'], columns='key', values='value' ) \ .reset_index() del ev_demand.columns.name ev_demand.head() ev_demand.dtypes ev_demand.to_parquet('nic_ev_demand.parquet') ``` ## Transport trips ```
tr_data_path = "../results/nic_ed_tr/transport-raw_data_results_nic_ed_tr/" # 2015 estimated tempro OD tempro15 = pandas.read_csv(tr_data_path + "data/csvfiles/temproMatrixListBased198WithMinor4.csv") tempro15 # 2015 aggregated LAD OD lad15 = pandas.read_csv(tr_data_path + "data/csvfiles/ladFromTempro198ODMWithMinor4.csv") \ .sort_values(by=['origin', 'destination']) lad15 # 2050 predicted LAD OD - to disaggregate lad50 = pandas.read_csv(tr_data_path + "output/2050/predictedODMatrix.csv") \ .melt(id_vars='origin', var_name='destination', value_name='flow') \ .sort_values(by=['origin', 'destination']) lad50 # tempro zones shapefile - with LAD codes already attached tempro_lad = geopandas.read_file(tr_data_path + "data/shapefiles/tempro2.shp") \ .rename(columns={ 'Zone_Name': 'tempro_name', 'Zone_Code': 'tempro', 'LAD_Code': 'lad', 'Local_Auth': 'lad_name' }) \ [['lad', 'lad_name', 'tempro', 'tempro_name']] \ .sort_values(by=['lad', 'tempro']) tempro_lad_codes = tempro_lad[['lad', 'tempro']] tempro_lad # start with tempro 2015 OD # merge on LAD codes for tempro origins df = tempro15 \ .rename(columns={'flow': 'tempro2015'}) \ .merge(tempro_lad_codes, left_on='origin', right_on='tempro') \ .drop(columns='tempro') \ .rename(columns={'lad': 'origin_lad'}) # merge on LAD codes for tempro destinations df = df \ .merge(tempro_lad_codes, left_on='destination', right_on='tempro') \ .drop(columns='tempro') \ .rename(columns={'lad': 'destination_lad'}) # merge on LAD 2015 flows df = df \ .merge(lad15, left_on=['origin_lad', 'destination_lad'], right_on=['origin', 'destination'], suffixes=('', '_y')) \ .drop(columns=['origin_y', 'destination_y']) \ .rename(columns={'flow': 'lad2015'}) # merge on LAD 2050 flows df = df \ .merge(lad50, left_on=['origin_lad', 'destination_lad'], right_on=['origin', 'destination'], suffixes=('', '_y')) \ .drop(columns=['origin_y', 'destination_y']) \ .rename(columns={'flow': 'lad2050'}) df # Disaggregation calculation df['tempro2050'] = 
(df.tempro2015 * (df.lad2050 / df.lad2015)) \ .round() \ .astype(int) # Quick check df[(df.origin_lad == 'E09000007') & (df.destination_lad == 'E09000029')] df = df.drop(columns=['lad2015', 'lad2050', 'origin_lad', 'destination_lad']) df df.to_parquet('nic_transport_trips.parquet') ```
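The disaggregation above is a simple growth-factor scaling: each 2015 TEMPro-level flow is multiplied by the ratio of 2050 to 2015 flow for the LAD pair it belongs to. A toy illustration with made-up flows (plain Python, no pandas):

```python
# Made-up 2015 TEMPro-level flows, keyed by (origin_zone, destination_zone)
tempro_2015 = {('z1', 'z3'): 10, ('z2', 'z3'): 30}
# Both z1 and z2 sit in LAD 'A'; z3 sits in LAD 'B'
zone_to_lad = {'z1': 'A', 'z2': 'A', 'z3': 'B'}
# LAD-level flows for 2015 and 2050 (the A -> B flow doubles)
lad_2015 = {('A', 'B'): 40}
lad_2050 = {('A', 'B'): 80}

tempro_2050 = {}
for (o, d), flow in tempro_2015.items():
    lad_pair = (zone_to_lad[o], zone_to_lad[d])
    growth = lad_2050[lad_pair] / lad_2015[lad_pair]
    tempro_2050[(o, d)] = round(flow * growth)

print(tempro_2050)
```

By construction the scaled zone-level flows sum back to the LAD-level 2050 total, which is the property the real calculation relies on (up to rounding).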
# Regression Week 4: Ridge Regression (gradient descent) In this notebook, you will implement ridge regression via gradient descent. You will: * Convert an SFrame into a Numpy array * Write a Numpy function to compute the derivative of the regression weights with respect to a single feature * Write gradient descent function to compute the regression weights given an initial weight vector, step size, tolerance, and L2 penalty # Fire up Turi Create Make sure you have the latest version of Turi Create ``` import turicreate ``` # Load in house sales data Dataset is from house sales in King County, the region where the city of Seattle, WA is located. ``` sales = turicreate.SFrame('home_data.sframe/') ``` If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features. # Import useful functions from previous notebook As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste `get_numpy_data()` from the second notebook of Week 2. 
``` import numpy as np # note this allows us to refer to numpy as np instead def get_numpy_data(data_sframe, features, output): data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame # add the column 'constant' to the front of the features list so that we can extract it along with the others: features = ['constant'] + features # this is how you combine two lists # select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant): features_sframe = data_sframe[features] # the following line will convert the features_SFrame into a numpy matrix: feature_matrix = features_sframe.to_numpy() # assign the column of data_sframe associated with the output to the SArray output_sarray output_sarray = data_sframe[output] # the following will convert the SArray into a numpy array by first converting it to a list output_array = output_sarray.to_numpy() return(feature_matrix, output_array) ``` Also, copy and paste the `predict_output()` function to compute the predictions for an entire matrix of features given the matrix and the weights: ``` def predict_output(feature_matrix, weights): # assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array # create the predictions vector by using np.dot() predictions = np.dot(feature_matrix, weights) return(predictions) ``` # Computing the Derivative We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term. ``` Cost(w) = SUM[ (prediction - output)^2 ] + l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2). 
``` Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to `w[i]` can be written as: ``` 2*SUM[ error*[feature_i] ]. ``` The derivative of the regularization term with respect to `w[i]` is: ``` 2*l2_penalty*w[i]. ``` Summing both, we get ``` 2*SUM[ error*[feature_i] ] + 2*l2_penalty*w[i]. ``` That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself, plus `2*l2_penalty*w[i]`. **We will not regularize the constant.** Thus, in the case of the constant, the derivative is just twice the sum of the errors (without the `2*l2_penalty*w[0]` term). Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors, plus `2*l2_penalty*w[i]`. With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points). To decide when we are dealing with the constant (so we don't regularize it) we added the extra parameter to the call `feature_is_constant` which you should set to `True` when computing the derivative of the constant and `False` otherwise.
``` def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant): # If the feature is the constant, do not apply the L2 penalty if feature_is_constant: derivative = 2 * np.dot(errors, feature) # Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight else: derivative = 2 * np.dot(errors, feature) + 2 * l2_penalty * weight return derivative ``` To test your feature derivative run the following: ``` (example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') my_weights = np.array([1., 10.]) test_predictions = predict_output(example_features, my_weights) errors = test_predictions - example_output # prediction errors # next two lines should print the same values print (feature_derivative_ridge(errors, example_features[:,1], my_weights[1], 1, False)) print (np.sum(errors*example_features[:,1])*2+20.) print ('') # next two lines should print the same values print (feature_derivative_ridge(errors, example_features[:,0], my_weights[0], 1, True)) print (np.sum(errors)*2.) ``` # Gradient Descent Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of *increase* and therefore the negative gradient is the direction of *decrease* and we're trying to *minimize* a cost function. The amount by which we move in the negative gradient *direction* is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a **maximum number of iterations** and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set to 100 by default. (Use default parameter values in Python.) With this in mind, complete the following gradient descent function below using your derivative function above.
For each step in the gradient descent, we update the weight for each feature before computing our stopping criterion.
```
def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100):
    print('Starting gradient descent with l2_penalty = ' + str(l2_penalty))

    weights = np.array(initial_weights)  # make sure it's a numpy array
    iteration = 0  # iteration counter
    print_frequency = 1  # for adjusting frequency of debugging output

    # run until we have reached the maximum number of iterations:
    while iteration <= max_iterations:
        iteration += 1  # increment iteration counter
        ### === code section for adjusting frequency of debugging output. ===
        if iteration == 10:
            print_frequency = 10
        if iteration == 100:
            print_frequency = 100
        if iteration % print_frequency == 0:
            print('Iteration = ' + str(iteration))
        ### === end code section ===

        # compute the predictions based on feature_matrix and weights using your predict_output() function
        predictions = predict_output(feature_matrix, weights)
        # compute the errors as predictions - output
        errors = predictions - output

        # from time to time, print the value of the cost function
        if iteration % print_frequency == 0:
            print('Cost function = ', str(np.dot(errors, errors) + l2_penalty * (np.dot(weights, weights) - weights[0]**2)))

        for i in range(len(weights)):  # loop over each weight
            # Recall that feature_matrix[:,i] is the feature column associated with weights[i]
            # compute the derivative for weight[i].
            # (Remember: when i=0, you are computing the derivative of the constant!)
            if i == 0:
                # the constant feature is not regularized
                derivative = feature_derivative_ridge(errors, feature_matrix[:, i], weights[i], 0.0, True)
            else:
                derivative = feature_derivative_ridge(errors, feature_matrix[:, i], weights[i], l2_penalty, False)
            # subtract the step size times the derivative from the current weight
            weights[i] -= step_size * derivative

    print('Done with gradient descent at iteration ', iteration)
    print('Learned weights = ', str(weights))
    return weights
```
# Visualizing effect of L2 penalty

The L2 penalty gets its name because it causes weights to have smaller L2 norms than they otherwise would. Let's see how large weights get penalized. Let us consider a simple model with 1 feature:
```
simple_features = ['sqft_living']
my_output = 'price'
```
Let us split the dataset into training set and test set. Make sure to use `seed=0`:
```
train_data,test_data = sales.random_split(.8,seed=0)
```
In this part, we will only use `'sqft_living'` to predict `'price'`. Use the `get_numpy_data` function to get Numpy versions of your data with only this feature, for both the `train_data` and the `test_data`.
```
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
(simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
```
Let's set the parameters for our optimization:
```
initial_weights = np.array([0., 0.])
step_size = 1e-12
max_iterations = 1000
```
First, let's consider no regularization. Set the `l2_penalty` to `0.0` and run your ridge regression algorithm to learn the weights of your model. Call your weights `simple_weights_0_penalty`; we'll use them later.
```
simple_weights_0_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, 0.0, max_iterations = 100)
simple_weights_0_penalty
```
Next, let's consider high regularization. Set the `l2_penalty` to `1e11` and run your ridge regression algorithm to learn the weights of your model.
Call your weights `simple_weights_high_penalty`; we'll use them later.
```
simple_weights_high_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, 1e11, max_iterations = 100)
simple_weights_high_penalty
```
This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.)
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(simple_feature_matrix, output, 'k.',
         simple_feature_matrix, predict_output(simple_feature_matrix, simple_weights_0_penalty), 'b-',
         simple_feature_matrix, predict_output(simple_feature_matrix, simple_weights_high_penalty), 'r-')
```
Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization

Which weights perform best?
```
print(((test_output - predict_output(simple_test_feature_matrix, initial_weights))**2).sum())
print(predict_output(simple_test_feature_matrix, initial_weights)[0])
print(((test_output - predict_output(simple_test_feature_matrix, simple_weights_0_penalty))**2).sum())
print(predict_output(simple_test_feature_matrix, simple_weights_0_penalty)[0])
print(((test_output - predict_output(simple_test_feature_matrix, simple_weights_high_penalty))**2).sum())
print(predict_output(simple_test_feature_matrix, simple_weights_high_penalty)[0])
```
***QUIZ QUESTIONS***
1. What is the value of the coefficient for `sqft_living` that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?
2. Comparing the lines you fit with no regularization versus high regularization, which one is steeper? no regularization was steeper
3. What are the RSS on the test data for each of the sets of weights above (initial, no regularization, high regularization)?
initial: 1784273282524564.0
no regularization: 275723643923134.44
high regularization: 694653077641343.2

# Running a multiple regression with L2 penalty

Let us now consider a model with 2 features: `['sqft_living', 'sqft_living15']`.

First, create Numpy versions of your training and test data with these two features.
```
model_features = ['sqft_living', 'sqft_living15']  # sqft_living15 is the average square feet of the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
```
We need to re-initialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
```
initial_weights = np.array([0.0,0.0,0.0])
step_size = 1e-12
max_iterations = 1000
```
First, let's consider no regularization. Set the `l2_penalty` to `0.0` and run your ridge regression algorithm to learn the weights of your model. Call your weights `multiple_weights_0_penalty`.
```
multiple_weights_0_penalty = ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, 0.0, max_iterations)
multiple_weights_0_penalty
```
Next, let's consider high regularization. Set the `l2_penalty` to `1e11` and run your ridge regression algorithm to learn the weights of your model. Call your weights `multiple_weights_high_penalty`.
```
multiple_weights_high_penalty = ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, 1e11, max_iterations)
multiple_weights_high_penalty
```
Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization

Which weights perform best?
```
((test_output - predict_output(test_feature_matrix, initial_weights))**2).sum()
((test_output - predict_output(test_feature_matrix, multiple_weights_0_penalty))**2).sum()
((test_output - predict_output(test_feature_matrix, multiple_weights_high_penalty))**2).sum()
```
Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house?
```
test_output[0]
mult_0_predictions_test = predict_output(test_feature_matrix, multiple_weights_0_penalty)
mult_0_predictions_test[0]
mult_high_predictions_test = predict_output(test_feature_matrix, multiple_weights_high_penalty)
mult_high_predictions_test[0]
```
***QUIZ QUESTIONS***
1. What is the value of the coefficient for `sqft_living` that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?
2. What are the RSS on the test data for each of the sets of weights above (initial, no regularization, high regularization)? 1784273282524564.0, 274067694347184.56, 500404796858030.0
3. We make predictions for the first house in the test set using two sets of weights (no regularization vs high regularization). Which weights make a better prediction <u>for that particular house</u>? no regularization
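As a sanity check on the derivative formula derived earlier, the analytic ridge derivative can be compared against a central finite-difference approximation of the cost function. This is a self-contained sketch: `feature_derivative_ridge` is re-defined here so the cell runs on its own, and the one-feature data is made up for illustration.

```python
import numpy as np

def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
    # Analytic derivative: 2 * dot(errors, feature), plus the
    # regularization term 2 * l2_penalty * weight for non-constant features
    if feature_is_constant:
        return 2 * np.dot(errors, feature)
    return 2 * np.dot(errors, feature) + 2 * l2_penalty * weight

# Tiny synthetic problem: one feature column, one weight, fixed l2_penalty
rng = np.random.default_rng(0)
feature = rng.normal(size=20)
output = 3.0 * feature + rng.normal(scale=0.1, size=20)
w, l2_penalty, eps = 1.5, 10.0, 1e-6

def ridge_cost(w):
    # RSS plus the L2 penalty on the (single, non-constant) weight
    errors = w * feature - output
    return np.dot(errors, errors) + l2_penalty * w ** 2

analytic = feature_derivative_ridge(w * feature - output, feature, w, l2_penalty, False)
numeric = (ridge_cost(w + eps) - ridge_cost(w - eps)) / (2 * eps)
print(analytic, numeric)  # the two values should agree closely
```

Because the cost is quadratic in the weight, the central difference is exact up to floating-point roundoff, so the two numbers match to many decimal places.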
## Testing different libraries for parallel processing in Python
```
import numpy as np

# Different ways to speed up your computations using multiple CPU cores

def slow_function(n=1000):
    # Dummy CPU-bound workload
    total = 0.0
    for i in range(n):
        for j in range(n - 1):
            total += (i * j)
    return total

data = range(100)
```
### Option 0: sequential loop
```
results = []
for _ in data:
    results.append(slow_function())
print(results[:10])
```
### Option 1: Multiprocessing
- Advantage: native python library
- Disadvantage: verbose
```
import multiprocessing as mp

pool = mp.Pool(mp.cpu_count())
results = [pool.apply_async(slow_function, args=()) for row in data]
pool.close()
pool.join()
results = [r.get() for r in results]
print(results[:10])
```
### Option 2: Ray
- Advantage: one of the least verbose libraries I'm aware of
- Disadvantage: NOT a native python library
- More:
    * Docs: https://docs.ray.io/en/latest/index.html
    * Github: https://github.com/ray-project/ray (14.4k stars)
    * Install it first: `pip install ray`.
    * Bunch of useful tips: https://docs.ray.io/en/latest/auto_examples/tips-for-first-time.html
```
import ray
ray.init()

@ray.remote
def parallel_slow_function(x=1000):
    return slow_function(x)

futures = [parallel_slow_function.remote() for _ in data]
print(ray.get(futures[:10]))
#ray.shutdown()
```
### Option 4: pandarallel
- Advantage: Do not need anything else if you are doing your work in pandas
- Disadvantage: only works with pandas
- More:
    * Docs:
    * Github: https://github.com/nalepae/pandarallel (1.3K stars)
    * Install it first: `pip install pandarallel`.
    * Bunch of useful tips: https://github.com/nalepae/pandarallel/blob/master/docs/examples.ipynb
```
import pandas as pd

s = pd.Series(data)
s.head()

# Usual way to apply a function with Pandas. Applying the `slow_function`.
# Got similar running time as the sequential version above.
s.apply(lambda x: slow_function())

from pandarallel import pandarallel
pandarallel.initialize(progress_bar=False)  # You can specify number of cores, memory, progress_bar

s.parallel_apply(lambda x: slow_function())
```
### Option 5: Dask
- Advantage: It is fast and provides parallel implementations for numpy/pandas/sklearn...
- Disadvantage: implementation is similar to native numpy/pandas/sklearn but not always the same
- More:
    * Docs: https://docs.dask.org/en/latest/
    * Github: https://github.com/dask/dask (7.7K stars)
    * Install it first: `pip install dask`.
    * Bunch of useful tips: https://mybinder.org/v2/gh/dask/dask-examples/master?urlpath=lab
```
import dask.dataframe as dd
import pandas as pd

s = pd.Series(data)
ds = dd.from_pandas(s, 12)
ds.apply(lambda x: slow_function(), meta=('float64')).head(10)
```
# Poisson Distribution
***
## Definition
>The Poisson distribution [...] [is a discrete probability distribution] that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant rate and independently of the time since the last event. [1]

## Formula
The probability mass function of a Poisson distributed random variable is defined as:

$$ f(x|\lambda) = \frac{\lambda^{x}e^{-\lambda}}{x!}$$

where $\lambda$ denotes the mean of the distribution.
```
# %load ../src/poisson/01_general.py
```
***
## Parameters
```
# %load ../src/poisson/02_lambda.py
```
***
## Implementation in Python
Multiple Python packages implement the Poisson distribution. One of those is the `stats.poisson` module from the `scipy` package. The following methods are only an excerpt. For a full list of features, the [official documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.poisson.html) should be read.

### Random Variates
In order to generate a random sample, the function `rvs` should be used.
```
import numpy as np
from scipy.stats import poisson

# draw a single sample
np.random.seed(42)
print(poisson.rvs(mu=10), end="\n\n")

# draw 10 samples
print(poisson.rvs(mu=10, size=10), end="\n\n")
```
### Probability Mass Function
The probability mass function can be accessed via the `pmf` function (mass instead of density, since the Poisson distribution is discrete).
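As a quick check of the formula above, the mass function can also be implemented directly with the standard library: summing the probabilities over a large range should give approximately 1, and the distribution's mean should come out as $\lambda$. A minimal sketch ($\lambda = 5$ is just an illustrative choice; the values agree with `scipy.stats.poisson.pmf`):

```python
import math

def poisson_pmf(x, lam):
    # Direct implementation of f(x | lambda) = lambda**x * exp(-lambda) / x!
    return lam ** x * math.exp(-lam) / math.factorial(x)

lam = 5.0
probs = [poisson_pmf(x, lam) for x in range(100)]

total = sum(probs)                               # ~1: valid probability distribution
mean = sum(x * p for x, p in enumerate(probs))   # ~lambda: the mean equals lambda
print(round(total, 6), round(mean, 6))  # 1.0 5.0
```

The tail beyond $x = 99$ is negligibly small for $\lambda = 5$, which is why truncating the sums still gives the exact values to many decimal places.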
Like the `rvs` method, the `pmf` allows for adjusting the mean of the random variable:
```
from scipy.stats import poisson

# additional imports for plotting purpose
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams["figure.figsize"] = (14,7)

# likelihood of x and y
x = 1
y = 7
print("pmf(X=1) = {}\npmf(X=7) = {}".format(poisson.pmf(k=x, mu=5), poisson.pmf(k=y, mu=5)))

# pmf over a range of values for the plot
x_s = np.arange(15)
y_s = poisson.pmf(k=x_s, mu=5)
plt.scatter(x_s, y_s, s=100);
```
### Cumulative Distribution Function
The cumulative distribution function is useful when a probability range has to be calculated. It can be accessed via the `cdf` function:
```
from scipy.stats import poisson

# probability of x less than or equal to 3
print("P(X <= 3) = {}".format(poisson.cdf(k=3, mu=5)))

# probability of x in (2, 8]
print("P(2 < X <= 8) = {}".format(poisson.cdf(k=8, mu=5) - poisson.cdf(k=2, mu=5)))
```
***
## Inferring $\lambda$
Given a sample of data points it is often required to estimate the "true" parameters of the distribution. In the case of the Poisson distribution this estimation is quite simple. $\lambda$ can be derived by calculating the mean of the sample.
```
# %load ../src/poisson/03_estimation.py
```
## Inferring $\lambda$ - MCMC
In addition to a "direct" inference, $\lambda$ can also be estimated using Markov chain Monte Carlo simulation - implemented in Python's [PyMC3](https://github.com/pymc-devs/pymc3).
```
# %load ../src/poisson/04_mcmc_estimation.py
```
***
[1] - [Wikipedia. Poisson Distribution](https://en.wikipedia.org/wiki/Poisson_distribution)
### Demonstration of triangle slicing Here are some Python based functions for slicing sets of triangles given in an STL file relative to different tool shapes. A "barmesh" is an efficient way of encoding a continuous mesh of triangles using forward-right and back-left pointers from each edge that makes the triangles trivial to infer. ``` _____NF /| ^ | / |BFR | /-> | | <-/ | BBL| / | | / |/___ NB ``` ``` import time time.process_time() # quick access to the library (I know this is not done properly) import sys sys.path.append("..") # load the triangles into the efficient encoding structure # (there is a numpy based version of this, which we should use in future) from tribarmes import TriangleBarMesh fname = "../stlsamples/frameguide.stl" tbm = TriangleBarMesh(fname) # Quick and dirty plot of this triangle mesh in 3D %matplotlib inline from basicgeo import P3 from mpl_toolkits import mplot3d from matplotlib import pyplot as plt fig = plt.figure() axes = mplot3d.Axes3D(fig) vs = tbm.GetBarMeshTriangles() cs = mplot3d.art3d.Poly3DCollection(vs) # need to shade the triangles according to normal vectors cm = plt.get_cmap('cool') def col(t): n = P3.ZNorm(P3.Cross(t[1]-t[0], t[2]-t[0])) if n[2] < 0: n = -n return cm(n[2]*0.8 + n[0]*0.6) cs.set_facecolor([col(t) for t in vs]) axes.auto_scale_xyz([t[0][0] for t in vs], [t[0][1] for t in vs], [t[0][2] for t in vs]) axes.add_collection3d(cs) plt.show() # This builds the initial 2D mesh which will be used for the basis of the # slicing of the STL shape above from basicgeo import P2, P3, Partition1, Along import barmesh rad = 2.5 rex = rad + 2.5 xpart = Partition1(tbm.xlo-rex, tbm.xhi+rex, 19) ypart = Partition1(tbm.ylo-rex, tbm.yhi+rex, 17) zlevel = Along(0.1, tbm.zlo, tbm.zhi) bm = barmesh.BarMesh() bm.BuildRectBarMesh(xpart, ypart, zlevel) # show the mesh as just a regular rectangular array of line segments from matplotlib.collections import LineCollection segments = [[(bar.nodeback.p[0], bar.nodeback.p[1]), 
(bar.nodefore.p[0], bar.nodefore.p[1])] for bar in bm.bars if not bar.bbardeleted ] lc = LineCollection(segments) plt.gca().add_collection(lc) rex2 = rex + 4 plt.xlim(tbm.xlo-rex2, tbm.xhi+rex2) plt.ylim(tbm.ylo-rex2, tbm.yhi+rex2) plt.show() import implicitareaballoffset iaoffset = implicitareaballoffset.ImplicitAreaBallOffset(tbm) # Here we actually make the slice of the triangle mesh by inserting mid-points # into the segments and adding more joining segments where needed to model the # contours to tolerance from barmeshslicer import BarMeshSlicer rd2 = max(xpart.vs[1]-xpart.vs[0], ypart.vs[1]-ypart.vs[0], rad*1.5) + 0.1 bms = BarMeshSlicer(bm, iaoffset, rd=rad, rd2=rd2, contourdotdiff=0.95, contourdelta=0.05, lamendgap=0.001) bms.fullmakeslice() # Plot the in and out parts of each segments in red and blue plt.figure(figsize=(11,11)) segmentswithin = [ ] segmentsbeyond = [ ] for bar in bm.bars: if not bar.bbardeleted: p0within, p1within = None, None p0beyond, p1beyond = None, None if bar.nodeback.pointzone.izone == barmesh.PZ_WITHIN_R and bar.nodefore.pointzone.izone == barmesh.PZ_WITHIN_R: p0within, p1within = bar.nodeback.p, bar.nodefore.p elif bar.nodeback.pointzone.izone == barmesh.PZ_BEYOND_R and bar.nodefore.pointzone.izone == barmesh.PZ_BEYOND_R: p0beyond, p1beyond = bar.nodeback.p, bar.nodefore.p elif bar.nodeback.pointzone.izone == barmesh.PZ_WITHIN_R and bar.nodefore.pointzone.izone == barmesh.PZ_BEYOND_R: p0within, p1within = bar.nodeback.p, bar.nodemid.p p0beyond, p1beyond = bar.nodemid.p, bar.nodefore.p elif bar.nodeback.pointzone.izone == barmesh.PZ_BEYOND_R and bar.nodefore.pointzone.izone == barmesh.PZ_WITHIN_R: p0beyond, p1beyond = bar.nodeback.p, bar.nodemid.p p0within, p1within = bar.nodemid.p, bar.nodefore.p if p0within: segmentswithin.append([(p0within[0], p0within[1]), (p1within[0], p1within[1])]) if p0beyond: segmentsbeyond.append([(p0beyond[0], p0beyond[1]), (p1beyond[0], p1beyond[1])]) lc = LineCollection(segmentswithin, color="red") 
plt.gca().add_collection(lc) lc = LineCollection(segmentsbeyond, color="blue") plt.gca().add_collection(lc) rex2 = rex + 4 plt.xlim(tbm.xlo-rex2, tbm.xhi+rex2) plt.ylim(tbm.ylo-rex2, tbm.yhi+rex2) plt.show() # Now extract the contours by following the round the cells keeping the WITHIN # and BEYOND sides on one side of a series of nodemids from mainfunctions import BarMeshContoursF, NestContours conts, topbars = BarMeshContoursF(bm, barmesh.PZ_BEYOND_R) contnest = NestContours(topbars, barmesh.PZ_BEYOND_R) mconts = dict((topbar.midcontournumber, cont) for cont, topbar in zip(conts, topbars)) cnswithin = [cn for cn, (izone, outxn, innlist) in contnest.items() if izone == barmesh.PZ_WITHIN_R] cnsbeyond = [cn for cn, (izone, outxn, innlist) in contnest.items() if izone == barmesh.PZ_BEYOND_R] plt.figure(figsize=(11,11)) lc = LineCollection([[(p[0], p[1]) for p in mconts[cn]] for cn in cnswithin], color="red") plt.gca().add_collection(lc) lc = LineCollection([[(p[0], p[1]) for p in mconts[cn]] for cn in cnsbeyond], color="blue") plt.gca().add_collection(lc) rex2 = rex + 4 plt.xlim(tbm.xlo-rex2, tbm.xhi+rex2) plt.ylim(tbm.ylo-rex2, tbm.yhi+rex2) plt.show() class F: def __init__(self, V): self.V = V class G(F): def __init__(self, q): super().__init__(88) self.q = q g = G(9) g.__dict__ ```
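The forward/back pointer encoding sketched in the diagram at the top of this notebook can be illustrated with a minimal stand-alone structure. The names here (`Node`, `Bar`, `barforeright`, `barbackleft`) are hypothetical simplifications for illustration, not the real `barmesh` API:

```python
# Each bar stores its two end nodes plus the next bar met when turning
# at each end; walking these pointers recovers the triangles implicitly.
class Node:
    def __init__(self, p):
        self.p = p  # 2D point (x, y)

class Bar:
    def __init__(self, nodeback, nodefore):
        self.nodeback = nodeback
        self.nodefore = nodefore
        self.barforeright = None  # next bar when turning at nodefore
        self.barbackleft = None   # next bar when turning at nodeback

# One triangle encoded as three bars chained by their forward-right pointers
a, b, c = Node((0, 0)), Node((1, 0)), Node((0, 1))
ab, bc, ca = Bar(a, b), Bar(b, c), Bar(c, a)
ab.barforeright, bc.barforeright, ca.barforeright = bc, ca, ab

# Following forward-right pointers from any bar yields the triangle's cycle
cycle = [ab.nodeback.p,
         ab.barforeright.nodeback.p,
         ab.barforeright.barforeright.nodeback.p]
print(cycle)  # [(0, 0), (1, 0), (0, 1)]
```

In the real structure the pointers link adjacent triangles as well, which is what lets the slicer walk across the whole mesh without an explicit triangle list.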
> **Tip**: Welcome to the Investigate a Dataset project! You will find tips in quoted sections like this to help organize your approach to your investigation. Before submitting your project, it will be a good idea to go back through your report and remove these sections to make the presentation of your work as tidy as possible. First things first, you might want to double-click this Markdown cell and change the title so that it reflects your dataset and investigation. # Project: Investigate a Dataset (Replace this with something more specific!) ## Table of Contents <ul> <li><a href="#intro">Introduction</a></li> <li><a href="#wrangling">Data Wrangling</a></li> <li><a href="#eda">Exploratory Data Analysis</a></li> <li><a href="#conclusions">Conclusions</a></li> </ul> <a id='intro'></a> ## Introduction > **Tip**: In this section of the report, provide a brief introduction to the dataset you've selected for analysis. At the end of this section, describe the questions that you plan on exploring over the course of the report. Try to build your report around the analysis of at least one dependent variable and three independent variables. > > If you haven't yet selected and downloaded your data, make sure you do that first before coming back here. If you're not sure what questions to ask right now, then make sure you familiarize yourself with the variables and the dataset context for ideas of what to explore. ``` # Use this cell to set up import statements for all of the packages that you # plan to use. # Remember to include a 'magic word' so that your visualizations are plotted # inline with the notebook. See this page for more: # http://ipython.readthedocs.io/en/stable/interactive/magics.html ``` <a id='wrangling'></a> ## Data Wrangling > **Tip**: In this section of the report, you will load in the data, check for cleanliness, and then trim and clean your dataset for analysis. Make sure that you document your steps carefully and justify your cleaning decisions. 
### General Properties ``` # Load your data and print out a few lines. Perform operations to inspect data # types and look for instances of missing or possibly errant data. ``` > **Tip**: You should _not_ perform too many operations in each cell. Create cells freely to explore your data. One option that you can take with this project is to do a lot of explorations in an initial notebook. These don't have to be organized, but make sure you use enough comments to understand the purpose of each code cell. Then, after you're done with your analysis, create a duplicate notebook where you will trim the excess and organize your steps so that you have a flowing, cohesive report. > **Tip**: Make sure that you keep your reader informed on the steps that you are taking in your investigation. Follow every code cell, or every set of related code cells, with a markdown cell to describe to the reader what was found in the preceding cell(s). Try to make it so that the reader can then understand what they will be seeing in the following cell(s). ### Data Cleaning (Replace this with more specific notes!) ``` # After discussing the structure of the data and any problems that need to be # cleaned, perform those cleaning steps in the second part of this section. ``` <a id='eda'></a> ## Exploratory Data Analysis > **Tip**: Now that you've trimmed and cleaned your data, you're ready to move on to exploration. Compute statistics and create visualizations with the goal of addressing the research questions that you posed in the Introduction section. It is recommended that you be systematic with your approach. Look at one variable at a time, and then follow it up by looking at relationships between variables. ### Research Question 1 (Replace this header name!) ``` # Use this, and more code cells, to explore your data. Don't forget to add # Markdown cells to document your observations and findings. ``` ### Research Question 2 (Replace this header name!) 
``` # Continue to explore the data to address your additional research # questions. Add more headers as needed if you have more questions to # investigate. ``` <a id='conclusions'></a> ## Conclusions > **Tip**: Finally, summarize your findings and the results that have been performed. Make sure that you are clear with regards to the limitations of your exploration. If you haven't done any statistical tests, do not imply any statistical conclusions. And make sure you avoid implying causation from correlation! > **Tip**: Once you are satisfied with your work, you should save a copy of the report in HTML or PDF form via the **File** > **Download as** submenu. Before exporting your report, check over it to make sure that the flow of the report is complete. You should probably remove all of the "Tip" quotes like this one so that the presentation is as tidy as possible. Congratulations!
# Data Preparation for 2D Medical Imaging ## Kidney Segmentation with PyTorch Lightning and OpenVINO™ - Part 1 This tutorial is part of a series on how to train, optimize, quantize and show live inference on a medical segmentation model. The goal is to accelerate inference on a kidney segmentation model. The [UNet](https://arxiv.org/abs/1505.04597) model is trained from scratch; the data is from [Kits19](https://github.com/neheller/kits19). The Kits19 Nifty images are 3D files. Kidney segmentation is a relatively simple problem for neural networks - it is expected that a 2D neural network should work quite well. 2D networks are smaller, and easier to work with than 3D networks, and image data is easier to work with than Nifty files. This first tutorial in the series shows how to: - Load Nifty images and get the data as array - Apply windowing to a CT scan to increase contrast - Convert Nifty data to 8-bit images > Note: This will not result in the best kidney segmentation model. Optimizing the kidney segmentation model is outside the scope of this tutorial. The goal is to have a small model that works reasonably well, as a starting point. All notebooks in this series: - Data Preparation for 2D Segmentation of 3D Medical Data (this notebook) - Train a 2D-UNet Medical Imaging Model with PyTorch Lightning (will be published soon) - [Convert and Quantize a UNet Model and Show Live Inference](../110-ct-segmentation-quantize/110-ct-segmentation-quantize.ipynb) - [Live Inference and Benchmark CT-scan data](../210-ct-scan-live-inference/210-ct-scan-live-inference.ipynb) ## Instructions To install the requirements for running this notebook, please follow the instructions in the README. Before running this notebook, you must download the Kits19 dataset, with code from https://github.com/neheller/kits19. **This code will take a long time to run. The downloaded data takes up around 21GB of space, and the converted images around 3.5GB**. 
Downloading the full dataset is only required if you want to train the model yourself. To show quantization on a downloadable subset of the dataset, see the [Convert and Quantize a UNet Model and Show Live Inference](../110-ct-segmentation-quantize/110-ct-segmentation-quantize.ipynb) tutorial.

To do this, first clone the repository and install the requirements. It is recommended to install the requirements in the `openvino_env` virtual environment. In short:
```
1. git clone https://github.com/neheller/kits19
2. cd kits19
3. pip install -r requirements.txt
4. python -m starter_code.get_imaging
```
If you installed the Kits19 requirements in the `openvino_env` environment, you will have installed [nibabel](https://nipy.org/nibabel/). If you get an `ImportError`, you can install nibabel in the current environment by uncommenting and running the first cell.

## Imports
```
# Uncomment this cell to install nibabel if it is not yet installed
# %pip install nibabel

import os
import time
from pathlib import Path
from typing import Optional, Tuple

import cv2
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
```
## Settings
Set `NIFTI_PATH` to the root directory of the Nifty files. This is the directory that contains subdirectories `case_00000` to `case_00299` containing _.nii.gz_ data. `FRAMES_DIR` should point to the directory where the frames will be saved.
```
# Adjust NIFTI_PATH to directory that contains case_00000 to case_00299 files with .nii.gz data
NIFTI_PATH = Path("~/kits19/data").expanduser()
FRAMES_DIR = "kits19_frames"
# This assert checks that the directory exists, but not that the data in it is correct
assert NIFTI_PATH.exists(), f"NIFTI_PATH {NIFTI_PATH} does not exist"
```
## Show One CT-scan
Let's load one CT-scan and visualize the scan and the label.
```
mask_path = NIFTI_PATH / "case_00002/segmentation.nii.gz"
image_path = mask_path.with_name("imaging.nii.gz")

nii_mask = nib.load(mask_path)
nii_image = nib.load(image_path)
mask_data = nii_mask.get_fdata()
image_data = nii_image.get_fdata()
print(image_data.shape)
```
A CT-scan is a 3D image. To visualize this in 2D, we can create slices, or frames. This can be done in three [anatomical planes](https://en.wikipedia.org/wiki/Anatomical_plane): from the front (coronal), from the side (sagittal), or from the top (axial).

Since a kidney is relatively small, most pixels do not contain kidney data. For an indication, let's check the fraction of pixels that contain kidney data, by dividing the number of non-zero pixels by the total number of pixels in the scan.
```
np.count_nonzero(mask_data) / np.size(mask_data)
```
This number shows that in this particular scan, less than one percent of all pixels in the scan belong to a kidney.
We find frames with pixels that are annotated as kidney, and show the kidney from all three sides.
```
z = np.argmax([np.count_nonzero(item) for item in mask_data])
x = np.argmax([np.count_nonzero(item) for item in np.transpose(mask_data, (1, 2, 0))])
y = np.argmax([np.count_nonzero(item) for item in np.transpose(mask_data, (2, 1, 0))])
print(z, x, y)

def show_slices(z: int, x: int, y: int):
    fig, ax = plt.subplots(nrows=2, ncols=3, figsize=(12, 6))
    ax[0, 0].imshow(image_data[z], cmap="gray")
    ax[1, 0].imshow(mask_data[z], cmap="gray", vmin=0, vmax=2)
    ax[0, 1].imshow(image_data[:, x, :], cmap="gray")
    ax[1, 1].imshow(mask_data[:, x, :], cmap="gray", vmin=0, vmax=2)
    ax[0, 2].imshow(image_data[:, :, y], cmap="gray")
    ax[1, 2].imshow(mask_data[:, :, y], cmap="gray", vmin=0, vmax=2);

show_slices(z, x, y)
```
The image above shows three slices, from three different perspectives, in different places in the body. The middle slice shows two colors, indicating that a kidney and a tumor were annotated in this slice.

## Apply Window-Level to Increase Contrast
CT-scan data can contain a large range of pixel values. This means that the contrast in the slices shown above is low. We show histograms to visualize the distribution of the pixel values. We then apply a soft tissue window level to increase the contrast for soft tissue in the visualization. See [Radiopaedia](https://radiopaedia.org/articles/windowing-ct) for information on windowing CT-scan data.
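Windowing amounts to clipping the intensity values to a range: the two comparisons used in the next cell are equivalent to a single `np.clip`. A small self-contained sketch with made-up intensity values:

```python
import numpy as np

# Made-up intensity values spanning a typical CT range
image_data = np.array([-1000.0, -200.0, 0.0, 150.0, 300.0, 1000.0])

# Soft tissue window: everything outside (-125, 225) is saturated to the limits
window_start, window_end = -125, 225
windowed = np.clip(image_data, window_start, window_end)
print(windowed)  # [-125. -125.    0.  150.  225.  225.]
```

Values inside the window are unchanged; values outside it collapse to the window limits, which is what concentrates the display contrast on the soft tissue range.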
```
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(15, 4))
axs[0].hist(image_data[z, ::])
axs[1].hist(image_data[:, x, :])
axs[2].hist(image_data[:, :, y]);

# (-125,225) is a suitable level for visualizing soft tissue
window_start = -125
window_end = 225
image_data[image_data < window_start] = window_start
image_data[image_data > window_end] = window_end
show_slices(z, x, y)
```
## Extract Slices from Nifty Data
The `save_kits19_frames` function takes the mask_path of one nii.gz segmentation mask as an argument, and converts the mask and corresponding image to a series of images that are saved as jpg (for images) and png (for masks).
```
def save_kits19_frames(
    mask_path: Path,
    root_dir: os.PathLike,
    window_level: Optional[Tuple] = None,
    make_binary: bool = True,
):
    """
    Save Kits19 CT-scans to image files, optionally applying a window level.
    Images and masks are saved in a subdirectory of root_dir: case_XXXXX.
    Images are saved in imaging_frames, masks in segmentation_frames, which are
    both subdirectories of the case directory. Frames are taken in the axial
    direction.

    :param mask_path: Path to segmentation.nii.gz file. The corresponding
                      imaging.nii.gz file should be in the same directory.
    :param root_dir: Root directory to save the generated image files. Will be
                     generated if it does not exist
    :param window_level: Window level to apply to the data before saving
    :param make_binary: If true, create a binary mask where all non-zero pixels
                        are considered to be "foreground" pixels and get pixel
                        value 1.
    """
    start_time = time.time()
    Path(root_dir).mkdir(exist_ok=True)
    image_path = mask_path.with_name("imaging.nii.gz")
    assert mask_path.exists(), f"mask_path {mask_path} does not exist!"
    assert image_path.exists(), f"image_path {image_path} does not exist!"
    nii_mask = nib.load(mask_path)
    nii_image = nib.load(image_path)
    mask_data = nii_mask.get_fdata()
    image_data = nii_image.get_fdata()
    assert mask_data.shape == image_data.shape, f"Mask and image shape of {mask_path} are not equal"

    if make_binary:
        mask_data[mask_data > 0] = 1

    if window_level is not None:
        window_start, window_end = window_level
        image_data[image_data < window_start] = window_start
        image_data[image_data > window_end] = window_end

    image_directory = Path(root_dir) / mask_path.parent.name / "imaging_frames"
    mask_directory = Path(root_dir) / mask_path.parent.name / "segmentation_frames"
    image_directory.parent.mkdir(exist_ok=True)
    image_directory.mkdir(exist_ok=True)
    mask_directory.mkdir(exist_ok=True)

    for i, (mask_frame, image_frame) in enumerate(zip(mask_data, image_data)):
        image_frame = (image_frame - image_frame.min()) / (image_frame.max() - image_frame.min())
        image_frame = image_frame * 255
        image_frame = image_frame.astype(np.uint8)
        new_image_path = str(image_directory / f"{mask_path.parent.name}_{i:04d}.jpg")
        new_mask_path = str(mask_directory / f"{mask_path.parent.name}_{i:04d}.png")
        cv2.imwrite(new_image_path, image_frame)
        # Cast the mask to uint8 as well, so the PNG contains integer labels
        cv2.imwrite(new_mask_path, mask_frame.astype(np.uint8))

    end_time = time.time()
    print(
        f"Saved {mask_path.parent.name} with {mask_data.shape[0]} frames "
        f"in {end_time-start_time:.2f} seconds"
    )
```

Running the next cell will convert all NIfTI files in NIFTI_PATH to images that are saved in FRAMES_DIR. A soft tissue window level of (-125, 225) is applied and the segmentation labels are converted to binary kidney segmentations. Running this cell will take quite a long time.
```
mask_paths = sorted(NIFTI_PATH.glob("case_*/segmentation.nii.gz"))

for mask_path in mask_paths:
    save_kits19_frames(
        mask_path=mask_path, root_dir=FRAMES_DIR, window_level=(-125, 225), make_binary=True
    )
```

## References

- [Kits19 Challenge Homepage](https://kits19.grand-challenge.org/)
- [Kits19 Github Repository](https://github.com/neheller/kits19)
- [The KiTS19 Challenge Data: 300 Kidney Tumor Cases with Clinical Context, CT Semantic Segmentations, and Surgical Outcomes](https://arxiv.org/abs/1904.00445)
- [The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 challenge](https://www.sciencedirect.com/science/article/pii/S1361841520301857)
# Inheritance with the Gaussian Class

To give another example of inheritance, take a look at the code in this Jupyter notebook. The Gaussian distribution code is refactored into a generic Distribution class and a Gaussian distribution class. Read through the code in this Jupyter notebook to see how the code works.

The Distribution class takes care of the initialization and the read_data_file method. Then the rest of the Gaussian code is in the Gaussian class. You'll later use this Distribution class in an exercise at the end of the lesson.

Run the code in each cell of this Jupyter notebook. This is a code demonstration, so you do not need to write any code.

```
class Distribution:

    def __init__(self, mu=0, sigma=1):
        """ Generic distribution class for calculating and
        visualizing a probability distribution.

        Attributes:
            mean (float) representing the mean value of the distribution
            stdev (float) representing the standard deviation of the distribution
            data_list (list of floats) a list of floats extracted from the data file
        """
        self.mean = mu
        self.stdev = sigma
        self.data = []

    def read_data_file(self, file_name):
        """Function to read in data from a txt file. The txt file should have
        one number (float) per line. The numbers are stored in the data attribute.

        Args:
            file_name (string): name of a file to read from

        Returns:
            None
        """
        # The with block closes the file automatically, so no explicit
        # file.close() is needed
        with open(file_name) as file:
            data_list = []
            line = file.readline()
            while line:
                data_list.append(float(line))
                line = file.readline()

        self.data = data_list

import math
import matplotlib.pyplot as plt

class Gaussian(Distribution):
    """ Gaussian distribution class for calculating and
    visualizing a Gaussian distribution.
    Attributes:
        mean (float) representing the mean value of the distribution
        stdev (float) representing the standard deviation of the distribution
        data_list (list of floats) a list of floats extracted from the data file
    """

    def __init__(self, mu=0, sigma=1):
        Distribution.__init__(self, mu, sigma)

    def calculate_mean(self):
        """Function to calculate the mean of the data set.

        Args:
            None

        Returns:
            float: mean of the data set
        """
        avg = 1.0 * sum(self.data) / len(self.data)
        self.mean = avg
        return self.mean

    def calculate_stdev(self, sample=True):
        """Function to calculate the standard deviation of the data set.

        Args:
            sample (bool): whether the data represents a sample or population

        Returns:
            float: standard deviation of the data set
        """
        if sample:
            n = len(self.data) - 1
        else:
            n = len(self.data)

        mean = self.calculate_mean()

        sigma = 0
        for d in self.data:
            sigma += (d - mean) ** 2

        sigma = math.sqrt(sigma / n)
        self.stdev = sigma
        return self.stdev

    def plot_histogram(self):
        """Function to output a histogram of the instance variable data using
        matplotlib pyplot library.

        Args:
            None

        Returns:
            None
        """
        plt.hist(self.data)
        plt.title('Histogram of Data')
        plt.xlabel('data')
        plt.ylabel('count')

    def pdf(self, x):
        """Probability density function calculator for the gaussian distribution.
        Args:
            x (float): point for calculating the probability density function

        Returns:
            float: probability density function output
        """
        return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)

    def plot_histogram_pdf(self, n_spaces = 50):
        """Function to plot the normalized histogram of the data and a plot of the
        probability density function along the same range

        Args:
            n_spaces (int): number of data points

        Returns:
            list: x values for the pdf plot
            list: y values for the pdf plot
        """
        mu = self.mean
        sigma = self.stdev

        min_range = min(self.data)
        max_range = max(self.data)

        # calculates the interval between x values
        interval = 1.0 * (max_range - min_range) / n_spaces

        x = []
        y = []

        # calculate the x values to visualize
        for i in range(n_spaces):
            tmp = min_range + interval*i
            x.append(tmp)
            y.append(self.pdf(tmp))

        # make the plots
        fig, axes = plt.subplots(2, sharex=True)
        fig.subplots_adjust(hspace=.5)
        axes[0].hist(self.data, density=True)
        axes[0].set_title('Normed Histogram of Data')
        axes[0].set_ylabel('Density')

        axes[1].plot(x, y)
        axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')
        plt.show()

        return x, y

    def __add__(self, other):
        """Function to add together two Gaussian distributions

        Args:
            other (Gaussian): Gaussian instance

        Returns:
            Gaussian: Gaussian distribution
        """
        result = Gaussian()
        result.mean = self.mean + other.mean
        result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
        return result

    def __repr__(self):
        """Function to output the characteristics of the Gaussian instance

        Args:
            None

        Returns:
            string: characteristics of the Gaussian
        """
        return "mean {}, standard deviation {}".format(self.mean, self.stdev)

# initialize two gaussian distributions
gaussian_one = Gaussian(25, 3)
gaussian_two = Gaussian(30, 2)

# initialize a third gaussian distribution reading in a data file
gaussian_three = Gaussian()
gaussian_three.read_data_file('numbers.txt')
gaussian_three.calculate_mean()
gaussian_three.calculate_stdev()

# print out the mean and standard deviations
print(gaussian_one.mean)
print(gaussian_two.mean)

print(gaussian_one.stdev)
print(gaussian_two.stdev)

print(gaussian_three.mean)
print(gaussian_three.stdev)

# plot histogram of gaussian three
gaussian_three.plot_histogram_pdf()

# add gaussian_one and gaussian_two together
gaussian_one + gaussian_two
```
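The formulas in `__add__` can be sanity-checked by hand: for the sum of two independent Gaussians, the means add and the variances (not the standard deviations) add. A minimal standalone check of those two facts, independent of the classes above:

```python
import math

# Sum of independent Gaussians: means add, variances add
mean_one, stdev_one = 25, 3
mean_two, stdev_two = 30, 2

sum_mean = mean_one + mean_two
sum_stdev = math.sqrt(stdev_one ** 2 + stdev_two ** 2)

print(sum_mean)             # 55
print(round(sum_stdev, 2))  # 3.61  (sqrt(9 + 4) = sqrt(13))
```

This matches what `gaussian_one + gaussian_two` computes via `result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)`.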
# Examples depicting the visualization and analysis of Retinal neuron data from mouse brain

## This notebook can be viewed with 3-d plots at [nbviewer](https://nbviewer.jupyter.org/github/natverse/nat.examples/blob/master/notebooks/helmstaedter2013_Mouse_RetinalConnectome.ipynb).

```
library('curl')
library('R.matlab')
library('gdata')
```

## Step 1: Download data

```
# Set up URLs for data download..
download_urls <- c('http://neuro.rzg.mpg.de/download/Helmstaedter_et_al_Nature_2013_skeletons_contacts_matrices.mat',
                   'https://media.nature.com/original/nature-assets/nature/journal/v500/n7461/extref/nature12346-s3.zip')

# Set up folder name inside 'natverse_examples'..
dataset_name <- 'natverse_mouseconnectome'
download_filename <- basename(download_urls)

if (is.null(getOption(dataset_name))){
  message("Setting options")
  options(natverse_examples=rappdirs::user_data_dir('R/natverse_examples'))
}
dataset_path <- file.path(getOption('natverse_examples'),dataset_name)
message("Dataset path:", dataset_path)

if (!dir.exists(dataset_path)){
  message("Creating folder: ", basename(dataset_path))
  dir.create(dataset_path, recursive = TRUE, showWarnings = FALSE)
}

downloaded_dataset <- ''
if(length(download_urls)) message("Downloading files to: ", dataset_path)
for (download_fileidx in 1:length(download_urls)){
  localfile <- file.path(dataset_path, download_filename[download_fileidx])
  downloaded_dataset[download_fileidx] <- localfile
  if(file.exists(localfile)) next
  message("Processing URL: ", download_urls[download_fileidx],
          " (file ", download_fileidx, "/", length(download_urls), ")")
  curl::curl_download(download_urls[download_fileidx], localfile, quiet=FALSE)
}
message("Downloads done")
message(paste(c("Dataset downloaded at: ", downloaded_dataset), collapse="\n"))
```

## Step 2: Save intermediate `r` objects from `mat` format

```
raw_data <- R.matlab::readMat(downloaded_dataset[1])
names(raw_data)
```

### Save metadata (like cellIDs, globalTypeIDs) from skeleton

```
skeleton_metadata <- raw_data[2:8]
saveRDS(skeleton_metadata, file=file.path(dataset_path,'skeleton_metadata.rds'))

skall <- raw_data$kn.e2006.ALLSKELETONS.FINAL2012
rownames(skall[[1]][[1]])

#' Parse a single skeleton object cleaning up the rather messy structure that comes out of readMat
parse.moritz.skel<-function(x, simple_fields_only=TRUE){
  # delist
  x=x[[1]]
  stopifnot(inherits(x,'array'))
  vars=rownames(x)
  stopifnot(all(c('nodes','edges') %in% vars))
  #simplevars=intersect(c('nodes','edges', 'edgeSel'), vars)
  simplevars=intersect(c('nodes','edges'), vars)
  r=sapply(simplevars, function(v) x[v,,][[1]], simplify=FALSE)
  othervars=setdiff(vars, simplevars)
  process_var<-function(y){
    if(inherits(y,'list')) y=y[[1]]
    if(inherits(y,'array')) apply(y, 1, unlist)
    else {
      if(is.numeric(y)) drop(y) else y
    }
  }
  if(simple_fields_only) structure(r, class=c('skel','list'))
  else {
    r2=sapply(othervars, function(v) process_var(x[v,,]), simplify = FALSE)
    structure(c(r, r2), class=c('skel','list'))
  }
}

# Convert all the neurons to intermediate skeleton format (here just a neuronlist in 'nat')
skallp=nat::nlapply(skall, parse.moritz.skel, .progress='text')

# give the neurons names
names(skallp)=sprintf("sk%04d", seq_along(skallp))
skallp_unique <- skallp
attr(skallp_unique,'df')=data.frame(
  row.names=names(skallp_unique),
  skid=seq.int(skallp_unique),
  cellid=skeleton_metadata$kn.e2006.ALLSKELETONS.FINAL2012.cellIDs[1,],
  typeid=skeleton_metadata$kn.e2006.ALLSKELETONS.FINAL2012.globalTypeIDs.REDOMAR2013[1,],
  cellid.soma=skeleton_metadata$kn.e2006.ALLSKELETONS.FINAL2012.cellIDs.pure.forSomata[1,]
)
head(attr(skallp_unique,'df'))
attr(skallp_unique,'df')$nedges=sapply(skallp_unique,function(x) nrow(x$edges))
message("Number of unique cells(neurons): ", length(unique(attr(skallp_unique,'df')$cellid)))
message("Number of tracings: ", length(skallp_unique))
```

### As each neuron has been traced by multiple tracers (which results in multiple skids), choose only the skid with the maximum number of edges

```
sk.uniq_temp=skallp_unique[by(attr(skallp_unique,'df'),attr(skallp_unique,'df')$cellid,function(x) x$skid[which.max(x$nedges)])]
message("Number of tracings used: ", length(sk.uniq_temp))

message("Reading SI 4 spreadsheet sheet 3 ...")
utils::unzip(downloaded_dataset[2],exdir = file.path(dataset_path, sub('\\.zip$', '', basename(downloaded_dataset[2]))))
SI4.s3=gdata::read.xls(file.path(dataset_path, sub('\\.zip$', '', basename(downloaded_dataset[2])),'Helmstaedter_et_al_SUPPLinformation4.xlsx'), sheet=3)
message("Reading done")
head(SI4.s3)
```

### Save metadata (like sortedtypeIDs)

```
saveRDS(SI4.s3,file=file.path(dataset_path,"SI4.s3.rds"))

# assign sorted type id as used in the paper
attr(sk.uniq_temp,'df')$stypeid=0
attr(sk.uniq_temp,'df')$stypeid[match(SI4.s3$cell.ID.in.skeleton.db,attr(sk.uniq_temp,'df')$cellid)]=SI4.s3$sorted.type.ID..as.in.Type.Mx.
```

### Add metadata like cell type (ganglion, amacrine etc)

```
# Make classes for
# 1:12 ganglion
# 13:57 amacrine
# 58:71 bipolar
df=attr(sk.uniq_temp,'df')
df$class=''
df<-within(df,class[stypeid%in%1:12]<-'ganglion')
df<-within(df,class[stypeid%in%13:57]<-'amacrine')
# narrow: 13-24
# wide: 25, 28, 30-32, 37, 39, 41, 47, 53 and 57
# medium: types 26, 27, 29, 33-36, 38, 40, 42-46, 48-52 and 54-57
# typo for that last 57?
df<-within(df,class[stypeid%in%58:71]<-'bipolar')
# I assume that these must be Muller glial cells
df<-within(df,class[stypeid%in%77]<-'glial')
df$class=factor(df$class)

df$subclass=''
df$subclass[df$stypeid%in%13:24]='narrow field'
df$subclass[df$stypeid%in%c(25, 28, 30:32, 37, 39, 41, 47, 53, 57)]='wide field'
df$subclass[df$stypeid%in%c(26, 27, 29, 33:36, 38, 40, 42:46, 48:52, 54:57)]='medium field'
df$subclass=factor(df$subclass)

df$ntype=''
df$ntype[df$stypeid%in%c(33,51)]='starburst amacrine'
attr(sk.uniq_temp,'df')=df

saveRDS(sk.uniq_temp, file=file.path(dataset_path,'sk.uniq.rds'))
file.path(dataset_path,'sk.uniq.rds')
```

## Step 3: Just compare the created `r` objects with previous ones (made by Greg), this section will be removed soon..

```
load('/Users/sri/Documents/R/dev/nat.examples/03-helmstaedter2013/sk.uniq.rda')
str(sk.uniq[[1]])
str(sk.uniq_temp[[1]])
head(attr(sk.uniq_temp,'df'))
head(attr(sk.uniq,'df'))
all.equal(attr(sk.uniq,'df'),attr(sk.uniq_temp,'df'))
all.equal(sk.uniq,sk.uniq_temp, check.attributes = F)
```

## Step 4: Visualization of neurons..

### Step 4a: Define functions for converting a neuron in Moritz's format to nat's internal neuron format..

```
#install.packages("/Users/sri/Documents/R/dev/nat", repos = NULL,
#                 type = "source", force=T)
library(nat)

#' Convert Helmstaedter's matlab skel format into nat::neuron objects
#'
#' @description skel objects are my direct R translation of the matlab data
#'   provided by Briggman, Helmstaedter and Denk.
#' @param x A skel format neuron to convert
#' @param ... arguments passed to as.neuron
#' @return An object of class \code{neuron}
#' @seealso \code{\link{as.neuron}}
as.neuron.skel<-function(x, ...) {
  as.neuron(as.ngraph(x), ...)
}

#' Convert Helmstaedter's matlab skel format into nat::ngraph objects
#'
#' @details \code{ngraph} objects are thin wrappers for \code{igraph::graph}
#'   objects.
#'
#'   This function always removes self edges (i.e.
#'   when a node is connected to itself), which seem to be used as a
#'   placeholder in the Helmstaedter dataset when there are no valid edges.
#'
#'   Isolated nodes are also removed by default (i.e. the nodes that are not
#'   connected by any edges.)
#' @param remove.isolated Whether or not to remove isolated nodes from the graph
#'   (see Details).
#' @inheritParams as.neuron.skel
#' @seealso \code{\link[nat]{ngraph}}, \code{\link{as.neuron.skel}}
as.ngraph.skel<-function(x, remove.isolated=TRUE, ...) {
  self_edges=x$edges[,1]==x$edges[,2]
  if(sum(self_edges)>0){
    if(sum(self_edges)==nrow(x$edges)){
      # there are only self edges - make a dummy empty edge matrix
      x$edges=matrix(nrow=0, ncol=2)
    } else {
      # assign the filtered edge matrix back (the original code dropped the
      # result of this subset)
      x$edges=x$edges[!self_edges, , drop=FALSE]
    }
  }
  ng=ngraph(x$edges, vertexnames = seq_len(nrow(x$nodes)), xyz=x$nodes, ...)
  if(remove.isolated){
    isolated_vertices=igraph::V(ng)[igraph::degree(ng)==0]
    g=igraph::delete.vertices(graph=ng,isolated_vertices)
    ng=as.ngraph(g)
  }
  # return the graph explicitly, so the function also works when
  # remove.isolated is FALSE
  ng
}

options(nat.progress="none")
options(warn=-1)
skn=nlapply(sk.uniq_temp, as.neuron, OmitFailures = T, Verbose = FALSE)
options(warn=0)
class(skn[[1]])
```

### Step 4b: Plot using plotly backend and display the widget..
```
#options(nat.plotengine = 'plotly')
#getOption('nat.plotengine')
library(htmlwidgets)
library(plotly)
library(IRdisplay)

clearplotlyscene()
tempval <- plot3d(skn[1:5], plotengine ='plotly', soma = TRUE)
#tempval$plotlyscenehandle
htmlwidgets::saveWidget(plotly::as_widget(tempval$plotlyscenehandle), 'helmstaedter2013_01.html')
IRdisplay::display_html(paste0('<iframe src="','helmstaedter2013_01.html" width=100%, height=500> frameborder="0" </iframe>'))

clearplotlyscene()
tempval <- plot3d(skn,ntype=='starburst amacrine',col=stypeid, plotengine = 'plotly', soma = TRUE, alpha = 0.5)
#tempval$plotlyscenehandle
htmlwidgets::saveWidget(plotly::as_widget(tempval$plotlyscenehandle), 'helmstaedter2013_02.html')
IRdisplay::display_html(paste0('<iframe src="','helmstaedter2013_02.html" width=100%, height=500> frameborder="0" </iframe>'))
```

## Step 5: Session summary..

```
sessionInfo()
```
```
import heapq
import random
from PIL import Image
import numpy
import nltk
from IPython.display import display, Image as Img
```

# Minimum Spanning Trees

This tutorial will teach you the basics of minimum spanning trees, algorithms for constructing them, and some applications of minimum spanning trees.

# Task Zero: Graph Representations

For this part, we need to decide on a graph representation that we will use for the rest of the problems in this task. For implementing minimum spanning trees, it is useful to represent graphs as a list of vertices (from 0 to n-1) and a list of edges in the form (u, v, c). As you will soon see, most of our algorithms involve sorting edges and finding connected components, which we can do quickly with these representations.

There are many other representations that are better suited to different algorithms. For example, if we were implementing a search algorithm, it would be very convenient to represent graphs as a dictionary of outgoing edges, since we only care about the local neighborhood of any given vertex, rather than the position of every edge relative to the other edges. In particular, notice that this edge-list representation is really uncomfortable for running Prim's algorithm, so our Prim's algorithm does not run in the advertised time.

Note: we will only use undirected graphs in this tutorial, but we can represent them as directed graphs by duplicating every edge and reversing the direction (which we will often do anyways).

# Task One: The Basics

We will first cover two sequential algorithms for finding minimum spanning trees. You can verify that each takes O(m log m) work and O(m) span, where m is the number of edges.

1) Prim's Algorithm: First discovered in 1930 by Vojtech Jarnik, the algorithm goes as follows:

a) Pick one vertex to visit first; in our case, we'll just pick vertex 0.
b) Find the lightest outgoing edge from the set of visited vertices. If it leads to an unvisited vertex, add the edge to the MST edges and add the vertex to the visited vertices; otherwise pick the next lightest edge until a suitable one is found.

c) Repeat step b) until all vertices are visited, or we have completed a component. We can repeat this for every component, but for now we'll just assume that our graph is connected.

2) Kruskal's Algorithm: First appearing in 1956, written by Joseph Kruskal, the algorithm goes as follows:

a) Create a list of all edges in the graph, and initialize a forest of single-vertex trees.

b) Add the minimum weight edge to the MST edges if and only if it doesn't form a cycle. The edge must connect two trees, so combine/contract the two trees into one. (*)

c) Repeat until there is only one connected component, or no edges remain.

Aside: A good way to perform contraction is to have representatives for each vertex. When we want to contract an edge, we pick one of the vertices to be the representative of the other, and to find the representative of a component we follow the chain up until we reach a vertex that is its own representative. This takes O(n) worst case to find the component a vertex is in.
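The representative idea in the aside is usually packaged as a disjoint-set (union-find) structure. Adding path compression, which flattens each chain of representatives as it is traversed, brings lookups down from the O(n) worst case to nearly constant amortized time. A minimal sketch (the `find`/`union` names here are just illustrative, not the functions used later in this tutorial):

```python
def find(parent, x):
    # Follow the chain of representatives; flatten it on the way back
    # (path compression) so later lookups are nearly constant time
    if parent[x] != x:
        parent[x] = find(parent, parent[x])
    return parent[x]

def union(parent, x, y):
    # Merge the components containing x and y; return False if they are
    # already in the same component (adding the edge would form a cycle)
    rx, ry = find(parent, x), find(parent, y)
    if rx == ry:
        return False
    parent[ry] = rx
    return True

parent = list(range(5))
print(union(parent, 0, 1))  # True: merges two components
print(union(parent, 1, 0))  # False: 0 and 1 are already connected
print(find(parent, 1))      # 0
```

This is exactly the cycle test Kruskal's algorithm needs in step b): an edge is added to the MST precisely when `union` returns True.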
```
def outgoing(vertex, edges):
    # All edges leaving `vertex`, as (cost, (u, v)) pairs so that a heap
    # orders them by cost
    return [(c, (u, v)) for (u, v, c) in edges if u == vertex]

def find_set(components, x):
    # Follow representatives until a vertex is its own representative
    if components[x] == x:
        return x
    return find_set(components, components[x])

def prims(graph):
    vertices, all_edges = graph
    visited = [vertices[0]]
    mst_edges = []
    heap = outgoing(vertices[0], all_edges)
    heapq.heapify(heap)
    while len(visited) < len(vertices) and heap:
        (c, (x, y)) = heapq.heappop(heap)
        if y in visited:
            continue
        mst_edges += [(x, y, c)]
        visited += [y]
        for (cost, (a, b)) in outgoing(y, all_edges):
            heapq.heappush(heap, (cost, (a, b)))
    return mst_edges

def kruskals(graph):
    vertices, all_edges = graph
    edges = sorted(all_edges, key=lambda e: e[2], reverse=True)
    components = sorted(vertices)
    numcomponents = len(vertices)
    mstedges = []
    while numcomponents != 1 and edges != []:
        (x, y, c) = edges.pop()
        # Union by root representatives, not raw vertices, so that
        # previously merged components stay consistent
        root_x = find_set(components, x)
        root_y = find_set(components, y)
        if root_x == root_y:
            continue
        components[root_y] = root_x
        mstedges += [(x, y, c)]
        numcomponents = numcomponents - 1
    return mstedges
```

# Task Two: Parallelism to the Rescue

The previous algorithms were nice, but seemed awkward to implement with our graph representation. Now we will see how we can parallelize the process.

Definition: We define cut_G(S) = {(u, v) in E | u in S and v in V\S}

Lemma (Light edge property): Assume that G has unique edge weights. For every S subset of V, if we define x to be the minimum weight edge in cut_G(S), then x is in the MST edges.

Proof: Let T be the MST of G, fix a cut S, and let x be the minimum edge in the cut. If x is in T, then we are done. Otherwise assume for contradiction that x is not in T. Since T is an MST, S and V\S must be in the same connected component with respect to T, so there must be another edge, y, in the cut that is in T. Since our edge weights are unique, the weight of x must be strictly less than the weight of y.
Consider the tree formed by removing y and connecting S and V\S with x instead; this tree must be of less weight than T, which contradicts the fact that T was the MST. Therefore x must be in the MST for any set S.

Equipped with this knowledge, we realize that by defining S = {v}, we can find an edge in the MST for every vertex in the graph, and this motivates some parallel algorithms...

# Boruvka's Algorithm

Published in 1926 by Otakar Boruvka, the algorithm goes as follows:

1) Initialize a forest where each tree is a single vertex.

2) While the forest has more than one connected component:

a) For each component, add the cheapest outgoing edge to the potential MST edges. Since the cut between {v} and V\{v} is valid, the lightest of the edges spanning this cut (i.e. the outgoing edges of v) must be in the MST.

b) Contract along the potential MST edges, and add those edges to the MST.

3) Repeat step 2 (called the Boruvka step) until there is only one connected component, which means we have found the minimum spanning tree.

# Contraction

Part of Boruvka's algorithm is contracting the edges to make a new edge set. Before, when we contracted only one edge at a time, this was easy, but now we may need to contract multiple edges into a single vertex. There are many ways to do this, but in this example we will use star contraction:

1) Label vertices as stars or satellites by flipping a coin n times.

2) For every edge (u, v), if u is a satellite and v is a star, then contract u into v. We represent the contraction by setting the representative of u to v, and replacing u by v in all of the edges (and then removing all edges going from v to v). Our contraction function just returns the representatives; BoruvkaStep takes care of pruning the edges.

We do this at every round of Boruvka's algorithm, and you can verify that we get O(n) candidate edges, at least 1/4 of which will be contracted in expectation, which means we expect O(log n) many rounds to occur, each taking O(m) time.
So the work of Boruvka's algorithm is O(m log n).

```
# Requires: E is the edge set, reverse sorted by weights (so low weights appear at the end of the list)
# Returns: minimum edge coming out of each vertex in range(n)
def minEdges(E, n):
    minE = [(-1, -1, (-1, -1, -1)) for i in range(n)]
    # Because low weights come last, each vertex ends up with its minimum edge
    for (u, v, w, l) in E:
        minE[u] = (v, w, l)
    return minE

# To do star contraction, we first decide which vertices are stars and which are satellites.
# If v is a star, then we contract u into v (and add the edge to the mst), otherwise we leave u alone.
def findSatellites(pair, flips):
    (u, (v, w, l)) = pair
    if v == -1:
        return (u, -1, -1, (-1, -1, -1))
    if flips[u] == 0 and flips[v] == 1:
        return (u, v, w, l)
    return (u, -1, -1, (-1, -1, -1))

# First we flip coins, and then get the minimum outgoing edge of each of the vertices.
# Then we run star contraction on it to get the components, as well as mst edges.
def starContract(E, n):
    flips = [random.randint(0, 1) for i in range(n)]
    minE = minEdges(E, n)
    contracted = [findSatellites(pair, flips) for pair in enumerate(minE)]
    return contracted

def BoruvkaStep(labeled, T, n):
    # Get the components of the new graph
    contract = starContract(labeled, n)
    # Find the representatives (if we contracted the edge, then v will not be -1,
    # so u will be contracted into v, otherwise u stays as u)
    reps = [u if v == -1 else v for (u, v, w, l) in contract]
    # Now we remove the edges that were invalid to get the mst edges
    contract = [(u, v, w, l) for (u, v, w, l) in contract if v != -1]
    # l represents the original edge, so we add it to the mst
    T = T + [l for (u, v, w, l) in contract]
    # Now we apply the contraction and filter out any edges that were destroyed as a result
    labeled = [(reps[u], reps[v], w, l) for (u, v, w, l) in labeled if reps[u] != reps[v]]
    return labeled, T

def Boruvka(graph):
    n = len(graph[0])
    sort = sorted(graph[1], key=lambda e: e[2], reverse=True)
    # We want to contract, so we need to carry the original edge with us
    labeled = [(u, v, w, (u, v, w)) for (u, v, w) in sort]
    # Initialize a new MST
    T = []
    while len(labeled) != 0:
        labeled, T = BoruvkaStep(labeled, T, n)
    return T

## Here is a small example to make sure that the code works.
V = list(range(4))
E = [(0, 1, 4), (1, 0, 4), (1, 2, 3), (2, 1, 3), (2, 3, 4), (3, 2, 4), (0, 2, 7), (2, 0, 7)]
print(Boruvka((V, E)))
print(prims((V, E)))
print(kruskals((V, E)))
# We can verify that all 3 algorithms agree on the MST for this small case,
# which means we didn't do anything too wrong.
```

# Task Three: Randomize

Now that we have the Boruvka step, we can actually one-up Boruvka's algorithm by doing work in between rounds of Boruvka steps. In particular, we define F-heavy edges in the following manner:

Let F be a minimum spanning forest of a graph G. Then an edge (u, v, c) in E is an F-heavy edge if c is greater than the weight of the heaviest edge on the path connecting u and v in F. (If no path connects u and v in F, then (u, v, c) is not F-heavy.)

We claim that for any minimum spanning forest F, none of the F-heavy edges of G are in the MST, and we can compute them in linear time (with respect to the number of edges in the graph). One way we can do this is by running Kruskal's algorithm, traversing the edges in order of weight. When we run Kruskal's and encounter an edge that would make a cycle, we can remove it, since it is larger than the heaviest edge currently in the mst connecting its endpoints.

So, here's the algorithm:

1) Run some fixed number of Boruvka steps to get some preliminary mst edges.

2) After that, create a new graph by including each edge of G with one half probability, which we will call F.
3) Find the F-heavy edges of G by running the modified Kruskal's algorithm.

4) Remove the F-heavy edges we found, and do it again until we are done finding the MST.

If it is done right, this will solve the MST problem in expected time O(m), and worst case time O(m log n), the same bound as Boruvka's algorithm.

```
# As we describe above, what findheavy needs to do is run Kruskal's algorithm
# and collect any cycle edges, as they can't be F-light.
def findheavy(vertices, edges):
    edges = sorted(edges, key=lambda e: e[2], reverse=True)
    components = sorted(vertices)
    numcomponents = len(vertices)
    mstedges = []
    fheavy = []
    while numcomponents != 1 and edges != []:
        (x, y, c, l) = edges.pop()
        root_x = find_set(components, x)
        root_y = find_set(components, y)
        if root_x == root_y:
            fheavy += [(x, y, c, l)]
            continue
        components[root_y] = root_x
        mstedges += [(x, y)]
        numcomponents = numcomponents - 1
    return fheavy, mstedges

def Tarjan(graph):
    n = len(graph[0])
    sort = sorted(graph[1], key=lambda e: e[2], reverse=True)
    # We want to contract, so we need to carry the original edge with us
    labeled = [(u, v, w, (u, v, w)) for (u, v, w) in sort]
    # Initialize a new MST
    T = []
    while len(labeled) != 0:
        labeled, T = BoruvkaStep(labeled, T, n)
        labeled, T = BoruvkaStep(labeled, T, n)
        # Sample each remaining edge with probability one half to build F
        flips = [random.randint(0, 1) for i in range(len(labeled))]
        # There's some false advertising here, as this could be done much faster
        # by removing edges from E as we find them. We also note that the
        # theoretical bound of O(m) is very difficult to achieve without using
        # Fibonacci heaps or Brodal queues, neither of which are fun to
        # implement or quick.
        newE = [e for i, e in enumerate(labeled) if flips[i] == 1]
        fheavy, msf = findheavy(list(range(n)), newE)
        labeled = [e for e in labeled if e not in fheavy]
    return T
```

# Task Four: Applications

Now we will look at some uses of minimum spanning trees. In particular we will look at creating a dependency tree from a sentence, and performing clustering with a modified Boruvka's algorithm.
# Dependency Parsers

Following the outline set in http://www.seas.upenn.edu/~strctlrn/bib/PDF/nonprojectiveHLT-EMNLP2005.pdf, we can use maximum spanning trees (which we can get by negating a graph and computing its minimum spanning tree) to determine dependencies among words in a sentence (assuming we can extract features from the sentence).

We won't go into detail about what kinds of features are best to use, but we will use some cursory features to get a feel for how the algorithm works. Features for this parser can get very complex (http://ufal.mff.cuni.cz/~zabokrtsky/publications/papers/featureengin07.pdf), and often require machine learning to determine weighting, but we will use the following as features with some made up weightings:

1) The direction of the dependency (-1 or 1 depending on which of the two words appears first in the sentence).

2) The POS tags of the dependency.

3) The distance between the two words.

The basic idea of the algorithm is that if we have a metric between words that represents how dependent the words are, we can find a maximum spanning tree of the dense graph, and this will find us the subtree that has the highest dependency. Of course, finding the best way to determine dependency is really difficult. There isn't enough room in this tutorial to explain how to perform the machine learning that determines the proper correlation weighting for two words in a sentence, but I highly recommend reading the papers above to learn more about how it was really done.
```
# These are fake functions to give you a sense of what this would do
def tagscore(tag1, tag2):
    if tag1 == "NN" and tag2 == "NN":
        return 1
    else:
        return 2

def f(distance, direction, POSs):
    return distance + direction + POSs

def extractGraph(sentence):
    edges = []
    for i in range(len(sentence)):
        for j in range(len(sentence)):
            if i == j:
                continue
            distance = abs(i - j)
            direction = 1 if i < j else -1
            POSs = tagscore(sentence[i][1], sentence[j][1])
            # Negate the score, so a minimum spanning tree of the negated
            # graph is a maximum spanning tree of the original
            edges += [(i, j, -f(distance, direction, POSs))]
    return edges

# This will work, but not as well as we expect it to do. To really do it, read the papers listed above.
def parsedep(sentence):
    words = sentence.split(" ")
    G = (list(range(len(words))), extractGraph(nltk.pos_tag(words)))
    edges = Boruvka(G)
    return [(words[x], words[y]) for (x, y, c) in edges]

# Here is a sentence that we can parse, and below it is the answer as provided by the Stanford parser
parsedep("Bills on ports and immigration were submitted by Republican Senator Brownback of Kansas")
Img(filename='deptree.png')
```

# Cluster Detection and Image Segmentation

Now we will modify our implementation of Boruvka's algorithm to make it find a minimum spanning forest, removing key edges from the original graph if they are too "difficult" to contract. In particular, we will make it cost "currency" to contract edges: when two vertices are merged along an edge, we subtract the weight of the edge from the minimum of the credits of the two contracted vertices. At the beginning of every round we filter out any edges that cannot be contracted because of our cost requirement.

We continue this until there are no contractable edges or we are finished contracting, and unlike the previous MST algorithms, here we are interested in the connected components of the MSF rather than the edges, as these represent the clusters in the data.
We notice that rather than making a minimum spanning tree, this will create a forest, and each tree in the forest is determined by some notion of proximity between the vertices in its component. For a picture, we determine proximity by the absolute difference between the colors of two pixels, so that similar colors get merged more often than different colors. For clustering, we could define it to be any metric we want: the L2 norm, or Manhattan distance. The function below will look really similar to Boruvka's algorithm, with some new credit variables.

```
def Segment(graph, initialcredits):
    # The first part is copying over your code from Boruvka's
    n = len(graph[0])
    credits = [initialcredits for i in range(n)]
    sort = sorted(graph[1], lambda x, y : int(-x[2] + y[2]))
    # We are going to record the connected components of the graph the same way as before.
    colors = range(n)
    # We want to contract, so we need to carry the original edge with us
    labeled = map(lambda (u, v, w) : (u, v, w, (u, v, w)), sort)
    # Initialize a new MST
    T = []
    # We will repeat the body of the loop until there are no more edges to run on
    while(len(labeled) != 0):
        # First filter out the edges that can't be contracted along
        labeled = filter(lambda (u, v, w, l) : min(credits[u], credits[v]) > w, labeled)
        # Get the components of the new graph
        contract = starContract(labeled, n)
        # Find the representatives (if we contracted the edge, then v will not be -1,
        # so u will be contracted into v; otherwise u stays as u)
        reps = map(lambda (u, v, w, l) : u if v == -1 else v, contract)
        for i in range(n):
            if(contract[i][1] != -1):
                colors[contract[i][0]] = contract[i][1]
        # Now we remove the edges that were invalid to get the MST edges
        contract = filter(lambda (u, v, w, l) : v != -1, contract)
        # l represents the original edge, so we add it to the MST
        T = T + [l for (u, v, w, l) in contract]
        # Now we apply the contraction and filter out any edges that were destroyed as a result.
        labeled = filter(lambda (u, v, w, l) : reps[u] != reps[v], labeled)
        labeled = map(lambda (u, v, w, l) : (reps[u], reps[v], w, l), labeled)
        ## I might consider making this a separate chunk, but the logic is very simple, and
        ## I don't see a reason to modularize this part.
        # Each representative collects the credits of the vertices contracted into it
        contractedcredit = map(lambda (u, v, w, l) : (v, credits[u]), contract)
        creditdict = dict()
        for (u, c) in contractedcredit:
            if(u in creditdict):
                creditdict[u] += [c]
            else:
                creditdict[u] = [c]
        # Keep the minimum credit, including the representative's own
        for i in creditdict:
            creditdict[i] = reduce(lambda x, y : min(x, y), creditdict[i] + [credits[i]])
        # Sum the weights of the edges contracted into each representative
        weights = map(lambda (u, v, w, l) : (v, w), contract)
        weightdict = dict()
        for (u, w) in weights:
            if(u in weightdict):
                weightdict[u] += [w]
            else:
                weightdict[u] = [w]
        for i in weightdict:
            weightdict[i] = reduce(lambda x, y : x + y, weightdict[i])
        # Charge each representative for the contractions it absorbed
        for i in weightdict:
            credits[i] = creditdict[i] - weightdict[i]
    return (T, colors)
```

The function below is just a helper that creates all of our edges from the picture array by taking every pair of adjacent pixels and weighting the edge by the magnitude of the distance vector between their colors. The pixels on the image border are the edge cases it has to handle.
```
def process(filename):
    img = numpy.asarray(Image.open(filename))
    edges = []
    vertices = []
    rows = len(img)
    cols = len(img[0])
    for i in range(rows):
        for j in range(cols):
            vertices += [i + j*rows]
            # Connect each pixel to its in-bounds vertical and horizontal
            # neighbours, weighting each edge by the color difference
            if(i > 0):
                edges += [(i + j*rows, i + j*rows - 1, abs(img[i][j] - img[i-1][j]))]
            if(i < rows - 1):
                edges += [(i + j*rows, i + j*rows + 1, abs(img[i][j] - img[i+1][j]))]
            if(j > 0):
                edges += [(i + j*rows, i + (j-1)*rows, abs(img[i][j] - img[i][j-1]))]
            if(j < cols - 1):
                edges += [(i + j*rows, i + (j+1)*rows, abs(img[i][j] - img[i][j+1]))]
    return (sorted(vertices), map(lambda (a, b, w) : (a, b, numpy.linalg.norm(w)), edges))
```

Below is a little example of how to run the program, in case you wanted to try it out. Basically we find the MSF components and turn each pixel's color into its representative's color. (This takes a few seconds to run.)

```
(V, E) = process("sunset.jpg")
MST, Components = Segment((V, E), 10)
newpic = numpy.zeros(numpy.asarray(Image.open("sunset.jpg")).shape, dtype = numpy.uint8)
oldpic = numpy.asarray(Image.open('sunset.jpg'))
(x, y, z) = newpic.shape
for i in V:
    component = find_set(Components, i)
    color = numpy.asarray(Image.open("sunset.jpg"))[component%x, (int(component/x))]
    newpic[i%x, (int(i/x))] = color
image = Image.fromarray(newpic, 'RGB')
image.save("sunset-10.png")
Img(filename = 'sunset-10.png')
```

Below are some more pretty pictures; observe what happens when we increase the initial credits given to each vertex.

```
Img(filename='dog.png')
Img(filename='dog-1000.png')
Img(filename='skittles.png')
Img(filename='skittles-100.png')
Img(filename='skittles-1000.png')
```
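The example above calls a `find_set` helper that is not shown in this excerpt. A minimal sketch of what it might look like, assuming `Components` is the parent/color array returned by `Segment` (this helper and its signature are my assumption, not the notebook's definition):

```python
# Hypothetical find_set: follow the parent pointers in the colors array
# until we reach a vertex that represents itself.
def find_set(colors, i):
    while colors[i] != i:
        i = colors[i]
    return i

colors = [0, 0, 1, 3]   # vertices 0, 1, 2 form one component; 3 is alone
print(find_set(colors, 2))  # → 0
```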
###### Importing libraries:
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
```
### Data Pre-processing Step:
###### Reading the Data:
```
dataset = pd.read_csv('Social_Network_Ads.csv')
```
###### Visualizing the Data:
```
dataset.shape
dataset.head()
```
###### Defining the Features and the Target:
```
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
```
##### Splitting the dataset into training and test data:
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
```
##### Feature Scaling:
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
# Scale the test set with the statistics fitted on the training set
# (transform, not fit_transform, to avoid leaking test-set statistics)
X_test = sc.transform(X_test)
```
# For linear kernel:
##### Fitting the classifier to the Training Set:
```
from sklearn.svm import SVC
classifier = SVC(kernel = 'linear', random_state=0)
classifier.fit(X_train, y_train)
```
##### Predicting the Test Set Results:
```
y_pred = classifier.predict(X_test)
```
##### Making the Confusion Matrix:
```
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
```
##### Model Score:
```
score = classifier.score(X_test, y_test)
print(score)
```
##### Visualizing the Training Set Results:
```
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('SVM (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
##### Visualizing the Test Set Results:
```
from matplotlib.colors import ListedColormap
# Plot the held-out test points (the original cell reused the training set here)
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
# For rbf kernel:
##### Fitting the classifier to the Training Set:
```
from sklearn.svm import SVC
classifier = SVC(kernel = 'rbf', random_state=0)
classifier.fit(X_train, y_train)
```
##### Predicting the Test Set Results:
```
y_pred = classifier.predict(X_test)
```
##### Making the Confusion Matrix:
```
from sklearn.metrics import confusion_matrix
cm1 = confusion_matrix(y_test, y_pred)
print(cm1)
```
##### Model Score:
```
score1 = classifier.score(X_test, y_test)
print(score1)
```
##### Visualizing the Training Set Results:
```
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('SVM (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
##### Visualizing the Test Set Results:
```
from matplotlib.colors import ListedColormap
# Plot the held-out test points (the original cell reused the training set here)
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
# For poly kernel:
##### Fitting the classifier to the Training Set:
```
from sklearn.svm import SVC
classifier = SVC(kernel = 'poly', random_state=0)
classifier.fit(X_train, y_train)
```
##### Predicting the Test Set Results:
```
y_pred = classifier.predict(X_test)
```
##### Making the Confusion Matrix:
```
from sklearn.metrics import confusion_matrix
cm2 = confusion_matrix(y_test, y_pred)
print(cm2)
```
##### Model Score:
```
score2 = classifier.score(X_test, y_test)
print(score2)
```
##### Visualizing the Training Set Results:
```
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('SVM (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
##### Visualizing the Test Set Results:
```
from matplotlib.colors import ListedColormap
# Plot the held-out test points (the original cell reused the training set here)
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
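Since the three kernel sections above repeat the same fit/predict/score pattern, the comparison can also be written as one loop. This is a sketch, not part of the original notebook: it uses a stand-in synthetic dataset so it is self-contained, whereas in the notebook you would reuse the scaled Age/Salary split.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data (assumption: in the notebook you would reuse X_train/X_test etc.)
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# One classifier per kernel, scored on the held-out test set
scores = {}
for kernel in ("linear", "rbf", "poly"):
    clf = SVC(kernel=kernel, random_state=0)
    clf.fit(X_train, y_train)
    scores[kernel] = clf.score(X_test, y_test)
print(scores)
```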
## Iterators and Generators In Python, anything which can be iterated over is called an iterable: ``` bowl = { "apple": 5, "banana": 3, "orange": 7 } for fruit in bowl: print(fruit.upper()) ``` Surprisingly often, we want to iterate over something that takes a moderately large amount of memory to store - for example, our map images in the green-graph example. Our green-graph example involved making an array of all the maps between London and Birmingham. This kept them all in memory *at the same time*: first we downloaded all the maps, then we counted the green pixels in each of them. This would NOT work if we used more points: eventually, we would run out of memory. We need to use a **generator** instead. This chapter will look at iterators and generators in more detail: how they work, when to use them, how to create our own. ### Iterators Consider the basic python `range` function: ``` range(10) total = 0 for x in range(int(1e6)): total += x total ``` In order to avoid allocating a million integers, `range` actually uses an **iterator**. We don't actually need a million integers *at once*, just each integer *in turn* up to a million. Because we can get an iterator from it, we say that a range is an **iterable**. So we can `for`-loop over it: ``` for i in range(3): print(i) ``` There are two important Python built-in functions for working with iterables. First is `iter`, which lets us create an iterator from any iterable object. ``` a = iter(range(3)) ``` Once we have an iterator object, we can pass it to the `next` function. This moves the iterator forward, and gives us its next element: ``` next(a) next(a) next(a) ``` When we are out of elements, a `StopIteration` exception is raised: ``` next(a) ``` This tells Python that the iteration is over. For example, if we are in a `for i in range(3)` loop, this lets us know when we should exit the loop. 
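To make the `StopIteration` mechanism concrete, here is roughly what a `for` loop does under the hood (an illustrative sketch, not something you would normally write):

```python
# Roughly how `for i in range(3): ...` executes: get an iterator,
# then call next() until StopIteration is raised.
it = iter(range(3))
collected = []
while True:
    try:
        i = next(it)
    except StopIteration:
        break  # the iterator is exhausted, so the loop ends
    collected.append(i)
print(collected)  # → [0, 1, 2]
```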
We can turn an iterable or iterator into a list with the `list` constructor function: ``` list(range(5)) ``` ### Defining Our Own Iterable When we write `next(a)`, under the hood Python tries to call the `__next__()` method of `a`. Similarly, `iter(a)` calls `a.__iter__()`. We can make our own iterators by defining *classes* that can be used with the `next()` and `iter()` functions: this is the **iterator protocol**. For each of the *concepts* in Python, like sequence, container, iterable, the language defines a *protocol*, a set of methods a class must implement, in order to be treated as a member of that concept. To define an iterator, the methods that must be supported are `__next__()` and `__iter__()`. `__next__()` must update the iterator. We'll see why we need to define `__iter__` in a moment. Here is an example of defining a custom iterator class: ``` class fib_iterator: """An iterator over part of the Fibonacci sequence.""" def __init__(self, limit, seed1=1, seed2=1): self.limit = limit self.previous = seed1 self.current = seed2 def __iter__(self): return self def __next__(self): (self.previous, self.current) = (self.current, self.previous + self.current) self.limit -= 1 if self.limit < 0: raise StopIteration() return self.current x = fib_iterator(5) next(x) next(x) next(x) next(x) for x in fib_iterator(5): print(x) sum(fib_iterator(1000)) ``` ### A shortcut to iterables: the `__iter__` method In fact, we don't always have to define both `__iter__` and `__next__`! If, to be iterated over, a class just wants to behave as if it were some other iterable, you can just implement `__iter__` and return `iter(some_other_iterable)`, without implementing `next`. 
For example, an image class might want to implement some metadata, but behave just as if it were just a 1-d pixel array when being iterated: ``` from numpy import array from matplotlib import pyplot as plt class MyImage(object): def __init__(self, pixels): self.pixels = array(pixels, dtype='uint8') self.channels = self.pixels.shape[2] def __iter__(self): # return an iterator over just the pixel values return iter(self.pixels.reshape(-1, self.channels)) def show(self): plt.imshow(self.pixels, interpolation="None") x = [[[255, 255, 0], [0, 255, 0]], [[0, 0, 255], [255, 255, 255]]] image = MyImage(x) %matplotlib inline image.show() image.channels from webcolors import rgb_to_name for pixel in image: print(rgb_to_name(pixel)) ``` See how we used `image` in a `for` loop, even though it doesn't satisfy the iterator protocol (we didn't define both `__iter__` and `__next__` for it)? The key here is that we can use any *iterable* object (like `image`) in a `for` expression, not just iterators! Internally, Python will create an iterator from the iterable (by calling its `__iter__` method), but this means we don't need to define a `__next__` method explicitly. The *iterator* protocol is to implement both `__iter__` and `__next__`, while the *iterable* protocol is to implement `__iter__` and return an iterator. ### Generators There's a fair amount of "boiler-plate" in the above class-based definition of an iterable. Python provides another way to specify something which meets the iterator protocol: **generators**. ``` def my_generator(): yield 5 yield 10 x = my_generator() next(x) next(x) next(x) for a in my_generator(): print(a) sum(my_generator()) ``` A function which has `yield` statements instead of a `return` statement returns **temporarily**: it automagically becomes something which implements `__next__`. Each call of `next()` returns control to the function where it left off. Control passes back-and-forth between the generator and the caller. 
Our Fibonacci example therefore becomes a function rather than a class. ``` def yield_fibs(limit, seed1=1, seed2=1): current = seed1 previous = seed2 while limit > 0: limit -= 1 current, previous = current + previous, current yield current ``` We can now use the output of the function like a normal iterable: ``` sum(yield_fibs(5)) for a in yield_fibs(10): if a % 2 == 0: print(a) ``` Sometimes we may need to gather all values from a generator into a list, such as before passing them to a function that expects a list: ``` list(yield_fibs(10)) plt.plot(list(yield_fibs(20))) ``` ## Related Concepts Iterables and generators can be used to achieve complex behaviour, especially when combined with functional programming. In fact, Python itself contains some very useful language features that make use of these practices: context managers and decorators. We have already seen these in this class, but here we discuss them in more detail. ### Context managers [We have seen before](../ch01data/060files.html#Closing-files) [[notebook](../ch01data/060files.ipynb#Closing-files)] that, instead of separately `open`ing and `close`ing a file, we can have the file be automatically closed using a context manager: ``` %%writefile example.yaml modelname: brilliant import yaml with open('example.yaml') as foo: print(yaml.safe_load(foo)) ``` In addition to more convenient syntax, this takes care of any clean-up that has to be done after the file is closed, even if any errors occur while we are working on the file. How could we define our own one of these, if we too have clean-up code we always want to run after a calling function has done its work, or set-up code we want to do first? 
We can define a class that meets an appropriate protocol:

```
class verbose_context():
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        print("Get ready, ", self.name)
    def __exit__(self, exc_type, exc_value, traceback):
        print("OK, done")

with verbose_context("Monty"):
    print("Doing it!")
```

However, this is pretty verbose! Again, a generator with `yield` makes for an easier syntax:

```
from contextlib import contextmanager

@contextmanager
def verbose_context(name):
    print("Get ready for action, ", name)
    yield name.upper()
    print("You did it")

with verbose_context("Monty") as shouty:
    print(f"Doing it, {shouty}")
```

Again, we use `yield` to temporarily return from a function.

### Decorators

When doing functional programming, we may often want to define mutator functions which take in one function and return a new function, such as our derivative example earlier.

```
from math import sqrt

def repeater(count):
    def wrap_function_in_repeat(func):
        def _repeated(x):
            counter = count
            while counter > 0:
                counter -= 1
                x = func(x)
            return x
        return _repeated
    return wrap_function_in_repeat

fiftytimes = repeater(50)
fiftyroots = fiftytimes(sqrt)
print(fiftyroots(100))
```

It turns out that, quite often, we want to apply one of these to a function as we're defining a class. For example, we may want to specify that after certain methods are called, data should always be stored. Any function which accepts a function as its first argument and returns a function can be used as a **decorator** like this. Much of Python's standard functionality is implemented as decorators: we've seen @contextmanager, @classmethod and @property. The @contextmanager metafunction, for example, takes in a generator function and returns a factory for objects conforming to the context manager protocol.

```
@repeater(3)
def hello(name):
    return f"Hello, {name}"

hello("Cleese")
```

## Supplementary material

The remainder of this page contains an example of the flexibility of the features discussed above.
Specifically, it shows how generators and context managers can be combined to create a testing framework like the one previously seen in the course.

### Test generators

A few weeks ago we saw a test which loaded its test cases from a YAML file and asserted each input against each output. This was nice and concise, but had one flaw: we had just one test covering all the fixtures, so we got just one `.` in the test output when we ran the tests, and if any case failed, the rest were not run. We can do a nicer job with a test **generator**:

```
import os
import yaml
from nose.tools import assert_equal

def assert_exemplar(**fixture):
    answer = fixture.pop('answer')
    assert_equal(greet(**fixture), answer)

def test_greeter():
    with open(os.path.join(os.path.dirname(__file__),
                           'fixtures', 'samples.yaml')) as fixtures_file:
        fixtures = yaml.safe_load(fixtures_file)
    for fixture in fixtures:
        yield assert_exemplar(**fixture)
```

Each time a function beginning with `test_` does a `yield` it results in another test.

### Negative test context managers

We have seen this:

```
from pytest import raises

with raises(AttributeError):
    x = 2
    x.foo()
```

We can now see how `pytest` might have implemented this:

```
from contextlib import contextmanager

@contextmanager
def reimplement_raises(exception):
    try:
        yield
    except exception:
        pass
    else:
        raise Exception("Expected,", exception, " to be raised, nothing was.")

with reimplement_raises(AttributeError):
    x = 2
    x.foo()
```

### Negative test decorators

Some frameworks, like `nose`, also implement a very nice negative test decorator, which lets us mark tests that we know should produce an exception:

```
import nose

@nose.tools.raises(TypeError, ValueError)
def test_raises_type_error():
    raise TypeError("This test passes")

test_raises_type_error()

@nose.tools.raises(Exception)
def test_that_fails_by_passing():
    pass

test_that_fails_by_passing()
```

We could reimplement this ourselves now too, using the context manager we wrote above:

```
def homemade_raises_decorator(exception):
    def wrap_function(func): #
 Closure over exception
        # Define a function which runs another function under our "raises" context:
        def _output(*args): # Closure over func and exception
            with reimplement_raises(exception):
                func(*args)
        # Return it
        return _output
    return wrap_function

@homemade_raises_decorator(TypeError)
def test_raises_type_error():
    raise TypeError("This test passes")

test_raises_type_error()
```
# Introduction to Predictive Maintenance
## Fault Classification using Supervised Learning
#### Author
Nagdev Amruthnath Date: 1/10/2019
##### Citation Info
If you are using this for your research, please use the following for citation. Amruthnath, Nagdev, and Tarun Gupta. "A research study on unsupervised machine learning algorithms for early fault detection in predictive maintenance." In 2018 5th International Conference on Industrial Engineering and Applications (ICIEA), pp. 355-361. IEEE, 2018.
##### Disclaimer
This is a tutorial for performing fault detection using machine learning. Use this code at your own risk: I do not guarantee that it will work as shown below. If you have any suggestions, please branch this project.
## Introduction
This is the first of a four-part demonstration series on using machine learning for predictive maintenance. The area of predictive maintenance has gained a lot of prominence in the last couple of years for various reasons. With new algorithms and methodologies growing across different learning methods, it has remained a challenge for industries to decide which method is fit, robust, and provides the most accurate detection. One of the most common learning approaches used today for fault diagnosis is supervised learning, which is based entirely on predictor variables and a response variable. In this tutorial, we will look into some common supervised learning models such as SVM, random forest, k-nearest neighbour, and H2O's AutoML.
## Load libraries
```
options(warn=-1)

# load libraries
library(mdatools) # mdatools version 0.9.1
library(caret)
library(foreach)
library(dplyr)
library(h2o)
library(doParallel, verbose = F)
library(ModelMetrics, verbose = F)
```
## Load data
Here we are using data from a drill press. There are a total of four different states of this machine, split into four csv files. We need to load the data first.
In the data, `time` represents the time between samples, `ax` is the acceleration on the x axis, `ay` on the y axis, `az` on the z axis, and `aT` is the total acceleration in G's. The data was collected at a sample rate of 100 Hz. Four different states of the machine were recorded:
1. Nothing attached to the drill press
2. Wooden base attached to the drill press
3. Imbalance created by adding weight to one end of the wooden base
4. Imbalance created by adding weight to two ends of the wooden base

```
# read csv files
file1 = read.csv("dry run.csv", sep=",", header = T)
file2 = read.csv("base.csv", sep=",", header = T)
file3 = read.csv("imbalance 1.csv", sep=",", header = T)
file4 = read.csv("imbalance 2.csv", sep=",", header = T)

# Add labels to the data
file1$y = 1
file2$y = 2
file3$y = 3
file4$y = 4

# view top rows of data
head(file1)
```

We can look at the summary of each file using the `summary` function in R. Below, we can observe that 66 seconds of data are available, along with the min, max, and mean of each variable.

```
# summary of each file
summary(file1)
```

## Data Aggregation and Feature Extraction
Here, the data is aggregated into one-second windows and features are extracted from each window. Extracting features reduces the dimension of the data, storing only a representation of it.
```
file1$group = as.factor(round(file1$time))
file2$group = as.factor(round(file2$time))
file3$group = as.factor(round(file3$time))
file4$group = as.factor(round(file4$time))

# head(file1, 20)

# list of all files
files = list(file1, file2, file3, file4)

# loop through all files and combine
features = NULL
for (i in 1:4){
  res = files[[i]] %>%
    group_by(group) %>%
    summarize(ax_mean = mean(ax), ax_sd = sd(ax), ax_min = min(ax),
              ax_max = max(ax), ax_median = median(ax),
              ay_mean = mean(ay), ay_sd = sd(ay), ay_min = min(ay),
              ay_may = max(ay), ay_median = median(ay),
              az_mean = mean(az), az_sd = sd(az), az_min = min(az),
              az_maz = max(az), az_median = median(az),
              aT_mean = mean(aT), aT_sd = sd(aT), aT_min = min(aT),
              aT_maT = max(aT), aT_median = median(aT),
              y = mean(y))
  features = rbind(features, res)
}

# view all features
features$y = as.factor(features$y)
head(features)
```

## Create sample size for training the model

From the above, we know that we have four states in the data. Based on this, the data is split into train and test samples: the train set is used to build the model and the test set is used to validate it. The ratio between train and test is 80:20; you can adjust this based on the type of data. The table below shows the number of observations for each group.

Note: It is advised to have at least 30 samples for each group.

```
table(features$y)
```

From the above results, we can observe that there are at least 30 samples for each group. Now, we can use this data to split into train and test sets.

```
# create samples in an 80:20 ratio
sample = sample(nrow(features), nrow(features) * 0.8)
train = features[sample,]
test = features[-sample,]
```

## Supervised Fault Classification

### Fault Classification using Random Forest

Random Forest is a flexible, easy-to-use machine learning algorithm that produces a great result most of the time, even without hyper-parameter tuning.
It is also one of the most used algorithms because of its simplicity and the fact that it can be used for both classification and regression tasks [3]. More about random forest can be learnt here: https://towardsdatascience.com/the-random-forest-algorithm-d457d499ffcd

```
# If you don't want parallel computing, comment out the lines below.
# create parallel clusters for 16 cores
cl <- makeCluster(16)
registerDoParallel(cl)

# set training parameters
fitControl = trainControl(method = "repeatedcv",
                          number = 10,
                          repeats = 20) ## repeated twenty times

rf.model = train(y ~ ., train,
                 method = "parRF",
                 allowParallel = TRUE,
                 metric = "Kappa",
                 trControl = fitControl,
                 verbose = FALSE
                 #rfeControl=control
                 )

# summary of trained model
rf.model

# close all clusters
stopCluster(cl)
```

From the training results, we note that 97% accuracy and a 96% kappa value have been achieved, indicating a very good model. Next, we validate against the test set and compute the validation accuracy.

```
# Use the model to perform prediction
prediction = data.frame(pred = predict.train(rf.model, test))

# create a confusion matrix
caret::confusionMatrix(test$y, prediction$pred)
```

From the above results, we can observe that the model achieved high accuracy. The no-information rate (NIR) is below the accuracy, indicating that the model is reliable, and the high kappa value supports the same inference.

### Fault Classification using Support Vector Machine (SVM)

A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples.
In two-dimensional space this hyperplane is a line dividing the plane into two parts, with one class lying on either side [4]. A detailed tutorial on SVM can be found here: https://medium.com/machine-learning-101/chapter-2-svm-support-vector-machine-theory-f0812effc72

```
# If you don't want parallel computing, comment out the lines below.
# create parallel clusters for 16 cores
cl <- makeCluster(16)
registerDoParallel(cl)

# set training parameters
fitControl = trainControl(method = "repeatedcv",
                          number = 10,
                          repeats = 20) ## repeated twenty times

rf.model = train(y ~ ., train,
                 method = "svmLinear",
                 allowParallel = TRUE,
                 metric = "Kappa",
                 trControl = fitControl,
                 verbose = FALSE
                 )

# summary of trained model
rf.model

# close all clusters
stopCluster(cl)
```

From the above results, we observe that the accuracy and kappa values have decreased, but the decrease is minor and negligible. Next we validate with the test data.

```
# Use the model to perform prediction
prediction = data.frame(pred = predict.train(rf.model, test))

# create a confusion matrix
caret::confusionMatrix(test$y, prediction$pred)
```

From the validation results, we conclude that performance decreased marginally compared to the random forest model. The accuracy is still much higher than the NIR, which indicates that this is still a reliable model.

### Fault Classification using K-nearest Neighbour (KNN)

KNN is one of the simplest supervised learning methods and can be used for both classification and regression. One of its drawbacks is that it trains "on the fly", which makes it time-consuming: as the amount of data increases, so does the time taken to classify. K nearest neighbours is a simple algorithm that stores all available cases and classifies new cases based on a similarity measure (e.g., distance functions). KNN has been used in statistical estimation and pattern recognition since the beginning of the 1970s as a non-parametric technique [5].
```
# create parallel clusters for 16 cores
cl <- makeCluster(16)
registerDoParallel(cl)

# set training parameters
fitControl = trainControl(method = "repeatedcv",
                          number = 10,
                          ## repeated three times
                          repeats = 3)

knn.model = train(y ~ ., train,
                  method = "knn",
                  trControl = fitControl,
                  preProcess = c("center", "scale")
                  )

# summary of trained model
knn.model

# close all clusters
stopCluster(cl)
```

From the training results we can note that the accuracy of KNN is much lower than that of random forest and SVM. Next, we need to test the accuracy on the validation data.

```
# use the model to perform prediction
prediction = data.frame(pred = predict.train(knn.model, test))

# create a confusion matrix
caret::confusionMatrix(test$y, prediction$pred)
```

From the confusion matrix we can observe that the accuracy and kappa values are significantly lower than those of the random forest and SVM models.

### Fault Classification using AutoML

The H2O AutoML interface is designed to have as few parameters as possible, so that all the user needs to do is point to their dataset, identify the response column, and optionally specify a time constraint or a limit on the number of total models trained [6]. For this tutorial, the model below will be trained for 10 seconds; this can be scaled up or down based on your needs.

```
# initialize H2O
h2o.init()

# convert to H2O data frames
trainAML = as.h2o(train)
testAML = as.h2o(test)

# identify predictors and response
y = "y"
x <- setdiff(names(train), y)

# for classification, the response should be a factor
trainAML[,y] = as.factor(trainAML[,y])
testAML[,y] = as.factor(testAML[,y])

# train models using AutoML
aml <- h2o.automl(y = y,
                  training_frame = trainAML,
                  leaderboard_frame = testAML,
                  max_runtime_secs = 10)

# view the AutoML leaderboard
lb <- aml@leaderboard
lb

# use the leader model to perform prediction
pred = h2o.predict(aml@leader, testAML)
preds = as.data.frame(pred)

# create a confusion matrix
caret::confusionMatrix(test$y, preds$predict)
```

The model selected during the AutoML process is GBM.
This model provided the highest accuracy during training. From the validation results, we can observe that the accuracy is much higher than the NIR, and the kappa value is also higher than those of all the other models in this tutorial.

#### References

[1] Amruthnath, Nagdev, and Tarun Gupta. "A research study on unsupervised machine learning algorithms for early fault detection in predictive maintenance." In 2018 5th International Conference on Industrial Engineering and Applications (ICIEA), pp. 355-361. IEEE, 2018.

[2] Amruthnath, Nagdev, and Tarun Gupta. "Fault class prediction in unsupervised learning using model-based clustering approach." In 2018 International Conference on Information and Computer Technologies (ICICT), pp. 5-12. IEEE, 2018.

[3] Niklas Donges, "The Random Forest Algorithm", https://towardsdatascience.com/the-random-forest-algorithm-d457d499ffcd

[4] Savan Patel, "Chapter 2: SVM (Support Vector Machine) — Theory", https://medium.com/machine-learning-101/chapter-2-svm-support-vector-machine-theory-f0812effc72

[5] "K Nearest Neighbors - Classification", https://www.saedsayad.com/k_nearest_neighbors.htm

[6] "AutoML: Automatic Machine Learning", http://docs.h2o.ai/h2o/latest-stable/h2o-docs/automl.html
<img align="right" style="max-width: 200px; height: auto" src="hsg_logo.png">

### Lab 01 - "Introduction to the Lab Environment"

Machine Learning (BBWL), University of St. Gallen, Spring Term 2021

The lab environment of the **"Machine Learning"** course is powered by Jupyter Notebooks (https://jupyter.org), which allow one to perform a great deal of data analysis and statistical validation. In this first lab, we want to touch on the basic concepts and techniques of such notebooks. Furthermore, their capabilities will be demonstrated based on a few simple and introductory examples.

### Lab Objectives:

After today's lab, you should be able to:

> 1. Understand the general workflow, structure, and functionality of **Jupyter** notebooks.
> 2. Import and apply python data science libraries such as `NumPy` and `Pandas`.
> 3. Understand how the **Python** programming language can be utilized to manipulate and analyze financial data.
> 4. Download arbitrary stock market and financial data using the `Pandas` `DataReader` API.
> 5. Use the `Matplotlib` library to visualize data as well as analytical results.

Note: The content of this first lab is inspired by the Quantopian lecture series (https://www.quantopian.com). If you are interested in learning more about financial data science and/or algorithmic trading, their lectures are a great resource to get you started.

### 1. Jupyter Notebook Introduction

#### Code Cells vs. Text Cells

As you can see, each cell can be either code or text. To select between them, choose from the `Cell Type` dropdown menu on the top left.

Hello World!

```
1 + 5
```

#### Executing a Command

A code cell will be evaluated when you press **'Run'**, or when you press the shortcut, shift-enter. Evaluating a cell evaluates each line of code in sequence, and prints the results of the last line below the cell.
```
40 + 2
```

Sometimes there is no result to be printed, as is the case with the following assignment:

```
X = 2
X
```

Remember that only the result from the last line is printed.

```
2 + 2
3 + 3
```

However, you can print whichever lines you want using the print statement.

```
print(2 + 2)
3 + 3
```

#### Knowing When a Cell is Running

While a cell is running, a **[*]** will be displayed on the left of the respective cell. When a cell has yet to be executed, **[ ]** will be displayed. When it has been run, a number will be displayed indicating the order in which it was run during the execution of the notebook, e.g., **[5]**. Run the following cell and note what happens:

```
# take some time to run something
c = 0
for i in range(10000000):
    c = c + i
c
```

### 2. Importing Python Libraries

The vast majority of the time, we will use functions from pre-built libraries, such as:

```
# importing the python sys library
import sys
```

You can check your Python version by running:

```
# determine the python system version
sys.version
```

You can't import every library into the lab environment due to security issues. However, you can import most of the common scientific ones. Here we import the libraries `NumPy` (https://www.numpy.org) and `Pandas` (https://pandas.pydata.org), two of the most common and useful libraries in data science. We recommend copying this import statement into every new notebook that you create.

```
# import the NumPy and Pandas data science libraries
import numpy
import pandas
```

Let's now use the `NumPy` library to calculate the mean of a list of numbers:

```
numpy.mean([1, 2, 3, 4])
```

Notice that you can rename libraries to whatever you want after importing. The `as` statement allows this. Here we use `np` and `pd` as aliases for the pre-built `NumPy` and `Pandas` libraries. This is very common aliasing and will be found in most code snippets around the web.
The idea behind this is to allow you to type fewer characters when you are frequently accessing these libraries.

```
# importing the NumPy and Pandas data science libraries using aliases
import numpy as np
import pandas as pd
```

Let's now use the `NumPy` library to calculate the mean of a list of numbers:

```
np.mean([1, 2, 3, 4])
```

### 3. Code Completion and Documentation

#### Autocomplete Code

Pressing tab will give you a list of Jupyter's best guesses for what you might want to type next. This is incredibly valuable and will save you a lot of time. If there is only one possible option for what you could type next, Jupyter will fill that in for you. Try pressing tab very frequently; it will seldom fill in anything you don't want, and if there is ambiguity, a list will be shown. This is a great way to see what functions are available in a library. Try placing your cursor after the `.` and pressing the `tab` key on your keyboard.

```
np.random.gamma
```

#### Documentation Help

Placing a question mark after a function and executing that line of code will give you the documentation Jupyter has for that function. It's often best to do this in a new cell, as you avoid re-executing other code and running into bugs.

```
np.random.normal?
```

### 4. Plotting Data

#### Random Data Sampling

Let's sample some random data using a function from `NumPy`.

```
# sample 100 points with a mean of 0 and an std of 1. This is a standard normal distribution.
x = np.random.normal(0, 1, 100)
x
```

#### Data Plotting

Python's `Matplotlib` library (https://matplotlib.org) is a very flexible library and has a lot of handy, built-in defaults that will help you out tremendously. As such, you don't need much to get started: you need to make the necessary imports, prepare some data, and you can start plotting with the help of the `plot()` function. Let's have a look.
Let's import Matplotlib by running the following statements:

```
# importing the matplotlib plotting library
import matplotlib.pyplot as plt
```

Note that we imported the `pyplot` module of the `Matplotlib` library under the alias `plt`. We can now use the plotting functionality provided by Matplotlib as follows:

```
plt.plot(x)
```

Let's apply some variations as well as axis labels to our plot:

```
plt.plot(x, linewidth=3, linestyle="--")

# set label and title details
plt.xlabel('sample', weight='normal', fontsize=8)
plt.ylabel('value', weight='normal', fontsize=8)
```

#### Remove Line Output

You might have noticed a somewhat annoying line of the form `[<matplotlib.lines.Line2D at 0x10a4cce90>]` before the created plot. If you wish to get rid of this output, end the plot statement with a semicolon `;`:

```
plt.plot(x);
```

#### Adding Axis Labels

No self-respecting quantitative analyst leaves a graph without labeled axes. Here are some commands to help with that.

```
# sample 100 points twice with a mean of 0 and an std of 1 from a standard normal distribution.
x1 = np.random.normal(0, 1, 100)
x2 = np.random.normal(0, 1, 100)

# plot both sample results
plt.plot(x1);
plt.plot(x2);

# add x-axis and y-axis label
plt.xlabel('Time')
plt.ylabel('Returns')

# add plot legend
plt.legend(['X1', 'X2'])

# add plot title
plt.title('Sample Returns X1 and X2');
```

### 5. Generating Statistics

Let's use `NumPy` to take some simple statistics like the mean of the generated samples:

```
np.mean(x1)
```

As well as the corresponding standard deviation of the generated samples:

```
np.std(x1)
```

### 6. Collect and Plot Real Pricing Data

One of the first steps in any data science project is usually to import your data. Randomly sampled data can be great for testing ideas, but let's now import some real financial market data.
As part of the `Labs` section of the **Intro into ML and DL** Git repository, you will find a "Comma Separated Value (CSV)" file named `sample_google_data_daily.csv`. The file contains the daily stock market data of the **Alphabet (Google) Inc.** stock within the time frame `31-12-2015` till `31-12-2017`. You can either download the file and copy it to the same directory as this Jupyter Notebook, or, as done below, read it directly from the course repository using the `read_csv()` function of the `Pandas` library:

```
# set the url of the data
data_url = 'https://raw.githubusercontent.com/HSG-AIML/LabML/master/lab_01/sample_google_data_daily.csv'

# read the alphabet data
alphabet_data = pd.read_csv(data_url, sep=';')
```

The retrieved data is a so-called `Pandas` `DataFrame`. You can see the datetime index and the columns with different pricing data. Let's inspect the top 5 rows of the imported data using the `head()` function of the `Pandas` library:

```
alphabet_data.head(5)
```

Looks good, right? It's great to import data that was already collected and stored accordingly. In real data science projects, we are often challenged to retrieve the data from a variety of sources, e.g., the web. But where can we get financial data of good quality? A great source for retrieving such data is the `Pandas` `DataReader` package.

Although the **MS Azure** as well as the **Google Colab** environment come with a lot of pre-installed libraries, sometimes a needed library might not be available. Therefore, you may want to install libraries directly within an individual notebook. Please note that libraries installed from the notebook apply only to the current server session; library installations aren't persistent once the server is shut down. In general, libraries in Python can be installed using the shell **pip** command within code cells.
Any command that works at the command line can be used in Jupyter Notebooks by prefixing it with the `!` character. Let's give it a try and install the `pandas_datareader` python library.

```
!pip3 install pandas_datareader --ignore-installed
```

Let's import the `DataReader` as well as the `DateTime` library to retrieve some financial data:

```
import datetime as dt
import pandas_datareader as dr
```

Specify both the `start` and `end` date of the data download:

```
start_date = dt.datetime(2015, 12, 31)
end_date = dt.datetime(2017, 12, 31)
```

Download the daily **Tesla Inc.** (ticker symbol: TSLA) stock data using the `DataReader` object of the `Pandas` data science library:

```
# download tesla market data
tesla_data = dr.data.DataReader('TSLA', data_source='yahoo', start=start_date, end=end_date)
```

We again retrieved the data as a `Pandas` `DataFrame`, but this time using the `DataReader` object that comes with `Pandas`. Let's inspect the top 5 rows of the imported data using the `head()` function of the `Pandas` library:

```
tesla_data.head(5)
```

To obtain an initial understanding of the data retrieved, let us have a look at some basic data statistics:

```
tesla_data.describe()
```

Ok, at first glance, the data looks fine. Let's, therefore, save it to your local directory using the `to_csv()` function of the `Pandas` library:

```
tesla_data.to_csv('sample_tesla_data_daily.csv', sep=';', encoding='utf-8')
```

Note: it is possible that when running this cell, you get an error similar to: `ImportError: cannot import name 'CompressionOptions' from 'pandas._typing' (/usr/local/lib/python3.7/dist-packages/pandas/_typing.py)`. In this case, you need to restart the colab runtime to install the `pandas_datareader` library correctly. You can do so under `Runtime` in the colab interface above. Once restarted, the lab should execute smoothly when you re-run it.

Let's continue with the **Tesla Inc.** data retrieved.
To get a specific column of a `Pandas` dataframe, we can column-slice it to get the daily adjusted closing price data like this: ``` tesla_closing = tesla_data['Adj Close'] ``` Let's inspect the **top 5** rows of the sliced data: ``` tesla_closing.head(5) ``` Ok great, we got two columns (1) the index 'Date' of the DataFrame as well as (2) the data column 'Adj. Close' price that we asked for. Let's now plot the date vs. the adjusted closing prices. But before doing so, we need to be able to disentangle the index from the data. This can be accomplished by the `.index` function that will return the index of a given DataFrame as well as the `.values` function that will return the actual data (excl. the index) of a given DataFrame. We will use both commands to specify the X and Y coordinates of the plot: ``` # plot both sample results plt.plot(tesla_closing.index, tesla_closing.values) # add x-axis and y-axis label plt.xlabel('Time') plt.ylabel('Closing Price') # add plot title plt.title('Tesla Inc. Daily Adjusted Closing Price'); np.mean(tesla_closing.values) np.std(tesla_closing.values) ``` ### 7. Obtaining Returns from Prices When analyzing stock market data, we are often also interested in the return $R_t$ of a financial instrument over a certain time frame: $$R_t=\frac{V_{f}-V_{i}}{V_{i}}$$ where: - $V_{f}$ denotes the financial instruments final value, including dividends and interest - $V_{i}$ denotes the financial instruments initial value The `Pandas` data science library provides us with a variety of functions that come quite "handy" when analyzing such data. To determine the daily return $r_t$ we may, for example, utilize Pandas `pct_change` function: ``` tesla_returns = tesla_closing.pct_change() ``` Let's inspect the calculated returns: ``` tesla_returns.head(5) ``` Notice, how we drop the first element after doing this, as it will be `NaN`. 
```
tesla_returns = tesla_returns[1:]
```

And inspect the returns data again:

```
tesla_returns.head(5)
```

Let's now plot the distribution of daily returns as a histogram:

```
# plot histogram of returns
plt.hist(tesla_returns, bins=20)

# add x-axis and y-axis label
plt.xlabel('Return')
plt.ylabel('Frequency')

# add plot title
plt.title('Tesla Inc. Adjusted Daily Returns');
```

Let's again get statistics on the real daily return data:

```
np.mean(tesla_returns)
np.std(tesla_returns)
```

Let's generate data out of a normal distribution using the statistics we estimated from the daily returns of the Tesla stock. We'll see that we have good reason to suspect the Tesla returns may not be normally distributed, as the resulting normal distribution looks far different.

```
# plot histogram of randomly sampled returns
plt.hist(np.random.normal(np.mean(tesla_returns), np.std(tesla_returns), 10000), bins=20)

# add x-axis and y-axis label
plt.xlabel('Return')
plt.ylabel('Frequency')

# add plot title
plt.title('Tesla Inc. Adjusted Daily Returns (Normal)');
```

### 8. Generating a Moving Average

When analyzing stock market data, we are often also interested in calculating so-called "rolling statistics", e.g., a 90- or 200-day moving average. Again, the `Pandas` library offers some great functions that allow us to generate such rolling statistics. Here's an example. Notice how there's no moving average for the first 90 days, as we don't have 90 days of data before we can determine the first value:

```
# determine the rolling average of the last 90 days
tesla_moving_average = tesla_closing.rolling(window=90, center=False).mean()
```

Let's plot the obtained moving averages.
```
# plot the daily closing prices
plt.plot(tesla_closing.index, tesla_closing.values)

# plot the 90-day moving average of the closing prices
plt.plot(tesla_moving_average.index, tesla_moving_average.values)

# add x-axis and y-axis label
plt.xlabel('Time')
plt.ylabel('Price')

# add plot legend
plt.legend(['Price', '90-day MAVG']);

# add plot title
plt.title('Tesla Inc. Closing Price vs. 90-day Moving Average');
```

### Lab Assignments:

You may want to try the following exercises after the lab:

**1. Download data using the `Pandas` `DataReader` API, `dr.data.DataReader('TSLA', data_source='yahoo', ...)`.**

> Research the `Pandas` `DataReader` API and download the daily closing prices of the three following stocks: Netflix, Facebook, and Microsoft. Download the daily stock closing prices starting from 2014-01-01 until today as a `Pandas` DataFrame.

```
# ***************************************************
# INSERT YOUR CODE HERE
# ***************************************************
```

Throughout the course, we will visualize and analyze plenty of data. The following exercises should provide you with a first intuition on how this can be achieved using Python's `Pandas` and `Matplotlib` libraries:

**2. Visualise data using the `Matplotlib` library, `plt.plot(...)` and `plt.hist(...)`.**

> Visualize the downloaded data by plotting the daily adjusted closing prices of the three stocks over time (1) into a single plot for each stock as well as (2) into a single plot containing the closing prices of all three stocks combined.

```
# ***************************************************
# INSERT YOUR CODE HERE
# ***************************************************
```

**3. Save data using the `Pandas` library.**

> Research the `Pandas` data science library on how to save a `Pandas` `DataFrame` to a local directory. Save the raw daily closing prices and corresponding dates of all three stocks in a comma-separated value (CSV) format to your local directory.
Save the CSV file using the semicolon `';'` separator and encode it as `'utf-8'`.

```
# ***************************************************
# INSERT YOUR CODE HERE
# ***************************************************
```

**4. Analyze data using the `Pandas` library, `data.rolling(..., window=...)`.**

> For each stock, calculate the rolling moving averages of the daily closing prices using time windows of 30 and 90 days. For each stock, plot the daily closing price as well as the 30- and 90-day moving averages into a single plot.

```
# ***************************************************
# INSERT YOUR CODE HERE
# ***************************************************
```

### Lab Summary:

In this initial lab, a step-by-step introduction to some basic concepts of analyzing financial data using Jupyter notebooks was presented. The code and exercises presented in this lab may serve you as a starting point for more complex and tailored analytics.

You may want to execute the content of your lab outside of the Jupyter notebook environment, e.g., on a compute node or a server. The cell below converts the lab notebook into a standalone and executable python script.

```
!jupyter nbconvert --to script ml_colab_01.ipynb
```
<center>
    <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png" width="300" alt="cognitiveclass.ai logo" />
</center>

<h1>Extracting and Visualizing Stock Data</h1>

<h2>Description</h2>

Extracting essential data from a dataset and displaying it is a necessary part of data science; it enables individuals to make correct decisions based on the data. In this assignment, you will extract some stock data and then display this data in a graph.

<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
    <ul>
        <li>Define a Function that Makes a Graph</li>
        <li>Question 1: Use yfinance to Extract Stock Data</li>
        <li>Question 2: Use Webscraping to Extract Tesla Revenue Data</li>
        <li>Question 3: Use yfinance to Extract Stock Data</li>
        <li>Question 4: Use Webscraping to Extract GME Revenue Data</li>
        <li>Question 5: Plot Tesla Stock Graph</li>
        <li>Question 6: Plot GameStop Stock Graph</li>
    </ul>
    <p>
        Estimated Time Needed: <strong>30 min</strong>
    </p>
</div>

<hr>

```
!pip install yfinance
#!pip install pandas
#!pip install requests
!pip install bs4
#!pip install plotly

import yfinance as yf
import pandas as pd
import requests
from bs4 import BeautifulSoup
import plotly.graph_objects as go
from plotly.subplots import make_subplots
```

## Define Graphing Function

In this section, we define the function `make_graph`. You don't have to know how the function works; you should only care about the inputs. It takes a dataframe with stock data (which must contain Date and Close columns), a dataframe with revenue data (which must contain Date and Revenue columns), and the name of the stock.
```
def make_graph(stock_data, revenue_data, stock):
    fig = make_subplots(rows=2, cols=1, shared_xaxes=True,
                        subplot_titles=("Historical Share Price", "Historical Revenue"),
                        vertical_spacing=.3)
    fig.add_trace(go.Scatter(x=pd.to_datetime(stock_data.Date, infer_datetime_format=True),
                             y=stock_data.Close.astype("float"), name="Share Price"),
                  row=1, col=1)
    fig.add_trace(go.Scatter(x=pd.to_datetime(revenue_data.Date, infer_datetime_format=True),
                             y=revenue_data.Revenue.astype("float"), name="Revenue"),
                  row=2, col=1)
    fig.update_xaxes(title_text="Date", row=1, col=1)
    fig.update_xaxes(title_text="Date", row=2, col=1)
    fig.update_yaxes(title_text="Price ($US)", row=1, col=1)
    fig.update_yaxes(title_text="Revenue ($US Millions)", row=2, col=1)
    fig.update_layout(showlegend=False,
                      height=900,
                      title=stock,
                      xaxis_rangeslider_visible=True)
    fig.show()
```

## Question 1: Use yfinance to Extract Stock Data

Using the `Ticker` function, enter the ticker symbol of the stock we want to extract data on to create a ticker object. The stock is Tesla and its ticker symbol is `TSLA`.

```
tesla = yf.Ticker("TSLA")
```

Using the ticker object and the function `history`, extract stock information and save it in a dataframe named `tesla_data`. Set the `period` parameter to `max` so we get information for the maximum amount of time.

```
tesla_data = tesla.history(period="max")
```

**Reset the index** using the `reset_index(inplace=True)` function on the tesla_data DataFrame and display the first five rows of the `tesla_data` dataframe using the `head` function. Take a screenshot of the results and code from the beginning of Question 1 to the results below.
```
tesla_data.reset_index(inplace=True)
tesla_data.head()
```

## Question 2: Use Webscraping to Extract Tesla Revenue Data

Use the `requests` library to download the webpage https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue. Save the text of the response as a variable named `html_data`.

```
tesla_url = "https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue"
tesla_html_data = requests.get(tesla_url).text
```

Parse the html data using `beautiful_soup`.

```
tesla_soup = BeautifulSoup(tesla_html_data, "html5lib")
```

Using beautiful soup, extract the table with `Tesla Quarterly Revenue` and store it into a dataframe named `tesla_revenue`. The dataframe should have the columns `Date` and `Revenue`. Make sure the comma and dollar sign are removed from the `Revenue` column.
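The requested cleanup is plain string replacement. The following standalone snippet (with made-up cell values, not actual scraped data) shows exactly what happens to each scraped table cell:

```python
def clean_revenue(raw):
    """Strip the dollar sign and thousands separators from one table cell."""
    return raw.replace("$", "").replace(",", "")

# Made-up examples of what the scraped <td> text can look like.
cells = ["$53,823", "$1,000", ""]
print([clean_revenue(cell) for cell in cells])  # ['53823', '1000', '']
```

Empty strings like the last one correspond to the blank rows that get removed later in this question.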
```
tesla_tables = tesla_soup.find_all('table')

# locate the table whose content mentions the quarterly revenue
for index, table in enumerate(tesla_tables):
    if "Tesla Quarterly Revenue" in str(table):
        tesla_table_index = index

# collect the rows first; pandas.DataFrame.append was removed in pandas 2.0,
# so building the dataframe from a list of records is the portable approach
rows = []
for row in tesla_tables[tesla_table_index].tbody.find_all("tr"):
    col = row.find_all("td")
    if col != []:
        date = col[0].text
        revenue = col[1].text.replace("$", "").replace(",", "")
        rows.append({"Date": date, "Revenue": revenue})

tesla_revenue = pd.DataFrame(rows, columns=["Date", "Revenue"])
```

<details><summary>Click here if you need help removing the dollar sign and comma</summary>

```
If you parsed the HTML table by row and column you can use the replace function on the string

revenue = col[1].text.replace("$", "").replace(",", "")

If you use the read_html function you can use the replace function on the string representation of the column

tesla_revenue["Revenue"] = tesla_revenue["Revenue"].str.replace("$", "").str.replace(",", "")
```

</details>

Remove the rows in the dataframe that are empty strings or are NaN in the Revenue column. Print the entire `tesla_revenue` DataFrame to see if you have any.

```
tesla_revenue = tesla_revenue[tesla_revenue['Revenue'] != ""]
tesla_revenue
```

<details><summary>Click here if you need help removing the NaN or empty strings</summary>

```
If you have NaN in the Revenue column

tesla_revenue.dropna(inplace=True)

If you have empty strings in the Revenue column

tesla_revenue = tesla_revenue[tesla_revenue['Revenue'] != ""]
```

</details>

Display the last 5 rows of the `tesla_revenue` dataframe using the `tail` function. Take a screenshot of the results.

```
tesla_revenue.tail()
```

## Question 3: Use yfinance to Extract Stock Data

Using the `Ticker` function, enter the ticker symbol of the stock we want to extract data on to create a ticker object. The stock is GameStop and its ticker symbol is `GME`.
```
gamestop = yf.Ticker("GME")
```

Using the ticker object and the function `history`, extract stock information and save it in a dataframe named `gme_data`. Set the `period` parameter to `max` so we get information for the maximum amount of time.

```
gme_data = gamestop.history(period="max")
```

**Reset the index** using the `reset_index(inplace=True)` function on the gme_data DataFrame and display the first five rows of the `gme_data` dataframe using the `head` function. Take a screenshot of the results and code from the beginning of Question 3 to the results below.

```
gme_data.reset_index(inplace=True)
gme_data.head()
```

## Question 4: Use Webscraping to Extract GME Revenue Data

Use the `requests` library to download the webpage https://www.macrotrends.net/stocks/charts/GME/gamestop/revenue. Save the text of the response as a variable named `html_data`.
```
gme_url = "https://www.macrotrends.net/stocks/charts/GME/gamestop/revenue"
gme_html_data = requests.get(gme_url).text
```

Parse the html data using `beautiful_soup`.

```
gme_soup = BeautifulSoup(gme_html_data, "html5lib")
```

Using beautiful soup, extract the table with `GameStop Quarterly Revenue` and store it into a dataframe named `gme_revenue`. The dataframe should have the columns `Date` and `Revenue`. Make sure the comma and dollar sign are removed from the `Revenue` column using a method similar to what you did in Question 2.

```
gme_tables = gme_soup.find_all('table')

# locate the table whose content mentions the quarterly revenue
for index, table in enumerate(gme_tables):
    if "GameStop Quarterly Revenue" in str(table):
        gme_table_index = index

# collect the rows first; pandas.DataFrame.append was removed in pandas 2.0,
# so building the dataframe from a list of records is the portable approach
rows = []
for row in gme_tables[gme_table_index].tbody.find_all("tr"):
    col = row.find_all("td")
    if col != []:
        date = col[0].text
        revenue = col[1].text.replace("$", "").replace(",", "")
        rows.append({"Date": date, "Revenue": revenue})

gme_revenue = pd.DataFrame(rows, columns=["Date", "Revenue"])
```

Display the last five rows of the `gme_revenue` dataframe using the `tail` function. Take a screenshot of the results.

```
gme_revenue.tail()
```

## Question 5: Plot Tesla Stock Graph

Use the `make_graph` function to graph the Tesla stock data; also provide a title for the graph. The structure to call the `make_graph` function is `make_graph(tesla_data, tesla_revenue, 'Tesla')`.

```
make_graph(tesla_data, tesla_revenue, 'Tesla')
```

## Question 6: Plot GameStop Stock Graph

Use the `make_graph` function to graph the GameStop stock data; also provide a title for the graph. The structure to call the `make_graph` function is `make_graph(gme_data, gme_revenue, 'GameStop')`.
``` make_graph(gme_data, gme_revenue, 'GameStop') ``` <h2>About the Authors:</h2> <a href="https://www.linkedin.com/in/joseph-s-50398b136/">Joseph Santarcangelo</a> has a PhD in Electrical Engineering; his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD. Azim Hirjani ## Change Log | Date (YYYY-MM-DD) | Version | Changed By | Change Description | | ----------------- | ------- | ------------- | ------------------------- | | 2020-11-10 | 1.1 | Malika Singla | Deleted the Optional part | | 2020-08-27 | 1.0 | Malika Singla | Added lab to GitLab | <hr> ## <h3 align="center"> © IBM Corporation 2020. All rights reserved. </h3>
# Source layouts schematics ``` from IPython.display import display # noqa: F401 # imported but unused: enables rich display in the notebook from pathlib import Path import numpy as np import pandas as pd import verde as vd import matplotlib.pyplot as plt from matplotlib.patches import Rectangle import boost_and_layouts from boost_and_layouts import save_to_json ``` ## Define parameters for building the source distributions ``` # Define results directory to read synthetic ground survey results_dir = Path("..") / "results" ground_results_dir = results_dir / "ground_survey" ``` ## Read synthetic ground survey Get coordinates of observation points from a synthetic ground survey. ``` survey = pd.read_csv(ground_results_dir / "survey.csv") inside = np.logical_and( np.logical_and( survey.easting > 0, survey.easting < 40e3, ), np.logical_and( survey.northing > -60e3, survey.northing < -20e3, ), ) survey = survey.loc[inside] survey fig, ax = plt.subplots(figsize=(6, 6)) tmp = ax.scatter(survey.easting, survey.northing, c=survey.height) plt.colorbar(tmp, ax=ax) ax.set_aspect("equal") ax.set_title("Height of ground survey points") plt.show() coordinates = (survey.easting, survey.northing, survey.height) ``` ### Generate the source distributions ``` block_spacing = 3000 grid_spacing = 2000 layouts = ["source_below_data", "grid_sources", "block_averaged_sources"] depth_type = "constant_depth" parameters = {} layout = "source_below_data" parameters[layout] = dict( depth_type=depth_type, depth=500, ) layout = "grid_sources" parameters[layout] = dict(depth_type=depth_type, depth=500, spacing=grid_spacing) layout = "block_averaged_sources" parameters[layout] = dict(depth_type=depth_type, depth=500, spacing=block_spacing) source_distributions = {} for layout in parameters: source_distributions[layout] = getattr(boost_and_layouts, layout)( coordinates, **parameters[layout] ) ``` Create lines for plotting the boundaries of the blocks: ``` region = vd.get_region(coordinates) grid_nodes = vd.grid_coordinates(region, spacing=block_spacing) grid_lines =
(np.unique(grid_nodes[0]), np.unique(grid_nodes[1])) for nodes in grid_lines: nodes.sort() ``` ## Plot observation points and source layouts ``` # Load matplotlib configuration plt.style.use(Path(".") / "matplotlib.rc") titles = { "source_below_data": "Sources below data", "block_averaged_sources": "Block-averaged sources", "grid_sources": "Regular grid", } fig, axes = plt.subplots(nrows=1, ncols=4, sharey=True, figsize=(7, 1.7), dpi=300) size = 3 labels = "a b c d".split() for ax, label in zip(axes, labels): ax.set_aspect("equal") ax.annotate( label, xy=(0.02, 0.95), xycoords="axes fraction", bbox=dict(boxstyle="circle", fc="white", lw=0.2), ) ax.axis("off") # Plot observation points ax = axes[0] ax.scatter(survey.easting, survey.northing, s=size, c="C0", marker="^") ax.set_title("Observation points") # Plot location of sources for each source layout for ax, layout in zip(axes[1:], layouts): ax.scatter(*source_distributions[layout][:2], s=size, c="C1") ax.set_title(titles[layout]) # Add blocks boundaries to Block Averaged Sources plot ax = axes[3] grid_style = dict(color="grey", linewidth=0.5, linestyle="--") xmin, xmax, ymin, ymax = region[:] for x in grid_lines[0]: ax.plot((x, x), (ymin, ymax), **grid_style) for y in grid_lines[1]: ax.plot((xmin, xmax), (y, y), **grid_style) plt.tight_layout(w_pad=0) plt.savefig( Path("..") / "manuscript" / "figs" / "source-layouts-schematics.pdf", dpi=300, bbox_inches="tight", ) plt.show() ``` ## Dump number of observation points and sources to JSON file ``` variables = { "source_layouts_schematics_observations": survey.easting.size, } for layout in layouts: variables["source_layouts_schematics_{}".format(layout)] = source_distributions[ layout ][0].size json_file = results_dir / "source-layouts-schematics.json" save_to_json(variables, json_file) ``` # Gradient boosting schematics ``` sources = source_distributions["source_below_data"] region = vd.get_region(sources) overlapping = 0.5 window_size = 18e3 spacing = window_size * 
(1 - overlapping) centers, indices = vd.rolling_window(sources, size=window_size, spacing=spacing) spacing_easting = centers[0][0, 1] - centers[0][0, 0] spacing_northing = centers[1][1, 0] - centers[1][0, 0] print("Desired spacing:", spacing) print("Actual spacing:", (spacing_easting, spacing_northing)) indices = [i[0] for i in indices.ravel()] centers = [i.ravel() for i in centers] n_windows = centers[0].size print("Number of windows:", n_windows) ncols = 10 figsize = (1.7 * ncols, 1.7) size = 3 fig, axes = plt.subplots( ncols=ncols, nrows=1, figsize=figsize, dpi=300, sharex=True, sharey=True ) for ax in axes: ax.set_aspect("equal") ax.axis("off") # Observation points axes[0].scatter(survey.easting, survey.northing, s=size, c="C0", marker="^") # Sources axes[1].scatter(*sources[:2], s=size, c="C1") # First fit # --------- window_i = 0 window = indices[window_i] not_window = [i for i in np.arange(sources[0].size) if i not in window] w_center_easting, w_center_northing = centers[0][window_i], centers[1][window_i] rectangle_kwargs = dict( xy=(w_center_easting - window_size / 2, w_center_northing - window_size / 2), width=window_size, height=window_size, fill=False, linewidth=0.5, linestyle="--", color="#444444", ) # Observation points axes[2].scatter( survey.easting.values[window], survey.northing.values[window], s=size, c="C0", marker="^", ) axes[2].scatter( survey.easting.values[not_window], survey.northing.values[not_window], s=size, c="C7", marker="^", ) rectangle = Rectangle(**rectangle_kwargs) axes[2].add_patch(rectangle) # Sources axes[3].scatter(sources[0][window], sources[1][window], s=size, c="C1") axes[3].scatter(sources[0][not_window], sources[1][not_window], s=size, c="C7") rectangle = Rectangle(**rectangle_kwargs) axes[3].add_patch(rectangle) # First Prediction # ---------------- axes[4].scatter(survey.easting, survey.northing, s=size, c="C3", marker="v") axes[5].scatter(sources[0][window], sources[1][window], s=size, c="C1") rectangle = 
Rectangle(**rectangle_kwargs) axes[5].add_patch(rectangle) # Second fit # ---------- window_i = 3 window = indices[window_i] not_window = [i for i in np.arange(sources[0].size) if i not in window] w_center_easting, w_center_northing = centers[0][window_i], centers[1][window_i] rectangle_kwargs = dict( xy=(w_center_easting - window_size / 2, w_center_northing - window_size / 2), width=window_size, height=window_size, fill=False, linewidth=0.5, linestyle="--", color="#444444", ) # Residue axes[6].scatter( survey.easting.values[window], survey.northing.values[window], s=size, c="C3", marker="v", ) axes[6].scatter( survey.easting.values[not_window], survey.northing.values[not_window], s=size, c="C7", marker="^", ) rectangle = Rectangle(**rectangle_kwargs) axes[6].add_patch(rectangle) # Sources axes[7].scatter(sources[0][window], sources[1][window], s=size, c="C1") axes[7].scatter(sources[0][not_window], sources[1][not_window], s=size, c="C7") rectangle = Rectangle(**rectangle_kwargs) axes[7].add_patch(rectangle) # Second Prediction # ----------------- axes[8].scatter(survey.easting, survey.northing, s=size, c="C3", marker="v") axes[9].scatter(sources[0][window], sources[1][window], s=size, c="C1") rectangle = Rectangle(**rectangle_kwargs) axes[9].add_patch(rectangle) plt.savefig(Path("..") / "manuscript" / "figs" / "svg" / "gradient-boosting-raw.svg") plt.show() ```
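The schematic panels above highlight which sources fall inside each rolling window. The notebook takes the per-window point indices from `vd.rolling_window`; as a rough, pure-NumPy illustration of what those indices represent (the helper name `points_in_window` is hypothetical, not part of verde):

```python
import numpy as np

def points_in_window(easting, northing, center, size):
    # Indices of points inside a square window of side `size`
    # centred on `center`. Sketch only: the notebook obtains these
    # indices from verde.rolling_window, not from this function.
    half = size / 2
    inside = (np.abs(easting - center[0]) <= half) & (
        np.abs(northing - center[1]) <= half
    )
    return np.nonzero(inside)[0]

easting = np.array([0.0, 5.0, 10.0, 20.0])
northing = np.array([0.0, 5.0, -10.0, 0.0])
print(points_in_window(easting, northing, center=(5.0, 0.0), size=12.0))  # [0 1]
```

`vd.rolling_window` additionally lays the window centers out on a regular grid with the requested spacing, which is why the notebook compares the desired and actual spacings.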
<a href="https://colab.research.google.com/github/GitMarco27/TMML/blob/main/Notebooks/009_Custom_Loss.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # 3 Minutes Machine Learning ## Episode 9: Custom Loss #### Marco Sanguineti, 2021 --- Welcome to 3 Minutes Machine Learning! Reference: https://archive.ics.uci.edu/ml/datasets/Airfoil+Self-Noise ``` import tensorflow as tf import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn import preprocessing print(tf.__version__) import os def loadThumb(path): # Let's import this video thumbnail! if os.path.exists(path): myThumb = plt.imread(path) fig, ax = plt.subplots(figsize=(15, 10)) plt.axis('off') ax.imshow(myThumb) plt.show() loadThumb('/tmp/yt_thumb_009.png') ``` #### Video Topics > 1. Load the dataset from UCI.edu > 2. Create a model with the Keras API with a custom layer and custom loss > 3. Train the model and check the results > 4. See you in the next video!
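Before loading the data, the quantity computed by the custom loss below can be sketched in plain NumPy: the RMSE of the predictions, scaled by the mean squared magnitude of the targets and shifted by minus one. This is an illustration only; the notebook's actual implementation subclasses `tf.keras.losses.Loss`, and the function name here is hypothetical.

```python
import numpy as np

def relative_rmse(y_true, y_pred):
    # RMSE scaled by the mean squared target magnitude, shifted by -1,
    # mirroring the CustomAccuracy loss defined later in the notebook.
    mse = np.mean(np.square(y_pred - y_true))
    rmse = np.sqrt(mse)
    return rmse / np.mean(np.square(y_true)) - 1.0

# A perfect prediction has rmse = 0, so the loss bottoms out at -1
y = np.array([1.0, 2.0, 3.0])
print(relative_rmse(y, y))  # -1.0
```

The `- 1` shift means a perfect fit scores exactly `-1`, which is why the training cell later negates the evaluated loss when reporting a "final score".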
# Load the dataset ___ ``` URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/00291/airfoil_self_noise.dat" cols = ['Frequency', 'Angle of Attack', 'Chord length', 'Free-stream velocity', 'Suction side displacement thickness', 'Sound Pressure'] dataset = pd.read_table(URL, names=cols, dtype='float32') dataset dataset.describe().T # sns.pairplot(dataset) # plt.show() ``` # Create the model ___ ``` from tensorflow.keras.layers import Dense, Input from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam from tensorflow.keras.layers import Layer # Let's create a custom quadratic layer class myDenseLayer(Layer): def __init__(self, units=32, activation=None): super(myDenseLayer, self).__init__() self.units = units self.activation = tf.keras.activations.get(activation) def build(self, input_shape): a_init = tf.random_normal_initializer() self.a = tf.Variable(name='a', initial_value=a_init(shape=(input_shape[-1], self.units)), dtype='float32', trainable=True) self.b = tf.Variable(name='b', initial_value=a_init(shape=(input_shape[-1], self.units)), dtype='float32', trainable=True) c_init = tf.zeros_initializer() self.c = tf.Variable(name='c', initial_value=c_init(shape=(self.units)), dtype='float32', trainable=True) def call(self, inputs): return self.activation(tf.matmul(tf.math.square(inputs), self.a)+tf.matmul(inputs, self.b) + self.c) myLayer = myDenseLayer(units=64, activation='relu') myLayer_2 = myDenseLayer(units=64, activation='relu') #My Custom Regressor Accuracy import keras.backend as K import sklearn class CustomAccuracy(tf.keras.losses.Loss): def __init__(self): super().__init__() def call(self, y_true, y_pred): mse = tf.reduce_mean(tf.square(y_pred-y_true)) rmse = tf.math.sqrt(mse) return rmse / tf.reduce_mean(tf.square(y_true)) - 1 import numpy as np loss = CustomAccuracy() a = tf.random.uniform(shape=(32, 5)) b = tf.random.uniform(shape=(32, 5)) loss(a, b) input_data = Input(shape=(5), name='Input') customDense = 
myLayer(input_data) customDense_2 = myLayer_2(customDense) output = Dense(1, name='output')(customDense_2) model = Model(input_data, output) model.compile(optimizer=Adam(learning_rate=0.001), loss=CustomAccuracy(), metrics=['mae', 'mse']) model.summary() tf.keras.utils.plot_model( model, to_file='model.png', show_shapes=True, show_dtype=True, show_layer_names=True, rankdir='TB', expand_nested=False, dpi=96 ) def separate(df): return df[['Sound Pressure']].to_numpy(), df.drop(columns=['Sound Pressure']).to_numpy() min_max_scaler = preprocessing.MinMaxScaler() df_normed = pd.DataFrame(min_max_scaler.fit_transform(dataset)) df_normed.columns = list(dataset.columns) train_set, test_set = train_test_split(df_normed) train_labels, train_features = separate(train_set) test_labels, test_features = separate(test_set) ``` # Train and check the results ___ ``` myLayer.variables history = model.fit( train_features, train_labels, batch_size = 128, epochs=500, validation_data=(test_features, test_labels) ) print(f'My final score on test set {- model.evaluate(test_features, test_labels)[0]}') myLayer.variables loss = history.history['loss'] val_loss = history.history['val_loss'] fig, ax = plt.subplots(figsize=(8, 6)) plt.plot(loss) plt.plot(val_loss) plt.grid('both') plt.xlabel('Epochs') plt.ylabel('Loss Function') plt.title('Loss Function trend') plt.show() fig, ax = plt.subplots(1, 2, figsize=(12, 6), sharey=True) ax[0].axis('equal') ax[0].scatter(train_labels[:, 0], model.predict(train_features)[:, 0], marker='^', color='r', edgecolor='k') ax[0].plot([0, 1], [0, 1], c='k') ax[0].plot([0, 1], [0.2, 1.2],'--', c='orange') ax[0].plot([0, 1], [-0.2, 0.8],'--', c='orange') ax[0].plot([0, 1], [0.1, 1.1],'--', c='pink') ax[0].plot([0, 1], [-0.1, 0.9],'--', c='pink') ax[0].set_title('Training Set - Y1') ax[0].set_ylim(0, 1) ax[0].grid(which='both', alpha=0.8, c='white') ax[0].set_facecolor('#eaeaf2') ax[0].spines['bottom'].set_color('white') ax[0].spines['top'].set_color('white')
ax[0].spines['right'].set_color('white') ax[0].spines['left'].set_color('white') ax[1].axis('equal') ax[1].scatter(test_labels[:, 0], model.predict(test_features)[:, 0], marker='^', color='g', edgecolor='k') ax[1].plot([0, 1], [0, 1], c='k') ax[1].plot([0, 1], [0.2, 1.2],'--', c='orange') ax[1].plot([0, 1], [-0.2, 0.8],'--', c='orange') ax[1].plot([0, 1], [0.1, 1.1],'--', c='pink') ax[1].plot([0, 1], [-0.1, 0.9],'--', c='pink') ax[1].set_title('Validation Set - Y1') ax[1].set_ylim(0, 1) ax[1].grid(which='both', alpha=0.8, c='white') ax[1].set_facecolor('#eaeaf2') ax[1].spines['bottom'].set_color('white') ax[1].spines['top'].set_color('white') ax[1].spines['right'].set_color('white') ax[1].spines['left'].set_color('white') import numpy as np from sklearn.metrics import r2_score from scipy.stats import pearsonr for i in range(np.shape(train_labels)[1]): metrics= { 'mae-train': np.mean(np.abs(train_labels[:, i] - model.predict(train_features)[:, i])), 'mse-train': np.mean(np.square(train_labels[:, i] - model.predict(train_features)[:, i])), 'r2-train': r2_score(train_labels[:, i], model.predict(train_features)[:, i]), 'pearson-train': pearsonr(train_labels[:, i], model.predict(train_features)[:, i])[0], 'mae-test': np.mean(np.abs(test_labels[:, i] - model.predict(test_features)[:, i])), 'mse-test': np.mean(np.square(test_labels[:, i] - model.predict(test_features)[:, i])), 'r2-test': r2_score(test_labels[:, i] ,model.predict(test_features)[:, i]), 'pearson-test': pearsonr(test_labels[:, i], model.predict(test_features)[:, i])[0] } blue = lambda x: '\033[94m' + x + '\033[0m' yellow = lambda x: '\033[93m' + x + '\033[0m' for key in metrics: if 'train' in key: print(f'Y{i} - {blue(key)} - {str(metrics[key])[:7]}') else: print(f'Y{i} - {yellow(key)} - {str(metrics[key])[:7]}') ``` # Greetings --- ``` !pip install art from art import tprint, aprint tprint('See you on next videos!') def subscribe(): """ Attractive subscription form """ aprint("giveme", number=5) 
print(f'\n\tLike and subscribe to support this work!\n') aprint("giveme", number=5) subscribe() ```
``` # reload packages %load_ext autoreload %autoreload 2 ``` ### Choose GPU ``` %env CUDA_DEVICE_ORDER=PCI_BUS_ID %env CUDA_VISIBLE_DEVICES=3 import tensorflow as tf gpu_devices = tf.config.experimental.list_physical_devices('GPU') if len(gpu_devices)>0: tf.config.experimental.set_memory_growth(gpu_devices[0], True) print(gpu_devices) tf.keras.backend.clear_session() ``` ### Load packages ``` import tensorflow as tf import numpy as np import matplotlib.pyplot as plt from tqdm.autonotebook import tqdm from IPython import display import pandas as pd import umap import copy import os, tempfile import tensorflow_addons as tfa import pickle ``` ### parameters ``` dataset = "cifar10" labels_per_class = 4 # 'full' n_latent_dims = 1024 confidence_threshold = 0.8 # minimum confidence to include in UMAP graph for learned metric learned_metric = True # whether to use a learned metric, or Euclidean distance between datapoints augmented = False # min_dist= 0.001 # min_dist parameter for UMAP negative_sample_rate = 5 # how many negative samples per positive sample batch_size = 128 # batch size optimizer = tf.keras.optimizers.Adam(1e-3) # the optimizer to train optimizer = tfa.optimizers.MovingAverage(optimizer) label_smoothing = 0.2 # how much label smoothing to apply to categorical crossentropy max_umap_iterations = 500 # how many times, maximum, to recompute UMAP max_epochs_per_graph = 10 # how many epochs maximum each graph trains for (without early stopping) graph_patience = 10 # how many times without improvement to train a new graph min_graph_delta = 0.0025 # minimum improvement on validation acc to consider an improvement for training from datetime import datetime datestring = datetime.now().strftime("%Y_%m_%d_%H_%M_%S_%f") datestring = ( str(dataset) + "_" + str(confidence_threshold) + "_" + str(labels_per_class) + "____" + datestring + '_umap_augmented' ) print(datestring) ``` #### Load dataset ``` from tfumap.semisupervised_keras import load_dataset ( X_train, X_test, 
X_labeled, Y_labeled, Y_masked, X_valid, Y_train, Y_test, Y_valid, Y_valid_one_hot, Y_labeled_one_hot, num_classes, dims ) = load_dataset(dataset, labels_per_class) ``` ### load architecture ``` from tfumap.semisupervised_keras import load_architecture encoder, classifier, embedder = load_architecture(dataset, n_latent_dims) ``` ### load pretrained weights ``` from tfumap.semisupervised_keras import load_pretrained_weights encoder, classifier = load_pretrained_weights(dataset, augmented, labels_per_class, encoder, classifier) ``` #### compute pretrained accuracy ``` # test current acc pretrained_predictions = classifier.predict(encoder.predict(X_test, verbose=True), verbose=True) pretrained_predictions = np.argmax(pretrained_predictions, axis=1) pretrained_acc = np.mean(pretrained_predictions == Y_test) print('pretrained acc: {}'.format(pretrained_acc)) ``` ### get a, b parameters for embeddings ``` from tfumap.semisupervised_keras import find_a_b a_param, b_param = find_a_b(min_dist=min_dist) ``` ### build network ``` from tfumap.semisupervised_keras import build_model model = build_model( batch_size=batch_size, a_param=a_param, b_param=b_param, dims=dims, encoder=encoder, classifier=classifier, negative_sample_rate=negative_sample_rate, optimizer=optimizer, label_smoothing=label_smoothing, embedder = embedder, ) ``` ### build labeled iterator ``` from tfumap.semisupervised_keras import build_labeled_iterator labeled_dataset = build_labeled_iterator(X_labeled, Y_labeled_one_hot, augmented, dims) ``` ### training ``` from livelossplot import PlotLossesKerasTF from tfumap.semisupervised_keras import get_edge_dataset from tfumap.semisupervised_keras import zip_datasets ``` #### callbacks ``` # plot losses callback groups = {'acccuracy': ['classifier_accuracy', 'val_classifier_accuracy'], 'loss': ['classifier_loss', 'val_classifier_loss']} plotlosses = PlotLossesKerasTF(groups=groups) history_list = [] current_validation_acc = 0 batches_per_epoch = 
np.floor(len(X_train)/batch_size).astype(int) epochs_since_last_improvement = 0 current_umap_iterations = 0 current_epoch = 0 from tfumap.paths import MODEL_DIR, ensure_dir save_folder = MODEL_DIR / 'semisupervised-keras' / dataset / str(labels_per_class) / datestring ensure_dir(save_folder / 'test_loss.npy') for cui in tqdm(np.arange(current_epoch, max_umap_iterations)): if len(history_list) > graph_patience+1: previous_history = [np.mean(i.history['val_classifier_accuracy']) for i in history_list] best_of_patience = np.max(previous_history[-graph_patience:]) best_of_previous = np.max(previous_history[:-graph_patience]) if (best_of_previous + min_graph_delta) > best_of_patience: print('Early stopping') break # make dataset edge_dataset = get_edge_dataset( model, augmented, classifier, encoder, X_train, Y_masked, batch_size, confidence_threshold, labeled_dataset, dims, learned_metric = learned_metric ) # zip dataset zipped_ds = zip_datasets(labeled_dataset, edge_dataset, batch_size) # train dataset history = model.fit( zipped_ds, epochs= current_epoch + max_epochs_per_graph, initial_epoch = current_epoch, validation_data=( (X_valid, tf.zeros_like(X_valid), tf.zeros_like(X_valid)), {"classifier": Y_valid_one_hot}, ), callbacks = [plotlosses], max_queue_size = 100, steps_per_epoch = batches_per_epoch, #verbose=0 ) current_epoch+=len(history.history['loss']) history_list.append(history) # save score class_pred = classifier.predict(encoder.predict(X_test)) class_acc = np.mean(np.argmax(class_pred, axis=1) == Y_test) np.save(save_folder / 'test_loss.npy', (np.nan, class_acc)) # save weights encoder.save_weights((save_folder / "encoder").as_posix()) classifier.save_weights((save_folder / "classifier").as_posix()) # save history with open(save_folder / 'history.pickle', 'wb') as file_pi: pickle.dump([i.history for i in history_list], file_pi) current_umap_iterations += 1 previous_history = [np.mean(i.history['val_classifier_accuracy']) for i in history_list] 
best_of_patience = np.max(previous_history[-graph_patience:]) best_of_previous = np.max(previous_history[:-graph_patience]) if (best_of_previous + min_graph_delta) > best_of_patience: print('Early stopping') plt.plot(previous_history) save_folder ``` ### save embedding ``` z = encoder.predict(X_train) reducer = umap.UMAP(verbose=True) embedding = reducer.fit_transform(z.reshape(len(z), np.product(np.shape(z)[1:]))) plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_train.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10) np.save(save_folder / 'train_embedding.npy', embedding) ```
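The graph-level early-stopping rule used in the training loop above can be isolated into a small helper for clarity (a sketch for illustration; `should_stop` is a hypothetical name, the notebook inlines this logic):

```python
def should_stop(val_accuracies, patience, min_delta):
    # Stop when the best validation accuracy over the last `patience`
    # graphs fails to beat the best earlier accuracy by `min_delta`.
    if len(val_accuracies) <= patience + 1:
        return False  # not enough graphs trained yet to judge
    best_recent = max(val_accuracies[-patience:])
    best_earlier = max(val_accuracies[:-patience])
    return best_earlier + min_delta > best_recent

# Accuracy plateaued: the last two graphs barely improve on 0.60
print(should_stop([0.50, 0.60, 0.601, 0.602], patience=2, min_delta=0.0025))  # True
```

Note the asymmetry: improvement must exceed `min_delta` over the best *earlier* graph, so a long run of marginal gains still triggers the stop.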
# Create tables summarising contents of each dataset ``` import os from decimal import Decimal import numpy as np import pandas as pd # max_colwidth=-1 was deprecated; None disables column-width truncation pd.set_option('display.max_colwidth', None) ``` ## Paths to directories ``` # Network dataset construction directory network_dir = os.path.join(os.path.curdir, os.path.pardir, '1_network') # Generator dataset construction directory generators_dir = os.path.join(os.path.curdir, os.path.pardir, '2_generators') # Signals dataset construction directory signals_dir = os.path.join(os.path.curdir, os.path.pardir, '3_load_and_dispatch_signals') # Output directory output_dir = os.path.join(os.path.curdir, 'output') ``` ## Functions used to parse data ``` def get_numerical_range(df, column_name, no_round=False, round_lower=False, round_upper=False, sci_notation_lower=False, sci_notation_upper=False): "Format the range of values in a column as a LaTeX string" lower = df[column_name].min() upper = df[column_name].max() if no_round: return '${0}-{1}$'.format(lower, upper) else: if round_lower: lower_out = np.around(lower, decimals=round_lower) if round_upper: upper_out = np.around(upper, decimals=round_upper) if sci_notation_lower: lo = df[column_name].min() lo_exp = '{:.2e}'.format(lo) # Exponential notation # Make latex friendly lo_exp = lo_exp.replace('-0','-') lower_out = lo_exp.replace('e', ' \times 10^{') + '}' if sci_notation_upper: up = df[column_name].max() up_exp = '{:.2e}'.format(up) # Exponential notation # Make latex friendly up_exp = up_exp.replace('-0','-') upper_out = up_exp.replace('e', ' \times 10^{') + '}' return '${0}-{1}$'.format(lower_out, upper_out) def add_caption(table, caption, label): "Add caption to table" return table.replace('\\end{tabular}\n', '\\end{tabular}\n\\caption{%s}\n\\label{%s}\n' % (caption, label)) def wrap_in_table(table): "Wrap tabular in table environment" table = table.replace('\\begin{tabular}', '\\begin{table}[H]\n\\begin{tabular}') table = table + '\\end{table}' return table def format_table(df_out, caption, label, filename,
add_caption_and_label=False): "Format table to add caption and labels" # Reset index and rename column df_out = df_out.reset_index().rename(columns={'index': 'Col. Name'}) # Format col names so underscores don't cause errors df_out['Col. Name'] = df_out['Col. Name'].map(lambda x: x.replace('_', '\_')) df_out.index = range(1, len(df_out.index) + 1) df_out.index.name = 'Col.' df_out = df_out.reset_index() # Raw table table = df_out.to_latex(escape=False, index=False, multicolumn=False) # Add caption and labels and wrap in table environment if add_caption_and_label: table_out = add_caption(table, caption=caption, label=label) table_out = wrap_in_table(table_out) else: table_out = table # Save to file with open(os.path.join(output_dir, filename), 'w') as f: f.write(table_out) return table_out ``` ## Network ### Nodes ``` def create_network_nodes_table(): "Create summary of network node datasets" # Input DataFrame df = pd.read_csv(os.path.join(network_dir, 'output', 'network_nodes.csv')) # Initialise output DataFrame df_out = pd.DataFrame(index=df.columns, columns=['Format', 'Units', 'Range', 'Description']) df_out.loc['NODE_ID'] = {'Format': 'str', 'Units': '-', 'Range': get_numerical_range(df, 'NODE_ID', no_round=True), 'Description': 'Node ID'} df_out.loc['STATE_NAME'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'State in which node is located'} df_out.loc['NEM_REGION'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'NEM region in which node is located'} df_out.loc['NEM_ZONE'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'NEM zone in which node is located'} df_out.loc['VOLTAGE_KV'] = {'Format': 'int', 'Units': 'kV', 'Range': get_numerical_range(df, 'VOLTAGE_KV', no_round=True), 'Description': 'Node voltage'} df_out.loc['RRN'] = {'Format': 'int', 'Units': '-', 'Range': get_numerical_range(df, 'RRN', no_round=True), 'Description': 'If 1 node is a RRN, if 0 node is not a RNN'} df_out.loc['PROP_REG_D'] = {'Format': 
'float', 'Units': '-', 'Range': get_numerical_range(df, 'PROP_REG_D', round_lower=3, round_upper=3), 'Description': 'Proportion of NEM regional demand consumed at node'} df_out.loc['LATITUDE'] = {'Format': 'float', 'Units': 'N$^{\circ}$', 'Range': get_numerical_range(df, 'LATITUDE', round_lower=1, round_upper=1), 'Description': 'Latitude (GDA94)'} df_out.loc['LONGITUDE'] = {'Format': 'float', 'Units': 'E$^{\circ}$', 'Range': get_numerical_range(df, 'LONGITUDE', round_lower=1, round_upper=1), 'Description': 'Longitude (GDA94)'} # Output table after formatting table_out = format_table(df_out, caption='Network nodes dataset summary', label='tab: nodes', filename='network_nodes.tex', add_caption_and_label=True) return table_out create_network_nodes_table() ``` ### AC edges ``` def create_network_edges_table(): "Create table summarising AC network edges dataset" # Input DataFrame df = pd.read_csv(os.path.join(network_dir, 'output', 'network_edges.csv')) # Initialise output DataFrame df_out = pd.DataFrame(index=df.columns, columns=['Format', 'Units', 'Range', 'Description']) df_out.loc['LINE_ID'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'Network edge ID'} df_out.loc['NAME'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'Name of network edge'} df_out.loc['FROM_NODE'] = {'Format': 'int', 'Units': '-', 'Range': get_numerical_range(df, 'FROM_NODE', no_round=True), 'Description': 'Node ID for origin node'} df_out.loc['TO_NODE'] = {'Format': 'int', 'Units': '-', 'Range': get_numerical_range(df, 'TO_NODE', no_round=True), 'Description': 'Node ID for destination node'} df_out.loc['R_PU'] = {'Format': 'float', 'Units': 'p.u.', 'Range': get_numerical_range(df, 'R_PU', sci_notation_lower=True, round_upper=3), 'Description': 'Per-unit resistance'} df_out.loc['X_PU'] = {'Format': 'float', 'Units': 'p.u.', 'Range': get_numerical_range(df, 'X_PU', sci_notation_lower=True, round_upper=3), 'Description': 'Per-unit reactance'} df_out.loc['B_PU'] = 
{'Format': 'float', 'Units': 'p.u.', 'Range': get_numerical_range(df, 'B_PU', sci_notation_lower=True, round_upper=3), 'Description': 'Per-unit susceptance'} df_out.loc['NUM_LINES'] = {'Format': 'int', 'Units': '-', 'Range': get_numerical_range(df, 'NUM_LINES', no_round=True), 'Description': 'Number of parallel lines'} df_out.loc['LENGTH_KM'] = {'Format': 'float', 'Units': 'km', 'Range': get_numerical_range(df, 'LENGTH_KM', round_lower=2, round_upper=1), 'Description': 'Line length'} df_out.loc['VOLTAGE_KV'] = {'Format': 'float', 'Units': 'kV', 'Range': get_numerical_range(df, 'VOLTAGE_KV', no_round=True), 'Description': 'Line voltage'} # Output table after formatting table_out = format_table(df_out, caption='Network edges dataset summary', label='tab: edges', filename='network_edges.tex', add_caption_and_label=True) return table_out create_network_edges_table() ``` ### HVDC links ``` def create_hvdc_links_table(): "Create summary of HVDC links dataset" df = pd.read_csv(os.path.join(network_dir, 'output', 'network_hvdc_links.csv')) # Initialise output DataFrame df_out = pd.DataFrame(index=df.columns, columns=['Format', 'Units', 'Range', 'Description']) df_out.loc['HVDC_LINK_ID'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'HVDC link ID'} df_out.loc['FROM_NODE'] = {'Format': 'int', 'Units': '-', 'Range': get_numerical_range(df, 'FROM_NODE', no_round=True), 'Description': 'Node ID of origin node'} df_out.loc['TO_NODE'] = {'Format': 'int', 'Units': '-', 'Range': get_numerical_range(df, 'TO_NODE', no_round=True), 'Description': 'Node ID of destination node'} df_out.loc['FORWARD_LIMIT_MW'] = {'Format': 'float', 'Units': 'MW', 'Range': get_numerical_range(df, 'FORWARD_LIMIT_MW', no_round=True), 'Description': "`From' node to `To' node power-flow limit"} df_out.loc['REVERSE_LIMIT_MW'] = {'Format': 'float', 'Units': 'MW', 'Range': get_numerical_range(df, 'REVERSE_LIMIT_MW', no_round=True), 'Description': "`To' node to `From' node power-flow limit"} 
df_out.loc['VOLTAGE_KV'] = {'Format': 'float', 'Units': 'kV', 'Range': get_numerical_range(df, 'VOLTAGE_KV', no_round=True), 'Description': 'HVDC link voltage'} # Output table after formatting table_out = format_table(df_out, caption='Network HVDC links dataset summary', label='tab: hvdc links', filename='network_hvdc_links.tex', add_caption_and_label=True) return table_out create_hvdc_links_table() ``` ### AC interconnector links ``` def create_ac_interconnector_links_table(): "Create table summarising AC interconnector connection point locations" # Input DataFrame df = pd.read_csv(os.path.join(network_dir, 'output', 'network_ac_interconnector_links.csv')) # Initialise output DataFrame df_out = pd.DataFrame(index=df.columns, columns=['Format', 'Units', 'Range', 'Description']) df_out.loc['INTERCONNECTOR_ID'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'AC interconnector ID'} df_out.loc['FROM_NODE'] = {'Format': 'int', 'Units': '-', 'Range': get_numerical_range(df, 'FROM_NODE', no_round=True), 'Description': 'Node ID of origin node'} df_out.loc['FROM_REGION'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': "Region in which `From' node is located"} df_out.loc['TO_NODE'] = {'Format': 'int', 'Units': '-', 'Range': get_numerical_range(df, 'TO_NODE', no_round=True), 'Description': 'Node ID for destination node'} df_out.loc['TO_REGION'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': "Region in which `To' node is located"} df_out.loc['VOLTAGE_KV'] = {'Format': 'float', 'Units': 'kV', 'Range': get_numerical_range(df, 'VOLTAGE_KV', no_round=True), 'Description': 'Line voltage'} # Output table after formatting table_out = format_table(df_out, caption='AC interconnector locations dataset summary', label='tab: interconnectors - links', filename='network_ac_interconnector_links.tex', add_caption_and_label=True) return table_out create_ac_interconnector_links_table() ``` ### Interconnector flow limits ``` def 
create_ac_interconnector_flow_limits_table(): "Create table summarising interconnector flow limits" # Input DataFrame df = pd.read_csv(os.path.join(network_dir, 'output', 'network_ac_interconnector_flow_limits.csv')) # Initialise output DataFrame df_out = pd.DataFrame(index=df.columns, columns=['Format', 'Units', 'Range', 'Description']) df_out.loc['INTERCONNECTOR_ID'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'AC interconnector ID'} df_out.loc['FROM_REGION'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': "Region in which `From' node is located"} df_out.loc['TO_REGION'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': "Region in which `To' node is located"} df_out.loc['FORWARD_LIMIT_MW'] = {'Format': 'float', 'Units': 'MW', 'Range': get_numerical_range(df, 'FORWARD_LIMIT_MW', no_round=True), 'Description': "`From' node to `To' node power-flow limit"} df_out.loc['REVERSE_LIMIT_MW'] = {'Format': 'float', 'Units': 'MW', 'Range': get_numerical_range(df, 'REVERSE_LIMIT_MW', no_round=True), 'Description': "`To' node to `From' node power-flow limit"} # Output table after formatting table_out = format_table(df_out, caption='AC interconnector flow limits summary', label='tab: interconnectors - flow limits', filename='network_ac_interconnector_flow_limits.tex', add_caption_and_label=True) return table_out create_ac_interconnector_flow_limits_table() ``` ## Generators ``` def create_generators_tables(): "Create table summarising fields in generator datasets" # Input DataFrame column_dtypes = {'NODE': int, 'REG_CAP': int, 'RR_STARTUP': int, 'RR_SHUTDOWN': int, 'RR_UP': int, 'RR_DOWN': int, 'MIN_ON_TIME': int, 'MIN_OFF_TIME': int, 'SU_COST_COLD': int, 'SU_COST_WARM': int, 'SU_COST_HOT': int} df = pd.read_csv(os.path.join(generators_dir, 'output', 'generators.csv'), dtype=column_dtypes) # Initialise output DataFrame df_out = pd.DataFrame(index=df.columns, columns=['Format', 'Units', 'Range', 'Description']) df_out.loc['DUID'] = 
{'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'Unique ID for each unit'} df_out.loc['STATIONID'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'ID of station to which DUID belongs'} df_out.loc['STATIONNAME'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'Name of station to which DUID belongs'} df_out.loc['NEM_REGION'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'Region in which DUID is located'} df_out.loc['NEM_ZONE'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'Zone in which DUID is located'} df_out.loc['NODE'] = {'Format': 'int', 'Units': '-', 'Range': get_numerical_range(df, 'NODE', no_round=True), 'Description': 'Node to which DUID is assigned'} df_out.loc['FUEL_TYPE'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'Primary fuel type'} df_out.loc['FUEL_CAT'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'Primary fuel category'} df_out.loc['EMISSIONS'] = {'Format': 'float', 'Units': 'tCO$_{2}$/MWh', 'Range': get_numerical_range(df, 'EMISSIONS', round_lower=2, round_upper=2), 'Description': 'Equivalent CO$_{2}$ emissions intensity'} df_out.loc['SCHEDULE_TYPE'] = {'Format': 'str', 'Units': '-', 'Range': '-', 'Description': 'Schedule type for unit'} df_out.loc['REG_CAP'] = {'Format': 'float', 'Units': 'MW', 'Range': get_numerical_range(df, 'REG_CAP', no_round=True), 'Description': 'Registered capacity'} df_out.loc['MIN_GEN'] = {'Format': 'float', 'Units': 'MW', 'Range': get_numerical_range(df, 'MIN_GEN', no_round=True), 'Description': 'Minimum dispatchable output'} df_out.loc['RR_STARTUP'] = {'Format': 'float', 'Units': 'MW/h', 'Range': get_numerical_range(df, 'RR_STARTUP', no_round=True), 'Description': 'Ramp-rate for start-up'} df_out.loc['RR_SHUTDOWN'] = {'Format': 'float', 'Units': 'MW/h', 'Range': get_numerical_range(df, 'RR_SHUTDOWN', no_round=True), 'Description': 'Ramp-rate for shut-down'} df_out.loc['RR_UP'] = {'Format': 'float', 'Units': 
'MW/h', 'Range': get_numerical_range(df, 'RR_UP', no_round=True), 'Description': 'Ramp-rate up when running'} df_out.loc['RR_DOWN'] = {'Format': 'float', 'Units': 'MW/h', 'Range': get_numerical_range(df, 'RR_DOWN', no_round=True), 'Description': 'Ramp-rate down when running'} df_out.loc['MIN_ON_TIME'] = {'Format': 'int', 'Units': 'h', 'Range': get_numerical_range(df, 'MIN_ON_TIME', no_round=True), 'Description': 'Minimum on time'} df_out.loc['MIN_OFF_TIME'] = {'Format': 'int', 'Units': 'h', 'Range': get_numerical_range(df, 'MIN_OFF_TIME', no_round=True), 'Description': 'Minimum off time'} df_out.loc['SU_COST_COLD'] = {'Format': 'int', 'Units': '\$', 'Range': get_numerical_range(df, 'SU_COST_COLD', no_round=True), 'Description': 'Cold start start-up cost'} df_out.loc['SU_COST_WARM'] = {'Format': 'int', 'Units': '\$', 'Range': get_numerical_range(df, 'SU_COST_WARM', no_round=True), 'Description': 'Warm start start-up cost'} df_out.loc['SU_COST_HOT'] = {'Format': 'int', 'Units': '\$', 'Range': get_numerical_range(df, 'SU_COST_HOT', no_round=True), 'Description': 'Hot start start-up cost'} df_out.loc['VOM'] = {'Format': 'float', 'Units': '\$/MWh', 'Range': get_numerical_range(df, 'VOM', no_round=True), 'Description': 'Variable operations and maintenance costs'} df_out.loc['HEAT_RATE'] = {'Format': 'float', 'Units': 'GJ/MWh', 'Range': get_numerical_range(df, 'HEAT_RATE', round_lower=1, round_upper=1), 'Description': 'Heat rate'} df_out.loc['NL_FUEL_CONS'] = {'Format': 'float', 'Units': '-', 'Range': get_numerical_range(df, 'NL_FUEL_CONS', no_round=True), 'Description': 'No-load fuel consumption as a proportion of full load consumption'} df_out.loc['FC_2016-17'] = {'Format': 'float', 'Units': '\$/GJ', 'Range': get_numerical_range(df, 'FC_2016-17', round_lower=1, round_upper=1), 'Description': 'Fuel cost for the year 2016-17'} df_out.loc['SRMC_2016-17'] = {'Format': 'float', 'Units': '\$/MWh', 'Range': get_numerical_range(df, 'SRMC_2016-17', round_lower=1, round_upper=1), 
'Description': 'Short-run marginal cost for the year 2016-17'} # Sources for generator dataset records source = dict() source['DUID'] = '\\cite{aemo_data_2018}' source['STATIONID'] = '\\cite{aemo_data_2018}' source['STATIONNAME'] = '\\cite{aemo_data_2018}' source['FUEL_TYPE'] = '\\cite{aemo_data_2018}' source['EMISSIONS'] = '\\cite{aemo_current_2018}' source['SCHEDULE_TYPE'] = '\\cite{aemo_data_2018}' source['REG_CAP'] = '\\cite{aemo_data_2018}' source['MIN_GEN'] = '\\cite{aemo_data_2018, aemo_ntndp_2018}' source['RR_STARTUP'] = '\\cite{aemo_ntndp_2018}' source['RR_SHUTDOWN'] = '\\cite{aemo_ntndp_2018}' source['RR_UP'] = '\\cite{aemo_ntndp_2018}' source['RR_DOWN'] = '\\cite{aemo_ntndp_2018}' source['MIN_ON_TIME'] = '\\cite{aemo_ntndp_2018}' source['MIN_OFF_TIME'] = '\\cite{aemo_ntndp_2018}' source['SU_COST_COLD'] = '\\cite{aemo_ntndp_2018}' source['SU_COST_WARM'] = '\\cite{aemo_ntndp_2018}' source['SU_COST_HOT'] = '\\cite{aemo_ntndp_2018}' source['VOM'] = '\\cite{aemo_ntndp_2018}' source['HEAT_RATE'] = '\\cite{aemo_ntndp_2018}' source['NL_FUEL_CONS'] = '\\cite{aemo_ntndp_2018}' source['FC_2016-17'] = '\\cite{aemo_ntndp_2018}' # Note: '\\tnote' must be escaped, otherwise Python reads '\t' as a tab character df_out['Source\\tnote{$\dagger$}'] = df_out.apply(lambda x: '' if x.name not in source.keys() else source[x.name], axis=1) # Include double dagger symbol for heatrate record df_out = df_out.rename(index={'HEAT_RATE': 'HEAT_RATE\\tnote{$\ddagger$}'}) table_out = format_table(df_out, caption='Generator dataset summary', label='tab: generator dataset', filename='generators.tex') # Wrap in three part table. Append environments to beginning of tabular table_out = '\\begin{table}[H]\n\\begin{threeparttable}\n\\centering\n\\small' + table_out # Append table notes and environment to end of table append_to_end = """\\begin{tablenotes} \\item[$\\dagger$] Where no source is given, the value has been derived as part of the dataset construction procedure. NEM\\_REGION and NEM\\_ZONE were found by determining the region and zone of each generator's assigned node. 
FUEL\\_CAT assigns a generic category to FUEL\\_TYPE. MIN\\_GEN was computed by combining minimum output as a proportion of nameplate capacity from~\cite{aemo_ntndp_2018} with registered capacities from~\\cite{aemo_data_2018}. SRMC\\_2016-17 is calculated from VOM, HEAT\\_RATE, and FC\\_2016-17 fields, using equation~(\\ref{eqn: SRMC calculation}). \\item[$\\ddagger$] While not explicitly stated, it is assumed that a lower heating value is referred to. This is consistent with another field in~\\cite{aemo_ntndp_2018} that gives DUID thermal efficiency in terms of lower heating values. \\end{tablenotes} \\end{threeparttable} \\caption{Generator dataset summary} \\label{tab: generator dataset} \\end{table}""" table_out = table_out + append_to_end # Save to file with open(os.path.join(output_dir, 'generators.tex'), 'w') as f: f.write(table_out) return table_out create_generators_tables() ``` ## Load and dispatch signals ### Load signals ``` def create_load_signals_table(): df = pd.read_csv(os.path.join(signals_dir, 'output', 'signals_regional_load.csv')) # Initialise output DataFrame df_out = pd.DataFrame(index=df.columns, columns=['Format', 'Units', 'Range', 'Description']) df_out.loc['SETTLEMENTDATE'] = {'Format': 'timestamp', 'Units': '-', 'Range': '1/6/2017 12:30:00 AM - 1/7/2017 12:00:00 AM', 'Description': 'Trading interval'} df_out.loc['NSW1'] = {'Format': 'float', 'Units': 'MW', 'Range': get_numerical_range(df, 'NSW1', round_lower=1, round_upper=1), 'Description': 'New South Wales demand signal'} df_out.loc['QLD1'] = {'Format': 'float', 'Units': 'MW', 'Range': get_numerical_range(df, 'QLD1', round_lower=1, round_upper=1), 'Description': 'Queensland demand signal'} df_out.loc['SA1'] = {'Format': 'float', 'Units': 'MW', 'Range': get_numerical_range(df, 'SA1', round_lower=1, round_upper=1), 'Description': 'South Australia demand signal'} df_out.loc['TAS1'] = {'Format': 'float', 'Units': 'MW', 'Range': get_numerical_range(df, 'TAS1', round_lower=1, round_upper=1), 
'Description': 'Tasmania demand signal'} df_out.loc['VIC1'] = {'Format': 'float', 'Units': 'MW', 'Range': get_numerical_range(df, 'VIC1', round_lower=1, round_upper=1), 'Description': 'Victoria demand signal'} table_out = format_table(df_out, caption='Regional demand signals dataset summary', label='tab: regional demand signals', filename='signals_regional_demand.tex', add_caption_and_label=True) return table_out create_load_signals_table() ``` ### Dispatch signals ``` df = pd.read_csv(os.path.join(signals_dir, 'output', 'signals_dispatch.csv')) df_out = pd.DataFrame(columns=['Format', 'Units', 'Range', 'Description']) df_out.loc['SETTLEMENTDATE'] = {'Format': 'timestamp', 'Units': '-', 'Range': '1/6/2017 12:30:00 AM - 1/7/2017 12:00:00 AM', 'Description': 'Trading interval'} df_out.loc['(DUID)'] = {'Format': 'float', 'Units': 'MW', 'Range': '-', 'Description': 'DUID dispatch profile'} # Rename columns df_out = df_out.reset_index().rename(columns={'index': 'Col. Name'}) df_out.index.name = 'Col.' df_out = df_out.rename(index={0: '1', 1: '2-{0}'.format(len(df.columns))}) df_out = df_out.reset_index() # Raw table table = df_out.to_latex(escape=False, index=False, multicolumn=False) table_out = add_caption(table, caption='DUID dispatch profiles. Columns correspond to DUIDs.', label='tab: duid dispatch profiles') table_out = wrap_in_table(table_out) # Save to file with open(os.path.join(output_dir, 'signals_dispatch.tex'), 'w') as f: f.write(table_out) ```
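The table-building cells above lean heavily on a `get_numerical_range` helper that is defined elsewhere in this notebook. As a rough, hedged sketch of the behaviour those calls imply (a formatted "min - max" string for a numeric column, with optional rounding via `round_lower`/`round_upper` and a `no_round` bypass), here is a dependency-free stand-in; the real helper presumably operates on a pandas DataFrame, whereas this version accepts a plain mapping of column names to value lists.

```python
# Hypothetical sketch of the get_numerical_range helper used above.
# Assumption: the real helper takes a pandas DataFrame; a plain
# dict of column-name -> list of numbers is used here to stay self-contained.

def get_numerical_range(data, column, round_lower=None, round_upper=None, no_round=False):
    """Return a 'min - max' string summarising a numeric column."""
    values = [v for v in data[column] if v is not None]
    lo, hi = min(values), max(values)
    if not no_round:
        if round_lower is not None:
            lo = round(lo, round_lower)
        if round_upper is not None:
            hi = round(hi, round_upper)
    return f"{lo} - {hi}"

# Example usage with a hypothetical registered-capacity column
df = {"REG_CAP": [30, 120.5, 660, 1450]}
print(get_numerical_range(df, "REG_CAP", no_round=True))  # 30 - 1450
```

Keeping the range formatting in one helper means every summary table renders its "Range" column consistently, which is the pattern the cells above rely on.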
``` import numpy as np import keras from keras import layers from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D from keras.models import Model, load_model from keras.preprocessing import image from keras.utils import layer_utils from keras.utils.data_utils import get_file from keras.applications.imagenet_utils import preprocess_input import pydot from IPython.display import SVG from keras.utils.vis_utils import model_to_dot # from keras.utils import plot_model from keras.utils.vis_utils import plot_model from keras.initializers import glorot_uniform import scipy.misc from matplotlib.pyplot import imshow import tensorflow as tf import glob from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt import graphviz %matplotlib inline import keras.backend as K K.set_image_data_format('channels_last') K.set_learning_phase(1) # GRADED FUNCTION: identity_block def identity_block(X, f, filters, stage, block): """ Implementation of the identity block as defined in Figure 3 Arguments: X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev) f -- integer, specifying the shape of the middle CONV's window for the main path filters -- python list of integers, defining the number of filters in the CONV layers of the main path stage -- integer, used to name the layers, depending on their position in the network block -- string/character, used to name the layers, depending on their position in the network Returns: X -- output of the identity block, tensor of shape (n_H, n_W, n_C) """ # defining name basis conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' # Retrieve Filters F1, F2, F3 = filters # Save the input value. You'll need this later to add back to the main path. 
X_shortcut = X # First component of main path X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X) X = Activation('relu')(X) ### START CODE HERE ### # Second component of main path (≈3 lines) X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1, 1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed = 0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X) X = Activation('relu')(X) # Third component of main path (≈2 lines) X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1, 1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed = 0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X) # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines) X = Add()([X, X_shortcut]) X = Activation('relu')(X) ### END CODE HERE ### return X # GRADED FUNCTION: convolutional_block def convolutional_block(X, f, filters, stage, block, s=2): """ Implementation of the convolutional block as defined in Figure 4 Arguments: X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev) f -- integer, specifying the shape of the middle CONV's window for the main path filters -- python list of integers, defining the number of filters in the CONV layers of the main path stage -- integer, used to name the layers, depending on their position in the network block -- string/character, used to name the layers, depending on their position in the network s -- Integer, specifying the stride to be used Returns: X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C) """ # defining name basis conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' # Retrieve Filters F1, F2, F3 = filters # Save the input value 
X_shortcut = X ##### MAIN PATH ##### # First component of main path X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X) X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X) X = Activation('relu')(X) ### START CODE HERE ### # Second component of main path (≈3 lines) X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X) X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X) X = Activation('relu')(X) # Third component of main path (≈2 lines) X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X) X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X) ##### SHORTCUT PATH #### (≈2 lines) X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut) X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut) # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines) X = Add()([X, X_shortcut]) X = Activation('relu')(X) ### END CODE HERE ### return X # GRADED FUNCTION: ResNet50 # GRADED FUNCTION: ResNet50 def ResNet50(input_shape=(64, 64, 3), classes=6): """ Implementation of the popular ResNet50 the following architecture: CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3 -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER Arguments: input_shape -- shape of the images of the dataset classes -- integer, number of classes Returns: model -- a Model() instance in Keras """ # Define the input as a tensor with shape input_shape X_input = Input(input_shape) # Zero-Padding X = ZeroPadding2D((3, 3))(X_input) # Stage 1 X = Conv2D(64, (7, 7), strides=(2, 2), 
name='conv1', kernel_initializer=glorot_uniform(seed=0))(X) X = BatchNormalization(axis=3, name='bn_conv1')(X) X = Activation('relu')(X) X = MaxPooling2D((3, 3), strides=(2, 2))(X) # Stage 2 X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1) X = identity_block(X, 3, [64, 64, 256], stage=2, block='b') X = identity_block(X, 3, [64, 64, 256], stage=2, block='c') ### START CODE HERE ### # Stage 3 (≈4 lines) X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2) X = identity_block(X, 3, [128, 128, 512], stage=3, block='b') X = identity_block(X, 3, [128, 128, 512], stage=3, block='c') X = identity_block(X, 3, [128, 128, 512], stage=3, block='d') # Stage 4 (≈6 lines) X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2) X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b') X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c') X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d') X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e') X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f') # Stage 5 (≈3 lines) X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2) X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b') X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c') # AVGPOOL (≈1 line). 
Use "X = AveragePooling2D(...)(X)" X = AveragePooling2D(pool_size=(2, 2), padding='same')(X) ### END CODE HERE ### # output layer X = Flatten()(X) X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer=glorot_uniform(seed=0))(X) # Create model model = Model(inputs=X_input, outputs=X, name='ResNet50') return model (X_train, Y_train), (X_test, Y_test) = keras.datasets.mnist.load_data() X_train = X_train.reshape(60000, 28, 28, 1) X_test = X_test.reshape(10000, 28, 28, 1) print('Shape of train data is:', X_train.shape) print('Shape of train lables is:', Y_train.shape) print('Shape of test data is:',X_test.shape) print('Shape of test lables is:', Y_test.shape) model = ResNet50(input_shape = (28, 28, 1), classes = 10) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) Y_train = keras.utils.np_utils.to_categorical(Y_train, 10) Y_test = keras.utils.np_utils.to_categorical(Y_test, 10) print('Reshaped Y_train:', Y_train.shape) print('Reshaped Y_test:', Y_test.shape) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_test /= 255 X_train /= 255 model.fit(X_train, Y_train, epochs = 1, batch_size = 32) ```
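The identity and convolutional blocks above both follow the same residual pattern: compute a main-path transformation F(x), add the (possibly projected) shortcut back in, then apply ReLU. A minimal pure-Python sketch of that skip-connection arithmetic (no Keras, toy 1-D "activations" only) makes the shape constraint explicit: the shortcut can only be added element-wise when it matches the main path's output shape, which is exactly why the convolutional block projects `X_shortcut` with a 1x1 convolution while the identity block passes the input through unchanged.

```python
# Toy sketch of the residual pattern behind identity_block / convolutional_block.
# The "layers" here are plain element-wise functions on lists, not real convolutions.

def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(x, main_path, shortcut=lambda v: v):
    """out = relu(F(x) + shortcut(x)); the shortcut defaults to the identity."""
    fx = main_path(x)
    sx = shortcut(x)
    # Element-wise addition requires matching shapes (the 1x1 projection's job).
    assert len(fx) == len(sx), "shortcut must match main-path output shape"
    return relu([a + b for a, b in zip(fx, sx)])

# Identity-style block: the shortcut is the raw input.
main = lambda v: [2.0 * x - 1.0 for x in v]  # stand-in for the conv->BN->ReLU stack
out = residual_block([1.0, -2.0, 0.5], main)
print(out)  # [2.0, 0.0, 0.5]
```

Because the shortcut carries x forward unchanged, the block only needs to learn the residual F(x), which is the key idea that lets ResNets train at 50+ layers.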
## _*Using QISKit ACQUA for stableset problems*_ This QISKit ACQUA Optimization notebook demonstrates how to use the VQE algorithm to compute the maximum stable set of a given graph. The problem is defined as follows. Given a graph $G = (V,E)$, we want to compute $S \subseteq V$ such that there do not exist $i, j \in S : (i, j) \in E$, and $|S|$ is maximized. In other words, we are looking for a maximum cardinality set of mutually non-adjacent vertices. The graph provided as an input is used first to generate an Ising Hamiltonian, which is then passed as an input to VQE. As a reference, this notebook also computes the maximum stable set using the Exact Eigensolver classical algorithm and the solver embedded in the commercial IBM CPLEX product (if it is available in the system and the user has followed the necessary configuration steps in order for QISKit ACQUA to find it). Please refer to the QISKit ACQUA Optimization documentation for installation and configuration details. ``` from qiskit_acqua import Operator, run_algorithm, get_algorithm_instance from qiskit_acqua.input import get_input_instance from qiskit_acqua.ising import stableset import numpy as np ``` Here an Operator instance is created for our Hamiltonian. In this case the Paulis are from an Ising Hamiltonian of the maximum stable set problem (expressed in minimization form). We load a small instance of the maximum stable set problem. ``` w = stableset.parse_gset_format('sample.maxcut') qubitOp, offset = stableset.get_stableset_qubitops(w) algo_input = get_input_instance('EnergyInput') algo_input.qubit_op = qubitOp ``` We also offer a function to generate a random graph as an input. 
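Before generating the operator in code, it may help to see the generic quadratic-penalty formulation that maps this problem onto an Ising Hamiltonian. This is the standard textbook form; the exact penalty weights used internally by `get_stableset_qubitops` may differ. With binary variables $x_i \in \{0,1\}$ indicating whether vertex $i$ belongs to $S$, maximizing $|S|$ subject to the independence constraint becomes the unconstrained minimization

$$ \min_{x \in \{0,1\}^{|V|}} \; -\sum_{i \in V} x_i \; + \; J \sum_{(i,j) \in E} x_i x_j, \qquad J > 1, $$

where the penalty $J$ makes any candidate set containing an adjacent pair strictly worse than dropping one of its endpoints. Substituting $x_i = (1 - z_i)/2$ with spins $z_i \in \{-1, +1\}$ then yields the Ising Hamiltonian whose ground state VQE approximates.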
``` if True: np.random.seed(8123179) w = stableset.random_graph(5, edge_prob=0.5) qubitOp, offset = stableset.get_stableset_qubitops(w) algo_input.qubit_op = qubitOp print(w) to_be_tested_algos = ['ExactEigensolver', 'CPLEX', 'VQE'] operational_algos = [] for algo in to_be_tested_algos: try: get_algorithm_instance(algo) operational_algos.append(algo) except: print("{} is unavailable, please check your setting.".format(algo)) print(operational_algos) ``` We can now use the Operator without regard to how it was created. First we need to prepare the configuration params to invoke the algorithm. Here we will use the ExactEigensolver first to return the smallest eigenvalue. A backend is not required since this is computed classically, not using quantum computation. We then add in the qubitOp Operator in dictionary format. Now the complete params can be passed to the algorithm and run. The result is a dictionary. ``` if 'ExactEigensolver' not in operational_algos: print("ExactEigensolver is not in operational algorithms.") else: algorithm_cfg = { 'name': 'ExactEigensolver', } params = { 'problem': {'name': 'ising'}, 'algorithm': algorithm_cfg } result = run_algorithm(params,algo_input) x = stableset.sample_most_likely(result['eigvecs'][0]) print('energy:', result['energy']) print('stable set objective:', result['energy'] + offset) print('solution:', stableset.get_graph_solution(x)) print('solution objective and feasibility:', stableset.stableset_value(x, w)) ``` We change the configuration parameters to solve it with the CPLEX backend. The CPLEX backend can deal with a particular type of Hamiltonian called Ising Hamiltonian, which consists of only Pauli Z at most second order and can be used for combinatorial optimization problems that can be formulated as quadratic unconstrained binary optimization problems, such as the stable set problem. Note that we may obtain a different solution - but if the objective value is the same as above, the solution will be optimal. 
``` if 'CPLEX' not in operational_algos: print("CPLEX is not in operational algorithms.") else: algorithm_cfg = { 'name': 'CPLEX', 'display': 0 } params = { 'problem': {'name': 'ising'}, 'algorithm': algorithm_cfg } result = run_algorithm(params, algo_input) x_dict = result['x_sol'] print('energy:', result['energy']) print('time:', result['time']) print('stable set objective:', result['energy'] + offset) x = np.array([x_dict[i] for i in sorted(x_dict.keys())]) print('solution:', stableset.get_graph_solution(x)) print('solution objective and feasibility:', stableset.stableset_value(x, w)) ``` Now we want VQE and so change it and add its other configuration parameters. VQE also needs an optimizer and a variational form. While we can omit them from the dictionary, such that defaults are used, here we specify them explicitly so we can set their parameters as we desire. ``` if 'VQE' not in operational_algos: print("VQE is not in operational algorithms.") else: algorithm_cfg = { 'name': 'VQE', 'operator_mode': 'matrix' } optimizer_cfg = { 'name': 'L_BFGS_B', 'maxfun': 2000 } var_form_cfg = { 'name': 'RYRZ', 'depth': 3, 'entanglement': 'linear' } params = { 'problem': {'name': 'ising'}, 'algorithm': algorithm_cfg, 'optimizer': optimizer_cfg, 'variational_form': var_form_cfg, 'backend': {'name': 'local_statevector_simulator'} } result = run_algorithm(params,algo_input) x = stableset.sample_most_likely(result['eigvecs'][0]) print('energy:', result['energy']) print('time:', result['eval_time']) print('stable set objective:', result['energy'] + offset) print('solution:', stableset.get_graph_solution(x)) print('solution objective and feasibility:', stableset.stableset_value(x, w)) ```
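Since the instances above are tiny (five vertices), a brute-force classical search is a useful third sanity reference alongside ExactEigensolver and CPLEX. This is a hedged, stand-alone sketch that does not use qiskit_acqua's `stableset` utilities; the adjacency matrix is a plain list of lists rather than the numpy array returned by `random_graph`.

```python
from itertools import combinations

def max_stable_set(adj):
    """Brute-force maximum stable (independent) set for a small graph.

    adj is a symmetric 0/1 adjacency matrix given as a list of lists.
    Returns the first maximum-cardinality subset of vertex indices found.
    """
    n = len(adj)
    # Try subsets from largest to smallest; the first independent one is optimal.
    for k in range(n, 0, -1):
        for subset in combinations(range(n), k):
            if all(adj[i][j] == 0 for i, j in combinations(subset, 2)):
                return list(subset)
    return []

# 4-cycle 0-1-2-3-0: the maximum stable sets are {0, 2} and {1, 3}
adj = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 1, 0]]
print(max_stable_set(adj))  # [0, 2]
```

The exponential cost of this search is of course the point: for graphs of any real size, exact enumeration is intractable, which motivates heuristics like VQE in the first place.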
# How to contribute to jupyter notebooks ``` from fastai.gen_doc.nbdoc import * from fastai.gen_doc.gen_notebooks import * from fastai.gen_doc import * ``` The documentation is built from notebooks in `docs_src/`. Follow the steps below to build documentation. For more information about generating and authoring notebooks, see [`fastai.gen_doc.gen_notebooks`](/gen_doc.gen_notebooks.html#gen_doc.gen_notebooks). ## Modules ### [`fastai.gen_doc.gen_notebooks`](/gen_doc.gen_notebooks.html#gen_doc.gen_notebooks) Generate and update notebook skeletons automatically from modules. Includes an overview of the whole authoring process. ### [`fastai.gen_doc.convert2html`](/gen_doc.convert2html.html#gen_doc.convert2html) Create HTML (jekyll) docs from notebooks. ### [`fastai.gen_doc.nbdoc`](/gen_doc.nbdoc.html#gen_doc.nbdoc) Underlying documentation functions; most important is [`show_doc`](/gen_doc.nbdoc.html#show_doc). ## Process for contributing to the docs If you want to help us and contribute to the docs, you just have to make modifications to the source notebooks, our scripts will then automatically convert them to HTML. There is just one script to run after cloning the fastai repo, to ensure that everything works properly. The rest of this page goes more in depth about all the functionalities this module offers, but if you just want to add a sentence or correct a typo, make a PR with the notebook changed and we'll take care of the rest. ### Thing to run after git clone Make sure you follow this recipe: git clone https://github.com/fastai/fastai cd fastai tools/run-after-git-clone This will take care of everything that is explained in the following two sections. We'll tell you what they do, but you need to execute just this one script. Note: windows users, not using bash emulation, will need to invoke the command as: python tools\run-after-git-clone If you're on windows, you also need to convert the Unix symlink between `docs_src\imgs` and `docs\imgs`. 
You will need to (1) remove `docs_src\imgs`, (2) execute `cmd.exe` as administrator, and (3) finally, in the `docs_src` folder, execute: mklink /d imgs ..\docs\imgs #### after-git-clone #1: a mandatory notebook strip out Currently we only store `source` code cells under git (and a few extra fields for documentation notebooks). If you would like to commit or submit a PR, you need to conform to that standard. This is done automatically during `diff`/`commit` git operations, but you need to configure your local repository once to activate that instrumentation. Therefore, your development process will always start with: tools/trust-origin-git-config The last command tells git to invoke configuration stored in `fastai/.gitconfig`, so your `git diff` and `git commit` invocations for this particular repository will now go via `tools/fastai-nbstripout` which will do all the work for you. You don't need to run it if you run: tools/run-after-git-clone If you skip this configuration your commit/PR involving notebooks will not be accepted, since it'll carry in it many JSON bits which we don't want in the git repository. Those unwanted bits create collisions and lead to unnecessarily complicated and time-wasting merge activities. So please do not skip this step. Note: we can't make this happen automatically, since git will ignore a repository-stored `.gitconfig` for security reasons, unless a user tells git to use it (and thus trusts it). If you'd like to check whether you already trusted git with using `fastai/.gitconfig` please look inside `fastai/.git/config`, which should have this entry: [include] path = ../.gitconfig or alternatively run: tools/trust-origin-git-config -t #### after-git-clone #2: automatically updating doc notebooks to be trusted on git pull We want the doc notebooks to be already trusted when you load them in `jupyter notebook`, so this script, which should be run once upon `git clone`, will install a `git` `post-merge` hook into your local checkout. 
The installed hook will be executed by git automatically at the end of `git pull` only if it triggered an actual merge event and that the latter was successful. To trust run: tools/trust-doc-nbs-install-hook You don't need to run it if you run: tools/run-after-git-clone To distrust run: rm .git/hooks/post-merge ### Validate any notebooks you're contributing to If you were using a text editor to make changes, when you are done working on a notebook improvement, please, make sure to validate that notebook's format, by simply loading it in the jupyter notebook. Alternatively, you could use a CLI JSON validation tool, e.g. [jsonlint](https://jsonlint.com/): jsonlint-php example.ipynb but it's second best, since you may have a valid JSON, but invalid notebook format, as the latter has extra requirements on which fields are valid and which are not. ## Building the documentation website The https://docs.fast.ai website is comprised from documentation notebooks converted to `.html`, `.md` files, jekyll metadata, jekyll templates (including the sidebar). * `.md` files are automatically converted by github pages (requires no extra action) * the sidebar and other jekyll templates under `docs/_data/` are automatically deployed by github pages (requires no extra action) * changes in jekyll metadata require a rebuild of the affected notebooks * changes in `.ipynb` nbs require a rebuild of the affected notebooks ### Updating sidebar 1. edit `docs_src/sidebar/sidebar_data.py` 2. `python tools/make_sidebar.py` 3. check `docs/_data/sidebars/home_sidebar.yml` 4. `git commit docs_src/sidebar/sidebar_data.py docs/_data/sidebars/home_sidebar.yml` [jekyll sidebar documentation](https://idratherbewriting.com/documentation-theme-jekyll/#configure-the-sidebar). 
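The sidebar workflow above goes from a Python data structure (`docs_src/sidebar/sidebar_data.py`) to generated YAML (`docs/_data/sidebars/home_sidebar.yml`) via `tools/make_sidebar.py`. The real generator handles the full jekyll sidebar schema; as a purely illustrative, stdlib-only sketch of the dict-to-YAML idea (the structure and key names here are hypothetical, not the actual fastai schema):

```python
# Illustrative only: tools/make_sidebar.py is the real generator, and the
# actual sidebar schema differs. This just shows the general shape of
# rendering a nested Python structure into simple sidebar YAML.

sidebar = {
    "Getting started": ["index", "install"],
    "Tutorials": ["training", "data-block"],
}

def render_sidebar(data):
    """Render a {section: [page, ...]} mapping as minimal YAML text."""
    lines = ["entries:"]
    for section, pages in data.items():
        lines.append(f"- title: {section}")
        lines.append("  children:")
        for page in pages:
            lines.append(f"  - url: /{page}")
    return "\n".join(lines)

print(render_sidebar(sidebar))
```

Generating the YAML from Python keeps the sidebar definition reviewable in one place, which is why step 4 of the workflow commits both the source file and the generated output together.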
### Updating notebook metadata In order to pass the right settings to the website version of the `docs`, each notebook has a custom entry which if you look at the source code, looks like: ``` "metadata": { "jekyll": { "keywords": "fastai", "toc": "false", "title": "Welcome to fastai" }, [...] ``` Do not edit this entry manually, or your changes will be overwritten in the next metadata update. The only correct way to change any notebook's metadata is by opening `docs_src/jekyll_metadata.ipynb`, finding the notebook you want to change the metadata for, changing it, and running the notebook, then saving and committing it and the resulting changes. ### Updating notebooks Use this section only when you have added a new function that you want to document, or modified an existing function. Here is how to build/update the documentation notebooks to reflect changes in the library. To update all modified notebooks under `docs_src` run: ```bash python tools/build-docs ``` To update specific `*ipynb` nbs: ```bash python tools/build-docs docs_src/notebook1.ipynb docs_src/notebook2.ipynb ... ``` To force a rebuild of all notebooks and not just the modified ones, use the `-f` option. ```bash python tools/build-docs -f ``` To scan a module and add any new module functions to documentation notebook: ```bash python tools/build-docs --document-new-fns ``` To automatically append new fastai methods to their corresponding documentation notebook: ```bash python tools/build-docs --update-nb-links ``` Use the `-h` for more options. Alternatively, [`update_notebooks`](/gen_doc.gen_notebooks.html#update_notebooks) can be run from the notebook. To update all notebooks under `docs_src` run: ```python update_notebooks('.') ``` To update specific python file only: ```python update_notebooks('gen_doc.gen_notebooks.ipynb', update_nb=True) ``` `update_nb=True` inserts newly added module methods into the docs that haven't already been documented. 
Alternatively, you can update a specific module: ```python update_notebooks('fastai.gen_doc.gen_notebooks', dest_path='fastai/docs_src') ``` ### Updating html only If you are not synchronizing the code base with its documentation, but made some manual changes to the documentation notebooks, then you don't need to update the notebooks, but just convert them to `.html`: To convert `docs_src/*ipynb` to `docs/*html`: * only the modified `*ipynb`: ```bash python tools/build-docs -l ``` * specific `*ipynb`s: ```bash python tools/build-docs -l docs_src/notebook1.ipynb docs_src/notebook2.ipynb ... ``` * to force a rebuild of all `*ipynb`s: ```bash python tools/build-docs -fl ``` ## Links and anchors ### Validate links and anchors After you commit doc changes please validate that all the links and `#anchors` are correct. If it's the first time you are about to run the link checker, install the [prerequisites](https://github.com/fastai/fastai/blob/master/tools/checklink/README.md) first. After committing the new changes, first, wait a few minutes for github pages to sync, otherwise you'll be testing an outdated live site. Then, do: ``` cd tools/checklink ./checklink-docs.sh ``` The script will be silent and only report problems as it finds them. Remember that it's testing the live website, so if you detect problems and make any changes, remember to first commit the changes and wait a few minutes before re-testing. You can also test the site locally before committing your changes, please see: [README](https://github.com/fastai/fastai/blob/master/tools/checklink/README.md). To test the course-v3.fast.ai site, do: ``` ./checklink-course-v3.sh ``` ## Working with Markdown ### Preview If you work on markdown (.md) files it helps to be able to validate your changes so that the resulting layout is not broken. [grip](https://github.com/joeyespo/grip) seems to work quite well for this purpose (`pip install grip`). 
For example: ``` grip -b docs/dev/release.md ``` will open a browser with the rendered markdown as html - it uses the GitHub API, so this is exactly how it'll look on GitHub once you commit it. And here is a handy alias: ``` alias grip='grip -b' ``` so you don't need to remember the flag. ### Markdown Tips * If you use numbered items and their number goes beyond 9 you must switch to 4-whitespace chars indentation for the paragraphs belonging to each item. Under 9 or with \* you need 3-whitespace chars as a leading indentation. * When building tables make sure to use `--|--` and not `--+--` to separate the headers - GitHub will not render it properly otherwise. ## Testing site locally Install prerequisites: ``` sudo apt install ruby-bundler ``` When running this one it will ask for your user's password (basically running a sudo operation): ``` bundle install jekyll ``` Start the website: ``` cd docs bundle exec jekyll serve ``` It will tell you which localhost URL to visit to see the site.
``` import numpy as np from utils_h5 import H5Loader from astroNN.apogee import aspcap_mask from astroNN.models import ApogeeBCNNCensored loader = H5Loader('__train') # continuum normalized dataset loader.load_err = True loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K', 'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni'] x, y, x_err, y_err = loader.load() bcnn = ApogeeBCNNCensored() bcnn.num_hidden = [192, 64, 32, 16, 2] # default model size used in the paper bcnn.max_epochs = 60 # default max epochs used in the paper bcnn.autosave = True bcnn.folder_name = 'small_data_fixed_50' rand_train_idx = np.random.choice(np.arange(x.shape[0]), int(x.shape[0]/2), replace=False) bcnn.train(x[rand_train_idx], y[rand_train_idx], labels_err=y_err[rand_train_idx]) import numpy as np from utils_h5 import H5Loader from astroNN.apogee import aspcap_mask from astroNN.models import ApogeeBCNNCensored loader = H5Loader('__train') # continuum normalized dataset loader.load_err = True loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K', 'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni'] x, y, x_err, y_err = loader.load() bcnn = ApogeeBCNNCensored() bcnn.num_hidden = [192, 64, 32, 16, 2] # default model size used in the paper bcnn.max_epochs = 60 # default max epochs used in the paper bcnn.autosave = True bcnn.folder_name = 'small_data_fixed_25' rand_train_idx = np.random.choice(np.arange(x.shape[0]), int(x.shape[0]/4), replace=False) bcnn.train(x[rand_train_idx], y[rand_train_idx], labels_err=y_err[rand_train_idx]) import numpy as np from utils_h5 import H5Loader from astroNN.apogee import aspcap_mask from astroNN.models import ApogeeBCNNCensored loader = H5Loader('__train') # continuum normalized dataset loader.load_err = True loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K', 'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni'] x, y, x_err, y_err = 
loader.load() bcnn = ApogeeBCNNCensored() bcnn.num_hidden = [192, 64, 32, 16, 2] # default model size used in the paper bcnn.max_epochs = 60 # default max epochs used in the paper bcnn.autosave = True bcnn.folder_name = 'small_data_fixed_12_5' rand_train_idx = np.random.choice(np.arange(x.shape[0]), int(x.shape[0]/8), replace=False) bcnn.train(x[rand_train_idx], y[rand_train_idx], labels_err=y_err[rand_train_idx]) import numpy as np from utils_h5 import H5Loader from astroNN.apogee import aspcap_mask from astroNN.models import ApogeeBCNNCensored loader = H5Loader('__train') # continuum normalized dataset loader.load_err = True loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K', 'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni'] x, y, x_err, y_err = loader.load() bcnn = ApogeeBCNNCensored() bcnn.num_hidden = [192, 64, 32, 16, 2] # default model size used in the paper bcnn.max_epochs = 60 # default max epochs used in the paper bcnn.autosave = True bcnn.folder_name = 'small_data_fixed_6_25' rand_train_idx = np.random.choice(np.arange(x.shape[0]), int(x.shape[0]/16), replace=False) bcnn.train(x[rand_train_idx], y[rand_train_idx], labels_err=y_err[rand_train_idx]) import numpy as np from utils_h5 import H5Loader from astroNN.apogee import aspcap_mask from astroNN.models import ApogeeBCNNCensored loader = H5Loader('__train') # continuum normalized dataset loader.load_err = True loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K', 'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni'] x, y, x_err, y_err = loader.load() bcnn = ApogeeBCNNCensored() bcnn.num_hidden = [192, 64, 32, 16, 2] # default model size used in the paper bcnn.max_epochs = 60 # default max epochs used in the paper bcnn.autosave = True bcnn.folder_name = 'small_data_fixed_3_125' rand_train_idx = np.random.choice(np.arange(x.shape[0]), int(x.shape[0]/32), replace=False) bcnn.train(x[rand_train_idx], y[rand_train_idx], 
labels_err=y_err[rand_train_idx]) import numpy as np import pandas as pd from astropy.stats import mad_std from IPython.display import display, HTML from astroNN.models import load_folder from utils_h5 import H5Loader loader = H5Loader('__train') # continuum normalized dataset loader.load_combined = False # we want individual visit loader.load_err = False loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K', 'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni'] x, y = loader.load() net_100 = load_folder("astroNN_0617_run001") # this is the main model we used net_50 = load_folder("small_data_fixed_50") net_50_pred, err = net_50.test(x) residue = (net_50_pred - y) mae = mad_std(residue, axis=0) me = np.median(residue, axis=0) d = {'Targetname': net_100.targetname, 'Scatter': mae, 'Bias': me} df = pd.DataFrame(data=d) print("16703 data") display(HTML(df.to_html())) net_25 = load_folder("small_data_fixed_25") net_25_pred, err = net_25.test(x) residue = (net_25_pred - y) mae = mad_std(residue, axis=0) me = np.median(residue, axis=0) d = {'Targetname': net_100.targetname, 'Scatter': mae, 'Bias': me} df = pd.DataFrame(data=d) print("8351 data") display(HTML(df.to_html())) net_12_5 = load_folder("small_data_fixed_12_5") net_12_5_pred, err = net_12_5.test(x) residue = (net_12_5_pred - y) mae = mad_std(residue, axis=0) me = np.median(residue, axis=0) d = {'Targetname': net_100.targetname, 'Scatter': mae, 'Bias': me} df = pd.DataFrame(data=d) print("4175 data") display(HTML(df.to_html())) net_6_25 = load_folder("small_data_fixed_6_25") net_6_25_pred, err = net_6_25.test(x) residue = (net_6_25_pred - y) mae = mad_std(residue, axis=0) me = np.median(residue, axis=0) d = {'Targetname': net_100.targetname, 'Scatter': mae, 'Bias': me} df = pd.DataFrame(data=d) print("2087 data") display(HTML(df.to_html())) net_3_125 = load_folder("small_data_fixed_3_125") net_3_125_pred, err = net_3_125.test(x) residue = (net_3_125_pred - y) mae = 
mad_std(residue, axis=0) me = np.median(residue, axis=0) d = {'Targetname': net_100.targetname, 'Scatter': mae, 'Bias': me} df = pd.DataFrame(data=d) print("1043 data") display(HTML(df.to_html())) ```
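The five training cells above are identical except for the subsampling divisor and the `folder_name` they save to. As a hedged aside (a sketch, not part of the original notebook), the repetition can be collapsed into one loop. The folder-name/divisor pairing below is taken directly from the cells above; the `training_runs` helper name is invented for illustration, and the astroNN calls are left as comments since they match the cells as written:

```python
# Sketch only: collapses the five near-identical training cells into one loop.
# Only the bookkeeping of divisors and folder names is new; the astroNN calls
# (commented out) are exactly those from the cells above.

def training_runs():
    """Pair each save-folder name (from the cells above) with its divisor."""
    suffixes = {2: '50', 4: '25', 8: '12_5', 16: '6_25', 32: '3_125'}
    return [(f"small_data_fixed_{s}", d) for d, s in suffixes.items()]

for folder_name, divisor in training_runs():
    # rand_train_idx = np.random.choice(np.arange(x.shape[0]),
    #                                   int(x.shape[0] / divisor), replace=False)
    # bcnn.folder_name = folder_name
    # bcnn.train(x[rand_train_idx], y[rand_train_idx],
    #            labels_err=y_err[rand_train_idx])
    print(folder_name, divisor)
```

Each iteration halves the training subsample relative to the previous one, which is exactly the progression the evaluation cells below (16703, 8351, 4175, 2087, 1043 spectra) report on.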
# SchNet S2EF training example The purpose of this notebook is to demonstrate some of the basics of the Open Catalyst Project's (OCP) codebase and data. In this example, we will train a SchNet model for predicting the energy and forces of a given structure (S2EF task). First, ensure you have installed the OCP repo and all the dependencies according to the [README](https://github.com/Open-Catalyst-Project/ocp/blob/master/README.md). Disclaimer: This notebook is for tutorial purposes; it is unlikely it will be practical to train baseline models on our larger datasets using this format. As a next step, we recommend trying the command line examples. ## Imports ``` import torch import os from ocpmodels.trainers import ForcesTrainer from ocpmodels import models from ocpmodels.common import logger from ocpmodels.common.utils import setup_logging setup_logging() # a simple sanity check that a GPU is available if torch.cuda.is_available(): print("True") else: print("False") ``` ## The essential steps for training an OCP model 1) Download data 2) Preprocess data (if necessary) 3) Define or load a configuration (config), which includes the following - task - model - optimizer - dataset - trainer 4) Train 5) Depending on the model/task there might be an intermediate relaxation step 6) Predict ## Dataset This example uses the LMDB generated from the following [tutorial](http://laikapack.cheme.cmu.edu/notebook/open-catalyst-project/mshuaibi/notebooks/projects/ocp/docs/source/tutorials/lmdb_dataset_creation.ipynb). Please run that notebook before moving on. Alternatively, if you have other LMDBs available you may specify those instead. ``` # set the path to your local lmdb directory train_src = "s2ef" ``` ## Define config For this example, we will explicitly define the config; however, a set of default config files exists in the config folder of this repository.
Default config yaml files can easily be loaded with the `build_config` util (found in `ocp/ocpmodels/common/utils.py`). Loading a yaml config is preferable when launching jobs from the command line. We have included our best models' config files [here](https://github.com/Open-Catalyst-Project/ocp/tree/master/configs/s2ef). **Task** ``` task = { 'dataset': 'trajectory_lmdb', # dataset used for the S2EF task 'description': 'Regressing to energies and forces for DFT trajectories from OCP', 'type': 'regression', 'metric': 'mae', 'labels': ['potential energy'], 'grad_input': 'atomic forces', 'train_on_free_atoms': True, 'eval_on_free_atoms': True } ``` **Model** - SchNet for this example ``` model = { 'name': 'schnet', 'hidden_channels': 1024, # if training is too slow for example purposes reduce the number of hidden channels 'num_filters': 256, 'num_interactions': 3, 'num_gaussians': 200, 'cutoff': 6.0 } ``` **Optimizer** ``` optimizer = { 'batch_size': 16, # if hitting GPU memory issues, lower this 'eval_batch_size': 8, 'num_workers': 8, 'lr_initial': 0.0001, 'scheduler': "ReduceLROnPlateau", 'mode': "min", 'factor': 0.8, 'patience': 3, 'max_epochs': 1, # 1 for demonstration purposes; use 80 for a full training run 'force_coefficient': 100, } ``` **Dataset** For simplicity, `train_src` is used for all the train/val/test sets. Feel free to update with the actual S2EF val and test sets, but it does require additional downloads and preprocessing. If you desire to normalize your targets, `normalize_labels` must be set to `True` and corresponding `mean` and `stds` need to be specified. These values have been precomputed for you and can be found in any of the [`base.yml`](https://github.com/Open-Catalyst-Project/ocp/blob/master/configs/s2ef/20M/base.yml#L5-L9) config files.
``` dataset = [ {'src': train_src, 'normalize_labels': False}, # train set {'src': train_src}, # val set (optional) {'src': train_src} # test set (optional - writes predictions to disk) ] ``` **Trainer** Use the `ForcesTrainer` for the S2EF and IS2RS tasks, and the `EnergyTrainer` for the IS2RE task ``` trainer = ForcesTrainer( task=task, model=model, dataset=dataset, optimizer=optimizer, identifier="SchNet-example", run_dir="./", # directory to save results if is_debug=False. Prediction files are saved here so be careful not to override! is_debug=False, # if True, do not save checkpoint, logs, or results is_vis=False, print_every=5, seed=0, # random seed to use logger="tensorboard", # logger of choice (tensorboard and wandb supported) local_rank=0, amp=False, # use PyTorch Automatic Mixed Precision (faster training and less memory usage) ) ``` ## Check the model ``` print(trainer.model) ``` ## Train ``` trainer.train() ``` ### Load Checkpoint Once training has completed a `Trainer` class, by default, is loaded with the best checkpoint as determined by training or validation (if available) metrics. To load a `Trainer` class directly with a pretrained model, specify the `checkpoint_path` as defined by your previously trained model (`checkpoint_dir`): ``` checkpoint_path = os.path.join(trainer.config["cmd"]["checkpoint_dir"], "checkpoint.pt") checkpoint_path model = { 'name': 'schnet', 'hidden_channels': 1024, # if training is too slow for example purposes reduce the number of hidden channels 'num_filters': 256, 'num_interactions': 3, 'num_gaussians': 200, 'cutoff': 6.0 } pretrained_trainer = ForcesTrainer( task=task, model=model, dataset=dataset, optimizer=optimizer, identifier="SchNet-example", run_dir="./", # directory to save results if is_debug=False. Prediction files are saved here so be careful not to override! 
is_debug=False, # if True, do not save checkpoint, logs, or results is_vis=False, print_every=10, seed=0, # random seed to use logger="tensorboard", # logger of choice (tensorboard and wandb supported) local_rank=0, amp=False, # use PyTorch Automatic Mixed Precision (faster training and less memory usage) ) pretrained_trainer.load_checkpoint(checkpoint_path=checkpoint_path) ``` ## Predict If a test has been provided in your config, predictions are generated and written to disk automatically upon training completion. Otherwise, to make predictions on unseen data a `torch.utils.data` DataLoader object must be constructed. Here we reference our test set to make predictions on. Predictions are saved in `{results_file}.npz` in your `results_dir`. ``` # make predictions on the existing test_loader predictions = pretrained_trainer.predict(pretrained_trainer.test_loader, results_file="s2ef_results", disable_tqdm=False) energies = predictions["energy"] forces = predictions["forces"] ```
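The `task`, `model`, `optimizer`, and `dataset` dictionaries above are passed to the trainer as separate arguments, but they correspond to the sections of a single yaml config file like those shipped in `configs/s2ef`. As a rough sketch only (the `merge_config` helper and its key names are for illustration; the exact schema is defined by the repo's own config files and `build_config`), the relationship looks like:

```python
# Sketch only: how the separately defined config sections relate to one
# combined config dictionary. Key names mirror the notebook's variables;
# the real yaml schema used by build_config may differ.

def merge_config(task, model, optimizer, dataset):
    return {"task": task, "model": model, "optim": optimizer, "dataset": dataset}

cfg = merge_config(
    task={"dataset": "trajectory_lmdb", "type": "regression"},
    model={"name": "schnet", "cutoff": 6.0},
    optimizer={"batch_size": 16, "max_epochs": 1},
    dataset=[{"src": "s2ef", "normalize_labels": False}],
)
print(cfg["model"]["name"])  # → schnet
```

Keeping the sections as one nested dictionary is what makes it easy to version-control a full experiment as a single yaml file and launch it from the command line.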
# Unstructured Data Wrangling ``` import os import chardet os.listdir()[3] import pandas as pd # dictionary to form the dataframe connection_dict = {'contact_name': [], 'position': [], 'timespan': [], 'timespan_type': [], 'tier_status': [], 'company': [], 'last_date_contact': [], 'shared_interest': []} # to add contacts not included in the unstructured contact list adding_contacts = {'contact_name': ["Shantel Vargas", "Alex Steger", "Dorcas N Flowers", "Wilfredo Rodriguez", "Jennifer Rothenberg", "Gigi Yuen-Reed"], 'position': ["Master Student", "Equity Analyst", "Special Education Teacher", "City of Dallas", "Marketing & Data Science", "Data Science Solution for Healthcare"], 'timespan': ["2", "3", "33", "6", None, None], 'timespan_type': ["years", "years", "years", "years", None, None], 'tier_status': ["tier1", "tier2", "tier1", "tier2", "tier3", "tier3"], 'company': ["Unemployed", "National Investment Services", "Unemployed", "City of Dallas", "Unemployed", "IBM"], 'last_date_contact': ["05/1/2016", "04/1/2016", "04/1/2018", "02/1/2018", None, None], 'shared_interest': ["Data Analysis", "Analysis", "Life", "Programing", "Data Analysis", "Data Analysis"]} # dictionary to populate the tiers tier_dict = {'tier1': {'name':["Lauren O'Farrill", "John Carnevalla Jr", "MARIA RAMOS ISIDOR, MAcc", "Yaileen Garza", "Lydia Montano"], 'Company':["Ditech Holding Corporation", "ConnectWise", "PwC", "HealthTrust Worforce Solutions", "Florida Family Primary Care Centers"], 'Email Address':["laurenofarrill@gmail.com", "jcarnevalla21@yahoo.com", "mramosis.mri@gmail.com", "rodyaileen@outlook.com", "lydia4lifeint822016@gmail.com"], 'Last Date Contacted':["06/1/2017", "04/1/2018", "03/1/2018", "04/2/2018", "03/20/2018"], 'Shared Interest':["Finance", "Programing", "Accounting", "Billing", "Programing"]}, 'tier2': {'name':["David Lievano", "Junior Sainval", "T. 
Hudson White III", "Nick Biengardo", "Ashley Borde", "Lexy Scarpiello", "Rami Siab", "Josani Schneider", "Alyssa Mason", "Brian Earnest", "Maruja Azar Leanos", "Mike Bowen", "Nikki Stowell", "Stephen Baricko", "Pravishka Wickramasuriya", "Ruben Madamba", "Spencer Crawford", "Única Channa", "Dan Bell", "Daniel LeBlanc", "Carolyn Ebanks", "Charles Mardook", "Julian Brown"], 'Company':["State Farm", "PwC", "PwC", "Unemployed", "Unemployed", "Comcast", "Planet Dodge Chrysler Jeep Ram", "Caymen Islands Monetary Authority", "The Depository Trust and Clearing Corporation", "Unemployed", "Mannaz Designs", "University of South Florida", "University of South Florida", "Freedman's Office Furniture and Supplies", "BB&T", "Alere Home Monitoring", "Machine Zone", "Alere Home Monitoring", "NerdHire", "Unemployed", "Children's Board of Hillsborough County", "Self-Employed Consultant", "Citi" ], 'Last Date Contacted':["12/1/2016", "10/1/2016", "12/20/2017", "12/1/2016", "12/1/2016", "10/1/2016", "12/1/2016", "12/1/2016", "12/1/2016", "10/1/2016", "12/1/2016", "05/1/2016", "10/1/2016", "06/1/2016", "03/1/2018", "02/1/2018", "12/1/2018", "01/20/2018", "02/1/2018", "10/1/2016", "05/1/2016", "12/1/2016", "12/1/2016"], 'Shared Interest':["Finance", "Accounting", "Finance", "Business", "Business", "Finance", "Finance", "Finance", "Finance", "Entrepreneurship", "Business", "Finance", "Business", "Finance", "Data Analysis", "Video Games", "Business", "", "Technology", "Finance", "Budget Analysis", "Business", "Finance"]}, 'tier3': {'name':["Renee Murphy", "Jamie Kelly", "Maria Isabel Caicedo", "Ralph Herz", "Olga Leontyeva, MS, PMP, CSPO", "Sireesha Pulipati", "Madhuvanthi Kandadai", "Fiona Huo", "Perri Ma", "Maria Kavaliova", "Shaquille Powell, MBA", "Chelsea Jone", "Renee Manneh", "Costa Stamatinos", "Nazia Habib", "Scott Provencher", "Aya Masuo", "Mihwa Han"], 'Company':["Precision Health Technologies", "Tesla", "Unemployed", "BB&T", "Alere Home Monitoring", "Genomic Health", "Corium 
International", "Unemployed", "Warner Brothers", "Oportun", "Harnham", "Bellator Recruiting Academy", "Voyage", "Unemployed", "Houston GMAT", "Unemployed", "Unemployed", "Unemployed", "Unemployed"], 'Last Date Contacted':[None, "03/1/2018", "03/1/2018", "12/1/2016", "12/1/2017", "02/1/2018", "04/3/2018", "02/1/2018", "03/1/2018", "02/1/2018", "07/1/2017", "03/1/2018", "03/1/2018", None, None, "02/1/2018", None, None], 'Shared Interest':[None, "Programing", "Data Analysis", "Finance", "Business Analysis", "Data Analysis", "Statistics", "Data Analysis", "Data Analysis", "Data Analysis", "Military", "Self-Driving Vehicles", "Data Analysis", "Data Analysis", "Data Analysis", "Data Analysis", "Data Analysis", "Data Analysis"]}} total_contacts = len(tier_dict['tier2']['name']) + \ len(tier_dict['tier1']['name']) + \ len(tier_dict['tier3']['name']) + \ len(adding_contacts['contact_name']) print("You have {} contacts.".format(total_contacts)) with open('contacts.txt', 'rb') as my_contacts: data_lines = my_contacts.read() data_list = data_lines.decode('utf-8').replace(u'’', '').strip().split('\r\n') for i, value in enumerate(data_list): #print(value) if value == "Members name": #print(data_list[i+1]) connection_dict['contact_name'].append(data_list[i+1]) elif value == "Members occupation": #print(data_list[i+1]) connection_dict['position'].append(data_list[i+1]) elif value.find('Connected') >= 0: #print(data_list[i]) connection_split = data_list[i].split(' ') connection_dict['timespan'].append(connection_split[1]) connection_dict['timespan_type'].append(connection_split[2]) # setting tiers if value == "Members name": contact_name = data_list[i+1] if contact_name in tier_dict['tier1']['name']: # appending tier1 connection_dict['tier_status'].append('tier1') # appending other info index_target = tier_dict['tier1']['name'].index(contact_name) connection_dict['company'].append(tier_dict['tier1']['Company'][index_target])
connection_dict['last_date_contact'].append(tier_dict['tier1']['Last Date Contacted'][index_target]) connection_dict['shared_interest'].append(tier_dict['tier1']['Shared Interest'][index_target]) elif contact_name in tier_dict['tier2']['name']: # appending tier2 connection_dict['tier_status'].append('tier2') # appending other info index_target = tier_dict['tier2']['name'].index(contact_name) connection_dict['company'].append(tier_dict['tier2']['Company'][index_target]) connection_dict['last_date_contact'].append(tier_dict['tier2']['Last Date Contacted'][index_target]) connection_dict['shared_interest'].append(tier_dict['tier2']['Shared Interest'][index_target]) elif contact_name in tier_dict['tier3']['name']: # appending tier3 connection_dict['tier_status'].append('tier3') # appending other info index_target = tier_dict['tier3']['name'].index(contact_name) connection_dict['company'].append(tier_dict['tier3']['Company'][index_target]) connection_dict['last_date_contact'].append(tier_dict['tier3']['Last Date Contacted'][index_target]) connection_dict['shared_interest'].append(tier_dict['tier3']['Shared Interest'][index_target]) else: connection_dict['tier_status'].append(None) connection_dict['company'].append(None) connection_dict['last_date_contact'].append(None) connection_dict['shared_interest'].append(None) for key in connection_dict.keys(): print(len(connection_dict[key]), key) df1 = pd.DataFrame(adding_contacts) df = pd.DataFrame(connection_dict) df = df.append(df1) df.reset_index(drop=True, inplace=True) condition = (df.tier_status.isna() == False) df_tier = df.loc[condition] df_tier.head() df_tier.count() df_tier.to_csv('my_tiers.csv') ```
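Hand-maintained parallel lists like the tier dictionaries above are fragile: a single missing comma between two adjacent string literals silently concatenates them, shortens one list, and misaligns every later `name -> index -> company` lookup. A small consistency check (a sketch, not part of the original notebook) can catch this early:

```python
# Sketch only: verify that every parallel list inside a tier dict has the
# same length, so that name -> index -> company/date/interest lookups align.

def tier_lengths(tier):
    """Map each field of a tier dict to the length of its list."""
    return {field: len(values) for field, values in tier.items()}

def tier_is_consistent(tier):
    """True when all fields of the tier dict have equally long lists."""
    return len({len(values) for values in tier.values()}) == 1

# A deliberately broken example: the missing comma between "X" and "Y"
# concatenates them into one string, so 'Company' ends up one entry short.
broken = {'name': ["A", "B"], 'Company': ["X" "Y"]}
print(tier_lengths(broken))        # → {'name': 2, 'Company': 1}
print(tier_is_consistent(broken))  # → False
```

Running such a check on each of `tier_dict['tier1']`, `['tier2']`, and `['tier3']` before the parsing loop would surface any mismatch before it silently corrupts the dataframe.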
# Softmax exercise *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* This exercise is analogous to the SVM exercise. You will: - implement a fully-vectorized **loss function** for the Softmax classifier - implement the fully-vectorized expression for its **analytic gradient** - **check your implementation** with numerical gradient - use a validation set to **tune the learning rate and regularization** strength - **optimize** the loss function with **SGD** - **visualize** the final learned weights ``` from __future__ import print_function import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500): """ Load the CIFAR-10 dataset from disk and perform preprocessing to prepare it for the linear classifier. These are the same steps as we used for the SVM, but condensed to a single function.
""" # Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # subsample the data mask = list(range(num_training, num_training + num_validation)) X_val = X_train[mask] y_val = y_train[mask] mask = list(range(num_training)) X_train = X_train[mask] y_train = y_train[mask] mask = list(range(num_test)) X_test = X_test[mask] y_test = y_test[mask] mask = np.random.choice(num_training, num_dev, replace=False) X_dev = X_train[mask] y_dev = y_train[mask] # Preprocessing: reshape the image data into rows X_train = np.reshape(X_train, (X_train.shape[0], -1)) X_val = np.reshape(X_val, (X_val.shape[0], -1)) X_test = np.reshape(X_test, (X_test.shape[0], -1)) X_dev = np.reshape(X_dev, (X_dev.shape[0], -1)) # Normalize the data: subtract the mean image mean_image = np.mean(X_train, axis = 0) X_train -= mean_image X_val -= mean_image X_test -= mean_image X_dev -= mean_image # add bias dimension and transform into columns X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))]) X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))]) X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))]) X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))]) return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev # Cleaning up variables to prevent loading data multiple times (which may cause memory issue) try: del X_train, y_train del X_test, y_test print('Clear previously loaded data.') except: pass # Invoke the above function to get our data. 
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data() print('Train data shape: ', X_train.shape) print('Train labels shape: ', y_train.shape) print('Validation data shape: ', X_val.shape) print('Validation labels shape: ', y_val.shape) print('Test data shape: ', X_test.shape) print('Test labels shape: ', y_test.shape) print('dev data shape: ', X_dev.shape) print('dev labels shape: ', y_dev.shape) ``` ## Softmax Classifier Your code for this section will all be written inside **cs231n/classifiers/softmax.py**. ``` # First implement the naive softmax loss function with nested loops. # Open the file cs231n/classifiers/softmax.py and implement the # softmax_loss_naive function. from cs231n.classifiers.softmax import softmax_loss_naive import time # Generate a random softmax weight matrix and use it to compute the loss. W = np.random.randn(3073, 10) * 0.0001 loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0) # As a rough sanity check, our loss should be something close to -log(0.1). print('loss: %f' % loss) print('sanity check: %f' % (-np.log(0.1))) ``` ## Inline Question 1: **Why do we expect our loss to be close to -log(0.1)? Explain briefly.** **Your answer:** *Fill this in* ``` # Complete the implementation of softmax_loss_naive and implement a (naive) # version of the gradient that uses nested loops. loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0) # As we did for the SVM, use numeric gradient checking as a debugging tool. # The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0] grad_numerical = grad_check_sparse(f, W, grad, 10) # similar to SVM case, do another gradient check with regularization loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1) f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0] grad_numerical = grad_check_sparse(f, W, grad, 10) # Now that we have a naive implementation of the softmax loss function and its gradient, # implement a vectorized version in softmax_loss_vectorized. # The two versions should compute the same results, but the vectorized version should be # much faster. tic = time.time() loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005) toc = time.time() print('naive loss: %e computed in %fs' % (loss_naive, toc - tic)) from cs231n.classifiers.softmax import softmax_loss_vectorized tic = time.time() loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005) toc = time.time() print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)) # As we did for the SVM, we use the Frobenius norm to compare the two versions # of the gradient. grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro') print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized)) print('Gradient difference: %f' % grad_difference) # Use the validation set to tune hyperparameters (regularization strength and # learning rate). You should experiment with different ranges for the learning # rates and regularization strengths; if you are careful you should be able to # get a classification accuracy of over 0.35 on the validation set. 
from cs231n.classifiers import Softmax results = {} best_val = -1 best_softmax = None learning_rates = [1e-7, 5e-7] regularization_strengths = [2.5e4, 5e4] ################################################################################ # TODO: # # Use the validation set to set the learning rate and regularization strength. # # This should be identical to the validation that you did for the SVM; save # # the best trained softmax classifer in best_softmax. # ################################################################################ # Your code ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print('lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy)) print('best validation accuracy achieved during cross-validation: %f' % best_val) # evaluate on test set # Evaluate the best softmax on test set y_test_pred = best_softmax.predict(X_test) test_accuracy = np.mean(y_test == y_test_pred) print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, )) ``` **Inline Question** - *True or False* It's possible to add a new datapoint to a training set that would leave the SVM loss unchanged, but this is not the case with the Softmax classifier loss. *Your answer*: *Your explanation*: ``` # Visualize the learned weights for each class w = best_softmax.W[:-1,:] # strip out the bias w = w.reshape(32, 32, 3, 10) w_min, w_max = np.min(w), np.max(w) classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] for i in range(10): plt.subplot(2, 5, i + 1) # Rescale the weights to be between 0 and 255 wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min) plt.imshow(wimg.astype('uint8')) plt.axis('off') plt.title(classes[i]) ```
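As an aside on the sanity check above (this is not the graded solution, which belongs in `cs231n/classifiers/softmax.py` and should be vectorized with numpy): with tiny random weights all class scores are roughly equal, so the softmax probabilities are about 1/10 each and the cross-entropy loss is about -log(0.1). A minimal pure-Python sketch of the numerically stable softmax makes this concrete:

```python
import math

# Sketch only: numerically stable softmax and cross-entropy loss for one
# score vector, using the shift-by-max trick so exp never overflows.

def softmax(scores):
    shift = max(scores)  # subtracting the max leaves probabilities unchanged
    exps = [math.exp(s - shift) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(scores, correct_class):
    return -math.log(softmax(scores)[correct_class])

# With 10 equal scores every class has probability 0.1, so the loss is
# -log(0.1) ≈ 2.3026, matching the sanity check printed earlier.
print(round(cross_entropy([0.0] * 10, 0), 4))  # → 2.3026
```

The shift by the maximum score is the key numerical trick: without it, `math.exp` overflows for large scores, even though the resulting probabilities are mathematically identical.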
SOP033 - azdata logout ====================== Use the azdata command line interface to log out of a Big Data Cluster. Steps ----- ### Common functions Define helper functions used in this notebook. ``` # Define `run` function for transient fault handling, hyperlinked suggestions, and scrolling updates on Windows import sys import os import re import json import platform import shlex import shutil import datetime from subprocess import Popen, PIPE from IPython.display import Markdown retry_hints = {} error_hints = {} install_hint = {} first_run = True rules = None def run(cmd, return_output=False, no_output=False, retry_count=0): """ Run shell command, stream stdout, print stderr and optionally return output """ MAX_RETRIES = 5 output = "" retry = False global first_run global rules if first_run: first_run = False rules = load_rules() # shlex.split is required on bash and for Windows paths with spaces # cmd_actual = shlex.split(cmd) # Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries # user_provided_exe_name = cmd_actual[0].lower() # When running python, use the python in the ADS sandbox ({sys.executable}) # if cmd.startswith("python "): cmd_actual[0] = cmd_actual[0].replace("python", sys.executable) # On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail # with: # # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128) # # Setting it to a default value of "en_US.UTF-8" enables pip install to complete # if platform.system() == "Darwin" and "LC_ALL" not in os.environ: os.environ["LC_ALL"] = "en_US.UTF-8" # To aid supportability, determine which binary file will actually be executed on the machine # which_binary = None # Special case for CURL on Windows. The version of CURL in Windows System32 does not work to # get JWT tokens, it returns "(56) Failure when receiving data from the peer".
If another instance # of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost # always the first curl.exe in the path, and it can't be uninstalled from System32, so here we # look for the 2nd installation of CURL in the path) if platform.system() == "Windows" and cmd.startswith("curl "): path = os.getenv('PATH') for p in path.split(os.path.pathsep): p = os.path.join(p, "curl.exe") if os.path.exists(p) and os.access(p, os.X_OK): if p.lower().find("system32") == -1: cmd_actual[0] = p which_binary = p break # Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this # seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound) # # NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split. # if which_binary == None: which_binary = shutil.which(cmd_actual[0]) if which_binary == None: if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None: display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.')) raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") else: cmd_actual[0] = which_binary start_time = datetime.datetime.now().replace(microsecond=0) print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)") print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})") print(f" cwd: {os.getcwd()}") # Command-line tools such as CURL and AZDATA HDFS commands output # scrolling progress bars, which causes Jupyter to hang forever; to # work around this, use no_output=True # # Work around an infinite hang when a notebook generates a non-zero return code, break out, and do not wait # wait = True try: if no_output: p = Popen(cmd_actual) else: p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE,
bufsize=1) with p.stdout: for line in iter(p.stdout.readline, b''): line = line.decode() if return_output: output = output + line else: if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file regex = re.compile(' "(.*)"\: "(.*)"') match = regex.match(line) if match: if match.group(1).find("HTML") != -1: display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"')) else: display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"')) wait = False break # otherwise infinite hang, have not worked out why yet. else: print(line, end='') if rules is not None: apply_expert_rules(line) if wait: p.wait() except FileNotFoundError as e: if install_hint is not None: display(Markdown(f'HINT: Use {install_hint} to resolve this issue.')) raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait() if not no_output: for line in iter(p.stderr.readline, b''): line_decoded = line.decode() # azdata emits a single empty line to stderr when doing an hdfs cp, don't # print this empty "ERR:" as it confuses. 
# if line_decoded == "": continue print(f"STDERR: {line_decoded}", end='') if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"): exit_code_workaround = 1 if user_provided_exe_name in error_hints: for error_hint in error_hints[user_provided_exe_name]: if line_decoded.find(error_hint[0]) != -1: display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.')) if rules is not None: apply_expert_rules(line_decoded) if user_provided_exe_name in retry_hints: for retry_hint in retry_hints[user_provided_exe_name]: if line_decoded.find(retry_hint) != -1: if retry_count < MAX_RETRIES: print(f"RETRY: {retry_count} (due to: {retry_hint})") retry_count = retry_count + 1 output = run(cmd, return_output=return_output, retry_count=retry_count) if return_output: return output else: return elapsed = datetime.datetime.now().replace(microsecond=0) - start_time # WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so # don't wait here, if success known above # if wait: if p.returncode != 0: raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n') else: if exit_code_workaround !=0 : raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n') print(f'\nSUCCESS: {elapsed}s elapsed.\n') if return_output: return output def load_json(filename): with open(filename, encoding="utf8") as json_file: return json.load(json_file) def load_rules(): try: # Load this notebook as json to get access to the expert rules in the notebook metadata. # j = load_json("sop033-azdata-logout.ipynb") except: pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename? 
else: if "metadata" in j and \ "azdata" in j["metadata"] and \ "expert" in j["metadata"]["azdata"] and \ "rules" in j["metadata"]["azdata"]["expert"]: rules = j["metadata"]["azdata"]["expert"]["rules"] rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first. # print (f"EXPERT: There are {len(rules)} rules to evaluate.") return rules def apply_expert_rules(line): global rules for rule in rules: # rules that have 9 elements are the injected (output) rules (the ones we want). Rules # with only 8 elements are the source (input) rules, which are not expanded (i.e. TSG029, # not ../repair/tsg029-nb-name.ipynb) if len(rule) == 9: notebook = rule[1] cell_type = rule[2] output_type = rule[3] # i.e. stream or error output_type_name = rule[4] # i.e. ename or name output_type_value = rule[5] # i.e. SystemExit or stdout details_name = rule[6] # i.e. evalue or text expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it! # print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.") if re.match(expression, line, re.DOTALL): # print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'".format(output_type_name, output_type_value, expression, notebook)) match_found = True display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.')) print('Common functions defined successfully.') # Hints for binary (transient fault) retry, (known) error and install guide # retry_hints = {'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use']} error_hints = {'azdata': [['azdata login', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - 
azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Error processing command: "ApiError', 'TSG110 - Azdata returns ApiError', '../repair/tsg110-azdata-returns-apierror.ipynb'], ['Error processing command: "ControllerError', 'TSG036 - Controller logs', '../log-analyzers/tsg036-get-controller-logs.ipynb'], ['ERROR: 500', 'TSG046 - Knox gateway logs', '../log-analyzers/tsg046-get-knox-logs.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ["Can't open lib 'ODBC Driver 17 for SQL Server", 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb']]} install_hint = {'azdata': ['SOP055 - Install azdata command line interface', '../install/sop055-install-azdata.ipynb']} ``` ### Use azdata to log out ``` run('azdata logout') print('Notebook execution complete.') ``` Related ------- - [SOP028 - azdata login](../common/sop028-azdata-login.ipynb)
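The `error_hints` table above drives a simple substring scan inside `run`: each stderr line is checked against the known error strings for the executable, and a matching entry surfaces a link to the relevant troubleshooting notebook. A minimal standalone sketch of that lookup, using a trimmed, illustrative copy of the table:

```python
# Minimal sketch of the error-hint lookup performed on stderr lines.
# This table is a trimmed, illustrative copy of `error_hints` above.
error_hints = {
    'azdata': [
        ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'],
        ['ERROR: 500', 'TSG046 - Knox gateway logs', '../log-analyzers/tsg046-get-knox-logs.ipynb'],
    ]
}

def hints_for(exe_name, stderr_line):
    """Return (title, notebook) pairs whose trigger substring occurs in the line."""
    matches = []
    for trigger, title, notebook in error_hints.get(exe_name, []):
        if stderr_line.find(trigger) != -1:
            matches.append((title, notebook))
    return matches

print(hints_for('azdata', 'ERROR: The token is expired, please log in again'))
# -> [('SOP028 - azdata login', '../common/sop028-azdata-login.ipynb')]
```

The same substring-scan pattern is reused for `retry_hints`, where a match triggers a bounded recursive retry instead of a hyperlink.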
```
# Visualization of the KO+ChIP Gold Standard from:
# Miraldi et al. (2018) "Leveraging chromatin accessibility for transcriptional regulatory network inference in Th17 Cells"

# TO START: In the menu above, choose "Cell" --> "Run All", and network + heatmap will load
# NOTE: Default limits networks to TF-TF edges in top 1 TF / gene model (.93 quantile), to see the full
#   network hit "restore" (in the drop-down menu in cell below) and set threshold to 0 and hit "threshold"
# You can search for gene names in the search box below the network (hit "Match"), and find regulators ("targeted by")
# Change "canvas" to "SVG" (drop-down menu in cell below) to enable drag interactions with nodes & labels
# Change "SVG" to "canvas" to speed up layout operations
# More info about jp_gene_viz and user interface instructions are available on Github:
#   https://github.com/simonsfoundation/jp_gene_viz/blob/master/doc/dNetwork%20widget%20overview.ipynb

# directory containing gene expression data and network folder
directory = "."
# folder containing networks
netPath = 'Networks'
# network file name
networkFile = 'ENCODE_bias50_sp.tsv'
# title for network figure
netTitle = 'ENCODE DHS, bias = 50, prior-based TFA'
# name of gene expression file
expressionFile = 'Th0_Th17_48hTh.txt'
# column of gene expression file to color network nodes
rnaSampleOfInt = 'Th17(48h)'
# edge cutoff -- for Inferelator TRNs, corresponds to signed quantile (rank of edges in 15 TFs / gene models),
# increase from 0 --> 1 to get more significant edges (e.g., .33 would correspond to edges only in 10 TFs / gene
# models)
edgeCutoff = .93

import sys
if ".." not in sys.path:
    sys.path.append("..")

from jp_gene_viz import dNetwork
dNetwork.load_javascript_support()
# from jp_gene_viz import multiple_network
from jp_gene_viz import LExpression
LExpression.load_javascript_support()

# Load network linked to gene expression data
L = LExpression.LinkedExpressionNetwork()
L.show()

# Load Network and Heatmap
L.load_network(directory + '/' + netPath + '/' + networkFile)
L.load_heatmap(directory + '/' + expressionFile)
N = L.network
N.set_title(netTitle)
N.threshhold_slider.value = edgeCutoff
N.apply_click(None)
N.draw()

# Add labels to nodes
N.labels_button.value = True

# Limit to TFs only, remove unconnected TFs, choose and set network layout
N.restore_click()
N.tf_only_click()
N.connected_only_click()
N.layout_dropdown.value = 'fruchterman_reingold'
N.layout_click()

# Interact with Heatmap
# Limit genes in heatmap to network genes
L.gene_click(None)
# Z-score heatmap values
L.expression.transform_dropdown.value = 'Z score'
L.expression.apply_transform()
# Choose a column in the heatmap (e.g., 48h Th17) to color nodes
L.expression.col = rnaSampleOfInt
L.condition_click(None)

# Switch SVG layout to get line colors, then switch back to faster canvas mode
N.force_svg(None)
```
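The `edgeCutoff` above acts as a quantile-style threshold on edge confidence: raising it toward 1 keeps only the highest-ranked edges. A small NumPy illustration of that kind of filtering (the edge scores here are made up; jp_gene_viz applies its own thresholding internally via the slider, and for signed quantiles the comparison is on the rank magnitude):

```python
import numpy as np

# Hypothetical edge confidence scores (one per TF -> gene edge)
scores = np.array([0.10, 0.55, 0.80, 0.91, 0.94, 0.99])

edgeCutoff = 0.93  # same value as the notebook's slider setting

# Keep only edges whose score meets the cutoff, as the slider does
kept = scores[scores >= edgeCutoff]
print(kept)  # only the two top-ranked edges survive
```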
```
from PIL import Image, ImageOps, ImageMath, ImageEnhance
#%pylab inline
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

def add_transparency(bg_image):
    if bg_image.mode != "RGBA":
        bg_image = bg_image.convert("RGBA")
    pixdata = bg_image.load()
    for y in range(bg_image.size[1]):
        for x in range(bg_image.size[0]):
            if pixdata[x, y] == (255, 255, 255, 255):
                pixdata[x, y] = (255, 255, 255, 0)
    return bg_image

nmec_fr_003_001 = Image.open("/work/fs4/datasets/nmec-handwriting/stil-writing-corpus/French/French-Images/FR-003-001.tif").convert('RGBA')
nmec_fr_003_001_resize = nmec_fr_003_001.resize(tuple([x // 3 for x in nmec_fr_003_001.size]))

stain1 = Image.open("./coffee_stain_1.jpg").convert('RGBA')
stain1_w, stain1_h = stain1.size

offset1 = 250
nmec_fr_003_001_crop = nmec_fr_003_001_resize.crop((offset1, offset1, offset1+stain1_w, offset1+stain1_h))

fig = plt.figure()
pic1 = fig.add_subplot(1,2,1)
plt.imshow(np.asarray(nmec_fr_003_001_crop))
pic1.set_title('original cropped HW')
pic1 = fig.add_subplot(1,2,2)
plt.imshow(np.asarray(stain1))
pic1.set_title('original stain')

# mode 'L' doesn't work; RGBA does
alphac = Image.alpha_composite(nmec_fr_003_001_crop, add_transparency(stain1))
fig = plt.figure()
pic1 = fig.add_subplot(1,2,1)
plt.imshow(np.asarray(alphac))
pic1.set_title('alpha comp : stain onto hw')

alphac = Image.alpha_composite(stain1, add_transparency(nmec_fr_003_001_crop))
pic1 = fig.add_subplot(1,2,2)
plt.imshow(np.asarray(alphac))
pic1.set_title('alpha comp: hw onto stain')

blend = Image.blend(stain1, nmec_fr_003_001_crop, 0.5)
plt.imshow(np.asarray(blend))
plt.title("blend with alpha 0.5")

# Image.composite(fg, bg, mask_image) is not appropriate because we don't have all three
nmec_fr_003_001_crop_orig = nmec_fr_003_001_crop.copy()
nmec_fr_003_001_crop.paste(add_transparency(stain1))
plt.imshow(np.asarray(nmec_fr_003_001_crop))

stain_gray_inv = ImageOps.invert(stain1.convert('L'))
hw_gray = ImageOps.invert(nmec_fr_003_001_crop_orig.convert('L'))
summed_inversion = ImageMath.eval("convert(a+b, 'L')", a=stain_gray_inv, b=hw_gray)
plt.imshow(ImageOps.invert(summed_inversion), cmap='gray')

# To lighten the stain, slide between 1.0 and ~2.0; documentation says 0.0 gives a black image.
# Preferred method vs. the "contrast" approach below.
fig = plt.figure(figsize=(15,3))
enhancer = ImageEnhance.Brightness(stain1)
pic1 = fig.add_subplot(1,5,1)
plt.imshow(np.asarray(enhancer.enhance(1.0).convert('L')), cmap='gray')
pic1.set_title('orig stain')
pic1 = fig.add_subplot(1,5,2)
plt.imshow(np.asarray(enhancer.enhance(0.5).convert('L')), cmap='gray')
pic1.set_title('br. 0.5')
pic1 = fig.add_subplot(1,5,3)
plt.imshow(np.asarray(enhancer.enhance(0.01).convert('L')), cmap='gray')
pic1.set_title('br. 0.01')
pic1 = fig.add_subplot(1,5,4)
plt.imshow(np.asarray(enhancer.enhance(1.5).convert('L')), cmap='gray')
pic1.set_title('br. 1.5')
pic1 = fig.add_subplot(1,5,5)
plt.imshow(np.asarray(enhancer.enhance(1.75).convert('L')), cmap='gray')
pic1.set_title('br. 1.75')

# To lighten the stain with contrast, slide between 1.0 and ~0.0, but the entire image gets grayer, not just the stain.
# Documentation says 0.0 results in an all-gray image.
# Not converting this to grayscale, to show the effect, and because this enhancer isn't the one we want anyway.
fig = plt.figure(figsize=(15,3))
enhancer = ImageEnhance.Contrast(stain1)
pic1 = fig.add_subplot(1,5,1)
plt.imshow(np.asarray(enhancer.enhance(1.0)))
pic1.set_title('orig stain')
pic1 = fig.add_subplot(1,5,2)
plt.imshow(np.asarray(enhancer.enhance(0.5)))
pic1.set_title('contr. 0.5')
pic1 = fig.add_subplot(1,5,3)
plt.imshow(np.asarray(enhancer.enhance(0.1)))
pic1.set_title('contr. 0.1')
pic1 = fig.add_subplot(1,5,4)
plt.imshow(np.asarray(enhancer.enhance(2)))
pic1.set_title('contr. 2')
pic1 = fig.add_subplot(1,5,5)
plt.imshow(np.asarray(enhancer.enhance(5)))
pic1.set_title('contr. 5')

offset2 = 350
nmec_fr_003_001_small = nmec_fr_003_001_resize.crop((offset2, offset2, offset2+stain1_w, offset2+stain1_h))
#plt.imshow(np.asarray(nmec_fr_003_001_small))

# the following line won't work:
#print np.random.randint(0)

def get_rand_stain_w_random_brightness(shingle_img, stain_img, shin_dim, rng):
    # assumes RGBA stain > shingle in both dims
    # assumes shingle is 'L'
    stain_w, stain_h = stain_img.size
    max_rand_x = stain_w - shin_dim[1]
    max_rand_y = stain_h - shin_dim[0]
    startx = rng.randint(max_rand_x)
    starty = rng.randint(max_rand_y)
    rand_cropped_stain = stain_img.crop((startx, starty, startx+shin_dim[1], starty+shin_dim[0]))
    rand_bright = rng.random_sample() + 1.0
    rand_faded_stain = ImageEnhance.Brightness(rand_cropped_stain).enhance(rand_bright).convert('L')
    stain_inv = ImageOps.invert(rand_faded_stain)
    shingle_inv = ImageOps.invert(shingle_img)
    return ImageOps.invert(ImageMath.eval("convert(a+b, 'L')", a=stain_inv, b=shingle_inv))

img = get_rand_stain_w_random_brightness(nmec_fr_003_001_small.crop((0,0,60,110)).convert('L'),
                                         stain1, shin_dim=(110,60), rng=np.random.RandomState())
plt.imshow(img, cmap="gray")
```
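The darken-compositing trick above — invert both grayscale images, add, invert back — works because white (255) inverts to 0, so adding the inverted images accumulates darkness from both sources while white regions contribute nothing. A tiny NumPy sketch of the same arithmetic on made-up pixel values (the sum is clipped to the 0–255 range to stay a valid 8-bit image):

```python
import numpy as np

# Two grayscale "images": 255 = white paper, lower values = darker ink/stain
hw    = np.array([255, 200, 255,  50], dtype=np.uint16)
stain = np.array([255, 255, 180, 180], dtype=np.uint16)

# invert, add (clipped), invert back -- darkness from both images accumulates
combined = 255 - np.clip((255 - hw) + (255 - stain), 0, 255)
print(combined)  # [255 200 180   0]
```

White stays white, each image's dark pixels pass through, and pixels dark in both get darker still, which is the visual effect the notebook is after.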
# Image classification - training from scratch demo

1. [Introduction](#Introduction)
2. [Prerequisites and Preprocessing](#Prerequisites-and-Preprocessing)
3. [Training the Image classification model](#Training-the-Image-classification-model)
4. [Set up hosting for the model](#Set-up-hosting-for-the-model)
    1. [Import model into hosting](#Import-model-into-hosting)
    2. [Create endpoint configuration](#Create-endpoint-configuration)
    3. [Create endpoint](#Create-endpoint)
5. [Perform Inference](#Perform-Inference)

## Introduction

Welcome to our end-to-end example of the distributed image classification algorithm, trained from scratch. In this demo, we will use the Amazon SageMaker image classification algorithm to learn to classify the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html).

To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.

## Prerequisites and Preprocessing

### Permissions and environment variables

Here we set up the linkage and authentication to AWS services. There are three parts to this:

* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook
* The S3 bucket that you want to use for training and model data
* The Amazon SageMaker image classification docker image, which need not be changed

```
%%time
import boto3
import re
from sagemaker import get_execution_role

role = get_execution_role()

bucket = 'jsimon-sagemaker-us'  # customize to your bucket

containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/image-classification:latest',
              'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/image-classification:latest',
              'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/image-classification:latest',
              'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/image-classification:latest'}
training_image = containers[boto3.Session().region_name]
print(training_image)
```

## Training the Image classification model

The CIFAR-10 dataset consists of images from 10 categories, with 50,000 training images, 5,000 per category. The image classification algorithm can take two types of input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/tutorials/basic/record_io.html) and the other is a [lst format](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec). Files for both these formats are available at http://data.mxnet.io/data/cifar10/. In this example, we will use the recordio format for training and use the training/validation split.
```
import os
import urllib.request
import boto3

def download(url):
    filename = url.split("/")[-1]
    if not os.path.exists(filename):
        urllib.request.urlretrieve(url, filename)

def upload_to_s3(channel, file):
    s3 = boto3.resource('s3')
    data = open(file, "rb")
    key = channel + '/' + file
    s3.Bucket(bucket).put_object(Key=key, Body=data)

# CIFAR-10
download('http://data.mxnet.io/data/cifar10/cifar10_train.rec')
download('http://data.mxnet.io/data/cifar10/cifar10_val.rec')
upload_to_s3('validation/cifar10', 'cifar10_val.rec')
upload_to_s3('train/cifar10', 'cifar10_train.rec')
```

Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to set up the training parameters. The next section will explain the parameters in detail.

## Training parameters

There are two kinds of parameters that need to be set for training. The first kind comprises the parameters for the training job. These include:

* **Input specification**: These are the training and validation channels that specify the path where training data is present. These are specified in the "InputDataConfig" section. The main parameters that need to be set are the "ContentType", which can be set to "application/x-recordio" or "application/x-image" based on the input data format, and the S3Uri, which specifies the bucket and the folder where the data is present.
* **Output specification**: This is specified in the "OutputDataConfig" section. We just need to specify the path where the output can be stored after training.
* **Resource config**: This section specifies the type of instance on which to run the training and the number of hosts used for training. If "InstanceCount" is more than 1, then training can be run in a distributed manner.

Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:

* **num_layers**: The number of layers (depth) for the network.
We use 44 in this sample, but other values can be used.
* **num_training_samples**: This is the total number of training samples. It is set to 50000 for the CIFAR-10 dataset with the current split.
* **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes, but the number of output classes can be changed for fine-tuning. For CIFAR-10, we use 10.
* **epochs**: Number of training epochs.
* **learning_rate**: Learning rate for training.
* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size, where N is the number of hosts on which training is run.

After setting training parameters, we kick off training, and poll for status until training is completed, which in this example takes between 10 and 12 minutes per epoch on a p2.xlarge machine. The network typically converges after 10 epochs.

```
# The algorithm supports multiple network depths (number of layers).
# They are 18, 34, 50, 101, 152 and 200.
# For this training, we will use 44 layers.
num_layers = 44
# we need to specify the input image shape for the training data
image_shape = "3,28,28"
# we also need to specify the number of training samples in the training set
# for CIFAR-10 it is 50000
num_training_samples = 50000
# specify the number of output classes
num_classes = 10
# batch size for training
mini_batch_size = 128
# number of epochs
epochs = 100
# learning rate (not set in the original cell, but required by "HyperParameters" below;
# the value here is illustrative)
learning_rate = 0.01
# optimizer
optimizer = 'adam'
# Since we are training from scratch, we set use_pretrained_model to 0 so that weights are
# randomly initialized rather than loaded from a pre-trained model
use_pretrained_model = 0
```

# Training

Run the training using the Amazon SageMaker CreateTrainingJob API

```
%%time
import time
import boto3
from time import gmtime, strftime

s3 = boto3.client('s3')
# create unique job name
job_name_prefix = 'sagemaker-imageclassification-cifar10'
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
job_name = job_name_prefix + timestamp
training_params = \
{
    # specify the training docker image
    "AlgorithmSpecification": {
        "TrainingImage": training_image,
        "TrainingInputMode": "File"
    },
    "RoleArn": role,
    "OutputDataConfig": {
        "S3OutputPath": 's3://{}/{}/output'.format(bucket, job_name_prefix)
    },
    "ResourceConfig": {
        "InstanceCount": 1,
        "InstanceType": "ml.p2.xlarge",
        "VolumeSizeInGB": 50
    },
    "TrainingJobName": job_name,
    "HyperParameters": {
        "image_shape": image_shape,
        "num_layers": str(num_layers),
        "num_training_samples": str(num_training_samples),
        "num_classes": str(num_classes),
        "mini_batch_size": str(mini_batch_size),
        "epochs": str(epochs),
        "learning_rate": str(learning_rate),
        "use_pretrained_model": str(use_pretrained_model)
    },
    "StoppingCondition": {
        "MaxRuntimeInSeconds": 360000
    },
    # Training data should be inside a subdirectory called "train"
    # Validation data should be inside a subdirectory called "validation"
    # The algorithm currently only supports the fully replicated model (where data is copied onto each machine)
    "InputDataConfig": [
        {
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": 's3://{}/train/cifar10'.format(bucket),
                    "S3DataDistributionType": "FullyReplicated"
                }
            },
            "ContentType": "application/x-recordio",
            "CompressionType": "None"
        },
        {
            "ChannelName": "validation",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": 's3://{}/validation/cifar10'.format(bucket),
                    "S3DataDistributionType": "FullyReplicated"
                }
            },
            "ContentType": "application/x-recordio",
            "CompressionType": "None"
        }
    ]
}

print('Training job name: {}'.format(job_name))
print('\nInput Data Location: {}'.format(training_params['InputDataConfig'][0]['DataSource']['S3DataSource']))

# create the Amazon SageMaker training job
sagemaker = boto3.client(service_name='sagemaker')
sagemaker.create_training_job(**training_params)

# confirm that the training job has started
status = sagemaker.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print('Training job current status: {}'.format(status))

try:
    # wait for the job to finish and report the ending status
    sagemaker.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=job_name)
    training_info = sagemaker.describe_training_job(TrainingJobName=job_name)
    status = training_info['TrainingJobStatus']
    print("Training job ended with status: " + status)
except:
    print('Training failed to start')
    # if exception is raised, that means it has failed
    message = sagemaker.describe_training_job(TrainingJobName=job_name)['FailureReason']
    print('Training failed with the following error: {}'.format(message))

training_info = sagemaker.describe_training_job(TrainingJobName=job_name)
status = training_info['TrainingJobStatus']
print("Training job ended with status: " + status)
```

If you see the message,

> `Training job ended with status: Completed`

then that means training successfully completed and the output model was stored in the output path specified by `training_params['OutputDataConfig']`.
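As a quick sanity check of the `mini_batch_size` note in the training-parameters section — in distributed training each update consumes N * mini_batch_size samples — here is a little arithmetic with this notebook's values (the 4-host count is illustrative; this notebook trains on a single instance):

```python
num_training_samples = 50000  # CIFAR-10 training set size, as above
mini_batch_size = 128         # as set in the hyperparameters cell

# Single host: number of weight updates per epoch
# (ceiling division so the final partial batch is counted)
iters_single = -(-num_training_samples // mini_batch_size)
print(iters_single)  # 391

# With N hosts the effective batch per update grows to N * mini_batch_size,
# so each epoch takes proportionally fewer updates
N = 4  # illustrative host count
effective_batch = N * mini_batch_size
iters_distributed = -(-num_training_samples // effective_batch)
print(effective_batch, iters_distributed)  # 512 98
```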
You can also view information about and the status of a training job using the AWS SageMaker console. Just click on the "Jobs" tab.

## Plot training and validation accuracies

```
import boto3
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

client = boto3.client('logs')
lgn = '/aws/sagemaker/TrainingJobs'
lsn = 'sagemaker-imageclassification-cifar10-2018-01-16-10-31-05/algo-1-1516099203'
log = client.get_log_events(logGroupName=lgn, logStreamName=lsn)

trn_accs = []
val_accs = []
for e in log['events']:
    msg = e['message']
    if 'Validation-accuracy' in msg:
        val = msg.split("=")
        val = val[1]
        val_accs.append(float(val))
    if 'Train-accuracy' in msg:
        trn = msg.split("=")
        trn = trn[1]
        trn_accs.append(float(trn))

print("Maximum validation accuracy: %f " % max(val_accs))

fig, ax = plt.subplots()
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
trn_plot, = ax.plot(range(epochs), trn_accs, label="Training accuracy")
val_plot, = ax.plot(range(epochs), val_accs, label="Validation accuracy")
plt.legend(handles=[trn_plot, val_plot])
ax.yaxis.set_ticks(np.arange(0.4, 1.05, 0.05))
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%0.2f'))
plt.show()
```

# Inference

***

A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of a given image. This section involves several steps,

1. [Create Model](#CreateModel) - Create a model from the training output
1. [Create Endpoint Configuration](#CreateEndpointConfiguration) - Create a configuration defining an endpoint.
1. [Create Endpoint](#CreateEndpoint) - Use the configuration to create an inference endpoint.
1. [Perform Inference](#Perform-Inference) - Perform inference on some input data using the endpoint.

## Create Model

We now create a SageMaker Model from the training output. Using the model we can create an Endpoint Configuration.
```
%%time
import boto3
from time import gmtime, strftime

sage = boto3.Session().client(service_name='sagemaker')

model_name = "test-image-classification-model-cifar-10epochs"
print(model_name)
info = sage.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)

containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/image-classification:latest',
              'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/image-classification:latest',
              'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/image-classification:latest',
              'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/image-classification:latest'}
hosting_image = containers[boto3.Session().region_name]

primary_container = {
    'Image': hosting_image,
    'ModelDataUrl': model_data,
}

create_model_response = sage.create_model(
    ModelName = model_name,
    ExecutionRoleArn = role,
    PrimaryContainer = primary_container)

print(create_model_response['ModelArn'])
```

### Create Endpoint Configuration

At launch, we will support configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment, and at launch will describe the autoscaling configuration.
```
from time import gmtime, strftime

timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_config_name = job_name_prefix + '-epc-' + timestamp
endpoint_config_response = sage.create_endpoint_config(
    EndpointConfigName = endpoint_config_name,
    ProductionVariants=[{
        'InstanceType': 'ml.m4.xlarge',
        'InitialInstanceCount': 1,
        'ModelName': model_name,
        'VariantName': 'AllTraffic'}])

print('Endpoint configuration name: {}'.format(endpoint_config_name))
print('Endpoint configuration arn:  {}'.format(endpoint_config_response['EndpointConfigArn']))
```

### Create Endpoint

Lastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.

```
%%time
import time

timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_name = job_name_prefix + '-ep-' + timestamp
print('Endpoint name: {}'.format(endpoint_name))

endpoint_params = {
    'EndpointName': endpoint_name,
    'EndpointConfigName': endpoint_config_name,
}
endpoint_response = sagemaker.create_endpoint(**endpoint_params)
print('EndpointArn = {}'.format(endpoint_response['EndpointArn']))
```

Finally, now the endpoint can be created. It may take some time to create the endpoint...
```
# get the status of the endpoint
response = sagemaker.describe_endpoint(EndpointName=endpoint_name)
status = response['EndpointStatus']
print('EndpointStatus = {}'.format(status))

# wait until the status has changed
sagemaker.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name)

# print the status of the endpoint
endpoint_response = sagemaker.describe_endpoint(EndpointName=endpoint_name)
status = endpoint_response['EndpointStatus']
print('Endpoint creation ended with EndpointStatus = {}'.format(status))

if status != 'InService':
    raise Exception('Endpoint creation failed.')
```

If you see the message,

> `Endpoint creation ended with EndpointStatus = InService`

then congratulations! You now have a functioning inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console.

We will finally create a runtime object from which we can invoke the endpoint.

## Perform Inference

Finally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
```
import boto3
runtime = boto3.Session().client(service_name='runtime.sagemaker')
```

### Download test image

```
# Bird
#!wget -O /tmp/test.jpg https://cdn.pixabay.com/photo/2015/12/19/10/54/bird-1099639_960_720.jpg
# Horse
#!wget -O /tmp/test.jpg https://cdn.pixabay.com/photo/2016/02/15/13/26/horse-1201143_960_720.jpg
# Dog
!wget -O /tmp/test.jpg https://cdn.pixabay.com/photo/2016/02/19/15/46/dog-1210559_960_720.jpg
# Truck
#!wget -O /tmp/test.jpg https://cdn.pixabay.com/photo/2015/09/29/10/14/truck-truck-963637_960_720.jpg

file_name = '/tmp/test.jpg'
# test image
from IPython.display import Image
Image(file_name)

import json
import numpy as np

with open(file_name, 'rb') as f:
    payload = f.read()
    payload = bytearray(payload)

response = runtime.invoke_endpoint(EndpointName=endpoint_name,
                                   ContentType='application/x-image',
                                   Body=payload)
result = response['Body'].read()
# result will be in json format; convert it to an ndarray
result = json.loads(result)
print(result)
# the result outputs the probabilities for all classes
# find the class with maximum probability and print the class index
index = np.argmax(result)
object_categories = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
print("Result: label - " + object_categories[index] + ", probability - " + str(result[index]))
```

### Clean up

When we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.

```
sage.delete_endpoint(EndpointName=endpoint_name)
```
# Writing training scores to CloudWatch Metrics

## Overview

In this notebook, we look at how to write the scores produced during training on Amazon SageMaker to CloudWatch Metrics and visualize them.

## Uploading the dataset to S3

- Download the MNIST data using keras.datasets and save it in npz format.
- Upload the saved npz files to S3 using the SageMaker Python SDK.

```
import os
import keras
import numpy as np
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

os.makedirs("./data", exist_ok=True)
np.savez('./data/train', image=x_train, label=y_train)
np.savez('./data/test', image=x_test, label=y_test)

import sagemaker

sagemaker_session = sagemaker.Session()
bucket_name = sagemaker_session.default_bucket()
input_data = sagemaker_session.upload_data(path='./data', bucket=bucket_name, key_prefix='dataset/mnist')
print('Training data is uploaded to: {}'.format(input_data))
```

## Defining metrics

When creating an Estimator object, you can specify `metric_definitions` in JSON form. Each metric is defined as a regular expression; matching values are extracted from the training job's standard output and automatically written to CloudWatch Metrics.

Since we run a Keras job here, the logs are emitted in the following format. Let's extract, as metrics, the per-epoch loss values on both the training and validation data.

```
59600/60000 [============================>.] - ETA: 0s - loss: 0.2289 - acc: 0.9298
59800/60000 [============================>.] - ETA: 0s - loss: 0.2286 - acc: 0.9299
60000/60000 [==============================] - 28s 460us/step - loss: 0.2282 - acc: 0.9300 - val_loss: 0.1047 - val_acc: 0.9671
Epoch 2/100
  100/60000 [..............................] - ETA: 28s - loss: 0.1315 - acc: 0.9500
  300/60000 [..............................] - ETA: 25s - loss: 0.1260 - acc: 0.9600
  500/60000 [..............................] - ETA: 25s - loss: 0.1209 - acc: 0.9620
```

Defining the metrics as follows lets us pull the training and validation loss values out of logs in this format.
```
metric_definitions=[
    {
        "Name": "train:loss",
        "Regex": ".*step\\s-\\sloss:\\s(\\S+).*"
    },
    {
        "Name": "val:loss",
        "Regex": ".*\\sval_loss:\\s(\\S+).*"
    }
],
```

## Training on SageMaker

Create a TensorFlow estimator that includes the metric definitions described above and run it; the metrics are then emitted as well. To check them, open "Training jobs" in the left-hand menu of the management console and select the job in question. The "View algorithm metrics" link in the "Monitor" section at the bottom of the detail page takes you to graphs of the defined metrics.

```
from sagemaker.tensorflow import TensorFlow
from sagemaker import get_execution_role

role = get_execution_role()

mnist_estimator = TensorFlow(entry_point="./src/keras_mlp_mnist.py",
                             role=role,
                             train_instance_count=1,
                             train_instance_type="ml.m4.xlarge",
                             framework_version="1.11.0",
                             py_version='py3',
                             script_mode=True,
                             metric_definitions=[
                                 {
                                     "Name": "train:loss",
                                     "Regex": ".*step\\s-\\sloss:\\s(\\S+).*"
                                 },
                                 {
                                     "Name": "val:loss",
                                     "Regex": ".*\\sval_loss:\\s(\\S+).*"
                                 }
                             ],
                             hyperparameters={'batch_size': 64,
                                              'n_class': 10,
                                              'epochs': 15})

mnist_estimator.fit(input_data)
```
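The regex-based extraction that SageMaker performs on the job's stdout can be checked locally. The sketch below (not part of the notebook) applies the same two patterns to the sample log line shown earlier; SageMaker records capture group 1 as the metric value:

```python
import re

# Sample line from the Keras training log shown above.
log_line = ("60000/60000 [==============================] - 28s 460us/step "
            "- loss: 0.2282 - acc: 0.9300 - val_loss: 0.1047 - val_acc: 0.9671")

# The same patterns passed to metric_definitions, as raw strings.
train_loss = re.match(r".*step\s-\sloss:\s(\S+).*", log_line).group(1)
val_loss = re.match(r".*\sval_loss:\s(\S+).*", log_line).group(1)

print(train_loss, val_loss)  # 0.2282 0.1047
```

Note that per-batch lines (those without a `val_loss:` field) simply fail to match the `val:loss` pattern, so only the end-of-epoch summary line contributes a validation-loss data point.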
``` import os import sys import itertools import copy import glob import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib as mpl from n2j import trainval_data import n2j.trainval_data.utils.raytracing_utils as ru import n2j.trainval_data.utils.halo_utils as hu import n2j.trainval_data.utils.coord_utils as cu import lenstronomy print(lenstronomy.__path__) mpl.rcParams['text.usetex'] = True mpl.rcParams['mathtext.fontset'] = 'stix' mpl.rcParams['font.family'] = 'STIXGeneral' mpl.rcParams['text.latex.preamble'] = r'\usepackage{amsmath}' #for \text command mpl.rcParams['axes.labelsize'] = 'x-large' %matplotlib inline %load_ext autoreload %autoreload 2 out_dir = '../n2j/data/cosmodc2_10450/Y_10450' def show_kappa_map(kappa_map, path=None, fov=6.0): plt.close('all') fig, ax = plt.subplots(figsize=(10, 10)) #kappa_map = np.load(os.path.join(out_dir, 'kappa_map_sightline={:d}_sample={:d}.npy').format(sightline_i, sample_i)) min_k = np.min(kappa_map) print("Minmax: ", np.min(kappa_map), np.max(kappa_map)) print(np.mean(kappa_map[kappa_map<0.4])) kappa_map[kappa_map<0] = np.nan print("Number of negative pixels: ", (~np.isfinite(kappa_map)).sum()) #plt.scatter(0, 0, marker='x', color='k') #some_halos = halos[halos['eff'] < 0.5] #plt.scatter(some_halos['ra_diff']*60.0*60.0, some_halos['dec_diff']*60.0*60.0, marker='x', color='white') #plt.imshow(kappa_map) cmap = copy.copy(plt.cm.viridis) cmap.set_bad((1, 0, 0, 1)) im = ax.imshow(kappa_map, vmin=min_k, vmax=0.2, extent=[-fov*60.0*0.5, fov*60.0*0.5, -fov*60.0*0.5, fov*60.0*0.5], origin='lower', cmap=cmap) ax.set_xticks(np.linspace(-fov*60.0*0.5, fov*60.0*0.5, 10)) ax.set_yticks(np.linspace(-fov*60.0*0.5, fov*60.0*0.5, 10)) #plt.xlabel(r"'' ") #plt.ylabel(r"'' ") ax.set_xlabel('asec') ax.set_ylabel('asec') fig.colorbar(im) plt.close('all') fig, ax = plt.subplots(figsize=(10, 10)) los_i = 1 g1 = np.load(os.path.join(out_dir, 'g1_map_los={0:07d}.npy').format(los_i)) g2 = 
np.load(os.path.join(out_dir, 'g2_map_los={0:07d}.npy').format(los_i)) import lenstronomy.Util.param_util as param_util phi, gamma = param_util.shear_cartesian2polar(g1, g2) fov = 1.35 #min_k = np.min(kappa_map) #print("Minmax: ", np.min(kappa_map), np.max(kappa_map)) #print(np.mean(kappa_map[kappa_map<0.4])) #kappa_map[kappa_map<0] = np.nan #print("Number of negative pixels: ", (~np.isfinite(kappa_map)).sum()) halos_cols = ['ra', 'ra_diff', 'dec', 'dec_diff', 'z', 'dist'] halos_cols += ['eff', 'halo_mass', 'stellar_mass', 'Rs', 'alpha_Rs'] halos_cols += ['galaxy_id'] halos_cols.sort() halos_arr = np.load(glob.glob(os.path.join(out_dir, 'halos_los={0:07d}_id=*.npy'.format(los_i)))[0]) halos = pd.DataFrame(halos_arr, columns=halos_cols) plt.scatter(0, 0, marker='x', color='k') #some_halos = halos[halos['eff'] < 0.5] plt.scatter(halos['ra_diff']*60.0*60.0, halos['dec_diff']*60.0*60.0, marker='x', color='white', s=np.log10(halos['halo_mass'].values)*4) #plt.imshow(kappa_map) cmap = copy.copy(plt.cm.viridis) cmap.set_bad((1, 0, 0, 1)) im = ax.imshow(gamma, extent=[-fov*60.0*0.5, fov*60.0*0.5, -fov*60.0*0.5, fov*60.0*0.5], origin='lower', cmap=cmap) ax.set_xticks(np.linspace(-fov*60.0*0.5, fov*60.0*0.5, 10)) ax.set_yticks(np.linspace(-fov*60.0*0.5, fov*60.0*0.5, 10)) #plt.xlabel(r"'' ") #plt.ylabel(r"'' ") ax.set_title(r'$\gamma_{\rm ext}$', fontsize=20) ax.set_xlabel('asec') ax.set_ylabel('asec') fig.colorbar(im, ) plt.close('all') fig, ax = plt.subplots(figsize=(10, 10)) los_i = 0 g1 = np.load(os.path.join(out_dir, 'g1_map_los={0:07d}.npy').format(los_i)) g2 = np.load(os.path.join(out_dir, 'g2_map_los={0:07d}.npy').format(los_i)) import lenstronomy.Util.param_util as param_util phi, gamma = param_util.shear_cartesian2polar(g1, g2) fov = 1.35 #min_k = np.min(kappa_map) #print("Minmax: ", np.min(kappa_map), np.max(kappa_map)) #print(np.mean(kappa_map[kappa_map<0.4])) #kappa_map[kappa_map<0] = np.nan #print("Number of negative pixels: ", (~np.isfinite(kappa_map)).sum()) 
halos_cols = ['ra', 'ra_diff', 'dec', 'dec_diff', 'z', 'dist'] halos_cols += ['eff', 'halo_mass', 'stellar_mass', 'Rs', 'alpha_Rs'] halos_cols += ['galaxy_id'] halos_cols.sort() halos_arr = np.load(glob.glob(os.path.join(out_dir, 'halos_los={0:07d}_id=*.npy'.format(los_i)))[0]) halos = pd.DataFrame(halos_arr, columns=halos_cols) ax.scatter(0, 0, marker='x', color='k') #some_halos = halos[halos['eff'] < 0.5] ax.scatter(halos['ra_diff']*60.0*60.0, halos['dec_diff']*60.0*60.0, marker='x', color='white', s=np.log10(halos['halo_mass'].values)*4) #plt.imshow(kappa_map) cmap = copy.copy(plt.cm.viridis) cmap.set_bad((1, 0, 0, 1)) im = ax.imshow(phi, extent=[-fov*60.0*0.5, fov*60.0*0.5, -fov*60.0*0.5, fov*60.0*0.5], origin='lower', cmap=cmap) ax.set_xticks(np.linspace(-fov*60.0*0.5, fov*60.0*0.5, 10)) ax.set_yticks(np.linspace(-fov*60.0*0.5, fov*60.0*0.5, 10)) ax.set_title(r'$\phi_{\rm ext}$', fontsize=20) #plt.xlabel(r"'' ") #plt.ylabel(r"'' ") ax.set_xlabel('asec') ax.set_ylabel('asec') fig.colorbar(im, ) plt.close('all') fig, ax = plt.subplots(figsize=(10, 10)) los_i = 4 kappa = np.load(os.path.join(out_dir, 'k_map_los={0:07d}.npy'.format(los_i))) fov = 1.35 # diameter fov_gal = 2.0 # diameter #min_k = np.min(kappa_map) #print("Minmax: ", np.min(kappa_map), np.max(kappa_map)) #print(np.mean(kappa_map[kappa_map<0.4])) #kappa_map[kappa_map<0] = np.nan #print("Number of negative pixels: ", (~np.isfinite(kappa_map)).sum()) halos_cols = ['ra', 'ra_diff', 'dec', 'dec_diff', 'z', 'dist'] halos_cols += ['eff', 'halo_mass', 'stellar_mass', 'Rs', 'alpha_Rs'] halos_cols += ['galaxy_id'] halos_cols.sort() halos_arr = np.load(glob.glob(os.path.join(out_dir, 'halos_los={0:07d}_id=*.npy'.format(los_i)))[0]) halos = pd.DataFrame(halos_arr, columns=halos_cols) ax.scatter(0, 0, marker='x', color='tab:green') #some_halos = halos[halos['eff'] < 0.5] ax.scatter(halos['ra_diff'].values*60.0, halos['dec_diff'].values*60.0, marker='x', color='white', s=np.log10(halos['halo_mass'].values)*5) 
#plt.imshow(kappa_map) cmap = copy.copy(plt.cm.inferno) cmap.set_bad((1, 0, 0, 1)) ax.set_facecolor('k') im = ax.imshow(kappa, extent=[-fov*0.5, fov*0.5, -fov*0.5, fov*0.5], origin='lower', cmap=cmap) # Get galaxy positions to overlay import torch graph_dir = '../n2j/data/cosmodc2_10450/processed' data = torch.load(os.path.join(graph_dir, f'subgraph_{los_i}.pt')) mask = data.x[:, -4] < 26.8 ra = data.x[mask, 4] dec = data.x[mask, 5] brightness = 20*10**(0.4*(22.5 - data.x[mask, -4])) ax.scatter(ra*60.0, dec*60.0, marker='*', color='yellow', s=brightness) ax.set_xticks(np.linspace(-fov_gal*0.5, fov_gal*0.5, 10)) ax.set_yticks(np.linspace(-fov_gal*0.5, fov_gal*0.5, 10)) ax.tick_params(axis='both', which='major', labelsize=14) #plt.xlabel(r"'' ") #plt.ylabel(r"'' ") #ax.set_title(r'$\kappa_{\rm ext}$', fontsize=20) ax.set_xlabel(r"$(\alpha - \alpha_{\rm LOS})\cos(\delta_{\rm LOS}) \quad (\prime)$", fontsize=20) ax.set_ylabel(r"$(\delta - \delta_{\rm LOS}) \quad (\prime)$", fontsize=20) cbar = fig.colorbar(im, fraction=0.046, pad=0.04) cbar.ax.tick_params(labelsize=14) cbar.ax.set_ylabel(r'$\kappa$', rotation=0, fontsize=20, labelpad=15) fig.savefig('kappa_vs_gals.png', dpi=200, pad_inches=0, bbox_inches='tight') plt.close('all') fig, ax = plt.subplots(figsize=(10, 10)) los_i = 0 kappa = np.load(os.path.join(out_dir, 'k_map_los={0:07d}.npy'.format(los_i))) fov = 1.35 # diameter fov_gal = 2.0 # diameter #min_k = np.min(kappa_map) #print("Minmax: ", np.min(kappa_map), np.max(kappa_map)) #print(np.mean(kappa_map[kappa_map<0.4])) #kappa_map[kappa_map<0] = np.nan #print("Number of negative pixels: ", (~np.isfinite(kappa_map)).sum()) halos_cols = ['ra', 'ra_diff', 'dec', 'dec_diff', 'z', 'dist'] halos_cols += ['eff', 'halo_mass', 'stellar_mass', 'Rs', 'alpha_Rs'] halos_cols += ['galaxy_id'] halos_cols.sort() halos_arr = np.load(glob.glob(os.path.join(out_dir, 'halos_los={0:07d}_id=*.npy'.format(los_i)))[0]) halos = pd.DataFrame(halos_arr, columns=halos_cols) ax.scatter(0, 0, 
marker='x', color='tab:green') #some_halos = halos[halos['eff'] < 0.5] ax.scatter(halos['ra_diff']*60.0, halos['dec_diff']*60.0, marker='x', color='white', s=np.log10(halos['stellar_mass'].values)*5) #plt.imshow(kappa_map) cmap = copy.copy(plt.cm.inferno) cmap.set_bad((1, 0, 0, 1)) ax.set_facecolor('k') im = ax.imshow(kappa, extent=[-fov*0.5, fov*0.5, -fov*0.5, fov*0.5], origin='lower', cmap=cmap) # Get galaxy positions to overlay import torch graph_dir = '../n2j/data/cosmodc2_10450/processed' data = torch.load(os.path.join(graph_dir, f'subgraph_{los_i}.pt')) mask = data.x[:, -4] < 26.8 ra = data.x[mask, 4] dec = data.x[mask, 5] brightness = 50*10**(0.4*(22.5 - data.x[mask, -4])) ax.scatter(ra*60.0, dec*60.0, marker='*', color='yellow', s=brightness) ax.set_xticks(np.linspace(-fov_gal*0.5, fov_gal*0.5, 10)) ax.set_yticks(np.linspace(-fov_gal*0.5, fov_gal*0.5, 10)) ax.tick_params(axis='both', which='major', labelsize=14) #plt.xlabel(r"'' ") #plt.ylabel(r"'' ") #ax.set_title(r'$\kappa_{\rm ext}$', fontsize=20) ax.set_xlabel(r"Angular dist $(\prime)$", fontsize=20) ax.set_ylabel(r"Angular dist $(\prime)$", fontsize=20) cbar = fig.colorbar(im, fraction=0.046, pad=0.04) cbar.ax.tick_params(labelsize=14) cbar.ax.set_ylabel(r'$\kappa$', rotation=0, fontsize=20, labelpad=15) fig.savefig('kappa_vs_gals.png', dpi=200, pad_inches=0, bbox_inches='tight') i_mag = data.x[mask, -4] halo_mass = data.y_local[mask, 0] stellar_mass = data.y_local[mask, 1] plt.scatter(i_mag, stellar_mass) data.x.shape # Get pointings pointings_cols = ['kappa', 'gamma1', 'gamma2'] pointings_cols += ['galaxy_id', 'ra', 'dec', 'z', 'eps'] pointings_cols.sort() pointings_arr = np.load('/home/jwp/stage/sl/n2j/n2j/data/cosmodc2_10326/Y_10326/sightlines.npy') pointings = pd.DataFrame(pointings_arr, columns=pointings_cols) import glob i = 1 halos_cols = ['ra', 'ra_diff', 'dec', 'dec_diff', 'z', 'dist'] halos_cols += ['eff', 'halo_mass', 'stellar_mass', 'Rs', 'alpha_Rs'] halos_cols += ['galaxy_id'] 
halos_cols.sort() out_dir = '/home/jwp/stage/sl/n2j/n2j/data/cosmodc2_10326/Y_10326' halos_path_fmt = os.path.join(out_dir, 'halos_los={0:07d}_id=*.npy') halos_path = glob.glob(halos_path_fmt.format(i))[0] halos_arr = np.load(halos_path) halos = pd.DataFrame(halos_arr, columns=halos_cols) ``` Let's get the kappa map. ``` from lenstronomy.LensModel.lens_model import LensModel from astropy.cosmology import WMAP7 # WMAP 7-year cosmology KAPPA_DIFF = 1.0 # arcsec fov = 1.35 # diameter of fov in arcmin sightline = pointings.iloc[i] n_halos = halos.shape[0] # Instantiate multi-plane lens model lens_model = LensModel(lens_model_list=['NFW']*n_halos, z_source=sightline['z'], lens_redshift_list=halos['z'].values, multi_plane=True, cosmo=WMAP7, observed_convention_index=[]) halos['center_x'] = halos['ra_diff']*3600.0 # deg to arcsec halos['center_y'] = halos['dec_diff']*3600.0 nfw_kwargs = halos[['Rs', 'alpha_Rs', 'center_x', 'center_y']].to_dict('records') uncalib_kappa = lens_model.kappa(0.0, 0.0, nfw_kwargs, diff=KAPPA_DIFF, diff_method='square') uncalib_gamma1, uncalib_gamma2 = lens_model.gamma(0.0, 0.0, nfw_kwargs, diff=KAPPA_DIFF, diff_method='square') # Log the uncalibrated shear/convergence and the weighted sum of halo masses w_mass_sum = np.log10(np.sum(halos['eff'].values*halos['halo_mass'].values)) new_row_data = dict(idx=[i], kappa=[uncalib_kappa], gamma1=[uncalib_gamma1], gamma2=[uncalib_gamma2], weighted_mass_sum=[w_mass_sum], ) # Optionally map the uncalibrated shear and convergence on a grid hu.get_kappa_map(lens_model, nfw_kwargs, fov, 'kappa_map.npy', KAPPA_DIFF, x_grid=np.arange(-fov*0.5, fov*0.5, 0.1/60.0)*60.0, y_grid=np.arange(-fov*0.5, fov*0.5, 0.1/60.0)*60.0) plt.close('all') fig, ax = plt.subplots(figsize=(10, 10)) import lenstronomy.Util.param_util as param_util kappa_map = np.load('kappa_map.npy') #min_k = np.min(kappa_map) #print("Minmax: ", np.min(kappa_map), np.max(kappa_map)) #print(np.mean(kappa_map[kappa_map<0.4])) #kappa_map[kappa_map<0] = 
np.nan #print("Number of negative pixels: ", (~np.isfinite(kappa_map)).sum()) ax.scatter(0, 0, marker='x', color='k') #some_halos = halos[halos['eff'] < 0.5] ax.scatter(halos['ra_diff']*60.0*60.0, halos['dec_diff']*60.0*60.0, marker='x', color='white', s=np.log10(halos['halo_mass'].values)*4) cmap = copy.copy(plt.cm.viridis) cmap.set_bad((1, 0, 0, 1)) im = ax.imshow(kappa_map, extent=[-fov*60.0*0.5, fov*60.0*0.5, -fov*60.0*0.5, fov*60.0*0.5], origin='lower', cmap=cmap) ax.scatter(gals['ra_true']*60.0*60.0, halos['dec_true']*60.0*60.0, marker='o', color='yellow') #im = ax.imshow(phi, extent=[-fov*60.0*0.5, fov*60.0*0.5, -fov*60.0*0.5, fov*60.0*0.5], origin='lower', cmap=cmap) ax.set_xticks(np.linspace(-fov*60.0*0.5, fov*60.0*0.5, 10)) ax.set_yticks(np.linspace(-fov*60.0*0.5, fov*60.0*0.5, 10)) #plt.xlabel(r"'' ") #plt.ylabel(r"'' ") ax.set_xlabel('asec') ax.set_ylabel('asec') fig.colorbar(im) np.sum(phi) c_200 = ru.get_concentration(new_halos['halo_mass'].values, new_halos['stellar_mass'].values) print(c_200) lens_cosmo = LensCosmo(z_lens=new_halos['z'].values, z_source=los_info['z'], cosmo=WMAP7) Rs_angle, alpha_Rs = lens_cosmo.nfw_physical2angle(M=new_halos['halo_mass'].values, c=c_200) rho0, Rs, c, r200, M200 = lens_cosmo.nfw_angle2physical(Rs_angle=Rs_angle, alpha_Rs=alpha_Rs) print(Rs, alpha_Rs) los_i = 0 sightlines = pd.read_csv(os.path.join(out_dir, 'sightlines.csv'), index_col=None) halos = pd.read_csv(os.path.join(out_dir, 'los_halos_los={:d}.csv'.format(los_i)), index_col=None) plt.scatter(sightlines['ra'], sightlines['dec'], marker='.', color='tab:gray', alpha=0.5) plt.plot(sightlines['ra'][los_i], sightlines['dec'][los_i], marker='x', color='k') plt.scatter(halos['ra'], halos['dec'], marker='<', color='tab:blue', alpha=0.5) plt.xlabel('ra (deg)') plt.ylabel('dec (deg)') uncalib = pd.read_csv(os.path.join(out_dir, 'uncalib.csv'), index_col=None) sightlines['final_gamma1'].values, sightlines['gamma1'] + uncalib['gamma1'] np.max(sightlines['z'].values) 
plt.hist(sightlines['z'], color='tab:gray', bins=20) plt.yscale('log', nonposy='clip') plt.xlabel(r'$z_{\rm src}$', fontsize=20) plt.ylabel('Count') halos = pd.read_csv(os.path.join(out_dir, 'los_halos_los={:d}.csv'.format(los_i)), index_col=None) binning = np.histogram_bin_edges(np.log10(halos['halo_mass'])) plt.hist(np.log10(halos['halo_mass']), bins=20) plt.xlabel('log10(halo mass)') plt.ylabel('Count') plt.axvline(10.5, linestyle='--', color='k') plt.axvline(11, linestyle='--', color='k') plt.axvline(11.25, linestyle='--', color='k') plt.axvline(11.5, linestyle='--', color='k') plt.axvline(12, linestyle='--', color='k') #plt.scatter(halos['redshift'], halos['baseDC2/target_halo_redshift'], marker='.') #plt.plot(halos['redshift'], halos['redshift'], linestyle='--', color='k') #plt.hist(halos['dist']) plt.close('all') fov = 0.85 # arcmin plt.scatter(halos['ra_diff']*3600.0, halos['dec_diff']*3600.0, alpha=0.5) #plt.scatter(halos['ra_diff']*3600.0, halos['dec_diff']*3600.0, alpha=0.5) plt.scatter(0, 0, color='k', marker='x') fig = plt.gcf() fig.set_size_inches(5, 5) plt.xticks(np.linspace(-fov*60.0*0.5, fov*60.0*0.5, 10)) plt.yticks(np.linspace(-fov*60.0*0.5, fov*60.0*0.5, 10)) plt.xlabel('asec') plt.ylabel('asec') print(halos.shape) print(halos.columns) print(halos['halo_mass'].nunique()) c_200 = ru.get_concentration(halos['halo_mass'].values, halos['stellar_mass'].values) def plot_c_vs_M(lower, upper, df, marker, color): binning = np.logical_and(df['z'] < upper, df['z'] > lower) plt.scatter(halos.loc[binning, 'halo_mass'].values/halos.loc[binning, 'stellar_mass'].values, c_200[binning], marker=marker, color=color, alpha=0.5) plt.hist(np.log10(halos['halo_mass']), bins=50) plt.xlabel('log10(halo mass/solar mass)') plt.hist(halos['dist'].values, bins=50) plt.xlabel('log10(halo mass/solar mass)') plt.close('all') plot_c_vs_M(0, 0.5, halos, '.', 'k') plot_c_vs_M(0.5, 1.5, halos, '<', 'tab:orange') plot_c_vs_M(1.5, 2.5, halos, 'x', 'tab:olive') plot_c_vs_M(2.5, 3.5, 
halos, '<', 'tab:blue') plt.xscale('log') plt.xlabel(r'$M/M_\star$') plt.ylabel(r'$c$') plt.ylim([2, 7]) plt.hist(c_200) #plt.savefig('kappa_map.pdf') halos.shape high_m = halos[halos['halo_mass']>10.0**13].reset_index() high_m.iloc[high_m['dist'].idxmin()] _ = plt.hist(kappa_map.flatten(), bins=100, range=[-0.03, 2], log=True) plt.axvline(np.mean(kappa_map[kappa_map<0.4]), color='r') print(np.mean(kappa_map[kappa_map<0.4])) kappa_samples = np.load('{:s}/kappa_samples_sightline={:d}.npy'.format(out_dir, sightline_i)) _ = plt.hist(kappa_samples, bins=100, range=[-0.03, 0.1], log=True) plt.axvline(np.mean(kappa_samples[kappa_samples<0.4]), color='r') print(np.mean(kappa_samples[kappa_samples<0.4])) plt.hist(halos['dec']) r, d = cu.sample_in_aperture(10000, 3.0/60.0) plt.hist((r**2.0 + d**2.0)) # R^2 ~ U(0, Rmax^2) plt.hist(np.arctan(d/r)) plt.hist(d) plt.close('all') fov = 6.0 sightline_i = 0 g1_map = np.load(os.path.join(out_dir, 'gamma1_map_sightline={:d}.npy'.format(sightline_i))) min_k = np.min(g1_map) print(np.min(g1_map), np.max(g1_map)) print(g1_map.shape) print(np.median(g1_map.flatten())) #plt.scatter(0, 0, marker='x', color='k') #plt.scatter(halos['ra_diff']*60.0*60.0, halos['dec_diff']*60.0*60.0, marker='x', color='k') #plt.imshow(kappa_map) plt.imshow(g1_map, vmin=min_k, extent=[-fov*60.0*0.5, fov*60.0*0.5, -fov*60.0*0.5, fov*60.0*0.5], origin='lower') plt.xticks(np.linspace(-fov*60.0*0.5, fov*60.0*0.5, 10)) plt.yticks(np.linspace(-fov*60.0*0.5, fov*60.0*0.5, 10)) #plt.xlabel(r"'' ") #plt.ylabel(r"'' ") plt.xlabel('asec') plt.ylabel('asec') plt.colorbar() #plt.savefig('kappa_map.pdf') k = np.load('../los_test/kappa_samples_sightline={:d}.npy'.format(los_i)) min_k = np.min(k[~is_outlier(k)]) max_k = np.max(k[~is_outlier(k)]) print(min_k, max_k) binning = np.histogram_bin_edges(k, range=[min_k, max_k], bins='scott') plt.hist(k, bins=binning) plt.ylabel('Count') plt.xlabel(r'$\kappa_{\rm ext}$', fontsize=20) plt.scatter(raw_kappas, wl_kappas) sightlines = 
pd.read_csv(os.path.join(out_dir, 'sightlines.csv'), index_col=None) idx = 1 sightline = sightlines.iloc[idx] print(sightline) ru.raytrace_single_sightline(idx, 10450, sightline['ra'], sightline['dec'], sightline['z'], 6.0, True, True, 5, 11.0, out_dir, test=False) df = pd.DataFrame(columns=['idx', 'kappa', 'gamma1', 'gamma2']) df.to_csv('test.csv', index=None) uncalib_kappa = -1 uncalib_gamma1 = 0 uncalib_gamma2 = 1 idx = 99 new_data = {'idx': [idx], 'kappa': [uncalib_kappa], 'gamma1': [uncalib_gamma1], 'gamma2': [uncalib_gamma2]} new_df = pd.DataFrame(new_data) #new_data = {'idx': idx, 'kappa': uncalib_kappa, 'gamma1': uncalib_gamma1, 'gamma2': uncalib_gamma2} #new_df = pd.Series(new_data) new_df.to_csv('test.csv', index=None, mode='a', header=None) new_data = {'idx': [idx], 'kappa': [uncalib_kappa], 'gamma1': [uncalib_gamma1], 'gamma2': [uncalib_gamma2]} new_df = pd.DataFrame(new_data) #new_data = {'idx': idx, 'kappa': uncalib_kappa, 'gamma1': uncalib_gamma1, 'gamma2': uncalib_gamma2} #new_df = pd.Series(new_data) new_df.to_csv('test.csv', index=None, mode='a', header=None) uncalib_kappas = pd.read_csv('test.csv', index_col=None) uncalib_kappas uncalib_kappas.drop_duplicates('idx', inplace=True) uncalib_kappas import healpy as hp hp.nside2resol(4096, arcmin=True) sightlines = pd.read_csv(os.path.join(out_dir, 'sightlines.csv'), index_col=None) uncalib_kappas = pd.read_csv(os.path.join(out_dir, 'uncalib.csv'), index_col=None) uncalib_kappas.columns = ['idx', 'kappa', 'gamma1', 'gamma2'] print(uncalib_kappas.shape) print(uncalib_kappas.loc[uncalib_kappas['idx'] == 0, 'kappa'].sample(1).item()) n_sightlines = 1000 mean_kappas = np.empty(n_sightlines) median_kappas = np.empty(n_sightlines) final_kappas = np.empty(n_sightlines) # [los, samples] raw_kappas = np.empty(n_sightlines) samples = [] for s in range(n_sightlines): # loop over sightlines #kappa_map = np.load('../sprint_week/kappa_map_sightline={:d}.npy'.format(s)) #print(np.mean(kappa_map.flatten())) wl_kappa 
= sightlines.loc[s, 'kappa'] k = np.load(os.path.join(out_dir, 'k_samples_los={:d}.npy'.format(s))) samples.append(k) k = k[k < 0.5] mean_kappas[s] = np.mean(k) median_kappas[s] = np.median(k) uncalib_kappa = uncalib_kappas.loc[uncalib_kappas['idx'] == s, 'kappa'].sample(1).item() raw_kappas[s] = uncalib_kappa final_kappas[s] = uncalib_kappa + wl_kappa - np.mean(k) _ = plt.hist(raw_kappas - mean_kappas, bins=40, histtype='step', color='tab:gray', edgecolor='tab:red', linestyle='-', fill=True, linewidth=2, label=r'uncalibrated $\kappa_{\rm ext}$ - mean(realizations)') plt.legend(fontsize=15) plt.ylabel('Count (sightlines)') plt.xlabel(r'$\kappa_{\rm ext}$', fontsize=20) resampled = samples[4] resampled = resampled[resampled < 0.4] plt.hist(resampled, bins=100, range=[-0.005, 0.4],) plt.xlabel('Resampled kappas for a sightline') plt.axvline(np.mean(resampled), linestyle='--', color='k', label='Average, with a kappa cut at 0.4') print(np.mean(resampled)) plt.legend(fontsize=15) for n in [100, 200, 400, 800, 1000]: random_i = np.random.randint(0, 1000, size=n) print(np.array(samples[0])[random_i].mean()) _ = plt.hist(mean_kappas, bins=20, histtype='step', label='mean(realizations)', color='tab:gray', linestyle='-', linewidth=2) _ = plt.hist(median_kappas, bins=20, histtype='step', label='median(realizations)', color='tab:gray', linestyle='--') plt.legend(fontsize=20) plt.ylabel('Count (sightlines)') plt.xlabel(r'$\kappa_{\rm ext}$', fontsize=20) _ = plt.hist(raw_kappas, bins=50, histtype='step', color='tab:red', range=[0., max(raw_kappas)], linestyle='-', linewidth=2, label=r'uncalibrated $\kappa_{\rm ext}$') plt.legend(fontsize=20) plt.ylabel('Count (sightlines)') plt.xlabel(r'$\kappa_{\rm ext}$', fontsize=20) #central_final_kappas = np.mean(final_kappas, axis=1) binning = np.histogram_bin_edges(final_kappas, bins='scott') print(final_kappas.shape, ) _ = plt.hist(final_kappas, bins=20, alpha=1.0, label="1'' (enhanced)", histtype='step', linewidth=2, density=True) 
wl_kappas = sightlines['kappa'].values[:n_sightlines] binning = np.histogram_bin_edges(wl_kappas, bins='scott') _ = plt.hist(wl_kappas, bins=20, alpha=1.0, label="51'' (cosmoDC2)", histtype='step', hatch='//', density=True) plt.plot([], [], linestyle='-', color='k', label='Greene+ 2013, all LOS') plt.legend(fontsize=12, frameon=False) plt.ylabel('Density') plt.xlabel(r'$\kappa_{\rm ext}$', fontsize=20) import n2j.trainval_data.raytracing_utils as gen_ru fit_model = gen_ru.fit_gp(weighted_mass_sum, mean_kappas) approx_mean_kappa() unnormed_N = np.empty(n_sightlines) unnormed_mass_sum = np.empty(n_sightlines) weighted_mass_sum = np.empty(n_sightlines) unnormed_dist_weighted_N = np.empty(n_sightlines) for s in range(n_sightlines): # loop over sightlines halos_s = pd.read_csv(os.path.join(out_dir, 'los_halos_los={:d}.csv'.format(s)), index_col=None) #_, _, eff = ru.get_nfw_kwargs(halos_s['halo_mass'].values, halos_s['stellar_mass'].values, # halos_s['z'].values, sightlines.iloc[los_i]['z']) #halos_s['eff'] = eff unnormed_N[s] = halos_s.shape[0] unnormed_mass_sum[s] = halos_s['halo_mass'].sum() weighted_mass_sum[s] = np.sum(halos_s['halo_mass'].values*halos_s['eff'].values) unnormed_dist_weighted_N[s] = np.sum(1.0/halos_s['dist'].values) #print(np.sum(1.0/halos_s['dist'].values)) normed_N = unnormed_N/np.mean(unnormed_N) normed_dist_weighted_N = unnormed_dist_weighted_N/np.mean(unnormed_dist_weighted_N) np.argmax(unnormed_mass_sum), unnormed_mass_sum[2] plt.scatter(unnormed_N, np.log10(unnormed_mass_sum)) plt.ylabel('log10(sum of halo masses/solar mass)') plt.xlabel('Total number of halos') plt.scatter(sightlines['kappa'].values[:n_sightlines], raw_kappas) plt.ylabel('Raw kappa') plt.xlabel('WL kappa') plt.scatter(sightlines['wl_kappa'].values[:n_sightlines], final_kappas, alpha=0.5) plt.plot(final_kappas, final_kappas, linestyle='--', color='k') plt.ylabel('Structure-enhanced kappa') plt.xlabel('cosmoDC2 kappa') np.argmax(sightlines['wl_kappa'].values) 
print(raw_kappas.mean(), mean_kappas.mean()) plt.scatter(raw_kappas, mean_kappas, alpha=0.5, color='tab:orange') los_i = 4 #plt.scatter(raw_kappas[los_i], mean_kappas[los_i], alpha=0.5, color='tab:red') plt.plot(raw_kappas, raw_kappas, linestyle='--', color='k') #plt.xlim([min(raw_kappas), 0.01])#max(raw_kappas)]) plt.ylim([min(raw_kappas), max(raw_kappas)]) plt.ylabel('Mean(resampled kappa)') plt.xlabel('Uncalibrated kappa') plt.hist(np.log10(unnormed_mass_sum)) plt.scatter(np.log10(weighted_mass_sum), mean_kappas) plt.ylabel('Mean(kappa) from reshuffling halos') plt.xlabel('Weighted sum of halo masses') plt.scatter(weighted_mass_sum, mean_kappas) plt.ylabel('Mean(kappa) from reshuffling halos') plt.xlabel('Weighted sum of halo masses') plt.scatter(np.log10(unnormed_mass_sum), mean_kappas, alpha=0.5) #plt.xlim([15, 15.4]) #plt.ylim([0.001, 0.005]) plt.xlabel('log10(sum of halo masses/solar mass)') plt.ylabel('Mean(resampled kappa)') plt.scatter(np.log10(weighted_mass_sum), raw_kappas, alpha=0.5, marker='.', label='uncalib') plt.scatter(np.log10(weighted_mass_sum), final_kappas, alpha=0.5, marker='.', label='calib', color='k') plt.scatter(np.log10(weighted_mass_sum), sightlines['wl_kappa'].values[:n_sightlines], alpha=0.5, marker='.', label='WL') plt.scatter(np.log10(weighted_mass_sum), mean_kappas, marker='o', alpha=0.5, label='mean') #plt.xlim([15, 15.4]) #plt.ylim([0.0005, 0.003]) plt.xlabel('log10(weighted sum of halo masses/solar mass)') plt.ylabel('kappa') plt.legend() plt.hist(unnormed_N) print(np.min(unnormed_N), np.max(unnormed_N)) plt.hist(np.log10(unnormed_mass_sum)) unnormed_dist_weighted_N[unnormed_dist_weighted_N>100000].shape from scipy.stats import pearsonr from scipy.stats import spearmanr plt.scatter(unnormed_N, final_kappas, alpha=0.5) plt.xlabel('Number count (unnormed)') plt.ylabel(r'Finetuned $\kappa_{\rm ext}$', fontsize=20) corr, _ = pearsonr(unnormed_N, final_kappas) print('Pearsons correlation: %.3f' % corr) corr, _ = spearmanr(unnormed_N,
final_kappas) print('Spearmans correlation: %.3f' % corr) #exclude = False plt.scatter(unnormed_dist_weighted_N, final_kappas, alpha=0.5) plt.xlabel('Dist-weighted number count (unnormed)') plt.ylabel(r'Finetuned $\kappa_{\rm ext}$', fontsize=20) corr, _ = pearsonr(unnormed_dist_weighted_N, final_kappas) print('Pearsons correlation: %.3f' % corr) corr, _ = spearmanr(unnormed_dist_weighted_N, final_kappas) print('Spearmans correlation: %.3f' % corr) wl_kappa = sightlines['wl_kappa'].values[:n_sightlines] plt.scatter(unnormed_N, wl_kappa, alpha=0.5) plt.xlabel('Number count (unnormed)') plt.ylabel(r'WL $\kappa_{\rm ext}$', fontsize=20) corr, _ = pearsonr(unnormed_N, wl_kappa) print('Pearsons correlation: %.3f' % corr) corr, _ = spearmanr(unnormed_N, wl_kappa) print('Spearmans correlation: %.3f' % corr) plt.scatter(unnormed_dist_weighted_N, wl_kappa, alpha=0.5) plt.xlabel('Dist-weighted number count (unnormed)') plt.ylabel(r'WL $\kappa_{\rm ext}$', fontsize=20) corr, _ = pearsonr(unnormed_dist_weighted_N, wl_kappa) print('Pearsons correlation: %.3f' % corr) corr, _ = spearmanr(unnormed_dist_weighted_N, wl_kappa) print('Spearmans correlation: %.3f' % corr) sightlines.columns k = np.load(os.path.join('../multi_sprint', 'kappa_samples_sightline={:d}.npy'.format(0))) plt.hist(k) dir_mass_cut_aperture_size = [('../multi_sprint_3', (10.5), 6.0), ('../multi_sprint', 11, 6.0), ('../multi_sprint_4', 11.25, 6.0), ('../multi_sprint_1', 11.5, 6.0), ('../multi_sprint_2', 12, 6.0)] dir_mass_cut_aperture_size += [('../m0', 10.5, 4.0), ('../m1', 11.0, 4.0), ('../m2', 11.25, 4.0), ('../m3', 11.5, 4.0), ('../m4', 12.0, 4.0)] plt.close('all') mass_cuts = [] mean_kappas = [] median_kappas = [] kappas = np.empty((len(dir_mass_cut_aperture_size), 1000)) for i, (folder, mass_cut, aperture) in enumerate(dir_mass_cut_aperture_size): k = np.load(os.path.join(folder, 'kappa_samples_sightline={:d}.npy'.format(0))) print(k.shape) #k = k[~is_outlier(k)] mean_kappas.append(np.mean(k)) 
median_kappas.append(np.median(k)) mass_cuts.append(mass_cut) kappas[i, :] = k #binning = np.histogram_bin_edges(k, bins='scott', ) plt.close('all') for i, (folder, mass_cut, aperture) in enumerate(dir_mass_cut_aperture_size[:]): fill = False if mass_cut > 11.25: continue if i == 3: fill = True #binning = np.histogram_bin_edges(kappas[i, :], bins='scott', range=[0, 0.06]) if aperture == 4: _ = plt.hist(kappas[i, :], label="aperture {:.1f} amin, mass cut {:.1f}".format(aperture*0.5, mass_cut), bins=40, range=[0, 0.04], histtype='step', fill=fill, density=True, hatch='//') else: _ = plt.hist(kappas[i, :], label="aperture {:.1f} amin, mass cut {:.1f}".format(aperture*0.5, mass_cut), bins=40, range=[0, 0.04], histtype='step', fill=fill, density=True) plt.legend(fontsize=20) plt.xlabel(r'$\kappa_{\rm ext}$', fontsize=20) plt.ylabel('Count') fig = plt.gcf() fig.set_size_inches(12, 6) for i, (folder, mass_cut, aperture) in enumerate(dir_mass_cut_aperture_size[:]): if aperture == 4: plt.scatter(mass_cuts[i], mean_kappas[i], color='tab:orange', label='aperture 2.0 amin') else: plt.scatter(mass_cuts[i], mean_kappas[i], color='tab:blue', label='aperture 3.0 amin') plt.axhline(0, color='k', linestyle='--') plt.xlabel('log10(halo mass cut)') plt.ylabel('mean kappa') for i, (folder, mass_cut, aperture) in enumerate(dir_mass_cut_aperture_size[:]): if aperture == 4: plt.scatter(mass_cuts[i], median_kappas[i], color='tab:orange', label='aperture 2.0 amin') else: plt.scatter(mass_cuts[i], median_kappas[i], color='tab:blue', label='aperture 3.0 amin') plt.axhline(0, color='k', linestyle='--') plt.xlabel('log10(halo mass cut)') plt.ylabel('median kappa') kappas ```