# Chapter 4 ## Question 11 Using the `Auto` data set to predict whether a given car has high or low mileage (seems like a regression on `mpg` to me?) ``` import statsmodels.api as sm import numpy as np import seaborn as sns import sklearn.model_selection import sklearn.discriminant_analysis import sklearn.linear_model import sklearn.metrics import sklearn.neighbors sns.set(style="whitegrid") auto = sm.datasets.get_rdataset("Auto", "ISLR").data auto.head() ``` ### (a) Create a binary variable, `mpg01`, that is 1 if `mpg` has a value above the median, and `0` otherwise. ``` mpg_median = auto.mpg.median() mpg01 = np.where(auto.mpg > mpg_median, 1, 0) auto["mpg01"] = mpg01 ``` ### (b) Explore the data graphically in order to investigate the association between `mpg01` and the other features. Which of the other features seem useful? Scatterplots? Boxplots? ``` sns.pairplot(auto, hue="mpg01") #, diag_kws={"cut": 0}) ``` - mpg is highly predictive (obviously) - Displacement looks pretty good, as do horsepower and weight (all of these correlate with each other anyway) ### (c) Split the data into test and training data ``` X = auto[["displacement", "horsepower", "weight"]] y = auto.mpg01 X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.2) ``` ### (d) Perform LDA using the variables identified in (b). What's the test error? ``` lda_model = sklearn.discriminant_analysis.LinearDiscriminantAnalysis() lda_model.fit(X_train, y_train) y_pred = lda_model.predict(X_test) confusion_matrix = sklearn.metrics.confusion_matrix(y_test, y_pred) tn, fp, fn, tp = confusion_matrix.ravel() print(confusion_matrix) fraction_correct = (tn+tp)/(tn+tp+fn+fp) print(f"fraction correct: {fraction_correct}") print(sklearn.metrics.classification_report(y_test, y_pred)) ``` ### (e) Perform QDA using the variables identified in (b). What's the test error?
``` qda_model = sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis() qda_model.fit(X_train, y_train) y_pred = qda_model.predict(X_test) confusion_matrix = sklearn.metrics.confusion_matrix(y_test, y_pred) tn, fp, fn, tp = confusion_matrix.ravel() print(confusion_matrix) fraction_correct = (tn+tp)/(tn+tp+fn+fp) print(f"fraction correct: {fraction_correct}") print(sklearn.metrics.classification_report(y_test, y_pred)) ``` ### (f) Perform logistic regression using the variables identified in (b). What's the test error? ``` logistic_model = sklearn.linear_model.LogisticRegression() logistic_model.fit(X_train, y_train) y_pred = logistic_model.predict(X_test) confusion_matrix = sklearn.metrics.confusion_matrix(y_test, y_pred) tn, fp, fn, tp = confusion_matrix.ravel() print(confusion_matrix) fraction_correct = (tn+tp)/(tn+tp+fn+fp) print(f"fraction correct: {fraction_correct}") print(sklearn.metrics.classification_report(y_test, y_pred)) ``` ### (g) Perform KNN on the training data, with several values of K. What test errors do you obtain? What's the best value of K? ``` for k in range(1,100,1): print("-"*40) print(f"{k}") knn_model = sklearn.neighbors.KNeighborsClassifier(n_neighbors=k) knn_model.fit(X_train, y_train) y_pred = knn_model.predict(X_test) confusion_matrix = sklearn.metrics.confusion_matrix(y_test, y_pred) tn, fp, fn, tp = confusion_matrix.ravel() print(f"confusion matrix:\n {confusion_matrix}") fraction_correct = (tn+tp)/(tn+tp+fn+fp) print(f"fraction correct:\n {fraction_correct:.2f}") ``` Surprisingly, it doesn't seem to make a lot of difference which value of K is chosen; somewhere around 5 seems good.
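The confusion-matrix accuracy computation above is repeated verbatim in parts (d)-(g); it could be factored into a small helper. A minimal sketch (the counts below are made up purely for illustration):

```python
def fraction_correct(tn, fp, fn, tp):
    """Fraction of correct predictions from binary confusion-matrix counts."""
    return (tn + tp) / (tn + tp + fn + fp)

# Hypothetical counts: 40 true negatives, 3 false positives,
# 2 false negatives, 34 true positives -> 74 correct out of 79.
print(f"fraction correct: {fraction_correct(40, 3, 2, 34):.2f}")
```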
# Time series forecasting with DeepAR - Synthetic data DeepAR is a supervised learning algorithm for forecasting scalar time series. This notebook demonstrates how to prepare a dataset of time series for training DeepAR and how to use the trained model for inference. ``` import time import numpy as np np.random.seed(1) import pandas as pd import json import matplotlib.pyplot as plt ``` We will use the sagemaker client library for an easy interface with SageMaker, and s3fs for uploading the training data to S3. (Use `pip` to install missing libraries.) ``` !conda install -y s3fs import boto3 import s3fs import sagemaker from sagemaker import get_execution_role ``` Let's start by specifying: - The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting. - The IAM role ARN used to give training and hosting access to your data. See the documentation for how to create these. Here we use the `get_execution_role` function to obtain the role ARN which was specified when creating the notebook. ``` bucket = '<your_s3_bucket_name_here>' prefix = 'sagemaker/DEMO-deepar' sagemaker_session = sagemaker.Session() role = get_execution_role() s3_data_path = "{}/{}/data".format(bucket, prefix) s3_output_path = "{}/{}/output".format(bucket, prefix) ``` Next, we configure the container image to be used for the region that we are running in.
``` containers = { 'us-east-1': '522234722520.dkr.ecr.us-east-1.amazonaws.com/forecasting-deepar:latest', 'us-east-2': '566113047672.dkr.ecr.us-east-2.amazonaws.com/forecasting-deepar:latest', 'us-west-2': '156387875391.dkr.ecr.us-west-2.amazonaws.com/forecasting-deepar:latest', 'eu-west-1': '224300973850.dkr.ecr.eu-west-1.amazonaws.com/forecasting-deepar:latest' } image_name = containers[boto3.Session().region_name] ``` ### Generating and uploading data In this toy example we want to train a model that can predict the next 48 points of synthetically generated time series. The time series that we use have hourly granularity. ``` freq = 'H' prediction_length = 48 ``` We also need to configure the so-called `context_length`, which determines how much context of the time series the model should take into account when making the prediction, i.e. how many previous points to look at. A typical value to start with is around the same size as the `prediction_length`. In our example we will use a longer `context_length` of `72`. Note that in addition to the `context_length` the model also takes into account the values of the time series at typical seasonal windows, e.g. for hourly data the model will look at the value of the series 24h ago, one week ago, one month ago, etc. So it is not necessary to make the `context_length` span an entire month if you expect monthly seasonalities in your hourly data. ``` context_length = 72 ``` For this notebook, we will generate 200 noisy time series, each consisting of 400 data points and with seasonality of 24 hours. In our dummy example, all time series start at the same time point `t0`. When preparing your data, it is important to use the correct start point for each time series, because the model uses the time point as a frame of reference, which enables it to learn e.g. that weekdays behave differently from weekends.
``` t0 = '2016-01-01 00:00:00' data_length = 400 num_ts = 200 period = 24 ``` Each time series will be a noisy sine wave with a random level. ``` time_series = [] for k in range(num_ts): level = 10 * np.random.rand() seas_amplitude = (0.1 + 0.3*np.random.rand()) * level sig = 0.05 * level # noise parameter (constant in time) time_ticks = np.array(range(data_length)) source = level + seas_amplitude*np.sin(time_ticks*(2*np.pi)/period) noise = sig*np.random.randn(data_length) data = source + noise index = pd.DatetimeIndex(start=t0, freq=freq, periods=data_length) time_series.append(pd.Series(data=data, index=index)) time_series[0].plot() plt.show() ``` Often one is interested in tuning or evaluating the model by looking at error metrics on a hold-out set. For other machine learning tasks such as classification, one typically does this by randomly separating examples into train/test sets. For forecasting it is important to do this train/test split in time rather than by series. In this example, we will leave out the last section of each of the time series we just generated and use only the first part as training data. Here we will predict 48 data points, therefore we take out the trailing 48 points from each time series to define the training set. The test set contains the full range of each time series. ``` time_series_training = [] for ts in time_series: time_series_training.append(ts[:-prediction_length]) time_series[0].plot(label='test') time_series_training[0].plot(label='train', ls=':') plt.legend() plt.show() ``` The following utility functions convert `pandas.Series` objects into the appropriate JSON strings that DeepAR can consume. We will use these to write the data to S3. 
``` def series_to_obj(ts, cat=None): obj = {"start": str(ts.index[0]), "target": list(ts)} if cat: obj["cat"] = cat return obj def series_to_jsonline(ts, cat=None): return json.dumps(series_to_obj(ts, cat)) encoding = "utf-8" s3filesystem = s3fs.S3FileSystem() with s3filesystem.open(s3_data_path + "/train/train.json", 'wb') as fp: for ts in time_series_training: fp.write(series_to_jsonline(ts).encode(encoding)) fp.write('\n'.encode(encoding)) with s3filesystem.open(s3_data_path + "/test/test.json", 'wb') as fp: for ts in time_series: fp.write(series_to_jsonline(ts).encode(encoding)) fp.write('\n'.encode(encoding)) ``` ### Train a model We can now define the estimator that will launch the training job. ``` estimator = sagemaker.estimator.Estimator( sagemaker_session=sagemaker_session, image_name=image_name, role=role, train_instance_count=1, train_instance_type='ml.c4.xlarge', base_job_name='DEMO-deepar', output_path="s3://" + s3_output_path ) ``` Next we need to set some hyperparameters: for example, frequency of the time series used, number of data points the model will look at in the past, number of predicted data points. The other hyperparameters concern the model to train (number of layers, number of cells per layer, likelihood function) and the training options such as number of epochs, batch size, and learning rate. Refer to the documentation for a full description of the available parameters. ``` hyperparameters = { "time_freq": freq, "context_length": str(context_length), "prediction_length": str(prediction_length), "num_cells": "40", "num_layers": "3", "likelihood": "gaussian", "epochs": "20", "mini_batch_size": "32", "learning_rate": "0.001", "dropout_rate": "0.05", "early_stopping_patience": "10" } estimator.set_hyperparameters(**hyperparameters) ``` We are ready to launch the training job. SageMaker will start an EC2 instance, download the data from S3, start training the model and save the trained model. 
If you provide the `test` data channel, as we do in this example, DeepAR will also calculate accuracy metrics for the trained model on this test data set. This is done by predicting the last `prediction_length` points of each time series in the test set and comparing this to the actual value of the time series. The computed error metrics will be included as part of the log output. **Note:** the next cell may take a few minutes to complete, depending on data size, model complexity, and training options. ``` data_channels = { "train": "s3://{}/train/".format(s3_data_path), "test": "s3://{}/test/".format(s3_data_path) } estimator.fit(inputs=data_channels) ``` ### Create endpoint and predictor Now that we have trained a model, we can use it to perform predictions by deploying it to an endpoint. **Note:** remember to delete the endpoint after running this experiment. A cell at the very bottom of this notebook will do that: make sure you run it at the end. ``` job_name = estimator.latest_training_job.name endpoint_name = sagemaker_session.endpoint_from_job( job_name=job_name, initial_instance_count=1, instance_type='ml.m4.xlarge', deployment_image=image_name, role=role ) ``` To query the endpoint and perform predictions, we can define the following utility class: this allows making requests using `pandas.Series` objects rather than raw JSON strings. ``` class DeepARPredictor(sagemaker.predictor.RealTimePredictor): def set_prediction_parameters(self, freq, prediction_length): """Set the time frequency and prediction length parameters. This method **must** be called before being able to use `predict`. Parameters: freq -- string indicating the time frequency prediction_length -- integer, number of predicted time points Return value: none.
""" self.freq = freq self.prediction_length = prediction_length def predict(self, ts, cat=None, encoding="utf-8", num_samples=100, quantiles=["0.1", "0.5", "0.9"]): """Requests the prediction of for the time series listed in `ts`, each with the (optional) corresponding category listed in `cat`. Parameters: ts -- list of `pandas.Series` objects, the time series to predict cat -- list of integers (default: None) encoding -- string, encoding to use for the request (default: "utf-8") num_samples -- integer, number of samples to compute at prediction time (default: 100) quantiles -- list of strings specifying the quantiles to compute (default: ["0.1", "0.5", "0.9"]) Return value: list of `pandas.DataFrame` objects, each containing the predictions """ prediction_times = [x.index[-1]+1 for x in ts] req = self.__encode_request(ts, cat, encoding, num_samples, quantiles) res = super(DeepARPredictor, self).predict(req) return self.__decode_response(res, prediction_times, encoding) def __encode_request(self, ts, cat, encoding, num_samples, quantiles): instances = [series_to_obj(ts[k], cat[k] if cat else None) for k in range(len(ts))] configuration = {"num_samples": num_samples, "output_types": ["quantiles"], "quantiles": quantiles} http_request_data = {"instances": instances, "configuration": configuration} return json.dumps(http_request_data).encode(encoding) def __decode_response(self, response, prediction_times, encoding): response_data = json.loads(response.decode(encoding)) list_of_df = [] for k in range(len(prediction_times)): prediction_index = pd.DatetimeIndex(start=prediction_times[k], freq=self.freq, periods=self.prediction_length) list_of_df.append(pd.DataFrame(data=response_data['predictions'][k]['quantiles'], index=prediction_index)) return list_of_df predictor = DeepARPredictor( endpoint=endpoint_name, sagemaker_session=sagemaker_session, content_type="application/json" ) predictor.set_prediction_parameters(freq, prediction_length) ``` ### Make predictions and 
plot results Now we can use the previously created `predictor` object. For simplicity, we will predict only the first few time series used for training, and compare the results with the actual data we kept in the test set. ``` list_of_df = predictor.predict(time_series_training[:5]) actual_data = time_series[:5] for k in range(len(list_of_df)): plt.figure(figsize=(12,6)) actual_data[k][-prediction_length-context_length:].plot(label='target') p10 = list_of_df[k]['0.1'] p90 = list_of_df[k]['0.9'] plt.fill_between(p10.index, p10, p90, color='y', alpha=0.5, label='80% confidence interval') list_of_df[k]['0.5'].plot(label='prediction median') plt.legend() plt.show() ``` ### Delete endpoint ``` sagemaker_session.delete_endpoint(endpoint_name) ```
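As a recap of the data format used throughout this example: each line that `series_to_jsonline` writes to S3 is a standalone JSON object with a `start` timestamp and a `target` array (plus an optional `cat` field). A minimal self-contained sketch with made-up values:

```python
import json

def series_to_jsonline(start, target, cat=None):
    # One DeepAR record: a single JSON object per time series, one per line.
    obj = {"start": start, "target": list(target)}
    if cat is not None:
        obj["cat"] = cat
    return json.dumps(obj)

line = series_to_jsonline("2016-01-01 00:00:00", [1.0, 2.5, 3.0])
print(line)  # {"start": "2016-01-01 00:00:00", "target": [1.0, 2.5, 3.0]}
```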
``` import cv2 import numpy as np import matplotlib.pyplot as plt import glob import pathlib %matplotlib inline class ColorReduction: def __call__(self, img): if len(img.shape) == 3: return self.apply_3(img) if len(img.shape) == 2: return self.apply_2(img) return None # The reference solution for problem 84 seems to get this step wrong def reduction_onepixel(self, value): if 0 <= value < 64: return 32 elif 64 <= value < 128: return 96 elif 128 <= value < 192: return 160 elif 192 <= value < 256: return 224 return -1 def apply_3(self, img): H, W, ch = img.shape output_img = img.copy() for i in range(H): for j in range(W): for c in range(ch): output_img[i, j, c] = self.reduction_onepixel(img[i, j, c]) return output_img def apply_2(self, img): H, W = img.shape output_img = img.copy() for i in range(H): for j in range(W): output_img[i, j] = self.reduction_onepixel(img[i, j]) return output_img class TinyImageRecognition: def __init__(self, gt_path, parse_func): self.color_reduction = ColorReduction() self.reduced_valuemap = { 32: 0, 96: 1, 160: 2, 224: 3 } self.gt_path = gt_path self.parse_func = parse_func self.images, self.names, self.classes = self._get_images() self.hists = self._get_hists() def _get_images(self): images, names, classes = [], [], [] file_list = sorted(glob.glob(self.gt_path + "/train_*.jpg")) for file in file_list: images.append(cv2.imread(file)) names.append(file) classes.append(self.parse_func(pathlib.Path(file).name)) return images, names, classes def _get_hist(self, img): assert len(img.shape) == 3, "invalid img dimension: expected: 3, got: {}".format(img.shape) H, W, ch = img.shape hist = np.zeros((12)) for i in range(H): for j in range(W): for c in range(ch): cls = 4*c + self.reduced_valuemap[self.color_reduction.reduction_onepixel(img[i, j, c])] hist[cls] += 1 return hist def _get_hists(self): # create histograms hists = np.zeros((len(self.images), 12)) for i in range(len(self.images)): hists[i] = self._get_hist(self.images[i]) return hists def 
nearest_neighbour(self, img): hist_test = self._get_hist(img) argmin = np.argmin(np.sum(np.abs(self.hists - hist_test), axis=1)) return argmin def recognition(self, test_path, verbose=True): file_list = sorted(glob.glob(test_path + "/test_*.jpg")) class_list = [self.parse_func(pathlib.Path(f).name) for f in file_list] correct = 0 for i, file in enumerate(file_list): img = cv2.imread(file) nearest = self.nearest_neighbour(img) if nearest != -1: if verbose: print("{} is similar >> {} Pred >> {}".format( pathlib.Path(file).name, pathlib.Path(self.names[nearest]).name, self.classes[nearest] ) ) if class_list[i] == self.classes[nearest]: correct += 1 return correct, len(file_list) def problem_84(self): plt.figure(figsize=(20, 10)) for i in range(len(self.images)): plt.subplot(2, 5, i+1) plt.title(pathlib.Path(self.names[i]).name) plt.bar(np.arange(0, 12) + 1, self.hists[i]) print(self.hists[i]) plt.show() def problem_85(self, test_path): self.recognition(test_path) def problem_86(self, test_path): correct, samples = self.recognition(test_path) accuracy = correct / samples print("Accuracy >> {:.2f} ({}/{})".format( accuracy, correct, samples ) ) def parse_func(file_name): return file_name.split("_")[1] recog = TinyImageRecognition("../dataset", parse_func) recog.problem_86("../dataset") ```
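The `reduction_onepixel` chain of range checks above is equivalent to snapping each 0-255 value to the midpoint of its 64-wide bucket, which can be written in closed form. A minimal sketch:

```python
def reduce_pixel(value):
    # Midpoint of the 64-wide bucket containing value:
    # 0-63 -> 32, 64-127 -> 96, 128-191 -> 160, 192-255 -> 224.
    return 32 + 64 * (value // 64)

print([reduce_pixel(v) for v in (0, 63, 64, 191, 255)])  # [32, 32, 96, 160, 224]
```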
``` import pandas as pd import csv import re # names of files to read from r_maxo_classes_with_definitionsTSV = '~/Git/MAxO/src/ontology/sparql-test/maxo_classes_with_definitions.tsv' r_ncit_definitionsTSV = '~/Git/MAxO/src/ontology/sparql-test/ncit_definitions.tsv' tsv_read_maxo = pd.read_csv(r_maxo_classes_with_definitionsTSV, sep='\t') tsv_read_ncit = pd.read_csv(r_ncit_definitionsTSV, sep='\t') maxo_id=list() ncit_id=list() tsv_read_maxo.columns from pandas import DataFrame x=() mylist=[] newlist=list() # extract just the maxo_id maxo_id = pd.DataFrame(tsv_read_maxo) cols = ["?cls","?xref","?def"] maxo_id = maxo_id[maxo_id.columns[0]] for line in maxo_id: line=line.strip('/') x=re.findall('[A-Z]{4,11}_[A-Z0-9]{1,15}', line) x=[item.replace('_', ':') for item in x] mylist.append(x) maxo_df= DataFrame(mylist,columns=['Maxo_ID']) maxo_id_def= maxo_df.join(tsv_read_maxo, lsuffix="_left", rsuffix="_right") print(maxo_id_def.head(2)) maxo_id_def.to_csv('maxo_xref_definitions.tsv', encoding='utf-8', sep='\t', index=False) y=() newlist=[] # extract just the ncit_id ncit_id = pd.DataFrame(tsv_read_ncit) cols = ["?cls","?def"] ncit_id = ncit_id[ncit_id.columns[0]] for line in ncit_id: line=line.strip('/') y=re.findall('[A-Z]{4,11}_[A-Z0-9]{1,15}', line) y=[item.replace('_', ':') for item in y] newlist.append(y) ncit_df= DataFrame(newlist,columns=['NCIT_ID']) ncit_id_def= ncit_df.join(tsv_read_ncit, lsuffix="_left", rsuffix="_right") print(ncit_id_def.head(2)) ncit_id_def.to_csv('ncit_definitions.tsv', encoding='utf-8', sep='\t', index=False) ``` ncit_id_def.info ``` maxo_id_def.columns = ["Maxo_ID","?cls","ID","?def"] print(maxo_id_def.head()) maxo_id_list= [] maxo_def_list= [] maxo_def_xref_list= [] ncit_id_list=[] ncit_def_list= [] for index, row in maxo_id_def.iterrows(): if row[2].startswith("NCIT:"): for index, rows in ncit_id_def.iterrows(): # determine if the MAXO 
def xref matches the NCIT ID if row[2] == rows[0]: maxo_id_list.append(row[0]) maxo_def_list.append(row[3]) maxo_def_xref_list.append(row[2]) ncit_def_list.append(rows[2]) ncit_id_list.append(rows[0]) else: continue maxo_ncit_def_df=pd.DataFrame(list(zip(maxo_id_list, maxo_def_list, maxo_def_xref_list, ncit_id_list, ncit_def_list)), columns=["maxo_id","maxo_def", "maxo_def_xref","ncit_id", "ncit_def"]) print(maxo_ncit_def_df.head()) maxo_ncit_def_df.to_csv('maxo_ncit_def.tsv', encoding='utf-8', sep='\t', index=False) ```
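The IRI-to-CURIE conversion done in the loops above (strip trailing slashes, extract the `PREFIX_LOCALID` fragment with a regex, swap the underscore for a colon) can be isolated into one function. A sketch using the same regex as the notebook; the example IRI is hypothetical:

```python
import re

def iri_to_curie(iri):
    # Pull the local identifier (e.g. "NCIT_C12345") out of an OBO-style IRI
    # and rewrite the underscore as a colon to get a CURIE.
    match = re.search(r'[A-Z]{4,11}_[A-Z0-9]{1,15}', iri.rstrip('/'))
    return match.group(0).replace('_', ':') if match else None

print(iri_to_curie('http://purl.obolibrary.org/obo/NCIT_C12345'))  # NCIT:C12345
```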
# RNA velocity analysis using scVelo * __Notebook version__: `v0.0.1` * __Created by:__ `Imperial BRC Genomics Facility` * __Maintained by:__ `Imperial BRC Genomics Facility` * __Docker image:__ `imperialgenomicsfacility/scanpy-notebook-image:release-v0.0.4` * __Github repository:__ [imperial-genomics-facility/scanpy-notebook-image](https://github.com/imperial-genomics-facility/scanpy-notebook-image) * __Created on:__ {{ DATE_TAG }} * __Contact us:__ [Imperial BRC Genomics Facility](https://www.imperial.ac.uk/medicine/research-and-impact/facilities/genomics-facility/contact-us/) * __License:__ [Apache License 2.0](https://github.com/imperial-genomics-facility/scanpy-notebook-image/blob/master/LICENSE) * __Project name:__ {{ PROJECT_IGF_ID }} {% if SAMPLE_IGF_ID %}* __Sample name:__ {{ SAMPLE_IGF_ID }}{% endif %} ## Table of contents * [Introduction](#Introduction) * [Tools required](#Tools-required) * [Loading required libraries](#Loading-required-libraries) * [Input parameters](#Input-parameters) * [Reading data from Cellranger output](#Reading-data-from-Cellranger-output) * [Reading output of Scanpy](#Reading-output-of-Scanpy) * [Reading output of Velocyto](#Reading-output-of-Velocyto) * [Estimate RNA velocity](#Estimate-RNA-velocity) * [Dynamical Model](#Dynamical-Model) * [Project the velocities](#Project-the-velocities) * [Interpret the velocities](#Interpret-the-velocities) * [Identify important genes](#Identify-important-genes) * [Kinetic rate parameters](#Kinetic-rate-parameters) * [Latent time](#Latent-time) * [Top-likelihood genes](#Top-likelihood-genes) * [Cluster-specific top-likelihood genes](#Cluster-specific-top-likelihood-genes) * [Velocities in cycling progenitors](#Velocities-in-cycling-progenitors) * [Speed and coherence](#Speed-and-coherence) * [PAGA velocity graph](#PAGA-velocity-graph) * [References](#References) * [Acknowledgement](#Acknowledgement) ## Introduction This notebook runs RNA velocity analysis (for a single sample) using the
[scVelo](https://scvelo.readthedocs.io/) package. Most of the code and documentation used in this notebook has been copied from the following sources: * [RNA Velocity Basics](https://scvelo.readthedocs.io/VelocityBasics/) * [Dynamical Modeling](https://scvelo.readthedocs.io/DynamicalModeling/) ## Tools required * [scVelo](https://scvelo.readthedocs.io/) ## Loading required libraries We need to load all the required libraries into the environment before we can run any of the analysis steps. Also, we are checking the version information for most of the major packages used for analysis. ``` %matplotlib inline import pandas as pd import scvelo as scv scv.logging.print_version() ``` <div align="right"><a href="#Table-of-contents">Go to TOC</a></div> ## Input parameters ``` scanpy_h5ad = '{{ SCANPY_H5AD }}' loom_file = '{{ VELOCYTO_LOOM }}' threads = {{ CPU_THREADS }} s_genes = {{ CUSTOM_S_GENES_LIST }} g2m_genes = {{ CUSTOM_G2M_GENES_LIST }} ``` <div align="right"><a href="#Table-of-contents">Go to TOC</a></div> ## Reading data from Cellranger output ### Reading output of Scanpy We have already processed the count data using [Scanpy](https://scanpy.readthedocs.io/en/stable/). Now we are loading the h5ad file using scVelo. ``` adata = scv.read(scanpy_h5ad, cache=True) ``` ### Reading output of Velocyto We have already generated a loom file using [Velocyto](http://velocyto.org/velocyto.py/). Now we are loading the loom file into scVelo. ``` ldata = scv.read(loom_file, cache=True) ldata.var_names_make_unique() adata = scv.utils.merge(adata, ldata) ``` Displaying the proportions of spliced/unspliced counts: ``` scv.pl.proportions(adata, groupby='leiden', dpi=150) ``` Further, we need the first and second order moments (means and uncentered variances) computed among nearest neighbors in PCA space, summarized in `scv.pp.moments`. First order is needed for deterministic velocity estimation, while stochastic estimation also requires second order moments.
``` scv.pp.moments(adata, n_neighbors=30, n_pcs=20, use_highly_variable=True) ``` <div align="right"><a href="#Table-of-contents">Go to TOC</a></div> ## Estimate RNA velocity Velocities are vectors in gene expression space and represent the direction and speed of movement of the individual cells. The velocities are obtained by modeling transcriptional dynamics of splicing kinetics, either stochastically (default) or deterministically (by setting `mode='deterministic'`). For each gene, a steady-state ratio of pre-mature (unspliced) and mature (spliced) mRNA counts is fitted, which constitutes a constant transcriptional state. Velocities are then obtained as residuals from this ratio. Positive velocity indicates that a gene is up-regulated, which occurs for cells that show higher abundance of unspliced mRNA for that gene than expected in steady state. Conversely, negative velocity indicates that a gene is down-regulated. ### Dynamical Model We run the dynamical model to learn the full transcriptional dynamics of splicing kinetics. It is solved in a likelihood-based expectation-maximization framework, by iteratively estimating the parameters of reaction rates and latent cell-specific variables, i.e. transcriptional state and cell-internal latent time. It thereby aims to learn the unspliced/spliced phase trajectory for each gene. ``` scv.tl.recover_dynamics(adata, n_jobs=threads) scv.tl.velocity(adata, mode='dynamical') ``` The computed velocities are stored in `adata.layers` just like the count matrices. The combination of velocities across genes can then be used to estimate the future state of an individual cell. In order to project the velocities into a lower-dimensional embedding, transition probabilities of cell-to-cell transitions are estimated. That is, for each velocity vector we find the likely cell transitions that are in accordance with that direction.
The transition probabilities are computed using cosine correlation between the potential cell-to-cell transitions and the velocity vector, and are stored in a matrix denoted as velocity graph. The resulting velocity graph has dimension $n_{obs} \times n_{obs}$ and summarizes the possible cell state changes that are well explained through the velocity vectors (for runtime speedup it can also be computed on reduced PCA space by setting `approx=True`). ``` scv.tl.velocity_graph(adata) ``` <div align="right"><a href="#Table-of-contents">Go to TOC</a></div> ## Project the velocities Finally, the velocities are projected onto any embedding, specified by basis, and visualized in one of these ways: * on cellular level with `scv.pl.velocity_embedding` * as gridlines with `scv.pl.velocity_embedding_grid` * or as streamlines with `scv.pl.velocity_embedding_stream` ``` scv.pl.velocity_embedding( adata, basis='umap', color='leiden', arrow_size=2, arrow_length=2, legend_loc='center right', figsize=(9,7), dpi=150) scv.pl.velocity_embedding_grid( adata, basis='umap', color='leiden', arrow_size=1, arrow_length=2, legend_loc='center right', figsize=(9,7), dpi=150) scv.pl.velocity_embedding_stream( adata, basis='umap', color='leiden', linewidth=0.5, figsize=(9,7), dpi=150) ``` The velocity vector field displayed as streamlines yields fine-grained insights into the developmental processes. It accurately delineates the cycling population of ductal cells and endocrine progenitors. Further, it illuminates cell states of lineage commitment, cell-cycle exit, and endocrine cell differentiation. We get the most fine-grained resolution of the velocity vector field at the single-cell level, with each arrow showing the direction and speed of movement of an individual cell.
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div> ## Interpret the velocities We will examine the phase portraits of some marker genes, visualized with `scv.pl.velocity(adata, gene_names)` and `scv.pl.scatter(adata, gene_names)`. Gene activity is orchestrated by transcriptional regulation. Transcriptional induction for a particular gene results in an increase of (newly transcribed) precursor unspliced mRNAs while, conversely, repression or absence of transcription results in a decrease of unspliced mRNAs. Spliced mRNA is produced from unspliced mRNA and follows the same trend with a time lag. Time is a hidden/latent variable. Thus, the dynamics needs to be inferred from what is actually measured: spliced and unspliced mRNAs as displayed in the phase portrait. We are collecting the top marker gene for each cluster from the Scanpy output: ``` top_marker_genes = \ pd.DataFrame( adata.uns['rank_genes_groups']['names']).\ head(1).\ values.\ tolist()[0] pd.DataFrame(adata.uns['rank_genes_groups']['names']).head(1) ``` Now we plot the phase and velocity plots for the top marker genes. The phase plot shows spliced against unspliced expressions with the steady-state fit. Further, the embedding is shown colored by velocity and expression. ``` scv.pl.velocity(adata, top_marker_genes, ncols=1, figsize=(9,7), dpi=150) ``` The black line corresponds to the estimated 'steady-state' ratio, i.e. the ratio of unspliced to spliced mRNA abundance which is in a constant transcriptional state. RNA velocity for a particular gene is determined as the residual, i.e. how much an observation deviates from that steady-state line. Positive velocity indicates that a gene is up-regulated, which occurs for cells that show higher abundance of unspliced mRNA for that gene than expected in steady state. Conversely, negative velocity indicates that a gene is down-regulated.
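The residual described above has a compact form. Under the steady-state model, writing $u$ for unspliced abundance, $s$ for spliced abundance, and $\gamma$ for the estimated steady-state ratio, the velocity of a gene is

$$v = u - \gamma s,$$

so $v > 0$ (more unspliced mRNA than expected in steady state) indicates up-regulation, and $v < 0$ indicates down-regulation.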
``` scv.pl.scatter( adata, top_marker_genes, add_outline=True, color='leiden', ncols=2, dpi=150) scv.pl.scatter( adata, top_marker_genes, add_outline=True, color='velocity', ncols=2, dpi=150) ``` <div align="right"><a href="#Table-of-contents">Go to TOC</a></div> ## Identify important genes We need a systematic way to identify genes that may help explain the resulting vector field and inferred lineages. To do so, we can test which genes have cluster-specific differential velocity expression, being significantly higher/lower compared to the remaining population. The module `scv.tl.rank_velocity_genes` runs a differential velocity t-test and outputs a gene ranking for each cluster. Thresholds can be set (e.g. `min_corr`) to restrict the test to a selection of gene candidates. ``` scv.tl.rank_velocity_genes(adata, groupby='leiden', min_corr=.3) df = \ scv.DataFrame( adata.uns['rank_velocity_genes']['names']) df.head() ``` <div align="right"><a href="#Table-of-contents">Go to TOC</a></div> ## Kinetic rate parameters The rates of RNA transcription, splicing and degradation are estimated without the need of any experimental data. They can be useful to better understand the cell identity and phenotypic heterogeneity.
``` df = adata.var df = df[(df['fit_likelihood'] > .1) & (df['velocity_genes'] == True)] kwargs = dict(xscale='log', fontsize=16) with scv.GridSpec(ncols=3) as pl: pl.hist( df['fit_alpha'], xlabel='transcription rate', **kwargs) pl.hist( df['fit_beta'] * df['fit_scaling'], xlabel='splicing rate', xticks=[.1, .4, 1], **kwargs) pl.hist( df['fit_gamma'], xlabel='degradation rate', xticks=[.1, .4, 1], **kwargs) scv.get_df(adata, 'fit*', dropna=True).head() ``` The estimated gene-specific parameters comprise rates of transcription (`fit_alpha`), splicing (`fit_beta`), degradation (`fit_gamma`), switching time point (`fit_t_`), a scaling parameter to adjust for under-represented unspliced reads (`fit_scaling`), standard deviation of unspliced and spliced reads (`fit_std_u`, `fit_std_s`), the gene likelihood (`fit_likelihood`), inferred steady-state levels (`fit_steady_u`, `fit_steady_s`) with their corresponding p-values (`fit_pval_steady_u`, `fit_pval_steady_s`), the overall model variance (`fit_variance`), and a scaling factor to align the gene-wise latent times to a universal, gene-shared latent time (`fit_alignment_scaling`). <div align="right"><a href="#Table-of-contents">Go to TOC</a></div> ## Latent time The dynamical model recovers the latent time of the underlying cellular processes. This latent time represents the cell's internal clock and approximates the real time experienced by cells as they differentiate, based only on its transcriptional dynamics. ``` scv.tl.latent_time(adata) scv.pl.scatter( adata, color='latent_time', color_map='gnuplot', size=80, dpi=150) ``` <div align="right"><a href="#Table-of-contents">Go to TOC</a></div> ## Top-likelihood genes Driver genes display pronounced dynamic behavior and are systematically detected via their characterization by high likelihoods in the dynamic model.
```
top_genes = \
    adata.var['fit_likelihood'].sort_values(ascending=False).index

scv.pl.scatter(
    adata, basis=top_genes[:15],
    color='leiden', ncols=3, frameon=False, dpi=150)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>

## Cluster-specific top-likelihood genes

Moreover, partial gene likelihoods can be computed for each cluster of cells to enable cluster-specific identification of potential drivers.
```
scv.tl.rank_dynamical_genes(adata, groupby='leiden')

df = scv.DataFrame(adata.uns['rank_dynamical_genes']['names'])
df.head(5)

adata.obs['leiden'].drop_duplicates().sort_values().values.tolist()

for cluster in adata.obs['leiden'].drop_duplicates().sort_values().values.tolist():
    scv.pl.scatter(
        adata, df[cluster][:3],
        ylabel=cluster, color='leiden', frameon=False)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>

## Velocities in cycling progenitors

The cell cycle detected by RNA velocity is biologically affirmed by cell cycle scores (standardized scores of mean expression levels of phase marker genes). Unless gene lists are provided for the S and G2M phases, it calculates scores and assigns a cell cycle phase (G1, S, G2M) using the list of cell cycle genes defined in _Tirosh et al, 2015_ (https://doi.org/10.1126/science.aad0501).
```
if s_genes is not None and g2m_genes is not None and \
        isinstance(s_genes, list) and isinstance(g2m_genes, list) and \
        len(s_genes) > 0 and len(g2m_genes) > 0:
    print('Using custom cell cycle genes')
    scv.tl.score_genes_cell_cycle(adata, s_genes=s_genes, g2m_genes=g2m_genes)
else:
    print('Using predefined cell cycle genes')
    scv.tl.score_genes_cell_cycle(adata, s_genes=None, g2m_genes=None)

scv.pl.scatter(
    adata, color_gradients=['S_score', 'G2M_score'],
    smooth=True, perc=[5, 95], dpi=150)
```
The previous module also computed a Spearman correlation score, which we can use to rank/sort the phase marker genes and then display their phase portraits.
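The Spearman score used below is a rank correlation: Pearson correlation computed on ranks. A self-contained sketch for tie-free data (illustrative only, not scVelo's implementation):

```python
def spearman(x, y):
    """Spearman rank correlation for tie-free sequences."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n - 1) / 2  # mean of ranks 0..n-1
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)
    return cov / var

# any strictly increasing relationship scores 1.0, regardless of scale
print(spearman([1, 2, 3, 4], [10, 100, 1000, 10000]))  # 1.0
```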
```
s_genes, g2m_genes = \
    scv.utils.get_phase_marker_genes(adata)
s_genes = \
    scv.get_df(
        adata[:, s_genes], 'spearmans_score', sort_values=True).index
g2m_genes = \
    scv.get_df(
        adata[:, g2m_genes], 'spearmans_score', sort_values=True).index

kwargs = \
    dict(
        frameon=False, ylabel='cell cycle genes',
        color='leiden', ncols=3, dpi=150)
scv.pl.scatter(adata, list(s_genes[:5]) + list(g2m_genes[:5]), **kwargs)

scv.pl.velocity(
    adata, list(s_genes[:5]) + list(g2m_genes[:5]),
    ncols=1, add_outline=True, dpi=150)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>

## Speed and coherence

Two more useful statistics:

* The speed or rate of differentiation is given by the length of the velocity vector.
* The coherence of the vector field (i.e., how well a velocity vector correlates with its neighboring velocities) provides a measure of confidence.
```
scv.tl.velocity_confidence(adata)

scv.pl.scatter(adata, c='velocity_length', cmap='coolwarm',
               perc=[5, 95], figsize=(9, 7), dpi=150)
scv.pl.scatter(adata, c='velocity_confidence', cmap='coolwarm',
               perc=[5, 95], figsize=(9, 7), dpi=150)
```
These provide insights into where cells differentiate at a slower/faster pace, and where the direction is un-/determined.
```
df = adata.obs.groupby('leiden')[['velocity_length', 'velocity_confidence']].mean().T
df.style.background_gradient(cmap='coolwarm', axis=1)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>

## Velocity graph and pseudotime

We can visualize the velocity graph to portray all velocity-inferred cell-to-cell connections/transitions. It can be confined to high-probability transitions by setting a `threshold`. The graph, for instance, indicates two phases of Epsilon cell production, coming from early and late Pre-endocrine cells.
```
scv.pl.velocity_graph(adata, threshold=.1, color='leiden',
                      figsize=(9, 7), dpi=150)
```
Further, the graph can be used to draw descendants/ancestors coming from a specified cell.
Here, a pre-endocrine cell is traced to its potential fate.
```
x, y = \
    scv.utils.get_cell_transitions(
        adata, basis='umap', starting_cell=70)
ax = \
    scv.pl.velocity_graph(
        adata, c='lightgrey', edge_width=.05, show=False, dpi=150)
ax = \
    scv.pl.scatter(
        adata, x=x, y=y, s=120, c='ascending', cmap='gnuplot',
        ax=ax, figsize=(9, 7), dpi=150)
```
Finally, based on the velocity graph, a velocity pseudotime can be computed. After inferring a distribution over root cells from the graph, it measures the average number of steps it takes to reach a cell after walking along the graph starting from the root cells. Contrary to diffusion pseudotime, it implicitly infers the root cells and is based on the directed velocity graph instead of the similarity-based diffusion kernel.
```
scv.tl.velocity_pseudotime(adata)

scv.pl.scatter(adata, color='velocity_pseudotime', cmap='gnuplot',
               figsize=(9, 7), dpi=150)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>

## PAGA velocity graph

[PAGA](https://doi.org/10.1186/s13059-019-1663-x) graph abstraction has been benchmarked as a top-performing method for trajectory inference. It provides a graph-like map of the data topology with weighted edges corresponding to the connectivity between two clusters. Here, PAGA is extended by velocity-inferred directionality.
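The step-counting idea behind the velocity pseudotime above can be sketched on a toy directed graph with a breadth-first search. The adjacency list of cluster transitions below is hypothetical, and scVelo actually walks transition probabilities over single cells, so this is only an analogy:

```python
from collections import deque

def steps_from_root(graph, root):
    """BFS distance (number of graph steps) from a root to every reachable node."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        cell = queue.popleft()
        for nxt in graph.get(cell, []):
            if nxt not in dist:
                dist[nxt] = dist[cell] + 1
                queue.append(nxt)
    return dist

# tiny made-up lineage: progenitor -> intermediate -> two terminal fates
graph = {"Ngn3 low": ["Ngn3 high"],
         "Ngn3 high": ["Pre-endocrine"],
         "Pre-endocrine": ["Beta", "Epsilon"]}
print(steps_from_root(graph, "Ngn3 low"))
# {'Ngn3 low': 0, 'Ngn3 high': 1, 'Pre-endocrine': 2, 'Beta': 3, 'Epsilon': 3}
```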
```
scv.tl.paga(adata, groups='leiden')

df = scv.get_df(adata, 'paga/transitions_confidence', precision=2).T
df.style.background_gradient(cmap='Blues').format('{:.2g}')

scv.pl.paga(
    adata, basis='umap', size=50, alpha=.1, dpi=150,
    figsize=(9, 7), min_edge_width=2, node_size_scale=1.2)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>

## References

* [scVelo](https://scvelo.readthedocs.io/)
* [RNA Velocity Basics](https://scvelo.readthedocs.io/VelocityBasics/)
* [Dynamical Modeling](https://scvelo.readthedocs.io/DynamicalModeling/)

## Acknowledgement

The Imperial BRC Genomics Facility is supported by NIHR funding to the Imperial Biomedical Research Centre.
# Querying WikiData for hetnet edges
```
import json
import pandas as pd

from pathlib import Path
from datetime import datetime
from tqdm import tqdm_notebook

# ModuleNotFoundError
## edited "hetnet_ml.src" to "hetnet_ml" in .py script
import wdhetnetbuilder as wdh

# make sure wikidataintegrator is installed
## pip install wikidataintegrator
# install
## pip install git+https://github.com/mmayers12/hetnet_ml

net_info_dir = Path('../0_data/manual').resolve()

h = wdh.WDHetnetQueryBuilder(net_info_dir.joinpath('node_info.json'),
                             net_info_dir.joinpath('edge_info.json'))
# AssertionError but still runs
```

## Defining the structure of the metagraph
```
hetnet_edges = [
    {'abbrev': 'CdiC'},
    {'abbrev': 'CtD'},
    #{'abbrev': 'PPaiC'},
    {'abbrev': 'CHhcC'},
    {'abbrev': 'PWhpC'},
    {'abbrev': 'CpP'},
    {'abbrev': 'PiwC'},
    {'abbrev': 'VntC'},
    {'abbrev': 'VptC'},
    {'abbrev': 'DaP', 'target': 'Gene'},
    {'abbrev': 'DaG'},
    {'abbrev': 'DsyS'},
    {'abbrev': 'DmsMS'},
    {'abbrev': 'CHsyS'},
    {'abbrev': 'CHsyD'},
    {'abbrev': 'VndD'},
    {'abbrev': 'VpdD'},
    {'abbrev': 'VvP', 'target': 'Gene'},
    {'abbrev': 'VvG'},
    {'abbrev': 'PWhpP', 'target': 'Gene'},
    {'abbrev': 'PWhpG'},
    {'abbrev': 'PccCC'},
    {'abbrev': 'PbpBP'},
    {'abbrev': 'PmfMF'},
    {'abbrev': 'PhpPD'},
    {'abbrev': 'PhpSS'},
    {'abbrev': 'PFhpP'},
    {'abbrev': 'PhpBS'},
    {'abbrev': 'PhpAS'},
    {'abbrev': 'PhpSM'},
    #{'abbrev': 'PPtaD'},
    {'abbrev': 'CrCR'},
    {'abbrev': 'DlA'},
    {'abbrev': 'CHafA'},
    {'abbrev': 'CtCH'},
    {'abbrev': 'BPhpC'},
    {'abbrev': 'PccA'},
    {'abbrev': 'PWhpBP'},
    {'abbrev': 'PFhpBS'},
    {'abbrev': 'PDhpSS'},
    {'abbrev': 'PFhpSS'},
    {'abbrev': 'PWhpBP'},
    {'abbrev': 'PFhpPD'},
    {'abbrev': 'PFhpAS'},
    {'abbrev': 'PregBP'}
]

queries = [h.build_query_from_abbrev(**edge) for edge in hetnet_edges]
```
An error was found in the February 2018 data dump... The majority of Biological Process nodes are missing their `instance_of Biological Process` statement (`wdt:P31 wd:Q2996394`), leading to a severely decreased number of edges with these node types.
Because biological processes are also linked via the `biological process` property (`wdt:P682`), we can use this, together with a check for a GO term identifier (`wdt:P686`), to recover these edges.
```
ini_queries_2_2018 = [h.build_query_from_abbrev(**edge) for edge in hetnet_edges]

# Biological Process nodes lost their wdt:P31 wd:Q2996394 statements in 2018 for whatever reason,
# so instead still use the biological process property (wdt:P682) between the protein and bp
# and check to make sure they have a go id... (wdt:P686)
queries_2_2018 = []
for q in ini_queries_2_2018:
    queries_2_2018.append(q.replace("""
        ?biological_process wdt:P31 wd:Q2996394 .""",
        """
        ?biological_process wdt:P686 ?go_id .""")
        .replace("""
        ?biological_process1 wdt:P31 wd:Q2996394 .""",
        """
        ?biological_process1 wdt:P686 ?go_id1 .""")
        .replace("""
        ?biological_process2 wdt:P31 wd:Q2996394 .""",
        """
        ?biological_process2 wdt:P686 ?go_id2 ."""))
```
A similar problem was found back in early 2017: genes and proteins were `subclass of` Gene or Protein... not `instance of`... Disease was a mess, with some `subclass of`, some `instance of`, and some both... fixing these for our 2017 queries.
```
# Fix gene and protein
h.node_info['Gene']['subclass'] = True
h.node_info['Protein']['subclass'] = True

# Update the class with the new info
# TODO: Add an update node method that re-runs this auto-magically...
h.subclass = h._extract_node_key('subclass')
h.extend = h._extract_node_key('extend')

ini_queries_2017 = [h.build_query_from_abbrev(**edge) for edge in hetnet_edges]

# Diseases are sometimes 'instance_of', sometimes 'subclass_of', so we will extend to both...
queries_2017 = []
for q in ini_queries_2017:
    queries_2017.append(q.replace("""
        # Initial typing for Disease
        ?disease wdt:P31 wd:Q12136 .""",
        """
        # Initial typing for Disease
        ?disease wdt:P31|wdt:P279* wd:Q12136 ."""))

print(h.build_query_from_abbrev('CtD'))

endpoints = {
    'https://query.wikidata.org/sparql': datetime.today().strftime('%Y-%m-%d'),
    'http://avalanche.scripps.edu:9988/bigdata/sparql': '2018-11-12',
    'http://avalanche.scripps.edu:9999/bigdata/sparql': '2018-02-05',
    'http://kylo.scripps.edu:9988/bigdata/sparql': '2017-01-16',
}

results = dict()

# Sort so live wikidata is done last in case of errors on local instances...
for ep, dump_date in tqdm_notebook(sorted(endpoints.items()), desc='All Endpoints'):
    # Get the correct set of queries for the correct years...
    if dump_date.startswith('2017'):
        to_query = queries_2017
    elif dump_date.startswith('2018-02'):
        to_query = queries_2_2018
    else:
        to_query = queries

    cur_res = dict()
    for meta_edge, query in tqdm_notebook(zip(hetnet_edges, to_query),
                                          desc=dump_date+' Data', total=len(hetnet_edges)):
        cur_res[meta_edge['abbrev']] = wdh.execute_sparql_query(query, endpoint=ep)
    results[dump_date] = cur_res

edge_count = []
for date, res in results.items():
    counts = pd.Series({name: len(res[name]) for name in res}, name=date)
    edge_count.append(counts)

edge_count = pd.concat(edge_count, axis=1)
edge_count

this_name = '01_querying_wikidata_for_hetnet_edges'
out_dir = Path('../2_pipeline').resolve().joinpath(this_name, 'out')
out_dir.mkdir(parents=True, exist_ok=True)

edge_count.to_csv(out_dir.joinpath('edge_counts.csv'))
```

### Some Error Fixing

1. If the start and end node types are the same, we could potentially have both node_id1 -> node_id2 and node_id2 -> node_id1... This is only useful if the edge is directed, but most of these edges are bi-directional (undirected), so only one of the directions is needed.
2. Since WikiData can have more than one 'instance_of' statement per node, some nodes may be members of multiple types... we will look at those queried and see where they are.
3. Qualified statements need further processing, so we will collect those.
4. Multi-step edges that will be compressed to 1 edge need further processing, so we will collect those.
```
def remove_query_numb(query_name):
    numb = wdh.get_query_numb(query_name)
    if numb:
        idx = query_name.index(numb)
        return query_name[:idx]
    else:
        return query_name

def to_full_name(query_name):
    name = remove_query_numb(query_name)
    return name.replace('_', ' ').title()

def process_query_res(q_result):
    node_ids = dict()
    id_to_name = dict()
    self_ref = set()
    qualified = set()
    multi_step = set()

    # Do some processing on the collected edges
    for e, r in q_result.items():
        s_kind, e_type, e_kind = wdh.gt.parse_edge_abbrev(e)
        all_n_types = [c for c in r.columns if not c.endswith('Label')]

        for nt in all_n_types:
            # Get the node type by removing any trailing numbers
            numb = wdh.get_query_numb(nt)
            if numb:
                idx = nt.index(numb)
                node_type = nt[:idx]
            else:
                node_type = nt

            # For a given node type, collect all the ids... don't need qualifiers
            if node_type != 'qualifier':
                if node_type in node_ids:
                    node_ids[node_type].update(set(r[nt]))
                else:
                    node_ids[node_type] = set(r[nt])
                id_to_name.update(r.set_index(nt)[nt+'Label'].to_dict())

        # Identify self-referential edges
        if s_kind == e_kind:
            self_ref.add(e)

        if len(all_n_types) > 2:
            # Grab qualified edges for further processing
            if 'qualifier' in all_n_types:
                qualified.add(e)
            # Currently, an edge can not be both multi-step and qualified
            else:
                multi_step.add(e)

    return node_ids, id_to_name, self_ref, qualified, multi_step

def fix_self_ref_edges(q_result, self_ref, id_to_name):
    fixed = dict()
    for kind in tqdm_notebook(self_ref):
        # no need to worry about forward vs reverse in directed edges
        if '>' in kind or '<' in kind:
            continue

        # Only look at 1 kind of edge at a time
        this_edges = q_result[kind]
        col_names = this_edges.columns

        edge_ids = set()
        for row in this_edges.itertuples():
            # Grab the edge ID, sorting, so lowest ID first:
            # If both 'Q00001 -- Q00002' and 'Q00002 -- Q00001' exist, effectively standardizes to
            # 'Q00001 -- Q00002'
            edge_id = tuple(sorted([row[1], row[3]]))
            edge_ids.add(edge_id)

        start_ids = []
        start_names = []
        end_ids = []
        end_names = []

        for edge_id in edge_ids:
            start_ids.append(edge_id[0])
            start_names.append(id_to_name[edge_id[0]])
            end_ids.append(edge_id[1])
            end_names.append(id_to_name[edge_id[1]])

        fixed[kind] = pd.DataFrame({col_names[0]: start_ids, col_names[1]: start_names,
                                    col_names[2]: end_ids, col_names[3]: end_names})
    return fixed

def find_func_numb(node_names, name, func):
    return func([wdh.get_query_numb(n) for n in node_names if n.startswith(name)])

def find_max_numb(node_names, name):
    return find_func_numb(node_names, name, max)

def find_min_numb(node_names, name):
    return find_func_numb(node_names, name, min)

def find_correct_node_name(node_names, name, func):
    for node in node_names:
        numb = wdh.get_query_numb(node)
        if node.startswith(name) and node != name and numb:
            return name + str(func(node_names, name))
    return name

def get_start_and_end_names(node_names, s_type, e_type):
    s_name = find_correct_node_name(node_names, s_type, find_min_numb)
    e_name = find_correct_node_name(node_names, e_type, find_max_numb)
    return s_name, e_name

def process_multi_step_edges(q_result, qualified, multi_step):
    fixed = dict()

    # Essentially just change the column order for later processing...
    for kind in tqdm_notebook(multi_step.union(qualified)):
        # Get the information for the current edge
        this_edges = q_result[kind]
        col_names = this_edges.columns
        node_cols = [c for c in col_names if not c.endswith('Label')]

        # Need to know what start and end types we're looking for
        s_kind, e_type, e_kind = wdh.gt.parse_edge_abbrev(kind)
        s_name = wdh.to_query_name(h.node_abv_to_full[s_kind])[1:]
        e_name = wdh.to_query_name(h.node_abv_to_full[e_kind])[1:]

        if 'qualifier' not in node_cols:
            s_name, e_name = get_start_and_end_names(node_cols, s_name, e_name)

        new_node_order = [s_name, e_name]
        new_node_order += [n for n in node_cols if n not in new_node_order]

        new_col_names = []
        for n in new_node_order:
            new_col_names += [n, n+'Label']

        fixed[kind] = this_edges[new_col_names].copy()
    return fixed
```

## Hetnet To Nodes
```
def build_hetnet_nodes(node_ids, id_to_name):
    nodes = []
    for k, v in node_ids.items():
        curr_nodes = pd.DataFrame({'id': list(v), 'label': len(v)*[k]})
        curr_nodes['name'] = curr_nodes['id'].map(id_to_name)
        nodes.append(curr_nodes)

    # Make dataframe
    nodes = pd.concat(nodes).reset_index(drop=True)

    # Fix labels (from lowercase_underscore to As Defined in node_info.json)
    label_map = {wdh.to_query_name(k)[1:]: k for k in h.node_info.keys()}
    nodes['label'] = nodes['label'].map(label_map)
    return nodes
```

## To Hetnet Edges
```
def process_PregBP(edges):
    edges_out = edges.copy()
    keep_map = {'positive regulation': 'UP_REGULATES_GuBP',
                'negative regulation': 'DOWN_REGULATES_GdBP',
                'regulation': 'REGULATES_GregBP'}

    direction = edges['biological_process1Label'].str.split(' of ', expand=True)[0]
    edges_out['type'] = direction.map(keep_map)
    return edges_out.dropna(subset=['type']).reset_index(drop=True)

def process_CpP(edges):
    edges_out = edges.copy()
    type_map = {'receptor antagonist': 'INHIBITS_CiG',
                'enzyme inhibitor': 'INHIBITS_CiG',
                'agonist': 'ACTIVATES_CacG',
                'channel blocker': 'INHIBITS_CiG',
                'substrate': 'BINDS_CbG',
                'allosteric modulator': 'BINDS_CbG',
                'channel activator activity': 'ACTIVATES_CacG',
                'protein-protein interaction inhibitor': 'INHIBITS_CiG',
                'ligand in biochemistry': 'BINDS_CbG',
                'reuptake inhibitor': 'INHIBITS_CiG',
                'neutralizing antibody': 'INHIBITS_CiG'}

    edges_out['type'] = edges_out['qualifierLabel'].str.lower().map(type_map)
    return edges_out

def build_hetnet_edges(q_result, fixed_edges):
    edges = []
    for k, v in q_result.items():
        if k in fixed_edges.keys():
            v = fixed_edges[k]

        col_names = v.columns
        keep_cols = [c for c in col_names if not c.endswith('Label')]

        # Queries sometimes return zero results, so skip those...
        if not keep_cols:
            continue

        col_name_map = {keep_cols[0]: 'start_id', keep_cols[1]: 'end_id'}

        # Inner nodes in multi-step edges become inner1, inner2, etc...
        inner_cols = {k: 'inner'+str(idx+1) for idx, k in enumerate(keep_cols[2:])
                      if k != 'qualifier'}
        col_name_map = {**inner_cols, **col_name_map}
        v = v.rename(columns=col_name_map)

        if k == "PregBP":
            v = process_PregBP(v)
        elif k == "CpP":
            v = process_CpP(v)

        # Replace Proteins with Genes, to merge the protein and gene metanodes
        parsed_edge = wdh.gt.parse_edge_abbrev(k)
        if 'P' in parsed_edge:
            idx = parsed_edge.index('P')
            parsed_edge = list(parsed_edge)
            parsed_edge[idx] = 'G'
            k = ''.join(parsed_edge)

        if 'type' not in v.columns:
            v['type'] = h.edge_abv_to_full[parsed_edge[1]] + '_' + k

        edges.append(v)

    # Combine the edges into a single dataframe
    edges = pd.concat(edges, sort=False).reset_index(drop=True)

    col_order = ['start_id', 'end_id', 'type', 'qualifier']
    col_order = col_order + [c for c in col_name_map.values() if c not in col_order]
    edges = edges[col_order]
    return edges
```

## Fixing nodes that are duplicated across two different Node Types
```
def find_combos(nodes):
    duplicated_nodes = nodes[nodes.duplicated(keep=False, subset=['id'])]['id'].unique()

    # Find out what types are being combined...
    combos = (nodes.query('id in @duplicated_nodes')
                   .sort_values(['id', 'label'])
                   .groupby('id')['label']
                   .apply(list)
                   .astype(str)
                   .to_frame()
                   .reset_index())
    return combos

def uniquify_node_types(nodes, edges, type_fix_map=None, verbose=True):
    # Set a default value for the map
    if type_fix_map is None:
        type_fix_map = {"['Structural Motif', 'Super-Secondary Structure']": 'Structural Motif',
                        "['Chemical Hazard', 'Disease']": 'Chemical Hazard',
                        "['Disease', 'Symptom']": 'Symptom',
                        "['Sequence Variant', 'Symptom']": 'Symptom',
                        "['Disease', 'Sequence Variant', 'Symptom']": 'Symptom',
                        "['Compound', 'Gene']": 'Compound',
                        "['Chemical Role', 'Compound']": 'Compound',
                        "['Biological Process', 'Disease']": 'Disease',
                        "['Anatomical Structure', 'Cellular Component']": 'Cellular Component',
                        "['Protein Domain', 'Structural Motif', 'Super-Secondary Structure']": 'Protein Domain',
                        "['Protein Domain', 'Protein Family']": 'Protein Family',
                        "['Gene', 'Protein Family']": 'Gene',
                        "['Disease', 'Sequence Variant']": 'Disease'
                        }

    # Find out what's combined...
    combos = find_combos(nodes)

    # Map from the original combination to resolved type
    final_types = combos.set_index('id')['label'].map(type_fix_map).to_dict()

    # Fill in types for already unique nodes and map
    final_types = {**nodes.set_index('id')['label'].to_dict(), **final_types}
    nodes['label'] = nodes['id'].map(final_types)

    if verbose:
        print('Number of nodes before fixing: {:,}'.format(len(nodes)))
    nodes = nodes.drop_duplicates().reset_index(drop=True)
    if verbose:
        print('Number of nodes after fixing: {:,}'.format(len(nodes)))

    # Now check that the node types in the edge abbreviation match the newly resolved node types
    combo = wdh.gt.combine_nodes_and_edges(nodes, edges)
    combo['edge_abv'] = combo['type'].apply(lambda t: t.split('_')[-1])
    combo['actual_start'] = combo['edge_abv'].apply(lambda a: h.node_abv_to_full[wdh.gt.parse_edge_abbrev(a)[0]])
    combo['actual_end'] = combo['edge_abv'].apply(lambda a: h.node_abv_to_full[wdh.gt.parse_edge_abbrev(a)[2]])

    bad_edge = combo.query('start_label != actual_start or end_label != actual_end')
    if verbose:
        print('Number of edges with issues to be removed: {:,}'.format(len(bad_edge)))
        print('Number of edges before fixing: {:,}'.format(len(edges)))
    edges = edges.drop(bad_edge.index).reset_index(drop=True)
    if verbose:
        print('Number of edges after fixing: {:,}'.format(len(edges)))

    return nodes, edges

def build_hetnet(q_result):
    node_ids, id_to_name, self_ref, qualified, multi_step = process_query_res(q_result)
    fixed_self_ref = fix_self_ref_edges(q_result, self_ref, id_to_name)
    fixed_multi_step = process_multi_step_edges(q_result, qualified, multi_step)

    nodes = build_hetnet_nodes(node_ids, id_to_name)
    edges = build_hetnet_edges(q_result, {**fixed_multi_step, **fixed_self_ref})

    # merge the genes and proteins in the nodes file
    idx = nodes.query('label == "Protein"').index
    nodes.loc[idx, 'label'] = 'Gene'

    nodes, edges = uniquify_node_types(nodes, edges)
    return nodes, edges

for date, q_result in results.items():
    out_dir.joinpath(date).mkdir(exist_ok=True, parents=True)
    print('DUMP DATE: {}'.format(date))
    nodes, edges = build_hetnet(q_result)
    wdh.gt.add_colons(nodes).to_csv(out_dir.joinpath(date, 'nodes.csv'), index=False)
    wdh.gt.add_colons(edges).to_csv(out_dir.joinpath(date, 'edges.csv'), index=False)
    print('\n\n')
```
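Error fix 1 above (collapsing `node_id1 -> node_id2` and `node_id2 -> node_id1` for undirected edges) boils down to canonicalizing each pair with a sorted tuple, as `fix_self_ref_edges` does row by row. A stripped-down sketch of that deduplication, with toy IDs and no pandas:

```python
def dedup_undirected(edges):
    """Collapse (a, b) and (b, a) into a single canonical edge, keeping first-seen order."""
    seen = set()
    out = []
    for a, b in edges:
        key = tuple(sorted((a, b)))  # canonical form: lowest ID first
        if key not in seen:
            seen.add(key)
            out.append(key)
    return out

edges = [("Q00002", "Q00001"), ("Q00001", "Q00002"), ("Q00001", "Q00003")]
print(dedup_undirected(edges))  # [('Q00001', 'Q00002'), ('Q00001', 'Q00003')]
```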
```
import pandas as pd
import seaborn as sns
import sys

from matplotlib import pyplot as plt
%matplotlib inline

MIN_PYTHON = (3, 6)
if sys.version_info < MIN_PYTHON:
    sys.exit("Python %s.%s or later is required.\n" % MIN_PYTHON)

in_data = pd.read_csv('afterfix_speed_test.log.csv', index_col='TS',
                      names=['TS', 'Server', 'Speed', 'TLD', 'Location'],
                      parse_dates=True)

df1 = in_data[in_data['Server'] == 'Panic']
df2 = in_data[in_data['Server'] == 'Linode']

# remove data outside of our test date range
# df1 = df1.loc['2017-11-20':'2017-11-22']
# df2 = df2.loc['2017-11-20':'2017-11-22']

# remove rows from TLDs with low occurrences
df1_c = df1[df1.groupby('TLD').Speed.transform(len) >= 10].copy(True)
df2_c = df2[df2.groupby('TLD').Speed.transform(len) >= 10].copy(True)

print(df1_c.count())
print(df2_c.count())

# The actual medians for comcast
# print(df1_c[df1_c['TLD'] == 'comcast.net'].median())
# print(df2_c[df2_c['TLD'] == 'comcast.net'].median())

# print(df1_c['TLD'].value_counts())
# print(df2_c['TLD'].value_counts())

# Filter by speed
# df1_c = df1_c[df1_c['Speed'] < 15000]
# df2_c = df2_c[df2_c['Speed'] < 15000]

ranks = pd.Index(['comcast.net', 'cox.net', 'charter.com', 'rr.com', 'verizon.net',
                  'shawcable.net', 'virginm.net', 'qwest.net', 'btcentralplus.com',
                  't-ipconnect.de', 'sbcglobal.net'],
                 dtype='object', name='TLD')

FIG_WIDTH = 20
FIG_HEIGHT = 8

sns.set(font_scale=2)
sns.set_style("white")
sns.set_style({
    'font.family': [u'sans-serif'],
    'font.sans-serif': ['Chrono', 'DejaVu Sans']
})

fig, _ = plt.subplots()
fig.set_figwidth(FIG_WIDTH)
fig.set_figheight(FIG_HEIGHT)
fig.suptitle('Connection to Linode')
bp2 = sns.boxplot(data=df2_c, y='TLD', x='Speed', orient='h', order=ranks)
_ = bp2.set(xlim=(0, 30000))

fig, _ = plt.subplots()
fig.set_figwidth(FIG_WIDTH)
fig.set_figheight(FIG_HEIGHT)
fig.suptitle('Connection to Panic')
bp1 = sns.boxplot(data=df1_c, y='TLD', x='Speed', orient='h', order=ranks)
_ = bp1.set(xlim=(0, 30000))

df1_c = df1.copy(True)
df2_c = df2.copy(True)

df1_cc = df1_c[df1_c['TLD'] == 'comcast.net'].resample('h').median()
df2_cc = df2_c[df2_c['TLD'] == 'comcast.net'].resample('h').median()

df1_cc['Speed'].interpolate(inplace=True)
df2_cc['Speed'].interpolate(inplace=True)

fig, _ = plt.subplots()
fig.set_figwidth(FIG_WIDTH)
fig.set_figheight(FIG_HEIGHT)
p1 = df1_cc['Speed'].plot(label="Comcast")
_ = plt.legend()
_ = p1.set(ylim=(0, 20000))

fig, _ = plt.subplots()
fig.set_figwidth(FIG_WIDTH)
fig.set_figheight(FIG_HEIGHT)
p2 = df2_cc['Speed'].plot(label="Comcast")
_ = plt.legend()
_ = p2.set(ylim=(0, 20000))

def get_dfs_filtered_by_time(df, label):
    hour = df.index.hour
    selector_l = ((15 <= hour) & (hour <= 23)) | ((0 <= hour) & (hour < 1))
    selector_h = ((1 <= hour) & (hour < 15))
    df_l = df[selector_l].assign(Timeframe=f'{label} Evening')
    df_h = df[selector_h].assign(Timeframe=f'{label} Morning')
    return df_l, df_h

def plot_by_tld(df1, df2, tld):
    df1 = df1[df1['TLD'] == tld]
    df2 = df2[df2['TLD'] == tld]

    df1_l, df1_h = get_dfs_filtered_by_time(df1, 'Panic')
    df2_l, df2_h = get_dfs_filtered_by_time(df2, 'Linode')

    df_combined = pd.concat([df1_l, df1_h, df2_l, df2_h])

    fig, _ = plt.subplots()
    fig.set_figwidth(FIG_WIDTH)
    fig.set_figheight(FIG_HEIGHT)
    bp = sns.boxplot(data=df_combined, y='Timeframe', x='Speed', orient='h')
    _ = bp.set(xlim=(0, 30000))

plot_by_tld(df1_c, df2_c, 'comcast.net')
# plot_by_tld(df1_c, df2_c, 'sbcglobal.net')
# plot_by_tld(df1_c, df2_c, 'rr.com')
# plot_by_tld(df1_c, df2_c, 'verizon.net')
# plot_by_tld(df1_c, df2_c, 'comcastbusiness.net')
```
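The `get_dfs_filtered_by_time` helper splits rows purely by hour of day. The same bucketing in plain Python, with the notebook's 15:00–00:59 evening window hard-coded:

```python
def timeframe(hour):
    """Label an hour as 'Evening' (15:00-00:59) or 'Morning' (01:00-14:59)."""
    if 15 <= hour <= 23 or hour == 0:
        return "Evening"
    return "Morning"

print([timeframe(h) for h in (0, 1, 14, 15, 23)])
# ['Evening', 'Morning', 'Morning', 'Evening', 'Evening']
```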
# Topic 03: Flow control (exercise statements)

*Note: These exercises are optional, meant for the end of the unit, and are designed to support your learning.*

**1) Write a program that reads two numbers from the keyboard and lets the user choose among 3 options in a menu:**
* Show the sum of the two numbers
* Show the subtraction of the two numbers (the first minus the second)
* Show the multiplication of the two numbers
* If an invalid option is entered, the program reports that it is not valid.
```
# Complete the exercise here
opcion = ""
inGame = True

while inGame:
    print("""Choose an option
    1) Suma de dos numeros
    2) Resta de dos numeros
    3) Multiplicación de dos numeros
    4) Salir""")

    opcion = input("\nElige la opción: ")

    if opcion == "1" or opcion == "2" or opcion == "3":
        n1 = float(input("Introduce el primer numero: "))
        n2 = float(input("Introduce el segundo numero: "))

        if opcion == "1":
            print("El resultado es: ", n1 + n2)
        elif opcion == "2":
            print("El resultado es: ", n1 - n2)
        elif opcion == "3":
            print("El resultado es: ", n1 * n2)
    elif opcion == "4":
        inGame = False
    else:
        print("No se reconoce el comando, vuelva a intentarlo de nuevo.\n\n")
```

**2) Write a program that reads an odd number from the keyboard. If the user does not enter an odd number, the process must repeat until they do.**
```
# Complete the exercise here
numero = 0

while numero % 2 == 0:
    numero = int(input("Introduce un número impar: "))
```

**3) Write a program that adds up all the even integers from 0 to 100:**

*Hint: You can use the sum() and range() functions to make it easier. The third parameter of range(start, stop, step) specifies a step between numbers; try it out.*
```
# Complete the exercise here

# Standard version with a loop
suma = 0
for numero in range(0, 101, 2):
    suma += numero
    # print(numero)

print(suma)

# Special loop-free version
suma2 = sum(range(0, 101, 2))
print(suma2)
```

**4) Write a program that asks the user how many numbers they want to enter. Then read all the numbers and compute their arithmetic mean:**
```
# Complete the exercise here
userInput = int(input("Introduce cuantos numeros quieres introducir: "))
suma = 0

for num in range(userInput):
    suma += float(input("Introduce un numero: "))

print("El resultado final es ", suma / userInput)
```

**5) Write a program that asks the user for an integer from 0 to 9, repeating the process while the number is not valid. Then check whether the number is in a list of numbers and report the result:**

*Tip: The syntax "value in list" makes it easy to check whether a value is in a list (it returns True or False)*
```
# Complete the exercise here
numeros = [1, 3, 6, 9]
userInput = -1

while userInput < 0 or userInput > 9:
    userInput = int(input("Escribe un numero entre 0 y 9: "))
else:
    if userInput in numeros:
        print("El numero se ha encontrado en la lista")
    else:
        print("El numero no se ha encontrado en la lista")
```

**6) Using the range() function and conversion to lists, generate the following lists dynamically:**
* All numbers from 0 to 10 [0, 1, 2, ..., 10]
* All numbers from -10 to 0 [-10, -9, -8, ..., 0]
* All even numbers from 0 to 20 [0, 2, 4, ..., 20]
* All odd numbers between -20 and 0 [-19, -17, -15, ..., -1]
* All multiples of 5 from 0 to 50 [0, 5, 10, ..., 50]

*Hint: Use the third parameter of range(start, stop, step).*
```
# Complete the exercise
lista1 = list(range(11))
print(lista1)

lista2 = list(range(-10, 1))
print(lista2)

lista3 = list(range(0, 21, 2))
print(lista3)

lista4 = list(range(-19, 0, 2))
print(lista4)

lista5 = list(range(0, 51, 5))
print(lista5)
```

**7) Given two lists, generate a third one containing every element that appears in both, without repeating any element in the new list:**
```
# Complete the exercise here
lista1 = [1, 1, 4, 8, 6, 9, 10, 26, 5, 3, 8]
lista3 = [1, 8, 6, 4, 5, 28, 90, 56, 8]

listaFinal = []
for i in lista1:
    for j in lista3:
        if(i == j):
            finded = False
            for k in listaFinal:
                if(i == k):
                    finded = True
            else:
                if not finded:
                    listaFinal.append(i)

print(listaFinal)

# Second, shorter approach
listaFinal = []
for num in lista1:
    if num in lista3 and num not in listaFinal:
        listaFinal.append(num)

print(listaFinal)
```
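An even shorter variant of exercise 7 uses a set for fast membership tests and `dict.fromkeys` to drop duplicates while keeping first-seen order (standard Python, no extra libraries):

```python
lista1 = [1, 1, 4, 8, 6, 9, 10, 26, 5, 3, 8]
lista3 = [1, 8, 6, 4, 5, 28, 90, 56, 8]

s3 = set(lista3)  # O(1) membership tests
# dict.fromkeys preserves insertion order and discards repeated keys
comunes = list(dict.fromkeys(x for x in lista1 if x in s3))
print(comunes)  # [1, 4, 8, 6, 5]
```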
# `yacman` features and usage

This short tutorial shows you the features of the `yacman` package in action. First, let's prepare some data to work with.
```
import yaml

yaml_dict = {'cfg_version': 0.1, 'lvl1': {'lvl2': {'lvl3': {'entry': ['val1', 'val2']}}}}

yaml_str = """\
cfg_version: 0.1
lvl1:
  lvl2:
    lvl3:
      entry: ["val1","val2"]
"""

filepath = "test.yaml"

with open(filepath, 'w') as f:
    data = yaml.dump(yaml_dict, f)

import yacman
```

## `YacAttMap` object creation

There are multiple ways to initialize an object of the `YacAttMap` class:

1. **Read data from a YAML-formatted file**
```
yacmap = yacman.YacAttMap(filepath=filepath)
yacmap
```
2. **Read data from an `entries` mapping**
```
yacmap = yacman.YacAttMap(entries=yaml_dict)
yacmap
```
3. **Read data from a YAML-formatted string**
```
yacmap = yacman.YacAttMap(yamldata=yaml_str)
yacmap
```

## File locks; race-free writing

Instances of the `YacAttMap` class support race-free writing and file locking, so that **it's safe to use them in multi-user contexts**.

They can be created with or without write capabilities. Writable objects create a file lock, which prevents other processes managed by `yacman` from updating the source config file. The `writable` argument in the object constructor can be used to toggle writable mode.

The source config file can be updated on disk (using the `write` method) only if the `YacAttMap` instance is in writable mode.
```
yacmap = yacman.YacAttMap(filepath=filepath, writable=False)

try:
    yacmap.write()
except OSError as e:
    print("Error caught: {}".format(e))
```
The write capabilities can be granted to an object:
```
yacmap = yacman.YacAttMap(filepath=filepath, writable=False)
yacmap.make_writable()
yacmap.write()
```
Or withheld:
```
yacmap.make_readonly()
```
If a file is currently locked by another `YacAttMap` object, the object will not be made writable (or created with write capabilities) until the lock is gone.
If the lock persists, the action will fail (with a `RuntimeError`) after a selected `wait_max` time, which is 10s by default:
```
yacmap = yacman.YacAttMap(filepath=filepath, writable=True)

try:
    yacmap1 = yacman.YacAttMap(filepath=filepath, writable=True, wait_max=1)
except RuntimeError as e:
    print("\nError caught: {}".format(e))

yacmap.make_readonly()
```
Lastly, `YacAttMap` instances **can be used in a context manager**. This way the source config file will be locked, possibly updated (depending on what the user chooses to do), safely written to, and unlocked with a single line of code:
```
yacmap = yacman.YacAttMap(filepath=filepath)

with yacmap as y:
    y.test = "test"

yacmap1 = yacman.YacAttMap(filepath=filepath)
yacmap1
```

## Key aliases in `AliasedYacAttMap`

`AliasedYacAttMap` is a child class of `YacAttMap` that supports top-level key aliases.

### Defining the aliases

There are two ways the aliases can be defined at the object construction stage:

1. By passing a literal aliases dictionary
2. By passing a function, to be executed on the object itself, that returns the dictionary

In either case, the resulting aliases mapping has to follow the format presented below:
```
aliases = {
    "key_1": ["first_key", "key_one"],
    "key_2": ["second_key", "key_two", "fav_key"],
    "key_3": ["third_key", "key_three"]
}
```

#### Literal aliases dictionary

The `aliases` argument in the `AliasedYacAttMap` below is a Python `dict` that maps the object keys to collections of aliases (Python `list`s of `str`). This format is strictly enforced.
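Internally, resolving an alias amounts to inverting this mapping into alias → canonical form; a minimal sketch of that lookup (illustrative only, not yacman's actual implementation):

```python
aliases = {
    "key_1": ["first_key", "key_one"],
    "key_2": ["second_key", "key_two", "fav_key"],
    "key_3": ["third_key", "key_three"],
}

def canonical(key, aliases):
    """Map an alias (or an already-canonical key) to its canonical key."""
    lookup = {a: k for k, vs in aliases.items() for a in vs}
    return lookup.get(key, key)

print(canonical("fav_key", aliases))  # key_2
print(canonical("key_1", aliases))    # key_1 (already canonical)
```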
```
aliased_yacmap = yacman.AliasedYacAttMap(entries={'key_1': 'val_1', 'key_2': 'val_2', 'key_3': 'val_3'},
                                         aliases=aliases)
print(aliased_yacmap)
```

Having set the aliases, we can index the object with either the literal key or any of its aliases:

```
aliased_yacmap["key_1"] == aliased_yacmap["first_key"]
aliased_yacmap["key_two"] == aliased_yacmap["fav_key"]
```

#### Aliases-returning function

The `aliases` argument in the `AliasedYacAttMap` below is a Python `callable` that takes the object itself as an argument and returns the desired aliases mapping. This is especially useful when the object itself contains the aliases definition, for example:

```
entries = {
    'key_1': {'value': 'val_1', 'aliases': ['first_key']},
    'key_2': {'value': 'val_2', 'aliases': ['second_key']},
    'key_3': {'value': 'val_3', 'aliases': ['third_key']}
}

aliased_yacmap = yacman.AliasedYacAttMap(entries=entries,
                                         aliases=lambda x: {k: v.__getitem__("aliases", expand=False)
                                                            for k, v in x.items()})
print(aliased_yacmap)
aliased_yacmap["key_1"] == aliased_yacmap["first_key"]
```

# `YacAttMap` contents validation

Another very useful feature of the `YacAttMap` object is the embedded [jsonschema](https://json-schema.org/) validation.

## Setup

The validation is set up at `YacAttMap` object creation, using the `schema_source` and `write_validate` arguments:

- `schema_source` takes a path or URL of a YAML-formatted jsonschema file and reads it into a Python `dict`. If this argument is provided, the object is always validated at least once, at the object creation stage.
- `write_validate` takes a boolean indicating whether the object should be validated every time the `YacAttMap.write` method is executed, which is a way of preventing invalid configs from being written

## Validation demonstration

Let's get a path to a YAML-formatted jsonschema and look at the contents:

```
from attmap import AttMap  # disregard, this class can be used to print mappings nicely

schema_path = "../tests/data/conf_schema.yaml"
AttMap(yacman.load_yaml(schema_path))
```

The schema presented above restricts the validated `YacAttMap` object to just 3 top-level keys: `newattr`, `testattr` and `anotherattr`. Each of these has to adhere to different requirements, defined in the respective sections.

### Validation at construction

Let's pass the path to the schema to the object constructor:

```
entries = {
    "newattr": "test_string"
}

yacmap = yacman.YacAttMap(entries=entries, schema_source=schema_path)
```

No exceptions were raised, which means that the object passed the validation (`newattr` is a string with no whitespace). But what if we added an attribute that *does not* adhere to the schema requirements?

```
entries.update({"testattr": 1})
yacmap = yacman.YacAttMap(entries=entries, schema_source=schema_path)
```

As expected, the object did not pass the validation and an informative exception was raised.

### Validation at `write`

As mentioned above, the object can be validated when the `write` method is called. Let's use the previously created file to demonstrate this feature:

```
yacmap = yacman.YacAttMap(filepath=filepath, schema_source=schema_path, writable=True, write_validate=True)
yacmap["newattr"] = "test_string"
yacmap.write()
del yacmap

yacmap = yacman.YacAttMap(filepath=filepath, schema_source=schema_path, writable=True, write_validate=True)
yacmap
del yacmap
```

As expected, we were able to add a new attribute to the object and write it to the file with no issues, since the new attribute's value adheres to the schema requirements.
But if it doesn't, we are not able to write to the file:

```
yacmap = yacman.YacAttMap(filepath=filepath, schema_source=schema_path, writable=True, write_validate=True)
yacmap["newattr"] = 1
yacmap.write(exclude_case=True)
```

This feature is also available when using a `YacAttMap` object in a context manager:

```
del yacmap
yacmap = yacman.YacAttMap(filepath=filepath, schema_source=schema_path, writable=True, write_validate=True)
with yacmap as y:
    y['newattr'] = 1
```
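The write-time validation above follows a general pattern worth knowing outside `yacman` too: validate first, then persist atomically, so an invalid or partial config never reaches disk. A stdlib-only sketch of that pattern (JSON instead of YAML, and a toy `is_valid` check standing in for jsonschema; all names here are made up for illustration):

```python
import json
import os
import tempfile

def is_valid(cfg):
    # Toy stand-in for jsonschema validation: "newattr" must be a string.
    return isinstance(cfg.get("newattr"), str)

def write_config(cfg, path):
    """Validate, then write atomically (temp file + os.replace)."""
    if not is_valid(cfg):
        raise ValueError("config failed validation; refusing to write")
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(cfg, f)
    os.replace(tmp, path)  # atomic rename: readers never see a partial file

path = "demo_config.json"
write_config({"newattr": "test_string"}, path)  # succeeds
try:
    write_config({"newattr": 1}, path)          # refused before touching disk
except ValueError as e:
    print("caught:", e)
os.remove(path)
```

The atomic rename is the part that makes the refusal meaningful: even a crash mid-write leaves the previous valid config intact.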
## SIS on Beer Reviews - Model Training Aspect 1 (Aroma) ``` import numpy as np from matplotlib import pyplot as plt import seaborn as sns import os import sys import gzip sys.path.insert(0, os.path.abspath('..')) from keras.callbacks import ModelCheckpoint from keras.models import load_model, Model, Sequential from keras.layers import Input, Dense, Flatten, LSTM from keras.layers.embeddings import Embedding from keras.optimizers import Adam from keras.preprocessing import sequence, text from keras import backend as K from sklearn.model_selection import train_test_split import os import tensorflow as tf os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID' os.environ['CUDA_VISIBLE_DEVICES'] = '1' config = tf.ConfigProto() config.gpu_options.allow_growth = True config.allow_soft_placement = True sess = tf.Session(config=config) K.set_session(sess) def load_reviews(path, verbose=True): data_x, data_y = [ ], [ ] fopen = gzip.open if path.endswith(".gz") else open with fopen(path) as fin: for line in fin: line = line.decode('ascii') y, sep, x = line.partition("\t") # x = x.split() y = y.split() if len(x) == 0: continue y = np.asarray([ float(v) for v in y ]) data_x.append(x) data_y.append(y) if verbose: print("{} examples loaded from {}".format(len(data_x), path)) print("max text length: {}".format(max(len(x) for x in data_x))) return data_x, data_y # Load beer review data for a particular aspect ASPECT = 1 # 1, 2, or 3 BASE_PATH = '../data/beer_reviews' path = os.path.join(BASE_PATH, 'reviews.aspect' + str(ASPECT)) train_path = path + '.train.txt.gz' heldout_path = path + '.heldout.txt.gz' X_train_texts, y_train = load_reviews(train_path) X_test_texts, y_test = load_reviews(heldout_path) # y value is just the sentiment for this aspect, throw away the other scores y_train = np.array([y[ASPECT] for y in y_train]) y_test = np.array([y[ASPECT] for y in y_test]) # Create a 3k validation set held-out from the test set X_test_texts, X_val_texts, y_test, y_val = train_test_split( 
X_test_texts, y_test, test_size=3000, random_state=42) plt.hist(y_train) plt.show() print('Mean: %.3f' % np.mean(y_train)) print('Median: %.3f' % np.median(y_train)) print('Stdev: %.3f' % np.std(y_train)) print('Review length:') train_texts_lengths = [len(x.split(' ')) for x in X_train_texts] print("Mean %.2f words (stddev: %f)" % \ (np.mean(train_texts_lengths), np.std(train_texts_lengths))) # plot review lengths plt.boxplot(train_texts_lengths) plt.show() # Tokenize the texts and keep only the top n words TOP_WORDS = 10000 tokenizer = text.Tokenizer(num_words=TOP_WORDS) tokenizer.fit_on_texts(X_train_texts) X_train = tokenizer.texts_to_sequences(X_train_texts) X_val = tokenizer.texts_to_sequences(X_val_texts) X_test = tokenizer.texts_to_sequences(X_test_texts) print(len(X_train)) print(len(X_val)) print(len(X_test)) # Bound reviews at 500 words, truncating longer reviews and zero-padding shorter reviews MAX_WORDS = 500 X_train = sequence.pad_sequences(X_train, maxlen=MAX_WORDS) X_val = sequence.pad_sequences(X_val, maxlen=MAX_WORDS) X_test = sequence.pad_sequences(X_test, maxlen=MAX_WORDS) index_to_token = {tokenizer.word_index[k]: k for k in tokenizer.word_index.keys()} ``` ## LSTM Model ``` def coeff_determination_metric(y_true, y_pred): SS_res = K.sum(K.square( y_true-y_pred )) SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) ) return ( 1 - SS_res/(SS_tot + K.epsilon()) ) # LSTM 200 def make_lstm_model(top_words, max_words): model = Sequential() model.add(Embedding(top_words, 100, input_length=max_words)) model.add(LSTM(200, return_sequences=True)) model.add(LSTM(200)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='mse', optimizer=Adam(), metrics=['mse', 'mae', coeff_determination_metric]) return model model = make_lstm_model(TOP_WORDS, MAX_WORDS) print(model.summary()) checkpointer = ModelCheckpoint(filepath='../trained_models/asp1.regress.bs128.nodrop.lstm200.100dimembed.weights.{epoch:02d}-{val_loss:.4f}.hdf5', verbose=1, 
monitor='val_loss', save_best_only=True) model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=15, batch_size=128, callbacks=[checkpointer], verbose=1) ```
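The custom `coeff_determination_metric` above is the coefficient of determination, R² = 1 − SS_res / SS_tot, with a small epsilon guarding against a zero denominator. As a sanity check, the same formula in plain Python:

```python
def coeff_determination(y_true, y_pred, eps=1e-7):
    """R^2 = 1 - SS_res / SS_tot, mirroring the Keras metric above."""
    mean_true = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    return 1 - ss_res / (ss_tot + eps)

# A perfect prediction gives R^2 = 1; always predicting the mean gives ~0.
print(coeff_determination([0.2, 0.4, 0.6], [0.2, 0.4, 0.6]))  # 1.0
```

An R² near 0 on the validation set would mean the regressor is doing no better than the mean rating, which is a useful baseline to keep in mind while watching `val_loss`.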
``` """ Update Parameters Here """ CONTRACT_ADDRESS = "0x9A534628B4062E123cE7Ee2222ec20B86e16Ca8F" COLLECTION = "MekaVerse" METHOD = "raritytools" TOKEN_COL = "TOKEN_ID" # Use TOKEN_NAME if you prefer to infer token id from token name NUMBERS_TO_CHECK = 50 # Number of tokens to search for opportunities import time import requests import pandas as pd import matplotlib.pyplot as plt import matplotlib.cm as cm import numpy as np from honestnft_utils import config # Define variables used throughout PATH = f"{config.RARITY_FOLDER}/{COLLECTION}_{METHOD}.csv" ETHER_UNITS = 1e18 """ Plot params """ plt.rcParams.update({"figure.facecolor": "white", "savefig.facecolor": "white"}) # Load rarity database and format RARITY_DB = pd.read_csv(PATH) RARITY_DB = RARITY_DB[RARITY_DB["TOKEN_ID"].duplicated() == False] if TOKEN_COL == "TOKEN_NAME": RARITY_DB["TOKEN_ID"] = RARITY_DB["TOKEN_NAME"].str.split("#").str[1].astype(int) """ Get open bids from OpenSea and plot. """ def getOpenseaOrders(token_id, contract_address): url = "https://api.opensea.io/wyvern/v1/orders" querystring = { "bundled": "false", "include_bundled": "false", "is_english": "false", "include_invalid": "false", "limit": "50", "offset": "0", "order_by": "created_date", "order_direction": "desc", "asset_contract_address": contract_address, "token_ids": [token_id], } headers = {"Accept": "application/json", "X-API-KEY": config.OPENSEA_API_KEY} response = requests.request("GET", url, headers=headers, params=querystring) response_json = response.json() return response_json def plot_all_bids(bid_db): series = [] max_listings = bid_db["token_ids"].value_counts().max() for i in range(1, max_listings + 1): n_bids = bid_db.groupby("token_ids").filter(lambda x: len(x) == i) series.append(n_bids) colors = iter(cm.rainbow(np.linspace(0, 1, len(series)))) for i in range(0, len(series)): plt.scatter( series[i]["ranks"], series[i]["bid"], color=next(colors), label=i + 1 ) plt.xlabel("rarity rank") plt.ylabel("price (ETHER)") 
plt.legend(loc="best") plt.show() def get_all_bids(rarity_db): token_ids = [] ranks = [] bids = [] numbersToCheck = [] for x in rarity_db["TOKEN_ID"]: numbersToCheck.append(x) if len(numbersToCheck) == 15: # send 15 NFTs at a time to API orders = getOpenseaOrders(numbersToCheck, CONTRACT_ADDRESS) numbersToCheck = [] for order in orders["orders"]: if order["side"] == 0: tokenId = int(order["asset"]["token_id"]) token_ids.append(tokenId) ranks.append( float(rarity_db[rarity_db["TOKEN_ID"] == tokenId]["Rank"]) ) bids.append(float(order["base_price"]) / ETHER_UNITS) bid_db = pd.DataFrame(columns=["token_ids", "ranks", "bid"]) bid_db["token_ids"] = token_ids bid_db["ranks"] = ranks bid_db["bid"] = bids return bid_db bid_db = get_all_bids(RARITY_DB.head(NUMBERS_TO_CHECK)) bid_db = bid_db.sort_values(by=["ranks"]) print(bid_db.set_index("token_ids").head(50)) plot_all_bids(bid_db) """ Get open offers from OpenSea and plot. """ def getOpenseaOrders(token_id, contract_address): # gets orders, both bids and asks # divide token_list into limit sized chunks and get output url = "https://api.opensea.io/wyvern/v1/orders" querystring = { "bundled": "false", "include_bundled": "false", "is_english": "false", "include_invalid": "false", "limit": "50", "offset": "0", "order_by": "created_date", "order_direction": "desc", "asset_contract_address": contract_address, "token_ids": [token_id], } headers = {"Accept": "application/json", "X-API-KEY": config.OPENSEA_API_KEY} response = requests.request("GET", url, headers=headers, params=querystring) responseJson = response.json() return responseJson def display_orders(rarity_db): print("RANK TOKEN_ID PRICE URL") numbersToCheck = [] for x in rarity_db["TOKEN_ID"]: numbersToCheck.append(x) if len(numbersToCheck) == 15: orders = getOpenseaOrders(numbersToCheck, CONTRACT_ADDRESS) numbersToCheck = [] time.sleep(2) for order in orders["orders"]: if order["side"] == 1: tokenId = int(order["asset"]["token_id"]) price = 
float(order["current_price"]) / 1e18 if price <= 20: current_order = dict() current_order["RANK"] = str( int(rarity_db[rarity_db["TOKEN_ID"] == tokenId]["Rank"]) ) current_order["TOKEN_ID"] = str(tokenId) current_order["PRICE"] = str(price) current_order[ "URL" ] = f"https://opensea.io/assets/{CONTRACT_ADDRESS}/{tokenId}" str_to_print = "" for x in ["RANK", "TOKEN_ID", "PRICE"]: str_to_print += f"{current_order[x]}" str_to_print += " " * (len(x) + 1 - len(current_order[x])) str_to_print += current_order["URL"] print(str_to_print) display_orders(RARITY_DB.head(NUMBERS_TO_CHECK)) import numpy as np A = -0.9 K = 1 B = 5 v = 1 Q = 1.1 C = 1 RARITY_DB["VALUE"] = A + ( (K - A) / np.power((C + Q * np.exp(-B * (1 / RARITY_DB["Rank"]))), 1 / v) ) RARITY_DB["VALUE"] = np.where(RARITY_DB["Rank"] > 96 * 2, 0, RARITY_DB["VALUE"]) RARITY_DB[["Rank", "VALUE"]].sort_values("Rank").plot( x="Rank", y="VALUE", figsize=(14, 7), logx=True, grid=True ) plt.show() RARITY_DB = RARITY_DB.sort_values("TOKEN_ID") RARITY_DB.plot(x="TOKEN_ID", y="VALUE", grid=True, figsize=(14, 7)) RARITY_DB = RARITY_DB.sort_values("TOKEN_ID") RARITY_DB["EXPANDING_VALUE"] = RARITY_DB["VALUE"].expanding().sum() RARITY_DB.plot(x="TOKEN_ID", y="EXPANDING_VALUE", grid=True, figsize=(14, 7)) pd.set_option("display.max_rows", 100) RARITY_DB.sort_values("Rank").head(96) ```
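The `VALUE` column above applies a generalized logistic (Richards) curve to the inverse rank, with constants `A`, `K`, `B`, `v`, `Q`, `C` as set in the cell. A standalone version of the same formula makes it easy to probe its behavior at the extremes:

```python
import math

def rarity_value(rank, A=-0.9, K=1, B=5, v=1, Q=1.1, C=1):
    """Generalized logistic curve on 1/rank, as used for the VALUE column."""
    return A + (K - A) / (C + Q * math.exp(-B * (1 / rank))) ** (1 / v)

# The rarest token (rank 1) approaches the upper asymptote K = 1,
# while very common tokens decay toward zero before the hard cutoff.
print(round(rarity_value(1), 3))
print(round(rarity_value(10_000), 3))
```

This matches the plotted curve: value falls off steeply with rank, which is why only the top-ranked tokens are worth searching for mispriced listings.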
``` import tensorflow as tf import os os.environ['CUDA_VISIBLE_DEVICES'] = '' import tensorflow as tf import numpy as np # !wget https://raw.githubusercontent.com/tensorflow/models/master/research/slim/nets/inception_utils.py import tensorflow.compat.v1 as tf import tf_slim as slim import inception_utils def block_inception_a(inputs, scope = None, reuse = None): """Builds Inception-A block for Inception v4 network.""" # By default use stride=1 and SAME padding with slim.arg_scope( [slim.conv2d, slim.avg_pool2d, slim.max_pool2d], stride = 1, padding = 'SAME', ): with tf.variable_scope( scope, 'BlockInceptionA', [inputs], reuse = reuse ): with tf.variable_scope('Branch_0'): branch_0 = slim.conv2d( inputs, 96, [1, 1], scope = 'Conv2d_0a_1x1' ) with tf.variable_scope('Branch_1'): branch_1 = slim.conv2d( inputs, 64, [1, 1], scope = 'Conv2d_0a_1x1' ) branch_1 = slim.conv2d( branch_1, 96, [3, 3], scope = 'Conv2d_0b_3x3' ) with tf.variable_scope('Branch_2'): branch_2 = slim.conv2d( inputs, 64, [1, 1], scope = 'Conv2d_0a_1x1' ) branch_2 = slim.conv2d( branch_2, 96, [3, 3], scope = 'Conv2d_0b_3x3' ) branch_2 = slim.conv2d( branch_2, 96, [3, 3], scope = 'Conv2d_0c_3x3' ) with tf.variable_scope('Branch_3'): branch_3 = slim.avg_pool2d( inputs, [3, 3], scope = 'AvgPool_0a_3x3' ) branch_3 = slim.conv2d( branch_3, 96, [1, 1], scope = 'Conv2d_0b_1x1' ) return tf.concat( axis = 3, values = [branch_0, branch_1, branch_2, branch_3] ) def block_reduction_a(inputs, scope = None, reuse = None): """Builds Reduction-A block for Inception v4 network.""" # By default use stride=1 and SAME padding with slim.arg_scope( [slim.conv2d, slim.avg_pool2d, slim.max_pool2d], stride = 1, padding = 'SAME', ): with tf.variable_scope( scope, 'BlockReductionA', [inputs], reuse = reuse ): with tf.variable_scope('Branch_0'): branch_0 = slim.conv2d( inputs, 384, [3, 3], stride = 2, padding = 'VALID', scope = 'Conv2d_1a_3x3', ) with tf.variable_scope('Branch_1'): branch_1 = slim.conv2d( inputs, 192, [1, 1], 
scope = 'Conv2d_0a_1x1' ) branch_1 = slim.conv2d( branch_1, 224, [3, 3], scope = 'Conv2d_0b_3x3' ) branch_1 = slim.conv2d( branch_1, 256, [3, 3], stride = 2, padding = 'VALID', scope = 'Conv2d_1a_3x3', ) with tf.variable_scope('Branch_2'): branch_2 = slim.max_pool2d( inputs, [3, 3], stride = 2, padding = 'VALID', scope = 'MaxPool_1a_3x3', ) return tf.concat(axis = 3, values = [branch_0, branch_1, branch_2]) def block_inception_b(inputs, scope = None, reuse = None): """Builds Inception-B block for Inception v4 network.""" # By default use stride=1 and SAME padding with slim.arg_scope( [slim.conv2d, slim.avg_pool2d, slim.max_pool2d], stride = 1, padding = 'SAME', ): with tf.variable_scope( scope, 'BlockInceptionB', [inputs], reuse = reuse ): with tf.variable_scope('Branch_0'): branch_0 = slim.conv2d( inputs, 384, [1, 1], scope = 'Conv2d_0a_1x1' ) with tf.variable_scope('Branch_1'): branch_1 = slim.conv2d( inputs, 192, [1, 1], scope = 'Conv2d_0a_1x1' ) branch_1 = slim.conv2d( branch_1, 224, [1, 7], scope = 'Conv2d_0b_1x7' ) branch_1 = slim.conv2d( branch_1, 256, [7, 1], scope = 'Conv2d_0c_7x1' ) with tf.variable_scope('Branch_2'): branch_2 = slim.conv2d( inputs, 192, [1, 1], scope = 'Conv2d_0a_1x1' ) branch_2 = slim.conv2d( branch_2, 192, [7, 1], scope = 'Conv2d_0b_7x1' ) branch_2 = slim.conv2d( branch_2, 224, [1, 7], scope = 'Conv2d_0c_1x7' ) branch_2 = slim.conv2d( branch_2, 224, [7, 1], scope = 'Conv2d_0d_7x1' ) branch_2 = slim.conv2d( branch_2, 256, [1, 7], scope = 'Conv2d_0e_1x7' ) with tf.variable_scope('Branch_3'): branch_3 = slim.avg_pool2d( inputs, [3, 3], scope = 'AvgPool_0a_3x3' ) branch_3 = slim.conv2d( branch_3, 128, [1, 1], scope = 'Conv2d_0b_1x1' ) return tf.concat( axis = 3, values = [branch_0, branch_1, branch_2, branch_3] ) def block_reduction_b(inputs, scope = None, reuse = None): """Builds Reduction-B block for Inception v4 network.""" # By default use stride=1 and SAME padding with slim.arg_scope( [slim.conv2d, slim.avg_pool2d, slim.max_pool2d], 
stride = 1, padding = 'SAME', ): with tf.variable_scope( scope, 'BlockReductionB', [inputs], reuse = reuse ): with tf.variable_scope('Branch_0'): branch_0 = slim.conv2d( inputs, 192, [1, 1], scope = 'Conv2d_0a_1x1' ) branch_0 = slim.conv2d( branch_0, 192, [3, 3], stride = 2, padding = 'VALID', scope = 'Conv2d_1a_3x3', ) with tf.variable_scope('Branch_1'): branch_1 = slim.conv2d( inputs, 256, [1, 1], scope = 'Conv2d_0a_1x1' ) branch_1 = slim.conv2d( branch_1, 256, [1, 7], scope = 'Conv2d_0b_1x7' ) branch_1 = slim.conv2d( branch_1, 320, [7, 1], scope = 'Conv2d_0c_7x1' ) branch_1 = slim.conv2d( branch_1, 320, [3, 3], stride = 2, padding = 'VALID', scope = 'Conv2d_1a_3x3', ) with tf.variable_scope('Branch_2'): branch_2 = slim.max_pool2d( inputs, [3, 3], stride = 2, padding = 'VALID', scope = 'MaxPool_1a_3x3', ) return tf.concat(axis = 3, values = [branch_0, branch_1, branch_2]) def block_inception_c(inputs, scope = None, reuse = None): """Builds Inception-C block for Inception v4 network.""" # By default use stride=1 and SAME padding with slim.arg_scope( [slim.conv2d, slim.avg_pool2d, slim.max_pool2d], stride = 1, padding = 'SAME', ): with tf.variable_scope( scope, 'BlockInceptionC', [inputs], reuse = reuse ): with tf.variable_scope('Branch_0'): branch_0 = slim.conv2d( inputs, 256, [1, 1], scope = 'Conv2d_0a_1x1' ) with tf.variable_scope('Branch_1'): branch_1 = slim.conv2d( inputs, 384, [1, 1], scope = 'Conv2d_0a_1x1' ) branch_1 = tf.concat( axis = 3, values = [ slim.conv2d( branch_1, 256, [1, 3], scope = 'Conv2d_0b_1x3' ), slim.conv2d( branch_1, 256, [3, 1], scope = 'Conv2d_0c_3x1' ), ], ) with tf.variable_scope('Branch_2'): branch_2 = slim.conv2d( inputs, 384, [1, 1], scope = 'Conv2d_0a_1x1' ) branch_2 = slim.conv2d( branch_2, 448, [3, 1], scope = 'Conv2d_0b_3x1' ) branch_2 = slim.conv2d( branch_2, 512, [1, 3], scope = 'Conv2d_0c_1x3' ) branch_2 = tf.concat( axis = 3, values = [ slim.conv2d( branch_2, 256, [1, 3], scope = 'Conv2d_0d_1x3' ), slim.conv2d( branch_2, 
256, [3, 1], scope = 'Conv2d_0e_3x1' ), ], ) with tf.variable_scope('Branch_3'): branch_3 = slim.avg_pool2d( inputs, [3, 3], scope = 'AvgPool_0a_3x3' ) branch_3 = slim.conv2d( branch_3, 256, [1, 1], scope = 'Conv2d_0b_1x1' ) return tf.concat( axis = 3, values = [branch_0, branch_1, branch_2, branch_3] ) def inception_v4_base(inputs, final_endpoint = 'Mixed_7d', scope = None): """Creates the Inception V4 network up to the given final endpoint. Args: inputs: a 4-D tensor of size [batch_size, height, width, 3]. final_endpoint: specifies the endpoint to construct the network up to. It can be one of [ 'Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3', 'Mixed_3a', 'Mixed_4a', 'Mixed_5a', 'Mixed_5b', 'Mixed_5c', 'Mixed_5d', 'Mixed_5e', 'Mixed_6a', 'Mixed_6b', 'Mixed_6c', 'Mixed_6d', 'Mixed_6e', 'Mixed_6f', 'Mixed_6g', 'Mixed_6h', 'Mixed_7a', 'Mixed_7b', 'Mixed_7c', 'Mixed_7d'] scope: Optional variable_scope. Returns: logits: the logits outputs of the model. end_points: the set of end_points from the inception model. 
Raises: ValueError: if final_endpoint is not set to one of the predefined values, """ end_points = {} def add_and_check_final(name, net): end_points[name] = net return name == final_endpoint with tf.variable_scope(scope, 'InceptionV4', [inputs]): with slim.arg_scope( [slim.conv2d, slim.max_pool2d, slim.avg_pool2d], stride = 1, padding = 'SAME', ): # 299 x 299 x 3 net = slim.conv2d( inputs, 32, [3, 3], stride = 2, padding = 'VALID', scope = 'Conv2d_1a_3x3', ) if add_and_check_final('Conv2d_1a_3x3', net): return net, end_points # 149 x 149 x 32 net = slim.conv2d( net, 32, [3, 3], padding = 'VALID', scope = 'Conv2d_2a_3x3' ) if add_and_check_final('Conv2d_2a_3x3', net): return net, end_points # 147 x 147 x 32 net = slim.conv2d(net, 64, [3, 3], scope = 'Conv2d_2b_3x3') if add_and_check_final('Conv2d_2b_3x3', net): return net, end_points # 147 x 147 x 64 with tf.variable_scope('Mixed_3a'): with tf.variable_scope('Branch_0'): branch_0 = slim.max_pool2d( net, [3, 3], stride = 2, padding = 'VALID', scope = 'MaxPool_0a_3x3', ) with tf.variable_scope('Branch_1'): branch_1 = slim.conv2d( net, 96, [3, 3], stride = 2, padding = 'VALID', scope = 'Conv2d_0a_3x3', ) net = tf.concat(axis = 3, values = [branch_0, branch_1]) if add_and_check_final('Mixed_3a', net): return net, end_points # 73 x 73 x 160 with tf.variable_scope('Mixed_4a'): with tf.variable_scope('Branch_0'): branch_0 = slim.conv2d( net, 64, [1, 1], scope = 'Conv2d_0a_1x1' ) branch_0 = slim.conv2d( branch_0, 96, [3, 3], padding = 'VALID', scope = 'Conv2d_1a_3x3', ) with tf.variable_scope('Branch_1'): branch_1 = slim.conv2d( net, 64, [1, 1], scope = 'Conv2d_0a_1x1' ) branch_1 = slim.conv2d( branch_1, 64, [1, 7], scope = 'Conv2d_0b_1x7' ) branch_1 = slim.conv2d( branch_1, 64, [7, 1], scope = 'Conv2d_0c_7x1' ) branch_1 = slim.conv2d( branch_1, 96, [3, 3], padding = 'VALID', scope = 'Conv2d_1a_3x3', ) net = tf.concat(axis = 3, values = [branch_0, branch_1]) if add_and_check_final('Mixed_4a', net): return net, end_points # 
71 x 71 x 192 with tf.variable_scope('Mixed_5a'): with tf.variable_scope('Branch_0'): branch_0 = slim.conv2d( net, 192, [3, 3], stride = 2, padding = 'VALID', scope = 'Conv2d_1a_3x3', ) with tf.variable_scope('Branch_1'): branch_1 = slim.max_pool2d( net, [3, 3], stride = 2, padding = 'VALID', scope = 'MaxPool_1a_3x3', ) net = tf.concat(axis = 3, values = [branch_0, branch_1]) if add_and_check_final('Mixed_5a', net): return net, end_points # 35 x 35 x 384 # 4 x Inception-A blocks for idx in range(4): block_scope = 'Mixed_5' + chr(ord('b') + idx) net = block_inception_a(net, block_scope) if add_and_check_final(block_scope, net): return net, end_points # 35 x 35 x 384 # Reduction-A block net = block_reduction_a(net, 'Mixed_6a') if add_and_check_final('Mixed_6a', net): return net, end_points # 17 x 17 x 1024 # 7 x Inception-B blocks for idx in range(7): block_scope = 'Mixed_6' + chr(ord('b') + idx) net = block_inception_b(net, block_scope) if add_and_check_final(block_scope, net): return net, end_points # 17 x 17 x 1024 # Reduction-B block net = block_reduction_b(net, 'Mixed_7a') if add_and_check_final('Mixed_7a', net): return net, end_points # 8 x 8 x 1536 # 3 x Inception-C blocks for idx in range(3): block_scope = 'Mixed_7' + chr(ord('b') + idx) net = block_inception_c(net, block_scope) if add_and_check_final(block_scope, net): return net, end_points raise ValueError('Unknown final endpoint %s' % final_endpoint) def model( inputs, is_training = True, dropout_keep_prob = 0.8, reuse = None, scope = 'InceptionV4', bottleneck_dim = 512, ): # inputs = tf.image.grayscale_to_rgb(inputs) with tf.variable_scope( scope, 'InceptionV4', [inputs], reuse = reuse ) as scope: with slim.arg_scope( [slim.batch_norm, slim.dropout], is_training = is_training ): net, end_points = inception_v4_base(inputs, scope = scope) print(net.shape) with slim.arg_scope( [slim.conv2d, slim.max_pool2d, slim.avg_pool2d], stride = 1, padding = 'SAME', ): with tf.variable_scope('Logits'): # 8 x 8 x 1536 
kernel_size = net.get_shape()[1:3] print(kernel_size) if kernel_size.is_fully_defined(): net = slim.avg_pool2d( net, kernel_size, padding = 'VALID', scope = 'AvgPool_1a', ) else: net = tf.reduce_mean( input_tensor = net, axis = [1, 2], keepdims = True, name = 'global_pool', ) end_points['global_pool'] = net # 1 x 1 x 1536 net = slim.dropout( net, dropout_keep_prob, scope = 'Dropout_1b' ) net = slim.flatten(net, scope = 'PreLogitsFlatten') end_points['PreLogitsFlatten'] = net bottleneck = slim.fully_connected( net, bottleneck_dim, scope = 'bottleneck' ) logits = slim.fully_connected( bottleneck, 5994, activation_fn = None, scope = 'Logits', ) return logits, bottleneck class Model: def __init__(self): self.X = tf.placeholder(tf.float32, [None, 257, None, 1]) with slim.arg_scope(inception_utils.inception_arg_scope()): self.l, self.bottleneck = model(self.X, is_training = True) print(self.l, self.bottleneck) self.bottleneck = tf.keras.layers.Lambda(lambda x: tf.keras.backend.l2_normalize(x, 1))(self.bottleneck) self.logits = tf.identity(self.bottleneck, name = 'logits') tf.reset_default_graph() sess = tf.InteractiveSession() model_ = Model() sess.run(tf.global_variables_initializer()) var_lists = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) saver = tf.train.Saver(var_list = var_lists) saver.restore(sess, 'output-inception-v4/model.ckpt-401000') import librosa import numpy as np def load_wav(vid_path, sr = 16000, mode = 'eval'): wav, sr_ret = librosa.load(vid_path, sr = sr) assert sr_ret == sr if mode == 'train': extended_wav = np.append(wav, wav) if np.random.random() < 0.3: extended_wav = extended_wav[::-1] return extended_wav else: extended_wav = np.append(wav, wav[::-1]) return extended_wav def lin_spectogram_from_wav(wav, hop_length, win_length, n_fft = 1024): linear = librosa.stft( wav, n_fft = n_fft, win_length = win_length, hop_length = hop_length ) return linear.T def load_data( wav, win_length = 400, sr = 16000, hop_length = 160, n_fft = 512, spec_len = 
120, mode = 'eval', ): # wav = load_wav(wav, sr=sr, mode=mode) linear_spect = lin_spectogram_from_wav(wav, hop_length, win_length, n_fft) mag, _ = librosa.magphase(linear_spect) # magnitude mag_T = mag.T freq, time = mag_T.shape if mode == 'train': if time < spec_len: spec_mag = np.pad(mag_T, ((0, 0), (0, spec_len - time)), 'constant') else: spec_mag = mag_T else: spec_mag = mag_T mu = np.mean(spec_mag, 0, keepdims = True) std = np.std(spec_mag, 0, keepdims = True) return (spec_mag - mu) / (std + 1e-5) files = [ 'husein-zolkepli.wav', 'khalil-nooh.wav', 'mas-aisyah.wav', 'shafiqah-idayu.wav' ] wavs = [load_data(load_wav(f)) for f in files] def pred(x): return sess.run(model_.bottleneck, feed_dict = {model_.X: np.expand_dims([x], -1)}) #tf.math.l2_normalize(model_.bottleneck, axis = 1) r = [pred(wav) for wav in wavs] r = np.concatenate(r) r.shape from scipy.spatial.distance import cdist cdist(r, r, metric='cosine') # !tar -czvf inception-v4-30-09-2020.tar.gz inception-v4 import json with open('../vggvox-speaker-identification/indices.json') as fopen: data = json.load(fopen) files = data['files'] speakers = data['speakers'] unique_speakers = sorted(list(speakers.keys())) import random random.shuffle(files) def get_id(file): return file.split('/')[-1].split('-')[1] import itertools import random cycle_files = itertools.cycle(files) def random_sample(sample, sr, length = 500): sr = int(sr / 1000) r = np.random.randint(0, len(sample) - (sr * length)) return sample[r : r + sr * length] def generate(sample_rate = 16000, max_length = 5): while True: file = next(cycle_files) try: y = unique_speakers.index(get_id(file)) w = load_wav(file) if len(w) / sample_rate > max_length: w = random_sample( w, sample_rate, random.randint(500, max_length * 1000) ) # if random.randint(0, 1): # w = add_noise( # w, random.choice(noises), random.uniform(0.1, 0.5) # ) w = load_data(w) yield {'inputs': np.expand_dims(w, -1), 'targets': [y]} except Exception as e: print(e) pass def 
get_dataset(batch_size = 32, shuffle_size = 5): def get(): dataset = tf.data.Dataset.from_generator( generate, {'inputs': tf.float32, 'targets': tf.int32}, output_shapes = { 'inputs': tf.TensorShape([257, None, 1]), 'targets': tf.TensorShape([1]), }, ) dataset = dataset.padded_batch( batch_size, padded_shapes = { 'inputs': tf.TensorShape([257, None, 1]), 'targets': tf.TensorShape([None]), }, padding_values = { 'inputs': tf.constant(0, dtype = tf.float32), 'targets': tf.constant(0, dtype = tf.int32), }, ) dataset = dataset.shuffle(shuffle_size) return dataset return get dataset = get_dataset()() iterator = dataset.make_one_shot_iterator().get_next() def pred(x): return sess.run(model_.l, feed_dict = {model_.X: x}) data = sess.run(iterator) np.argmax(pred(data['inputs']), axis = 1), data['targets'][:,0] saver = tf.train.Saver() saver.save(sess, 'inception-v4/model.ckpt') strings = ','.join( [ n.name for n in tf.get_default_graph().as_graph_def().node if ('Variable' in n.op or 'Placeholder' in n.name or 'logits' in n.name or 'alphas' in n.name or 'self/Softmax' in n.name) and 'adam' not in n.name and 'beta' not in n.name and 'global_step' not in n.name and 'Assign' not in n.name ] ) def freeze_graph(model_dir, output_node_names): if not tf.gfile.Exists(model_dir): raise AssertionError( "Export directory doesn't exists. 
Please specify an export " 'directory: %s' % model_dir
        )
    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path
    absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_dir + '/frozen_model.pb'
    clear_devices = True
    with tf.Session(graph = tf.Graph()) as sess:
        saver = tf.train.import_meta_graph(
            input_checkpoint + '.meta', clear_devices = clear_devices
        )
        saver.restore(sess, input_checkpoint)
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,
            tf.get_default_graph().as_graph_def(),
            output_node_names.split(','),
        )
        with tf.gfile.GFile(output_graph, 'wb') as f:
            f.write(output_graph_def.SerializeToString())
        print('%d ops in the final graph.' % len(output_graph_def.node))

freeze_graph('inception-v4', strings)

def load_graph(frozen_graph_filename, **kwargs):
    with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    # https://github.com/onnx/tensorflow-onnx/issues/77#issuecomment-445066091
    # to fix import T5
    for node in graph_def.node:
        if node.op == 'RefSwitch':
            node.op = 'Switch'
            for index in range(len(node.input)):  # xrange is Python 2; use range
                if 'moving_' in node.input[index]:
                    node.input[index] = node.input[index] + '/read'
        elif node.op == 'AssignSub':
            node.op = 'Sub'
            if 'use_locking' in node.attr:
                del node.attr['use_locking']
        elif node.op == 'AssignAdd':
            node.op = 'Add'
            if 'use_locking' in node.attr:
                del node.attr['use_locking']
        elif node.op == 'Assign':
            node.op = 'Identity'
            if 'use_locking' in node.attr:
                del node.attr['use_locking']
            if 'validate_shape' in node.attr:
                del node.attr['validate_shape']
            if len(node.input) == 2:
                node.input[0] = node.input[1]
                del node.input[1]
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)
    return graph

g = load_graph('inception-v4/frozen_model.pb')
x = g.get_tensor_by_name('import/Placeholder:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph = 
g) data['inputs'].shape test_sess.run(logits, feed_dict = {x: data['inputs'][:2]}).shape ```
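The `add_and_check_final` helper in `inception_v4_base` above is a useful pattern: record every intermediate output in an `end_points` dict and let the builder stop early at a named endpoint. Stripped of TensorFlow, the same control flow looks like this (stage names and operations here are made up for illustration):

```python
def build_pipeline(x, final_endpoint="stage_c"):
    """Apply stages in order, record each output, stop at final_endpoint."""
    end_points = {}

    def add_and_check_final(name, value):
        end_points[name] = value
        return name == final_endpoint

    stages = [("stage_a", lambda v: v + 1),
              ("stage_b", lambda v: v * 2),
              ("stage_c", lambda v: v - 3)]
    net = x
    for name, fn in stages:
        net = fn(net)
        if add_and_check_final(name, net):
            return net, end_points
    raise ValueError("Unknown final endpoint %s" % final_endpoint)

out, points = build_pipeline(10, final_endpoint="stage_b")
print(out, points)  # stage_b output is (10 + 1) * 2 = 22
```

This is why the Inception builder can serve both as a full classifier and as a feature extractor: asking for an earlier endpoint returns that intermediate tensor plus everything computed before it.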
![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)

Find this notebook at https://colab.research.google.com/github/ricardokleinklein/NLP_GenMods/blob/main/Tacotron2.ipynb

# Generative Models

## Tacotron2 - Audio

Created by *Ricardo Kleinlein* for [Saturdays.AI](https://saturdays.ai/). Available under a [Creative Commons](https://creativecommons.org/licenses/by/4.0/) license.

---

## On using Jupyter Notebooks

This notebook is implemented in Python, but running it does not require in-depth knowledge of the language. You only need to execute each of the cells, keeping in mind that cells must be run one at a time, sequentially, in their order of appearance.

To run a cell, press the ▶ button in its upper-left corner. While that code fragment is executing, the button will keep spinning. If you want to stop the execution, press the button again while it spins and execution will halt.

If a cell produces any output (text, plots, etc.), it will be shown right after it, before the next cell.

The notebook is guided with all the necessary explanations, and the code is annotated with comments to make it easier to read. If you have any questions, write them down; we will set aside time to raise and resolve most of the questions that come up.

## Goal of the notebook

Implement, download, and use a state-of-the-art Text-To-Speech Synthesis (TTS) model, Tacotron2.

## About the model

When generating a synthetic voice, a number of factors grouped under the term "prosody" are especially tricky for automatic systems to model: rhythm, emphasis, and intonation. Among other physical attributes, these factors are often what make one voice more recognizable than another.
In the slides we saw a little about the Wavenet model for natural speech generation. Wavenet is an autoregressive model, that is, it uses earlier predictions to produce subsequent points of the sample. The original Tacotron model [[paper](https://arxiv.org/abs/1703.10135)] used Wavenet as the fundamental component for constructing speech. However, such a model is very slow at generation time, since it has to look far back in time to generate each point of the sample. Tacotron2 [[paper](https://arxiv.org/abs/1712.05884)] builds on this idea and proposes a compromise, sacrificing part of the "personality" of the voice for generation efficiency. Whereas Wavenet belongs to the family of autoregressive models, the vocoder that accompanies Tacotron2, WaveGlow, belongs to the family of flow-based models.

The image below shows a diagram of the parts that make up this natural speech synthesis system.

![tacotron2-diagram](./assets/tacotron2_diagram.png)

Tacotron2 is a model that has learned to generate spectrograms from text; the complementary WaveGlow model turns those spectrograms into audible waveforms. By combining Tacotron2 with WaveGlow, any new text we write as input can be interpreted as natural speech and generated in audio format. Aspects of the resulting voice could be modified by injecting additional information at different levels of the model, but in this exercise we will focus on loading the model and generating our own audio.

## Installing the required libraries

```
%%bash
pip install numpy scipy librosa unidecode inflect
apt-get update
apt-get install -y libsndfile1
```

## Importing the pre-trained models

These models take up a lot of memory, but their training times are even worse, and they require advanced infrastructure to be trained within reasonable time frames.
They certainly far exceed the capabilities of most of our computers, or of the default server Colab provides. Fortunately, NVIDIA hosts a server from which a fully pre-trained model can be downloaded.

### Tacotron2

This version of Tacotron2 is almost identical in architecture to the original as published in the paper, with minimal modifications in some layers. It was trained on the [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) database, one of the main references when training speech synthesis models. Probably the other major database for this purpose is [VCTK](https://datashare.ed.ac.uk/handle/10283/2950), developed by Junichi Yamagishi in Edinburgh, with whom I worked in Tokyo. LJSpeech consists of ...

```
from typing import Tuple
from IPython.display import Audio
import torch

TacotronModel = Tuple[torch.nn.Module, torch.nn.Module]

tacotron2 = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub',
                           'nvidia_tacotron2', model_math='fp16')
tacotron2 = tacotron2.to('cuda')
tacotron2.eval()
```

We can go over the printed lines to check, together with the diagram shown at the beginning, that the architecture is correct.

### WaveGlow

In our example, WaveGlow plays the role of a *vocoder*, a tool that converts a numerical encoding of speech into audible sound.

```
waveglow = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub',
                          'nvidia_waveglow', model_math='fp16')
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow = waveglow.to('cuda')
waveglow.eval()
```

At this point we are ready to synthesize audio. For convenience, we group together a series of operations dedicated to preprocessing the input we will feed the model:

```
utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_tts_utils')

def synthesize(text: str, model: TacotronModel):
    """Adjust input text length by padding, and feed to model.
    :param text: Uttered speech.
    :param model: Tuple with instances of (Tacotron2, WaveGlow).
    :return: numpy.ndarray with the synthesized utterance.
    """
    sequences, lengths = utils.prepare_input_sequence([text])
    with torch.no_grad():
        mel, _, _ = model[0].infer(sequences, lengths)
        audio = model[1].infer(mel)
    return audio[0].data.cpu().numpy()
```

## Playground

Now all that remains is to write a text string (in English, for better results) and listen to the result.

```
text = "Isn't Machine Learning something absolutely fabulous?"
signal = synthesize(text, (tacotron2, waveglow))
Audio(signal, rate=22050)
```
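The `prepare_input_sequence` call above hides the text preprocessing inside NVIDIA's utils module; conceptually, it maps each character to an integer id and right-pads the batch to a common length. A minimal, framework-free sketch of that idea (the vocabulary and the `prepare_batch` name are illustrative assumptions, not NVIDIA's actual implementation):

```python
import numpy as np

# Hypothetical character vocabulary; the real utils module uses its own symbol set.
VOCAB = {c: i + 1 for i, c in enumerate("abcdefghijklmnopqrstuvwxyz '?!,.")}

def prepare_batch(texts):
    """Encode each string as integer ids and right-pad with 0 to equal length."""
    encoded = [[VOCAB.get(c, 0) for c in t.lower()] for t in texts]
    lengths = np.array([len(e) for e in encoded])
    batch = np.zeros((len(encoded), lengths.max()), dtype=np.int64)
    for i, e in enumerate(encoded):
        batch[i, :len(e)] = e
    return batch, lengths

batch, lengths = prepare_batch(["Hello there", "Hi"])
print(batch.shape)   # (2, 11)
print(lengths)       # [11  2]
```

In the real pipeline these ids index an embedding table inside Tacotron2's encoder, and the `lengths` vector lets the model ignore the padded positions.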
github_jupyter
```
#### Parts of this notebook rely on parallel algorithms based on MPI ####
import numpy as np
import libpysal as ps
from stwr.gwr import GWR, MGWR, STWR
from stwr.sel_bw import *
from stwr.utils import shift_colormap, truncate_colormap
import geopandas as gp
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import pyplot
import pandas as pd
import math
from matplotlib.gridspec import GridSpec
import time
import csv
import copy
import rasterio
import rasterio.plot
import rasterio.features
import rasterio.warp
import pyproj

# Containers required for reading in the data
cal_coords_list = []
cal_y_list = []
cal_X_list = []
delt_stwr_intervel = [0.0]

csvFile = open("../Data_STWR/RealWorldData/precip_isotope_D3.csv", "r")
df = pd.read_csv(csvFile, header=0,
                 names=['Longitude', 'Latitude', 'Elevation', 'ppt', 'tmean', 'd2h', 'timestamp'],
                 dtype={"Longitude": "float64", "Latitude": "float64",
                        "Elevation": "float64", "ppt": "float64", "tmean": "float64",
                        "d2h": "float64", "timestamp": "float64"},
                 skip_blank_lines=True, keep_default_na=False)
df.info()
df = df.sort_values(by=['timestamp'])
all_data = df.values
tick_time = all_data[0, -1]
cal_coord_tick = []
cal_X_tick = []
cal_y_tick = []
time_tol = 1.0e-7
lensdata = len(all_data)
for row in range(lensdata):
    cur_time = all_data[row, -1]
    if (abs(cur_time - tick_time) > time_tol):
        cal_coords_list.append(np.asarray(cal_coord_tick))
        cal_X_list.append(np.asarray(cal_X_tick))
        cal_y_list.append(np.asarray(cal_y_tick))
        delt_t = cur_time - tick_time
        delt_stwr_intervel.append(delt_t)
        tick_time = cur_time
        cal_coord_tick = []
        cal_X_tick = []
        cal_y_tick = []
    coords_tick = np.array([all_data[row, 0], all_data[row, 1]])
    cal_coord_tick.append(coords_tick)
    x_tick = np.array([all_data[row, 2], all_data[row, 3], all_data[row, 4]])
    cal_X_tick.append(x_tick)
    y_tick = np.array([all_data[row, 5]])
    cal_y_tick.append(y_tick)

# Append the final tick once more
# GWR is fitted on the last time period only
cal_cord_gwr = np.asarray(cal_coord_tick)
cal_X_gwr = np.asarray(cal_X_tick)
cal_y_gwr = np.asarray(cal_y_tick)
cal_coords_list.append(np.asarray(cal_coord_tick))
cal_X_list.append(np.asarray(cal_X_tick))
cal_y_list.append(np.asarray(cal_y_tick))

# STWR
stwr_selector_ = Sel_Spt_BW(cal_coords_list, cal_y_list, cal_X_list,
                            delt_stwr_intervel, spherical=True)
# (1) Parallel search (requires MPI):
# optalpha, optsita, opt_btticks, opt_gwr_bw0 = stwr_selector_.search(nproc=12)
# stwr_model = STWR(cal_coords_list, cal_y_list, cal_X_list, delt_stwr_intervel,
#                   optsita, opt_gwr_bw0, tick_nums=opt_btticks, alpha=optalpha,
#                   spherical=True, recorded=1)
# (2) Ordinary (serial) search:
optalpha, optsita, opt_btticks, opt_gwr_bw0 = stwr_selector_.search()
stwr_model = STWR(cal_coords_list, cal_y_list, cal_X_list, delt_stwr_intervel,
                  optsita, opt_gwr_bw0, tick_nums=opt_btticks + 1, alpha=optalpha,
                  spherical=True, recorded=1)
stwr_results = stwr_model.fit()
print(stwr_results.summary())
stwr_scale = stwr_results.scale
stwr_residuals = stwr_results.resid_response

# GWR uses only the last time period of data
gwr_selector = Sel_BW(cal_cord_gwr, cal_y_gwr, cal_X_gwr, spherical=True)
gwr_bw = gwr_selector.search(bw_min=2)
gwr_model = GWR(cal_cord_gwr, cal_y_gwr, cal_X_gwr, gwr_bw, spherical=True)
gwr_results = gwr_model.fit()
print(gwr_results.summary())
gw_rscale = gwr_results.scale
gwr_residuals = gwr_results.resid_response

# Prediction surface
Pred_Coords_list = []
X_pre_list = []
theight1 = rasterio.open('../Data_STWR/RealWorldData/extgmted1.tif')
bheight1 = theight1.read(1)
ppt1 = rasterio.open('../Data_STWR/RealWorldData/extppt1.tif')
bppt1 = ppt1.read(1)
mean1 = rasterio.open('../Data_STWR/RealWorldData/extmean1.tif')
bmean1 = mean1.read(1)
pf = ppt1.profile
transform = ppt1.profile['transform']
nodata = pf['nodata']
Z = bppt1.copy()
Z2 = bppt1.copy()
mask_height = ppt1.dataset_mask()
for row in range(mask_height.shape[0]):
    for col in range(mask_height.shape[1]):
        if (mask_height[row, col] > 0):
            X_tick = np.array([bheight1[row, col], bppt1[row, col], bmean1[row, col]])
            X_pre_list.append(X_tick)
            Pred_Coords_list.append(ppt1.xy(row, col))

X_pre_arr = np.asarray(X_pre_list)
alllen_stwr = len(Pred_Coords_list)
allklen_stwr = X_pre_arr.shape[1] + 1
rec_parmas_stwr = np.ones((alllen_stwr, allklen_stwr))
calen_stwr = len(cal_y_list[-1])
prelen_stwr = X_pre_arr.shape[0]
Pre_y_list = np.ones_like(X_pre_arr[:, 1])
# GWR
Pre_gwr_y_list = Pre_y_list.copy()
stwr_pre_parmas = np.ones((prelen_stwr, allklen_stwr))
if (calen_stwr >= prelen_stwr):
    predPointList = Pred_Coords_list
    PreX_list = X_pre_arr
    # STWR
    pred_stwr_dir_result = stwr_model.predict(predPointList, PreX_list, stwr_scale, stwr_residuals)
    pre_y_stwr = pred_stwr_dir_result.predictions
    # GWR
    pred_gwr_dir_result = gwr_model.predict(predPointList, PreX_list, gw_rscale, gwr_residuals)
    pre_y_gwr = pred_gwr_dir_result.predictions
else:
    spl_parts_stwr = math.ceil(prelen_stwr * 1.0 / calen_stwr)
    spl_X_stwr = np.array_split(X_pre_arr, spl_parts_stwr, axis=0)
    spl_coords_stwr = np.array_split(Pred_Coords_list, spl_parts_stwr, axis=0)
    pred_stwr_result = np.array_split(Pre_y_list, spl_parts_stwr, axis=0)
    # pred_stwrparmas_result = np.array_split(stwr_pre_parmas, spl_parts_stwr, axis=0)
    # GWR
    pred_gwr_result = np.array_split(Pre_gwr_y_list, spl_parts_stwr, axis=0)
    for j in range(spl_parts_stwr):
        predPointList_tick = [spl_coords_stwr[j]]
        PreX_list_tick = [spl_X_stwr[j]]
        pred_stwr_spl_result = stwr_model.predict(predPointList_tick, PreX_list_tick,
                                                  stwr_scale, stwr_residuals)
        pred_stwr_result[j] = pred_stwr_spl_result.predictions
        # pred_stwrparmas_result[j] = np.reshape(pred_stwr_spl_result.params.flatten(), (-1, allklen_stwr))
        # GWR
        pred_gwr_spl_result = gwr_model.predict(spl_coords_stwr[j], spl_X_stwr[j],
                                                gw_rscale, gwr_residuals)
        pred_gwr_result[j] = pred_gwr_spl_result.predictions
    pre_y_stwr = pred_stwr_result[0]
    # pre_parmas_stwr = pred_stwrparmas_result[0]
    combnum = spl_parts_stwr - 1
    # GWR
    pre_y_gwr = pred_gwr_result[0]
    for s in range(combnum):
        pre_y_stwr = np.vstack((pre_y_stwr, pred_stwr_result[s + 1]))
        # pre_parmas_stwr = np.vstack((pre_parmas_stwr, pred_stwrparmas_result[s + 1]))
        # GWR
        pre_y_gwr = np.vstack((pre_y_gwr, pred_gwr_result[s + 1]))

idx = 0
mask_ppt = ppt1.dataset_mask()
for row in range(mask_ppt.shape[0]):
    for col in range(mask_ppt.shape[1]):
        if (mask_height[row, col] > 0):
            Z[row, col] = pre_y_stwr[idx]
            # GWR
            Z2[row, col] = pre_y_gwr[idx]
            idx = idx + 1

with rasterio.open('../Data_STWR/RealWorldData/output/Rst3_stwr_nd_newt.tif', 'w',
                   driver='GTiff', height=Z.shape[0], width=Z.shape[1], count=1,
                   dtype=Z.dtype, crs='+proj=latlong', transform=transform,
                   nodata=nodata) as dststwr:
    dststwr.write(Z, 1)

with rasterio.open('../Data_STWR/RealWorldData/output/Rst3_gwr_nd_newt.tif', 'w',
                   driver='GTiff', height=Z2.shape[0], width=Z2.shape[1], count=1,
                   dtype=Z2.dtype, crs='+proj=latlong', transform=transform,
                   nodata=nodata) as dstgwr:
    dstgwr.write(Z2, 1)

pyplot.title("Predicted δ2H Surface of STWR")
pyplot.imshow(Z, cmap='binary', vmin=-238.478, vmax=18.4553)
pyplot.show()

pyplot.title("Predicted δ2H Surface of GWR")
pyplot.imshow(Z2, cmap='binary', vmin=-238.478, vmax=18.4553)
pyplot.show()
```
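The row-by-row grouping loop at the top of this notebook splits the timestamp-sorted table into one block of coordinates, features, and targets per time tick. The same splitting can be sketched with `np.unique` on synthetic data (the array below is only a stand-in for `all_data`, with the timestamp in the last column as in the CSV):

```python
import numpy as np

# Synthetic stand-in for the sorted all_data array: 5 rows, last column = timestamp.
all_data = np.array([
    [100.0, 30.0, 1.0],
    [101.0, 31.0, 1.0],
    [102.0, 32.0, 2.0],
    [103.0, 33.0, 3.0],
    [104.0, 34.0, 3.0],
])

# First row index of each distinct timestamp gives the split points.
timestamps, first_idx = np.unique(all_data[:, -1], return_index=True)
blocks = np.split(all_data, first_idx[1:])   # one block per time tick
intervals = np.diff(timestamps)              # analogue of delt_stwr_intervel

print(len(blocks))   # 3
print(intervals)     # [1. 1.]
```

This relies on the rows being sorted by timestamp, which the `df.sort_values(by=['timestamp'])` call guarantees.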
github_jupyter
```
import tensorflow as tf
print(tf.__version__)

!ls ../chapter_07/train_base_model/tf_datasets/
!ls -lrt /content/tfrecord-dataset/flowers

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import os
import matplotlib.pyplot as plt
from PIL import Image, ImageOps
import IPython.display as display
from tensorflow import keras

AUTOTUNE = tf.data.experimental.AUTOTUNE
print(tf.__version__)
print(hub.__version__)

root_dir = '/content/tfrecord-dataset/flowers'
train_file_pattern = "{}/image_classification_builder-train*.tfrecord*".format(root_dir)
val_file_pattern = "{}/image_classification_builder-validation*.tfrecord*".format(root_dir)
test_file_pattern = "{}/image_classification_builder-test*.tfrecord*".format(root_dir)

train_all_files = tf.data.Dataset.list_files(tf.io.gfile.glob(train_file_pattern))
val_all_files = tf.data.Dataset.list_files(tf.io.gfile.glob(val_file_pattern))
test_all_files = tf.data.Dataset.list_files(tf.io.gfile.glob(test_file_pattern))

train_all_ds = tf.data.TFRecordDataset(train_all_files, num_parallel_reads=AUTOTUNE)
val_all_ds = tf.data.TFRecordDataset(val_all_files, num_parallel_reads=AUTOTUNE)
test_all_ds = tf.data.TFRecordDataset(test_all_files, num_parallel_reads=AUTOTUNE)

print("Sample size for training: {0}".format(sum(1 for _ in tf.data.TFRecordDataset(train_all_files))), '\n',
      "Sample size for validation: {0}".format(sum(1 for _ in tf.data.TFRecordDataset(val_all_files))), '\n',
      "Sample size for test: {0}".format(sum(1 for _ in tf.data.TFRecordDataset(test_all_files))))

def decode_and_resize(serialized_example):
    # Resized image should be [224, 224, 3] with values in range [0, 255];
    # label is the integer index of the class.
    parsed_features = tf.io.parse_single_example(
        serialized_example,
        features={
            'image/channels': tf.io.FixedLenFeature([], tf.int64),
            'image/class/label': tf.io.FixedLenFeature([], tf.int64),
            'image/class/text': tf.io.FixedLenFeature([], tf.string),
            'image/colorspace': tf.io.FixedLenFeature([], tf.string),
            'image/encoded': tf.io.FixedLenFeature([], tf.string),
            'image/filename': tf.io.FixedLenFeature([], tf.string),
            'image/format': tf.io.FixedLenFeature([], tf.string),
            'image/height': tf.io.FixedLenFeature([], tf.int64),
            'image/width': tf.io.FixedLenFeature([], tf.int64)
        })
    image = tf.io.decode_jpeg(parsed_features['image/encoded'], channels=3)
    label = tf.cast(parsed_features['image/class/label'], tf.int32)
    label_txt = tf.cast(parsed_features['image/class/text'], tf.string)
    label_one_hot = tf.one_hot(label, depth=5)
    resized_image = tf.image.resize(image, [224, 224], method='nearest')
    return resized_image, label_one_hot

def normalize(image, label):
    # Convert `image` from [0, 255] -> [0, 1.0] floats
    image = tf.cast(image, tf.float32) / 255.
    return image, label

resized_train_ds = train_all_ds.map(decode_and_resize, num_parallel_calls=AUTOTUNE)
resized_val_ds = val_all_ds.map(decode_and_resize, num_parallel_calls=AUTOTUNE)
resized_test_ds = test_all_ds.map(decode_and_resize, num_parallel_calls=AUTOTUNE)

resized_normalized_train_ds = resized_train_ds.map(normalize, num_parallel_calls=AUTOTUNE)
resized_normalized_val_ds = resized_val_ds.map(normalize, num_parallel_calls=AUTOTUNE)
resized_normalized_test_ds = resized_test_ds.map(normalize, num_parallel_calls=AUTOTUNE)

pixels = 224
IMAGE_SIZE = (pixels, pixels)
TRAIN_BATCH_SIZE = 32
# Validation and test data are small. Use all in a batch.
VAL_BATCH_SIZE = sum(1 for _ in tf.data.TFRecordDataset(val_all_files))
TEST_BATCH_SIZE = sum(1 for _ in tf.data.TFRecordDataset(test_all_files))

def prepare_for_model(ds, BATCH_SIZE, cache=True, TRAINING_DATA=True, shuffle_buffer_size=1000):
    # This is a small dataset; only load it once, and keep it in memory.
    # Use `.cache(filename)` to cache preprocessing work for datasets that
    # don't fit in memory.
    if cache:
        if isinstance(cache, str):
            ds = ds.cache(cache)
        else:
            ds = ds.cache()
    ds = ds.shuffle(buffer_size=shuffle_buffer_size)
    if TRAINING_DATA:
        # Repeat forever
        ds = ds.repeat()
    ds = ds.batch(BATCH_SIZE)
    # `prefetch` lets the dataset fetch batches in the background while the
    # model is training.
    ds = ds.prefetch(buffer_size=AUTOTUNE)
    return ds

NUM_EPOCHS = 5
SHUFFLE_BUFFER_SIZE = 1000
prepped_test_ds = prepare_for_model(resized_normalized_test_ds, TEST_BATCH_SIZE, False, False)

prepped_train_ds = resized_normalized_train_ds.repeat(100).shuffle(buffer_size=SHUFFLE_BUFFER_SIZE)
prepped_train_ds = prepped_train_ds.batch(TRAIN_BATCH_SIZE)
prepped_train_ds = prepped_train_ds.prefetch(buffer_size=AUTOTUNE)

prepped_val_ds = resized_normalized_val_ds.repeat(NUM_EPOCHS).shuffle(buffer_size=SHUFFLE_BUFFER_SIZE)
prepped_val_ds = prepped_val_ds.batch(80)
prepped_val_ds = prepped_val_ds.prefetch(buffer_size=AUTOTUNE)

FINE_TUNING_CHOICE = False
NUM_CLASSES = 5
IMAGE_SIZE = (224, 224)

mdl = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,), name='input_layer'),
    hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v1_101/feature_vector/4",
                   trainable=FINE_TUNING_CHOICE, name='resnet_fv'),
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax', name='custom_class')
])
mdl.build([None, 224, 224, 3])

mdl.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.9),
    # The final layer applies softmax, so the model outputs probabilities,
    # not logits.
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False, label_smoothing=0.1),
    metrics=['accuracy'])

mdl.fit(
    prepped_train_ds,
    epochs=5,
    steps_per_epoch=100,
    validation_data=prepped_val_ds,
    validation_steps=1)
```
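The mapped functions above boil down to simple per-example transformations: one-hot encoding the integer label to depth 5 and rescaling pixels from [0, 255] to [0, 1]; the compiled loss then applies label smoothing on top. A framework-free numpy sketch of those three transformations (illustrative only; the notebook itself relies on `tf.one_hot`, `tf.cast`, and Keras' built-in `label_smoothing`):

```python
import numpy as np

NUM_CLASSES = 5

def one_hot(label, depth=NUM_CLASSES):
    """Equivalent of tf.one_hot for a scalar integer label."""
    vec = np.zeros(depth, dtype=np.float32)
    vec[label] = 1.0
    return vec

def normalize_pixels(image):
    """Convert uint8 pixels in [0, 255] to float32 in [0, 1]."""
    return image.astype(np.float32) / 255.0

def smooth(one_hot_vec, eps=0.1):
    """Label smoothing as applied by CategoricalCrossentropy(label_smoothing=eps)."""
    k = one_hot_vec.shape[0]
    return one_hot_vec * (1.0 - eps) + eps / k

img = np.full((224, 224, 3), 255, dtype=np.uint8)
print(one_hot(2))            # [0. 0. 1. 0. 0.]
print(normalize_pixels(img).max())  # 1.0
print(smooth(one_hot(2)))    # [0.02 0.02 0.92 0.02 0.02]
```

Smoothed targets keep the model from driving the softmax outputs to extreme 0/1 values, which tends to improve calibration on small datasets like this one.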
github_jupyter
```
from __future__ import division
from __future__ import print_function
import numpy as np
from pyspark import SparkConf
from pyspark import SparkContext

conf = SparkConf()
conf.setMaster('spark://ip-172-31-9-200:7077')
conf.setAppName('spark_analytics_chpt_4')
conf.set("spark.executor.memory", "10g")
sc = SparkContext(conf=conf)
```

Data from https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/

```
raw_data = sc.textFile('covtype.data')
raw_data.count()

from pyspark.mllib.linalg import Vectors
from pyspark.mllib.regression import LabeledPoint

def to_float(s):
    try:
        return float(s)
    except ValueError:
        return float('nan')

def clean(line):
    # All but the last CSV field are features; the last field is the
    # 1-based class label, shifted to a 0-based label.
    values = [to_float(x) for x in line.split(',')]
    featureVector = Vectors.dense(values[:-1])
    label = values[-1] - 1
    return LabeledPoint(label, featureVector)

data = raw_data.map(clean)
data.take(5)

training_data, cv_data, test_data = data.randomSplit([0.8, 0.1, 0.1])
training_data.cache()
cv_data.cache()
test_data.cache()
training_data.count(), cv_data.count(), test_data.count()
```

## Decision Tree

```
from pyspark.mllib.evaluation import MulticlassMetrics
from pyspark.mllib.tree import DecisionTree, DecisionTreeModel

model = DecisionTree.trainClassifier(training_data, 7, {}, 'gini', 4, 100)
predictions = model.predict(data.map(lambda x: x.features))
labels_and_predictions = data.map(lambda lp: lp.label).zip(predictions)
metrics = MulticlassMetrics(labels_and_predictions)
metrics.confusionMatrix()
metrics.precision()

[(metrics.precision(cat), metrics.recall(cat)) for cat in range(7)]

def classProbabilities(data):
    countsByCategory = data.map(lambda x: x.label).countByValue()
    counts = np.array(list(countsByCategory.values())) / sum(countsByCategory.values())
    return counts

trainPriorProbabilities = classProbabilities(training_data)
cvPriorProbabilities = classProbabilities(cv_data)
sum([x[0] * x[1] for x in zip(trainPriorProbabilities, cvPriorProbabilities)])

for impurity in ('gini', 'entropy'):
    for depth in (1, 20):
        for bins in (10, 300):
            model = DecisionTree.trainClassifier(training_data, 7, {}, impurity, depth, bins)
            predictions = model.predict(cv_data.map(lambda x: x.features))
            labels_and_predictions = cv_data.map(lambda lp: lp.label).zip(predictions)
            metrics = MulticlassMetrics(labels_and_predictions)
            accuracy = metrics.precision()
            print((impurity, depth, bins), accuracy)

model = DecisionTree.trainClassifier(training_data.union(cv_data), 7, {}, 'entropy', 20, 300)
predictions = model.predict(data.map(lambda x: x.features))
labels_and_predictions = data.map(lambda lp: lp.label).zip(predictions)
metrics = MulticlassMetrics(labels_and_predictions)
accuracy = metrics.precision()
print(accuracy)
```

## Random Forest

```
from pyspark.mllib.tree import RandomForest

forest = RandomForest.trainClassifier(training_data, 7, {10: 4, 11: 40}, 20, 'auto', 'entropy', 30, 300)
# Evaluate the trained forest on the full dataset
predictions = forest.predict(data.map(lambda x: x.features))
labels_and_predictions = data.map(lambda lp: lp.label).zip(predictions)
metrics = MulticlassMetrics(labels_and_predictions)
accuracy = metrics.precision()
print(accuracy)

from pyspark.mllib.linalg import Vectors
sample = '2709,125,28,67,23,3224,253,207,61,6094,0,29'
vector = Vectors.dense([to_float(x) for x in sample.split(',')])
result = forest.predict(vector)
```
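The `'gini'` and `'entropy'` strings passed to `trainClassifier` name the impurity measures a candidate split tries to minimize. A small pure-Python sketch of both, computed from per-class counts at a tree node (illustrative; MLlib evaluates these internally):

```python
import math

def gini(counts):
    """Gini impurity: 1 - sum(p_i^2) over the class proportions at a node."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def entropy(counts):
    """Shannon entropy: -sum(p_i * log2(p_i)), skipping empty classes."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

print(gini([5, 5]))     # 0.5  (maximally impure two-class node)
print(gini([10, 0]))    # 0.0  (pure node)
print(entropy([5, 5]))  # 1.0
```

A split is chosen to maximize the impurity decrease between the parent node and the weighted average of its children, which is why both measures are zero exactly when a node is pure.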
github_jupyter
# Brain connectome comparison using geodesic distances

**Authors:** S. Shailja and B.S. Manjunath

**Affiliation:** University of California, Santa Barbara

The goal of this notebook is to study the importance of geodesic distances on manifolds. To that end, we propose the following twin study. We utilize the structural connectomes of 412 human subjects at five different resolutions and with two edge weights. The data consist of 206 twin pairs (133 monozygotic (MZ) and 73 dizygotic (DZ)). A connectivity graph is computed from neural fiber connections between different anatomically identified Regions of Interest (ROIs) of the brain. For each subject, we have an undirected graph with 83, 129, 234, 463, or 1015 nodes, and we consider the following edge weights for our study:

- number_of_fibers: the count of the fibers connecting two ROIs.
- fiber_length_mean: the mean of the fiber lengths connecting two ROIs.

We investigate the performance of geodesic distances on manifolds in assessing the network similarity between pairs of twins in structural networks at different network resolutions, and we compare these metrics with Euclidean distances.

# 1. Introduction and Motivation

Diffusion Tensor Imaging (DTI) is a magnetic resonance imaging technique that reveals the connections between large areas of the gray matter of the human brain. In recent years, analysis of fibers in DTI has received wide interest due to its potential applications in computational pathology, surgery, and studies of diseases such as brain tumors, Alzheimer's, and schizophrenia.
Among them, one way to analyze the fiber tracts is to generate a connectivity matrix that provides a compact description of the pairwise connectivity of ROIs derived from anatomical or computational brain atlases. For example, connectivity matrices can be used to compute multiple graph-theoretic metrics to distinguish between brains. However, such methods analyze the derived network parameters while overlooking the actual difference between the networks. In this work, we assess the similarity of structural networks by comparing the connectivity matrices directly. We evaluate the efficacy of distance metrics in different geometric spaces, and we demonstrate the usefulness of geodesic distances, which account for the complex geometric nature of the graph. Furthermore, we evaluate the performance and consistency of the results at different graph resolutions.

The computed structural connectomes, based on the data of the Human Connectome Project (HCP), are publicly available to download from https://braingraph.org/cms/download-pit-group-connectomes/ [[CBB2017]](#References).

## Outline

In this notebook, we will:

- Compute the Euclidean distances on adjacency matrices for each pair of twins (MZ and DZ).
- Regularize the symmetric positive semi-definite graph Laplacians into symmetric positive-definite matrices and evaluate distances on the SPD manifold.
- Statistically analyze the similarity metrics, with Euclidean distance as the baseline, using the Wilcoxon rank-sum non-parametric test [[CJ1985]](#References).

# 2. Analysis

We import the required Python packages.
```
import os
import csv
import math
import sys
import warnings

import numpy as np
from numpy import linalg as la
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib as mpl

import geomstats.backend as gs
import geomstats.geometry.spd_matrices as spd

gs.random.seed(2021)

!{sys.executable} -m pip install seaborn
import seaborn as sns

!pip install networkx
import networkx as nx

path = os.getcwd()
print(path)
```

## 2.1. Dataset description

The connectomes generated by the PIT Bioinformatics Group can be downloaded from https://braingraph.org/cms/download-pit-group-connectomes/.

- 86 nodes set, 1064 brains, 1 000 000 streamlines, 10x repeated & averaged (18 MB)
- 129 nodes set, 1064 brains, 1 000 000 streamlines, 10x repeated & averaged (33 MB)
- 234 nodes set, 1064 brains, 1 000 000 streamlines, 10x repeated & averaged (71 MB)
- 463 nodes set, 1064 brains, 1 000 000 streamlines, 10x repeated & averaged (138 MB)
- 1015 nodes set, 1064 brains, 1 000 000 streamlines, 10x repeated & averaged (265 MB)

The connectomes were generated from MRI scans obtained from the Human Connectome Project. To download them, you have to agree to the terms and conditions of the Human Connectome Project. Please uncompress the data before running the code. A set of sample data is included in the data folder.

The metadata of each subject consists of subject id, family id, twin id, and zygosity. It can be downloaded from the publicly available HCP website https://www.humanconnectome.org/study/hcp-young-adult/data-releases. We have uploaded the metadata as a CSV file in the data folder. Please agree to the same HCP open access data use terms and conditions:
https://www.humanconnectome.org/storage/app/media/data_use_terms/DataUseTerms-HCP-Open-Access-26Apr2013.pdf

```
# Read the metadata file from the HCP dataset
with open(path + '/data/HCP_zygocity.csv', newline='') as csvfile:
    reader = csv.DictReader(csvfile)
    dic = {}
    for row in reader:
        if (row['ZygosityGT'] == "MZ" or row['ZygosityGT'] == "DZ"):
            if not dic.get(row['ZygosityGT'] + "_" + row['FAMILY_ID']):
                dic[row['ZygosityGT'] + "_" + row['FAMILY_ID']] = [row['SUBJECT_ID']]
            else:
                dic[row['ZygosityGT'] + "_" + row['FAMILY_ID']].append(row['SUBJECT_ID'])
            print(row)
# print(dic.keys())
```

We explore the dataset by showing illustrative connectivity matrices of MZ and DZ twin pairs with 83 ROIs and fiber_length_mean as the edge weight.

```
import matplotlib.pyplot as plt

labels_str = ['MZ twin pairs ', 'DZ twin pairs']
weight = "fiber_length_mean"

G1 = nx.read_gpickle(path + "/data/repeated_10_scale_33/195445.gpickle")
A1 = nx.adjacency_matrix(G1, weight=weight).todense()
G2 = nx.read_gpickle(path + "/data/repeated_10_scale_33/151425.gpickle")
A2 = nx.adjacency_matrix(G2, weight=weight).todense()

G1 = nx.read_gpickle(path + "/data/repeated_10_scale_33/898176.gpickle")
A_1 = nx.adjacency_matrix(G1, weight=weight).todense()
G2 = nx.read_gpickle(path + "/data/repeated_10_scale_33/109123.gpickle")
A_2 = nx.adjacency_matrix(G2, weight=weight).todense()

fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(121)
imgplot = ax.imshow(A1)
ax.set_title(labels_str[0])
ax = fig.add_subplot(122)
imgplot = ax.imshow(A_1)
ax.set_title(labels_str[1])
plt.show()

fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(121)
imgplot = ax.imshow(A2)
ax = fig.add_subplot(122)
imgplot = ax.imshow(A_2)
plt.show()
```

We can directly compare the connectivity matrices using the Euclidean distance. However, the Euclidean space may not fully describe the actual geometry of the data, as shown in Figure 1.
So, we compute the graph Laplacian, which places the matrices in the space of symmetric, positive semi-definite matrices. Finally, we regularize the graph Laplacian with a small parameter to analyze the connectivity data on the symmetric, positive-definite (SPD) manifold: we eigendecompose each Laplacian, lower-bound its small eigenvalues at 0.5, and recompose the matrix, which ensures that it is SPD.

<table><tr>
<td> <img src="geodesic.jpeg" style="width: 200px;"/>
<figcaption>Figure 1 - Euclidean distance in blue and the corresponding geodesic distance in orange along the manifold.</figcaption></td>
</tr></table>

```
def findSPD(L1):
    # eigh is used because the normalized Laplacian is symmetric
    eigval, eigvec = np.linalg.eigh(L1)
    eigval[eigval < 0.5] = 0.5
    return eigvec.dot(np.diag(eigval)).dot(eigvec.T)
```

Using `geomstats`, we check that these matrices belong to the space of Symmetric Positive Definite (SPD) matrices.

```
G1 = nx.read_gpickle(path + "/data/repeated_10_scale_33/715950.gpickle")
weight = "number_of_fibers"
D1 = nx.adjacency_matrix(G1, weight=weight).todense()
L1 = nx.normalized_laplacian_matrix(G1, nodelist=G1.nodes(), weight=weight).toarray()
m = L1.shape[0]
manifold = spd.SPDMatrices(m)
print(gs.all(manifold.belongs(findSPD(L1))))
```

## 2.2. Distance functions

Euclidean distance: we compute the 2-norm and the Frobenius norm between two connectivity matrices using `geomstats` tools.

${\displaystyle \|A_1 - A_2\|_{2}={\sqrt {\lambda _{\max }\left((A_1 - A_2)^{*}(A_1 - A_2)\right)}},}$

${\displaystyle \|A_1 - A_2\|_{\text{F}}={\sqrt {\sum _{i=1}^{m}\sum _{j=1}^{n}|{a_1}_{ij} - {a_2}_{ij}|^{2}}}={\sqrt {\operatorname {trace} \left((A_1 - A_2)^{*}(A_1 - A_2)\right)}},}$

where $A_1$ and $A_2$ are the adjacency matrices of a twin pair.
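Before implementing the distances, a quick numerical sanity check of the `findSPD` regularization defined above (a standalone numpy sketch, using the same 0.5 floor): flooring the eigenvalues of a symmetric matrix at a positive value yields a symmetric positive-definite matrix.

```python
import numpy as np

def find_spd(mat, floor=0.5):
    """Floor the eigenvalues of a symmetric matrix, as findSPD does."""
    eigval, eigvec = np.linalg.eigh(mat)   # eigh handles symmetric input
    eigval = np.maximum(eigval, floor)
    return (eigvec * eigval) @ eigvec.T    # recompose V diag(eigval) V^T

# A symmetric matrix that is not positive definite: eigenvalues are -1 and 1.
m = np.array([[0.0, 1.0], [1.0, 0.0]])
spd = find_spd(m)

print(np.linalg.eigvalsh(m))    # eigenvalues -1 and 1
print(np.linalg.eigvalsh(spd))  # eigenvalues 0.5 and 1 -> positive definite
```

Only the eigenvalues below the floor are changed, so a Laplacian that is already well-conditioned passes through (almost) unchanged.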
```
def euclidean(G1, G2, weight):
    G1.remove_nodes_from(list(nx.isolates(G1)))
    G2.remove_nodes_from(list(nx.isolates(G2)))
    G1.remove_nodes_from(np.setdiff1d(G1.nodes, G2.nodes))
    G2.remove_nodes_from(np.setdiff1d(G2.nodes, G1.nodes))
    A1, A2 = [nx.adjacency_matrix(G, weight=weight).todense() for G in [G1, G2]]
    return gs.linalg.norm((A1 - A2), 2)

def frobenius(G1, G2, weight):
    G1.remove_nodes_from(list(nx.isolates(G1)))
    G2.remove_nodes_from(list(nx.isolates(G2)))
    G1.remove_nodes_from(np.setdiff1d(G1.nodes, G2.nodes))
    G2.remove_nodes_from(np.setdiff1d(G2.nodes, G1.nodes))
    A1, A2 = [nx.adjacency_matrix(G, weight=weight).todense() for G in [G1, G2]]
    return gs.linalg.norm(A1 - A2)
```

We compute the Bures-Wasserstein distance $d(A_1, A_2)$ on the manifold of $n \times n$ positive definite matrices using `geomstats` tools, where

$d(A_1, A_2)=\left[ \operatorname{trace} A_1+\operatorname{trace} A_2-2 \operatorname{trace} (A_1^{1/2}A_2A_1^{1/2})^{1/2}\right]^{1/2}.$

```
def buresWasserstein(G1, G2, weight):
    G1.remove_nodes_from(list(nx.isolates(G1)))
    G2.remove_nodes_from(list(nx.isolates(G2)))
    G1.remove_nodes_from(np.setdiff1d(G1.nodes, G2.nodes))
    G2.remove_nodes_from(np.setdiff1d(G2.nodes, G1.nodes))
    L1 = nx.normalized_laplacian_matrix(G1, nodelist=G1.nodes(), weight=weight).toarray()
    L2 = nx.normalized_laplacian_matrix(G2, nodelist=G2.nodes(), weight=weight).toarray()
    L1 = findSPD(L1)
    L2 = findSPD(L2)
    m = L2.shape[0]
    manifold2 = spd.SPDMetricBuresWasserstein(m)
    return manifold2.squared_dist(L1, L2)
```

## 2.3. Statistical Analysis

The Euclidean and Bures-Wasserstein distances were computed for each pair of twins (MZ and DZ), providing group-wise statistics to investigate the impact of genetics on the structural connectivity of brain networks. Our working hypothesis is that connectivity between MZ pairs will be more similar than between DZ pairs. We used the Wilcoxon non-parametric test given the small sample size.
We compare the p-values for both metrics to highlight their sensitivity to the underlying manifold. The following demonstration uses 45 MZ and 45 DZ pairs with 83 nodes, due to the data-upload limit; the tables below, however, report the results of the full-dataset (206-pair) analysis at the different node counts.

```
# For number of nodes = 83:
d_MZ_fiber_length_mean_E = []
d_DZ_fiber_length_mean_E = []
d_MZ_number_of_fibers_E = []
d_DZ_number_of_fibers_E = []
d_MZ_fiber_length_mean_B = []
d_DZ_fiber_length_mean_B = []
d_MZ_number_of_fibers_B = []
d_DZ_number_of_fibers_B = []
countM = 0
countD = 0
for key in dic.keys():
    try:
        G1 = nx.read_gpickle(path + "/data/repeated_10_scale_33/" + dic[key][0] + ".gpickle")
        G2 = nx.read_gpickle(path + "/data/repeated_10_scale_33/" + dic[key][1] + ".gpickle")
        if (key.split("_")[0] == "MZ"):
            d_m = euclidean(G1, G2, "fiber_length_mean")
            d_MZ_fiber_length_mean_E.append(d_m)
            d_m = euclidean(G1, G2, "number_of_fibers")
            d_MZ_number_of_fibers_E.append(d_m)
            d_m = buresWasserstein(G1, G2, "fiber_length_mean")
            d_MZ_fiber_length_mean_B.append(d_m)
            d_m = buresWasserstein(G1, G2, "number_of_fibers")
            d_MZ_number_of_fibers_B.append(d_m)
        elif (key.split("_")[0] == "DZ"):
            d_d = euclidean(G1, G2, "fiber_length_mean")
            d_DZ_fiber_length_mean_E.append(d_d)
            d_d = euclidean(G1, G2, "number_of_fibers")
            d_DZ_number_of_fibers_E.append(d_d)
            d_d = buresWasserstein(G1, G2, "fiber_length_mean")
            d_DZ_fiber_length_mean_B.append(d_d)
            d_d = buresWasserstein(G1, G2, "number_of_fibers")
            d_DZ_number_of_fibers_B.append(d_d)
    except:
        continue

cmap = sns.color_palette("Set2")
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
mpl.rc('axes', labelsize=14)

fig = plt.figure(figsize=(15, 15))

plt.subplot(2, 2, 1)
d = pd.DataFrame(list(zip(d_MZ_fiber_length_mean_E, d_DZ_fiber_length_mean_E)), columns=['MZ', 'DZ'])
fvalue, pvalue = stats.wilcoxon(d['MZ'], d['DZ'])
ax = sns.boxplot(data=d, fliersize=0.01, width=0.3, linewidth=2, palette=cmap)
ax.annotate('$nodes = 83, p = $' + str(round(pvalue, 5)), xy=(350, 350),
            xycoords='axes points', size=14, ha='right', va='top',
            bbox=dict(boxstyle='round', fc='w'))
plt.ylabel('Euclidean Distance')
plt.title('weight = "fiber_length_mean"')

plt.subplot(2, 2, 3)
d = pd.DataFrame(list(zip(d_MZ_number_of_fibers_E, d_DZ_number_of_fibers_E)), columns=['MZ', 'DZ'])
fvalue, pvalue = stats.wilcoxon(d['MZ'], d['DZ'])
ax = sns.boxplot(data=d, fliersize=0.01, width=0.3, linewidth=2, palette=cmap)
ax.annotate('$nodes = 83, p = $' + str(round(pvalue, 5)), xy=(350, 350),
            xycoords='axes points', size=14, ha='right', va='top',
            bbox=dict(boxstyle='round', fc='w'))
plt.ylabel('Euclidean Distance')
plt.title('weight = "number_of_fibers"')

plt.subplot(2, 2, 2)
d = pd.DataFrame(list(zip(d_MZ_fiber_length_mean_B, d_DZ_fiber_length_mean_B)), columns=['MZ', 'DZ'])
fvalue, pvalue = stats.wilcoxon(d['MZ'], d['DZ'])
ax = sns.boxplot(data=d, fliersize=0.01, width=0.3, linewidth=2, palette=cmap)
ax.annotate('$nodes = 83, p = $' + str(round(pvalue, 5)), xy=(350, 350),
            xycoords='axes points', size=14, ha='right', va='top',
            bbox=dict(boxstyle='round', fc='w'))
plt.ylabel('Bures-Wasserstein Distance')
plt.title('weight = "fiber_length_mean"')

plt.subplot(2, 2, 4)
d = pd.DataFrame(list(zip(d_MZ_number_of_fibers_B, d_DZ_number_of_fibers_B)), columns=['MZ', 'DZ'])
fvalue, pvalue = stats.wilcoxon(d['MZ'], d['DZ'])
ax = sns.boxplot(data=d, fliersize=0.01, width=0.3, linewidth=2, palette=cmap)
ax.annotate('$nodes = 83, p = $' + str(round(pvalue, 7)), xy=(350, 350),
            xycoords='axes points', size=14, ha='right', va='top',
            bbox=dict(boxstyle='round', fc='w'))
plt.ylabel('Bures-Wasserstein Distance')
plt.title('weight = "number_of_fibers"')

plt.show()
```

We compare the Euclidean distance with the Bures-Wasserstein
distance. The tables below report the statistical results of comparing the structural networks of the two groups (MZ & DZ) for node counts ranging from 83 to 1015, directly comparing the sensitivity of each metric.

#### edge weight: fiber_length_mean

| Distance metric / no. of nodes | 83 | 129 | 234 | 463 | 1015 |
| :- | -: | -: | -: | -: | -: |
| 2-norm (p-value) | 0.0422 | 0.0553 | 0.0619 | 0.0919 | 0.1474 |
| Frobenius norm (p-value) | 0.1257 | 0.16846 | 0.3429 | 0.32643 | 0.3746 |
| Bures-Wasserstein (p-value) | 0.00379 | 0.01346 | 0.00234 | 0.00475 | 0.03645 |

#### edge weight: number_of_fibers

| Distance metric / no. of nodes | 83 | 129 | 234 | 463 | 1015 |
| :- | -: | -: | -: | -: | -: |
| 2-norm (p-value) | 0.00198 | 0.00354 | 0.02049 | 0.03694 | 0.09631 |
| Frobenius norm (p-value) | 0.2147 | 7.815e-05 | 0.00054 | 0.00152785 | 0.00666 |
| Bures-Wasserstein (p-value) | 4.140e-06 | 3.9664e-05 | 1.05155e-05 | 3.2756e-05 | 0.000320488 |

## 2.3. Results

As seen above, the Bures-Wasserstein distance provides greater sensitivity for differentiating MZ from DZ pairs based on the similarity of their structural connectivity networks. Analyses with different numbers of nodes follow the same trend. A smaller p-value indicates that the distances differ more significantly between the MZ and DZ groups. With the help of `geomstats`, we also implemented other common geodesic distances defined on the SPD manifold.
- Log Euclidean Distance: ${\displaystyle \|\log(A_1) - \log(A_2)\|_{\text{F}}}$
- Affine Invariant Distance: ${\displaystyle \|\log(A_1^{-1/2}A_2A_1^{-1/2})\|_{\text{F}}}$

```
from geomstats.geometry.matrices import Matrices

def affineInviantDistance(G1, G2, weight):
    G1.remove_nodes_from(list(nx.isolates(G1)))
    G2.remove_nodes_from(list(nx.isolates(G2)))
    G1.remove_nodes_from(np.setdiff1d(G1.nodes, G2.nodes))
    G2.remove_nodes_from(np.setdiff1d(G2.nodes, G1.nodes))
    L1 = nx.normalized_laplacian_matrix(G1, nodelist=G1.nodes(), weight=weight).toarray()
    L2 = nx.normalized_laplacian_matrix(G2, nodelist=G2.nodes(), weight=weight).toarray()
    m = L2.shape[0]
    manifold = spd.SPDMatrices(m)
    L1 = findSPD(L1)
    L2 = findSPD(L2)
    A = gs.linalg.inv(gs.linalg.sqrtm(L1))
    return gs.linalg.norm(manifold.logm(Matrices.mul(A, L2, A)))

def LogEuclideanDistance(G1, G2, weight):
    G1.remove_nodes_from(list(nx.isolates(G1)))
    G2.remove_nodes_from(list(nx.isolates(G2)))
    G1.remove_nodes_from(np.setdiff1d(G1.nodes, G2.nodes))
    G2.remove_nodes_from(np.setdiff1d(G2.nodes, G1.nodes))
    L1 = nx.normalized_laplacian_matrix(G1, nodelist=G1.nodes(), weight=weight).toarray()
    L2 = nx.normalized_laplacian_matrix(G2, nodelist=G2.nodes(), weight=weight).toarray()
    m = L2.shape[0]
    manifold = spd.SPDMatrices(m)
    # print(gs.all(manifold.belongs(L1)), gs.all(manifold.belongs(L2)))
    L1 = findSPD(L1)
    L2 = findSPD(L2)
    return gs.linalg.norm(manifold.logm(L1) - manifold.logm(L2))

d_MZ_fiber_length_mean_A = []
d_DZ_fiber_length_mean_A = []
d_MZ_fiber_length_mean_L = []
d_DZ_fiber_length_mean_L = []
d_MZ_number_of_fibers_A = []
d_DZ_number_of_fibers_A = []
d_MZ_number_of_fibers_L = []
d_DZ_number_of_fibers_L = []
for key in dic.keys():
    try:
        G1 = nx.read_gpickle(path + "/data/repeated_10_scale_33/" + dic[key][0] + ".gpickle")
        G2 = nx.read_gpickle(path + "/data/repeated_10_scale_33/" + dic[key][1] + ".gpickle")
        if (key.split("_")[0] == "MZ"):
            d_m = affineInviantDistance(G1, G2, "fiber_length_mean")
            d_MZ_fiber_length_mean_A.append(d_m)
            d_m = affineInviantDistance(G1, G2, "number_of_fibers")
            d_MZ_number_of_fibers_A.append(d_m)
            d_m = LogEuclideanDistance(G1, G2, "fiber_length_mean")
            d_MZ_fiber_length_mean_L.append(d_m)
            d_m = LogEuclideanDistance(G1, G2, "number_of_fibers")
            d_MZ_number_of_fibers_L.append(d_m)
        elif (key.split("_")[0] == "DZ"):
            d_d = affineInviantDistance(G1, G2, "fiber_length_mean")
            d_DZ_fiber_length_mean_A.append(d_d)
            d_d = affineInviantDistance(G1, G2, "number_of_fibers")
            d_DZ_number_of_fibers_A.append(d_d)
            d_d = LogEuclideanDistance(G1, G2, "fiber_length_mean")
            d_DZ_fiber_length_mean_L.append(d_d)
            d_d = LogEuclideanDistance(G1, G2, "number_of_fibers")
            d_DZ_number_of_fibers_L.append(d_d)
    except:  # skip pairs whose graph files are missing or unreadable
        continue

# each test compares MZ vs DZ within the same metric
d = pd.DataFrame(list(zip(d_MZ_number_of_fibers_A, d_DZ_number_of_fibers_A)), columns=['MZ', 'DZ'])
fvalue, pvalue = stats.wilcoxon(d['MZ'], d['DZ'])

d = pd.DataFrame(list(zip(d_MZ_fiber_length_mean_A, d_DZ_fiber_length_mean_A)), columns=['MZ', 'DZ'])
fvalue, pvalue = stats.wilcoxon(d['MZ'], d['DZ'])
```

#### edge weight: fiber_length_mean

| Distance metric / no. of nodes | 83 | 129 | 234 | 463 | 1015 |
| :- | -: | -: | -: | -: | -: |
| Affine Invariant (p-value) | 0.00492 | 0.01389 | 0.0018 | 0.0039 | 0.0143 |
| Log Euclidean (p-value) | 0.00492 | 0.01410 | 0.0019 | 0.00393 | 0.0136 |

#### edge weight: number_of_fibers

| Distance metric / no. of nodes | 83 | 129 | 234 | 463 | 1015 |
| :- | -: | -: | -: | -: | -: |
| Affine Invariant (p-value) | 3.0094e-06 | 3.4367e-05 | 9.9947e-06 | 2.2217e-05 | 0.00029 |
| Log Euclidean (p-value) | 3.0910e-06 | 3.1217e-05 | 9.9947e-06 | 2.2767e-05 | 0.00028 |

## 2.4. Discussion

This work studies the difference between two graphs on the SPD manifold. Applying geodesic distances on the SPD manifold accounts for the complex geometric properties of connectivity graphs.
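One intuition for why the affine-invariant and log-Euclidean rows of the tables above are nearly identical: the two distances coincide exactly whenever the matrices commute. For diagonal SPD matrices both reduce to the Euclidean norm of the difference of the eigenvalue logarithms; a pure-Python sketch (function names are ours, for illustration only):

```python
import math

def log_euclidean_diag(eigs1, eigs2):
    # ||log(A1) - log(A2)||_F for diagonal SPD matrices
    return math.sqrt(sum((math.log(a) - math.log(b)) ** 2 for a, b in zip(eigs1, eigs2)))

def affine_invariant_diag(eigs1, eigs2):
    # ||log(A1^{-1/2} A2 A1^{-1/2})||_F: for diagonal matrices the argument
    # is diag(b_i / a_i), whose log has entries log(b_i) - log(a_i)
    return math.sqrt(sum(math.log(b / a) ** 2 for a, b in zip(eigs1, eigs2)))

a = [0.5, 1.0, 2.0]
b = [1.0, 3.0, 4.0]
print(log_euclidean_diag(a, b), affine_invariant_diag(a, b))  # identical values
```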
The results of our study show how connectivity analysis can uncover genetic influences on brain networks. By analyzing connectivity matrices on a manifold instead of in a vector space, we demonstrate the greater sensitivity of geodesic distances compared to Euclidean distances. Finally, we highlight the consistency of our results across graph resolutions (node counts) and edge weights.

## 2.5. Role of Geomstats in the analysis

In this study, we exploited the SPD distance metrics implemented in the package `geomstats`. The various manifolds and distance metrics it provides for analyzing manifold-valued data are easy to understand and use. The class `ToTangentSpace` also makes it convenient to transform data on the SPD manifold into tangent vectors and to apply standard learning methods to them.

# 3. Limitations and perspectives

In this analysis, we focused on the SPD manifold and utilized the distance metrics implemented in `geomstats`. It was encouraging to see the improvement in p-values after switching to geodesic distances on SPD manifolds. In the future, we plan to analyze the transformed tangent vectors and apply learning methods to them. Furthermore, we intend to evaluate the reproducibility of our approach on additional datasets. Integrating distance metrics with data-driven learning approaches can greatly improve our understanding of human brain connectivity.

## Limitations of Geomstats

Some other geodesic distances defined by positive-definite Riemannian metrics were not yet implemented in `geomstats` when we tried to use them. It would be interesting to compare additional distance metrics defined on SPD manifolds in terms of efficiency and performance.
```
import geomstats.geometry.riemannian_metric as rm

def riemannianGD(G1, G2, weight):
    G1.remove_nodes_from(list(nx.isolates(G1)))
    G2.remove_nodes_from(list(nx.isolates(G2)))
    G1.remove_nodes_from(np.setdiff1d(G1.nodes, G2.nodes))
    G2.remove_nodes_from(np.setdiff1d(G2.nodes, G1.nodes))
    L1 = nx.normalized_laplacian_matrix(G1, nodelist=G1.nodes(), weight=weight).toarray()
    L2 = nx.normalized_laplacian_matrix(G2, nodelist=G2.nodes(), weight=weight).toarray()
    m = L2.shape[0]
    manifold = spd.SPDMatrices(m)
    L1 = findSPD(L1)
    L2 = findSPD(L2)
    manifold2 = rm.RiemannianMetric(m)
    return manifold2.dist(L1, L2)

# d_MZ_fiber_length_mean_R = []
# d_DZ_fiber_length_mean_R = []
# for key in dic.keys():
#     G1 = nx.read_gpickle(path + "/data/repeated_10_scale_33/" + dic[key][0] + ".gpickle")
#     G2 = nx.read_gpickle(path + "/data/repeated_10_scale_33/" + dic[key][1] + ".gpickle")
#     if (key.split("_")[0] == "MZ"):
#         d_m = riemannianGD(G1, G2, "fiber_length_mean")
#         d_MZ_fiber_length_mean_R.append(d_m)
#         d_m = riemannianGD(G1, G2, "number_of_fibers")
#         d_MZ_number_of_fibers_A.append(d_m)
#     elif (key.split("_")[0] == "DZ"):
#         d_d = riemannianGD(G1, G2, "fiber_length_mean")
#         d_DZ_fiber_length_mean_R.append(d_d)
#         d_d = riemannianGD(G1, G2, "number_of_fibers")
#         d_DZ_number_of_fibers_R.append(d_d)
```

## Proposed features for Geomstats and Giotto-TDA

A class that can be used to plot points in SPD space would be helpful in visualizing the geodesic distances.

## References

.. [CBB2017] Csaba Kerepesi, Balázs Szalkai, Bálint Varga, Vince Grolmusz: The braingraph.org Database of High Resolution Structural Connectomes and the Brain Graph Tools. Cognitive Neurodynamics 11(5), 483-486 (2017). http://dx.doi.org/10.1007/s11571-017-9445-1

.. [CJ1985] Cuzick, J. (1985). A Wilcoxon-type test for trend. Statistics in Medicine, 4(1), 87-90.
```
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets as skl
import torch.linalg as lin
```

## LQR with deterministic dynamics

![image-5.png](attachment:image-5.png)

In the optimization problem described above, the environment $f(\textbf{x}_t, \textbf{u}_t)$ is linear and the cost function is quadratic, exactly as in the formulation from the LQR lecture. $\textbf{x}_t$ denotes the state of the system at time $t$. $\textbf{u}_t$ is the action at time $t$, i.e., the vector of control inputs we feed into the system. $\textbf{C}_t$ and $\textbf{c}_t$ are the coefficients of the quadratic and linear terms of the cost function, respectively. $\textbf{F}_t$ and $\textbf{f}_t$ are the coefficients of the linear dynamics.

The formulation above subscripts the coefficients with $t$ because they may vary over time, but for the problem we implement here we assume they are time-invariant. We will now implement LQR, one of the techniques that can solve this kind of problem efficiently.

## Basic setup of the optimization problem

Before implementing the LQR algorithm, we set up the basics of the problem: the state and action dimensions and the coefficients of the cost function and the linear dynamics. (The coefficients are fixed in advance; the comments describe each coefficient and its matrix shape.)

```
state_dim = 2   # state dimension
action_dim = 2  # action dimension
T = 10

# Cost function's coefficient of second order term
# matrix shape [(state_dim + action_dim) * (state_dim + action_dim)]
C = torch.eye(n=(state_dim + action_dim)) / 10

# Cost function's coefficient of first order term
# matrix shape [(state_dim + action_dim), 1]
c = torch.rand(size=(state_dim + action_dim, 1))

# Linear dynamics' coefficient of first order term
# matrix shape [state_dim * (state_dim + action_dim)]
F = torch.rand(size=(state_dim, state_dim + action_dim)) / 10

# Linear dynamics' coefficient of constant term
# matrix shape [(state_dim, 1)]
f = torch.zeros(size=(state_dim, 1))

# dictionaries of K
Large_K = dict()
small_k = dict()
# dictionaries of V
Large_V = dict()
small_v = dict()
# dictionaries of Q
Large_Q = dict()
small_q = dict()
```

## Computing the coefficients K of the optimal action at time T via LQR

![image.png](attachment:image.png)

As shown in the lecture, we compute $\textbf{K}_T$ and $\textbf{k}_T$, and from these values we can obtain the optimal action $\textbf{u}_T$ at time $T$.
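In case the attachment images do not render in this export, the relations implemented below are, in block notation (with $\textbf{C}_{\textbf{u},\textbf{u}}$ the action-action block of $\textbf{C}$, $\textbf{C}_{\textbf{u},\textbf{x}}$ the action-state block, and $\textbf{c}_{\textbf{u}}$ the action part of $\textbf{c}$):

$$\textbf{u}_T = \textbf{K}_T\,\textbf{x}_T + \textbf{k}_T,\qquad \textbf{K}_T = -\textbf{C}_{\textbf{u},\textbf{u}}^{-1}\,\textbf{C}_{\textbf{u},\textbf{x}},\qquad \textbf{k}_T = -\textbf{C}_{\textbf{u},\textbf{u}}^{-1}\,\textbf{c}_{\textbf{u}}.$$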
We compute the coefficients needed to calculate $\textbf{u}_T$ and store them in dictionaries (we will need them later for the forward pass!).

```
# K_T = -C_uu^{-1} C_ux and k_T = -C_uu^{-1} c_u:
# both inverses use the action-action block C[state_dim:, state_dim:]
K_T = - torch.matmul(torch.linalg.inv(C[state_dim:, state_dim:]), C[state_dim:, :state_dim])
k_T = - torch.matmul(torch.linalg.inv(C[state_dim:, state_dim:]), c[state_dim:, :])
print("K_T: ", K_T)
print("k_T: ", k_T)

Large_K[T] = K_T
small_k[T] = k_T
```

### As a function

```
def calculate_K(Q, q, state_dim, action_dim):
    # K_t = -Q_uu^{-1} Q_ux, k_t = -Q_uu^{-1} q_u
    K_t = - torch.matmul(torch.linalg.inv(Q[state_dim:, state_dim:]), Q[state_dim:, :state_dim])
    k_t = - torch.matmul(torch.linalg.inv(Q[state_dim:, state_dim:]), q[state_dim:, :])
    return K_t, k_t
```

## Computing the cost from the optimal action at time T

![image.png](attachment:image.png)

By substituting the action $\textbf{u}_T$ obtained above into the cost function of the objective, we can turn $Q(\textbf{x}_t, \textbf{u}_t)$ into $V(\textbf{x}_t)$, a function whose only argument is the state $\textbf{x}_t$.

```
# the first term is the state-state block C[:state_dim, :state_dim]
V_T = C[:state_dim, :state_dim] + torch.matmul(C[:state_dim, state_dim:], Large_K[T]) + torch.matmul(Large_K[T].T, C[state_dim:, :state_dim]) + torch.matmul(torch.matmul(Large_K[T].T, C[state_dim:, state_dim:]), Large_K[T])
v_T = c[:state_dim, :] + torch.matmul(C[:state_dim, state_dim:], small_k[T]) + torch.matmul(Large_K[T].T, c[state_dim:, :]) + torch.matmul(torch.matmul(Large_K[T].T, C[state_dim:, state_dim:]), small_k[T])
print("V_T: ", V_T)
print("v_T: ", v_T)

Large_V[T] = V_T
small_v[T] = v_T
```

### As a function

```
def calculate_V(C, c, state_dim, action_dim, K_t, small_k):
    V_t = C[:state_dim, :state_dim] + torch.matmul(C[:state_dim, state_dim:], K_t) + torch.matmul(K_t.T, C[state_dim:, :state_dim]) + torch.matmul(torch.matmul(K_t.T, C[state_dim:, state_dim:]), K_t)
    v_t = c[:state_dim, :] + torch.matmul(C[:state_dim, state_dim:], small_k) + torch.matmul(K_t.T, c[state_dim:, :]) + torch.matmul(torch.matmul(K_t.T, C[state_dim:, state_dim:]), small_k)
    return V_t, v_t
```

## Expressing the cost at time T-1 in terms of $\textbf{x}_{t-1}$ and $\textbf{u}_{t-1}$

![image.png](attachment:image.png)

C and F are time-invariant coefficients, and $\textbf{V}_T$ and $\textbf{v}_T$ were obtained in the previous cell.

```
Q_t = C + torch.matmul(torch.matmul(F.T, Large_V[T]), F)
q_t = c + torch.matmul(torch.matmul(F.T, Large_V[T]), f) + torch.matmul(F.T, small_v[T])

Large_Q[T-1] = Q_t
small_q[T-1] = q_t
```

### As a function

```
def calculate_Q(C, c, Large_V, small_v, F, f):
    Q_t = C + torch.matmul(torch.matmul(F.T, Large_V), F)
    q_t = c + torch.matmul(torch.matmul(F.T, Large_V), f) + torch.matmul(F.T, small_v)
    return Q_t, q_t
```

## Backward recursion down to the initial time

![image.png](attachment:image.png)

We repeat the three steps above for times T-1, T-2, ..., 1, computing and storing the coefficients (V, K, Q).

```
T = 10
state_dim = 2
action_dim = 2

C = torch.rand(size=(state_dim + action_dim, state_dim + action_dim))
# invertibility check for matrix C and its diagonal blocks
while True:
    if (torch.matrix_rank(C) == state_dim + action_dim) and (torch.matrix_rank(C[:state_dim, :state_dim]) == state_dim) and (torch.matrix_rank(C[state_dim:, state_dim:]) == action_dim):
        break
    else:
        C = torch.rand(size=(state_dim + action_dim, state_dim + action_dim))
c = torch.rand(size=(state_dim + action_dim, 1))
F = torch.rand(size=(state_dim, state_dim + action_dim)) / 10
f = torch.rand(size=(state_dim, 1))

K_t, k_t = calculate_K(C, c, state_dim, action_dim)
Large_K[T] = K_t
small_k[T] = k_t
V_t, v_t = calculate_V(C, c, state_dim, action_dim, K_t, k_t)
Large_V[T] = V_t
small_v[T] = v_t

for time in range(T-1, 0, -1):
    # calculate Q
    Q_t, q_t = calculate_Q(C, c, V_t, v_t, F, f)
    Large_Q[time] = Q_t
    small_q[time] = q_t
    K_t, k_t = calculate_K(Q_t, q_t, state_dim, action_dim)
    Large_K[time] = K_t
    small_k[time] = k_t
    # V_t is derived from Q_t, not from the raw cost C
    V_t, v_t = calculate_V(Q_t, q_t, state_dim, action_dim, K_t, k_t)
    Large_V[time] = V_t
    small_v[time] = v_t
```

### As a function

```
def backward_recursion(state_dim, action_dim, C, c, F, f, T):
    # dictionaries of K
    Large_K = dict()
    small_k = dict()
    # dictionaries of V
    Large_V = dict()
    small_v = dict()
    # dictionaries of Q
    Large_Q = dict()
    small_q = dict()

    K_t, k_t = calculate_K(C, c, state_dim, action_dim)
    Large_K[T] = K_t
    small_k[T] = k_t
    V_t, v_t = calculate_V(C, c, state_dim, action_dim, K_t, k_t)
    Large_V[T] = V_t
    small_v[T] = v_t
    for time in range(T-1, 0, -1):
        # calculate Q
        Q_t, q_t = calculate_Q(C, c, V_t, v_t, F, f)
        Large_Q[time] = Q_t
        small_q[time] = q_t
        K_t, k_t = calculate_K(Q_t, q_t, state_dim, action_dim)
        Large_K[time] = K_t
        small_k[time] = k_t
        # V_t is derived from Q_t, not from the raw cost C
        V_t, v_t = calculate_V(Q_t, q_t, state_dim, action_dim, K_t, k_t)
        Large_V[time] = V_t
        small_v[time] = v_t
    return Large_Q, small_q, Large_K, small_k, Large_V, small_v

backward_recursion(state_dim, action_dim, C, c, F, f, T)
```

## Running the forward recursion

![image.png](attachment:image.png)
![image.png](attachment:image.png)

```
x_dict = dict()
u_dict = dict()

x0 = torch.randn(size=(state_dim, 1))
x_dict[1] = x0
for time in range(1, T):
    u_t = torch.matmul(Large_K[time], x_dict[time]) + small_k[time]
    u_dict[time] = u_t
    next_x = torch.matmul(F, torch.cat([x_dict[time], u_t], dim=0)) + f
    x_dict[time+1] = next_x
```

### As a function

```
def forward_recursion(x0, Large_K, small_k, F, f, T):
    x_dict = dict()
    u_dict = dict()
    x_dict[1] = x0
    for time in range(1, T):
        u_t = torch.matmul(Large_K[time], x_dict[time]) + small_k[time]
        u_dict[time] = u_t
        next_x = torch.matmul(F, torch.cat([x_dict[time], u_t], dim=0)) + f
        x_dict[time+1] = next_x
    return x_dict, u_dict

x_dict, u_dict = forward_recursion(x0, Large_K, small_k, F, f, T)

xs = dict()
for state in range(state_dim):
    xs[state] = []
us = dict()
for action in range(action_dim):
    us[action] = []

for key, item in x_dict.items():
    print('x at time ' + str(key) + ': ', item)
    for state in range(state_dim):
        xs[state].append(float(item[state]))
for key, item in u_dict.items():
    print('u at time ' + str(key) + ': ', item)
    for action in range(action_dim):
        us[action].append(float(item[action]))
```

# Implementation: Data Center temperature control

As contactless ("untact") environments expand across all areas of business, data usage is surging and hyperscale data centers for managing large volumes of data are being built worldwide. As data volume and velocity grow rapidly, server throughput increases and server temperatures rise as well.
Rising server temperatures cause failures and performance degradation, so a cooling process is required; more than 35% of data center energy consumption goes to server cooling. We therefore apply LQR to data center temperature control to obtain an optimal temperature trajectory and action sequence. For simplicity, we assume that the data center temperature evolves as a linear system and that the optimal temperature is 0 degrees. Taking three zones and three air conditioners as an example, the system can be written as follows (the coefficients of the equations are fixed in advance):

![image-4.png](attachment:image-4.png)
![image-3.png](attachment:image-3.png)

### coefficient

$F_M$: how strongly each zone maintains its current temperature

$F_r$: the rate at which temperature is released from one zone to another

$E_{power}$: the power of the air conditioners

$f_i$: the heat entering zone $i$ from outside

$I$: the identity matrix; the subscript denotes its size

$t_c$: the penalty on the current temperature

$e_c$: the penalty on the energy used

$\epsilon_{ij}$: the degree to which controlling zone $i$'s air conditioner affects zone $j$ (random term)

### setting coefficient

```
import random

# number of sectors and air conditioners
state_dim = 3
action_dim = 3
T = 20
total_dim = state_dim + action_dim

# matrix shape [(state_dim + action_dim) * (state_dim + action_dim)]
C = torch.eye(n=state_dim + action_dim)
# set t_c and e_c
C[:state_dim, :state_dim] = C[:state_dim, :state_dim] * 10
C[state_dim:, state_dim:] = C[state_dim:, state_dim:] * 5

# matrix shape [(state_dim + action_dim), 1]
c = torch.zeros(size=(total_dim, 1))

# matrix shape [state_dim * (state_dim + action_dim)]
# set F_M, F_r, E_power, epsilon_ij
F = torch.zeros(size=(state_dim, total_dim))
for i in range(state_dim):
    for j in range(state_dim):
        if i != j:
            F[i, j] = random.uniform(0, 1) / 10
        else:
            F[i, i] = 0.9
for i in range(state_dim):
    for j in range(action_dim):
        if i != j:
            F[i, state_dim + j] = - random.uniform(0, 1) / 10
        else:
            F[j, state_dim + j] = - random.uniform(0.5, 1)

# matrix shape [(state_dim, 1)]
# set f_i
f = torch.rand(size=(state_dim, 1))
```

### initial temperature

```
x0 = torch.ones(size=(state_dim, 1)) * 50
```

### check invertible

```
torch.matrix_rank(C) == total_dim
```

# Implement it yourself

### def calculate_Q

![image.png](attachment:image.png)

```
def calculate_Q(C, c, F, f, V_t, v_t, state_dim, action_dim):
    """
    C : torch.tensor() with shape(state_dim + action_dim, state_dim + action_dim)
    c : torch.tensor() with shape(state_dim + action_dim, 1)
    F : torch.tensor() with shape(state_dim, state_dim + action_dim)
    f : torch.tensor() with shape(state_dim, 1)
    V : torch.tensor() with shape(state_dim, state_dim)
    v : torch.tensor() with shape(state_dim, 1)
    """
    """
    Q : torch.tensor() with shape(state_dim + action_dim, state_dim + action_dim)
    q : torch.tensor() with shape(state_dim + action_dim, 1)
    """
    return Q_t, q_t
```

### def calculate_V

![image.png](attachment:image.png)

```
def calculate_V(C, c, K_t, k_t, state_dim, action_dim):
    """
    C : torch.tensor() with shape(state_dim + action_dim, state_dim + action_dim)
    c : torch.tensor() with shape(state_dim + action_dim, 1)
    K : torch.tensor() with shape(action_dim, state_dim)
    k : torch.tensor() with shape(action_dim, 1)
    """
    """
    V : torch.tensor() with shape(state_dim, state_dim)
    v : torch.tensor() with shape(state_dim, 1)
    """
    return V_t, v_t
```

### def calculate_K

![image-2.png](attachment:image-2.png)

```
def calculate_K(Q, q, state_dim, action_dim):
    """
    Q : torch.tensor() with shape(state_dim + action_dim, state_dim + action_dim)
    q : torch.tensor() with shape(state_dim + action_dim, 1)
    """
    """
    K : torch.tensor() with shape(action_dim, state_dim)
    k : torch.tensor() with shape(action_dim, 1)
    """
    return K_t, k_t
```

### backward recursion

![image.png](attachment:image.png)

```
def backward_recursion(state_dim, action_dim, C, c, F, f, T):
    """
    C : torch.tensor() with shape(state_dim + action_dim, state_dim + action_dim)
    c : torch.tensor() with shape(state_dim + action_dim, 1)
    F : torch.tensor() with shape(state_dim, state_dim + action_dim)
    f : torch.tensor() with shape(state_dim, 1)
    """
    # dictionary of K
    Large_K = dict()
    small_k = dict()
    # dictionary of V
    Large_V = dict()
    small_v = dict()
    # dictionary of Q
    Large_Q = dict()
    small_q = dict()
    """
    calculate K, k, V, v at time T and save the results in the dictionaries
    mentioned above by using the functions calculate_V and calculate_K
    """
    """
    calculate Q, q, K, k, V, v at times T-1 to 1 and save the results in the
    dictionaries mentioned above with a for loop, using the functions
    calculate_V, calculate_K and calculate_Q
    """
    return Large_Q, small_q, Large_K, small_k, Large_V, small_v
```

### def forward_recursion

![image.png](attachment:image.png)

```
def forward_recursion(x0, Large_K, small_k, F, f, T):
    """
    F : torch.tensor() with shape(state_dim, state_dim + action_dim)
    f : torch.tensor() with shape(state_dim, 1)
    K : torch.tensor() with shape(action_dim, state_dim)
    k : torch.tensor() with shape(action_dim, 1)
    """
    x_dict = dict()
    u_dict = dict()
    x_dict[1] = x0
    """
    calculate x, u at times 1 to T-1 and save the results in the dictionaries
    mentioned above with a for loop
    """
    return x_dict, u_dict

Large_Q, small_q, Large_K, small_k, Large_V, small_v = backward_recursion(state_dim, action_dim, C, c, F, f, T)
x_dict, u_dict = forward_recursion(x0, Large_K, small_k, F, f, T)
```

### print and plot temperature and energy trajectory

```
xs = dict()
for state in range(state_dim):
    xs[state] = []
us = dict()
for action in range(action_dim):
    us[action] = []

for key, item in x_dict.items():
    for state in range(state_dim):
        xs[state].append(float(item[state]))
for key, item in u_dict.items():
    for action in range(action_dim):
        us[action].append(float(item[action]))

for state in range(state_dim):
    plt.plot(xs[state])
plt.legend(["Region" + str(i+1) for i in range(state_dim)])
plt.title("Temperature")
plt.show()  # render the temperature figure before starting the energy figure

for action in range(action_dim):
    plt.plot(us[action])
plt.legend(["Region" + str(i+1) for i in range(action_dim)])
plt.title("Energy")
plt.show()
```
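To sanity-check the backward/forward recursions end to end, here is a scalar (1-D) toy version in pure Python with made-up coefficients (a sketch, not the data-center model above): the backward pass is the scalar Riccati recursion, and the forward pass applies the resulting feedback gains.

```python
# Scalar LQR: dynamics x_{t+1} = a*x_t + b*u_t, cost sum_t (q*x_t^2 + r*u_t^2).
a, b = 1.0, 1.0    # dynamics coefficients (toy values)
q, r = 10.0, 1.0   # state and action penalties
T = 20

# Backward pass (scalar Riccati recursion): P is the quadratic
# value-function coefficient, starting from P_T = q.
P = q
K = [0.0] * T
for t in range(T - 1, -1, -1):
    K[t] = (b * P * a) / (r + b * b * P)  # feedback gain at time t
    P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)

# Forward pass with u_t = -K_t * x_t.
x = 50.0           # initial state (e.g. an initial "temperature")
for t in range(T):
    u = -K[t] * x
    x = a * x + b * u

print(abs(x) < 1e-6)  # → True: the state is driven essentially to zero
```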
# BERT NER

[Model files available here. They are quite large](https://drive.google.com/open?id=11CPrF1rlZ-5eCv0m-UlFiAbCy3Z-yG54)

### Setting up workspace

```
import os
import pathlib

# *********************************************************
# If you actually want to train, switch to GPU runtime now.
# *********************************************************

# work_dir = INSERT DESIRED DIRECTORY
# BE SURE TO MOUNT DRIVE TO HAVE PERMANENT STORAGE
work_dir = pathlib.Path('/content/drive/My Drive/AISC-MLOps/BERT-NER')
if not os.path.exists(work_dir):
    os.mkdir(work_dir)
os.chdir(work_dir)
!ls

from google.colab import drive
drive.mount('/content/drive')
```

## Downloading training sets

```
%cd '/content/drive/My Drive/AISC-MLOps/BERT-NER'

urls = {
    'train': 'https://raw.githubusercontent.com/davidsbatista/NER-datasets/master/CONLL2003/train.txt',
    'dev': 'https://raw.githubusercontent.com/davidsbatista/NER-datasets/master/CONLL2003/valid.txt',
    'test': 'https://raw.githubusercontent.com/davidsbatista/NER-datasets/master/CONLL2003/test.txt'
}

files = {
    'train': 'raw-train.txt',
    'dev': 'raw-dev.txt',
    'test': 'raw-test.txt'
}

for k, v in files.items():
    url = urls[k]
    !wget $url -O $v -nc
```

## Download HuggingFace utility files

```
run_ner_url = 'https://raw.githubusercontent.com/huggingface/transformers/master/examples/ner/run_ner.py'
ner_utils_url = 'https://raw.githubusercontent.com/huggingface/transformers/master/examples/ner/utils_ner.py'
preprocess_url = 'https://raw.githubusercontent.com/stefan-it/fine-tuned-berts-seq/master/scripts/preprocess.py'

!wget $run_ner_url -nc
!wget $ner_utils_url -nc
!wget $preprocess_url -nc
```

## Must install transformers from source

```
if not os.path.exists('transformers'):
    !git clone https://github.com/huggingface/transformers

os.chdir("transformers")
# ***********************************************************
# This pip install might take a few minutes. I'm not sure why
# ***********************************************************
!pip install .
os.chdir("..")

!pip install -r ./transformers/examples/requirements.txt
```

## Preprocess train/dev/test files

This might actually be unnecessary when using the CoNLL-2003 files, and for some reason it doesn't process raw-dev.txt.

```
%cd '/content/drive/My Drive/AISC-MLOps/BERT-NER'

!python3 preprocess.py raw-train.txt bert-base-cased 128 > train.txt
!python3 preprocess.py raw-dev.txt bert-base-cased 128 > dev.txt
!python3 preprocess.py raw-test.txt bert-base-cased 128 > test.txt
```

## Check that they are there

```
for key, f in files.items():
    print(f"{key+'.txt'} in dir? {os.path.exists(key+'.txt')}")
```

## Create labels file

```
labels_file = 'labels.txt'
if not os.path.exists(labels_file):
    !cat train.txt dev.txt test.txt | cut -d " " -f 4 | grep -v "^$"| sort | uniq > $labels_file
```

## Setup training environment variables

```
output_dir = 'bert-ner-model'
if not os.path.exists(output_dir):
    !mkdir $output_dir

os.environ['MAX_LENGTH'] = '128'
os.environ['BERT_MODEL'] = 'bert-base-cased'
os.environ['OUTPUT_DIR'] = output_dir
os.environ['BATCH_SIZE'] = '32'
os.environ['NUM_EPOCHS'] = '3'
os.environ['SAVE_STEPS'] = '750'
os.environ['SEED'] = '1'
os.environ['LABELS'] = 'labels.txt'
```

## Training, evaluating, and predicting

```
# train on train.txt
# eval on dev.txt
# predict on test.txt
!python3 run_ner.py --data_dir ./ \
  --labels $LABELS \
  --model_name_or_path $BERT_MODEL \
  --output_dir $OUTPUT_DIR \
  --max_seq_length $MAX_LENGTH \
  --num_train_epochs $NUM_EPOCHS \
  --per_gpu_train_batch_size $BATCH_SIZE \
  --save_steps $SAVE_STEPS \
  --seed $SEED \
  --do_train \
  --do_eval \
  --do_predict
```

## Load model back in

```
model_dir = 'bert-ner-model'
os.chdir(work_dir / model_dir)

from transformers import BertForTokenClassification, BertTokenizer

model = BertForTokenClassification.from_pretrained('.')
tokenizer = BertTokenizer.from_pretrained('.')
model

import json
with open('config.json', 'r') as f:
    config = json.load(f)

def do_ner(model, tokenizer, sentence):
    input_ids = tokenizer.encode(sentence, add_special_tokens=True, max_length=512, return_tensors='pt')
    return model(input_ids), input_ids

config['id2label']

import torch

sentence = "donald Trump says the United States will no longer pay the WHO"
output, input_ids = do_ner(model, tokenizer, sentence)
out_tensor = output[0]
id2label = config['id2label']

def ids_2_labels(id_tensor):
    return [id2label[str(id_.item())] for id_ in id_tensor]

def input_ids_to_string(tokenizer, input_ids):
    return tokenizer.convert_ids_to_tokens(input_ids.squeeze())

output = list()
for input_, output_ in zip(input_ids, out_tensor):
    label_ids = torch.argmax(output_, 1)
    for i in list(zip(input_ids_to_string(tokenizer, input_ids), ids_2_labels(label_ids))):
        output.append(i)
print(output)
```

# MLflow

## Install MLflow

```
!pip install mlflow
```

## Save BertWrapper model, load and test

```
import os

base_dir = '/content/drive/My Drive/AISC-MLOps/BERT-NER/'
nlp_model_path = model_dir
path = os.path.join(base_dir)
os.chdir(path)

import mlflow
import pip

# Create an `artifacts` dictionary that assigns a unique name to the saved model directory.
# This dictionary will be passed to `mlflow.pyfunc.save_model`, which will copy the model files
# into the new MLflow Model's directory.
artifacts = {
    "nlp_model": nlp_model_path
}

# Define the model class
import mlflow.pyfunc

class BertWrapper(mlflow.pyfunc.PythonModel):

    @staticmethod
    def do_ner(model, tokenizer, sentence):
        # staticmethod so that predict can call it as self.do_ner even when
        # the model is loaded in a fresh process with no notebook globals
        input_ids = tokenizer.encode(sentence, add_special_tokens=True, max_length=512, return_tensors='pt')
        return model(input_ids), input_ids

    def load_context(self, context):
        import os
        import json
        from transformers import BertForTokenClassification, BertTokenizer
        model_dir = context.artifacts["nlp_model"]
        config_file = os.path.join(model_dir, 'config.json')
        with open(config_file, 'r') as f:
            self.config = json.load(f)
        self.model = BertForTokenClassification.from_pretrained(model_dir)
        self.tokenizer = BertTokenizer.from_pretrained(model_dir)

    def predict(self, context, model_input):
        import json
        import torch  # imported here so the served model does not rely on notebook globals

        def get_entities(text):
            output, input_ids = self.do_ner(self.model, self.tokenizer, text)
            out_tensor = output[0]
            id2label = self.config['id2label']

            def ids_2_labels(id_tensor):
                return [id2label[str(id_.item())] for id_ in id_tensor]

            def input_ids_to_string(tokenizer, input_ids):
                return tokenizer.convert_ids_to_tokens(input_ids.squeeze())

            output = list()
            for input_, output_ in zip(input_ids, out_tensor):
                label_ids = torch.argmax(output_, 1)
                for i in list(zip(input_ids_to_string(self.tokenizer, input_ids), ids_2_labels(label_ids))):
                    output.append(i)
            return output

        try:
            ents = model_input.text.apply(get_entities)
            return ents.apply(lambda s: json.dumps(s))
        except TypeError:
            return "DataFrame must contain strings"

# Create a Conda environment for the new MLflow Model that contains the required
# libraries (torch, transformers) as dependencies, as well as the required CloudPickle library
import cloudpickle

# Let's create our own conda environment
conda_env = {
    'channels': ['defaults', 'pytorch'],
    'dependencies': [
        'python=3.6.9',
        {
            'pip': [
                f'pip=={pip.__version__}',
                f'mlflow=={mlflow.__version__}',
                f'cloudpickle=={cloudpickle.__version__}',
                f'torch=={torch.__version__}',
                'transformers',
            ]
        }
    ],
    'name': 'mlflow-env-bert'
}

# Save the MLflow Model
mlflow_pyfunc_model_path = "bert_mlflow_pyfunc"

# remove pre-existing folder
!rm -rf $mlflow_pyfunc_model_path

mlflow.pyfunc.save_model(
    path=mlflow_pyfunc_model_path,
    python_model=BertWrapper(),
    artifacts=artifacts,
    conda_env=conda_env)

# Load the model in `python_function` format
loaded_model = mlflow.pyfunc.load_model(mlflow_pyfunc_model_path)

# Evaluate the model
import pandas as pd
test_predictions = loaded_model.predict(pd.DataFrame(data={'text': [
    'What a beautiful day',
    'That is the will of Parliament and the nation. The British Empire and the French Republic, linked together in their cause and in their need']}))
print(test_predictions)
```
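The token-level output above pairs BERT wordpieces with labels. A small helper to merge the pieces back into word-level predictions (a sketch that is not part of the notebook, assuming the standard `##` continuation prefix and that a word takes the label of its first piece):

```python
def merge_wordpieces(pairs):
    """Merge ('##'-prefixed) wordpiece tokens back into words.
    A word keeps the label of its first piece; special tokens are dropped."""
    words = []
    for tok, lab in pairs:
        if tok in ("[CLS]", "[SEP]", "[PAD]"):
            continue
        if tok.startswith("##") and words:
            words[-1] = (words[-1][0] + tok[2:], words[-1][1])
        else:
            words.append((tok, lab))
    return words

pairs = [("[CLS]", "O"), ("don", "I-PER"), ("##ald", "I-PER"),
         ("Trump", "I-PER"), ("says", "O"), ("[SEP]", "O")]
print(merge_wordpieces(pairs))
# → [('donald', 'I-PER'), ('Trump', 'I-PER'), ('says', 'O')]
```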
``` import os import warnings from datetime import datetime, timedelta from typing import Tuple import matplotlib.pyplot as plt import pandas as pd from dotenv import load_dotenv from prometheus_api_client import MetricSnapshotDataFrame, MetricRangeDataFrame, PrometheusConnect from prometheus_api_client.utils import parse_datetime from skforecast.ForecasterAutoreg import ForecasterAutoreg from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_squared_error plt.style.use('fivethirtyeight') plt.rcParams['lines.linewidth'] = 1.5 warnings.filterwarnings('ignore') load_dotenv() class BtalertIA: def __init__(self, last_minutes_importance: int, regressor=None) -> None: """ Args: last_minutes_importance (int): The last minutes that matter to foreacasting (context) """ self.prom = PrometheusConnect( url=os.getenv('PROMETHEUS_URL'), disable_ssl=True) self.regressor = regressor if regressor is None: self.regressor = RandomForestRegressor( max_depth=40, n_estimators=3, random_state=123, ) self.forecaster = ForecasterAutoreg( regressor=self.regressor, lags=self.minutes_to_step(last_minutes_importance) ) self.original_dataframe = pd.DataFrame() self.data_train = pd.DataFrame() self.data_test = pd.DataFrame() self.predictions = pd.Series() self.value_column = 'value' self.timestamp_column = 'timestamp' def load_metric_as_dataframe(self, start: str, end: str, metric_name: str, alias: str) -> pd.DataFrame: start_time = parse_datetime(start) end_time = parse_datetime(end) original_dataframe = MetricRangeDataFrame( self.prom.custom_query_range( query=metric_name, start_time=start_time, end_time=end_time, step=15) )[['value']] original_dataframe[self.value_column] = [ float(value) for value in original_dataframe[self.value_column]] original_dataframe.rename(columns={'value': alias}, inplace=True) self.value_column = alias self.original_dataframe = original_dataframe return original_dataframe def split_test_train_dataframe(self, minutes_split: int) -> 
Tuple[pd.DataFrame, pd.DataFrame]: steps = self.minutes_to_step(minutes_split) self.data_train = self.original_dataframe[:-steps] self.data_test = self.original_dataframe[-steps:] return self.data_train, self.data_test def minutes_to_step(self, min: int) -> int: return int((min * 60) / 15) def train_model(self) -> None: self.forecaster.fit(y=self.data_train[self.value_column]) def predict(self, minutes_prediction: int) -> pd.Series: self.predictions = self.forecaster.predict(steps=self.minutes_to_step(minutes_prediction)) return self.predictions def plot_graphic(self): fig, ax = plt.subplots(figsize=(18, 12)) self.data_train[self.value_column].plot(ax=ax, label='train') self.data_test[self.value_column].plot(ax=ax, label='test') self.predictions.plot(ax=ax, label='predictions') ax.legend() def get_mean_squared_error(self) -> float: error_mse: float = mean_squared_error( y_true=self.data_test[self.value_column], y_pred=self.predictions ) return error_mse def execute(self, start: str, end: str, metric_name: str, minutes_split: int, minutes_prediction: int, alias: str): self.load_metric_as_dataframe(start, end, metric_name, alias) self.split_test_train_dataframe(minutes_split) self.train_model() self.predict(minutes_prediction) # consts #start = '2022-05-11 07:50:07' end = '2022-05-11 10:47:27' start = '2022-05-11 09:53:16' #end = '2022-05-11 08:03:53' min_split = 25 # teste req_failed_ia = BtalertIA(20) start_time = parse_datetime(start) end_time = parse_datetime(end) original_dataframe = MetricRangeDataFrame( req_failed_ia.prom.custom_query_range( query='btalert_failed_requests_percent', start_time=start_time, end_time=end_time, step=15) )[['value']] original_dataframe['value'] = [float(v) for v in original_dataframe['value']] original_dataframe.plot() req_failed_ia.original_dataframe['t'] = [datetime.fromtimestamp(tt) for tt in req_failed_ia.original_dataframe.index] req_failed_ia.original_dataframe['t'] = 
req_failed_ia.original_dataframe['t'].astype('datetime64[s]') print(req_failed_ia.original_dataframe['t']) req_failed_ia.original_dataframe.set_index(req_failed_ia.original_dataframe['t']) req_failed_ia.original_dataframe.drop(columns='t') req_failed_ia.original_dataframe.head() #req_failed_ia.load_metric_as_dataframe(start, end, 'btalert_failed_requests_percent', 'requests_failed') #req_failed_ia.value_column = 'requests_failed' #req_failed_ia.split_test_train_dataframe(min_split) #req_failed_ia.train_model() #req_failed_ia.predict(5) #req_failed_ia.plot_graphic() req_failed_ia = BtalertIA(10) req_failed_ia.execute(start, end, 'btalert_failed_requests_percent', 10, 25, 'requests_failed') req_failed_ia.plot_graphic() req_failed = req_failed_ia.load_metric_as_dataframe(start, end, 'btalert_failed_requests_percent', 'req_failed') requests_per_second = req_failed_ia.load_metric_as_dataframe(start, end, 'btalert_requests_per_second', 'request_per_sec') #pg_lock_count = req_failed_ia.load_metric_as_dataframe(start, end, 'pg_lock_count') max_cpu = req_failed_ia.load_metric_as_dataframe(start, end, 'max(rate(container_cpu_usage_seconds_total{image="api6-backend_cadastrol-server"}[1m:15s]))', 'cpu_percent') memory = req_failed_ia.load_metric_as_dataframe(start, end, 'container_memory_rss{name="cadastrol-server"} / container_spec_memory_limit_bytes{name="cadastrol-server"}', 'memory_percent') res = pd.concat([req_failed, requests_per_second, max_cpu, memory], axis=1) res.head(10) res.to_csv('dados.csv', sep=';', encoding='utf-8') cor = res.corr() cor import seaborn as sn plot = sn.heatmap(cor, annot=True, fmt='0.1f', linewidths=.6) plot plot = sn.heatmap(cor, annot=True, fmt='0.1f', linewidths=.6) plot ``` ### Lendo dados salvos ``` dados = pd.read_csv('dados.csv', delimiter=';') dados['timestamp'] = [datetime.fromtimestamp(timestamp) for timestamp in dados['timestamp']] dados = dados.set_index(dados['timestamp']) dados = dados.asfreq(freq='15S', method='bfill') 
dados.isnull() dados.fillna(0, inplace=True) #dados.isnull() #dados dados['req_failed'] = dados['req_failed'].div(100) #dados def minutes_to_step(min: int) -> int: return int((min * 60) / 15) regressor = RandomForestRegressor( max_depth=20, n_estimators=3, random_state=123, ) dados = dados.drop(columns='timestamp') ``` ### Gráfico dos dados de requisições falhadas, uso de CPU(%) e uso de memória(%) dos dados coletados ``` fig, ax = plt.subplots(figsize=(24, 12)) dados['req_failed'].plot(ax=ax, label='req_failed') dados['cpu_percent'].plot(ax=ax, label='cpu_percent') dados['memory_percent'].plot(ax=ax, label='memory_percent') ax.legend(prop={'size': 25}) #dados.plot() split_time = minutes_to_step(30) train = dados[:-split_time] test = dados[-split_time:] print(len(train), len(test)) #print(train.head()) predictions = {} for label in ['req_failed', 'cpu_percent', 'memory_percent', 'request_per_sec']: forecaster = ForecasterAutoreg( regressor=regressor, lags=minutes_to_step(30) ) forecaster.fit(y=train[label]) prediction = forecaster.predict(steps=split_time) predictions[label] = prediction ``` ### Gráfico comparativo da predição VS real das requisições por segundo ``` fig, ax = plt.subplots(figsize=(28, 12)) predictions['request_per_sec'].plot(ax=ax, label='pred_request_per_sec') test['request_per_sec'].plot(ax=ax, label='read_request_per_sec') ax.legend(prop={'size': 25}) ``` ### Gráfico da previsão das requisições falhadas, uso de memória(%) e uso de CPU(%) ``` fig, ax = plt.subplots(figsize=(24, 12)) predictions['req_failed'].plot(ax=ax, label='pred_req_failed') predictions['cpu_percent'].plot(ax=ax, label='pred_cpu_percent') predictions['memory_percent'].plot(ax=ax, label='pred_memory_percent') ax.legend(prop={'size': 25}) ``` ### Gráfico dos dados reais no mesmo período da previsão ``` fig, ax = plt.subplots(figsize=(24, 12)) test['req_failed'].plot(ax=ax, label='real_req_failed') test['cpu_percent'].plot(ax=ax, label='real_cpu_percent') 
test['memory_percent'].plot(ax=ax, label='real_memory_percent') ax.legend(prop={'size': 25}) predictions['req_failed'] res = test res['pred_req_failed'] = predictions['req_failed'] res['pred_cpu_percent'] = predictions['cpu_percent'] res['pred_memory_percent'] = predictions['memory_percent'] res['diff_req_failed'] = abs(res['pred_req_failed'] - res['req_failed']) res['diff_cpu_percent'] = abs(res['pred_cpu_percent'] - res['cpu_percent']) res['diff_memory_percent'] = abs(res['pred_memory_percent'] - res['memory_percent']) res.head() desc = res.describe() desc[['diff_req_failed', 'diff_cpu_percent', 'diff_memory_percent']] ```
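A note on the bookkeeping above: the `minutes_to_step` helper converts a duration in minutes into a number of Prometheus samples, assuming the 15-second `step` passed to `custom_query_range`:

```python
def minutes_to_step(minutes: int, step_seconds: int = 15) -> int:
    """Number of samples covering `minutes`, at one sample every `step_seconds`."""
    return int(minutes * 60 / step_seconds)

print(minutes_to_step(30))  # 120 samples cover 30 minutes at 15 s resolution
```

This is why a 30-minute split corresponds to 120 rows of the dataframe, and why changing the scrape step would require changing the conversion as well.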
##### Exercise 5.1

Consider the diagrams on the right in Figure 5.2. Why does the value function jump up for the last two rows in the rear? Why does it drop off for the whole last row on the left? Why are the frontmost values higher in the upper diagrams than in the lower?

The value function jumps up for the last two rows in the rear because sticking on 20 or 21 in blackjack usually results in a win. It drops off for the whole last row on the left because, if the dealer shows an ace, that is bad news for the player. Finally, the frontmost values are higher in the upper diagrams than in the lower because a usable ace makes it less likely that a hit will bust the player.

##### Exercise 5.2

The backup diagram for the Monte Carlo estimation of $Q^\pi$ is similar to the backup diagram for $V^\pi$, but the root is a state-action pair, not just a state. The diagram ends in a terminal state.

##### Some notes

"Without a model (as we had in DP chapter 4)... state values alone are not sufficient. One must explicitly estimate the value of each action in order for the values to be useful in suggesting a policy."

By "model" the author means having the transition probabilities and transition rewards available at hand.

"For policy evaluation to work for action values, we must assure continual exploration. One way to do this is by specifying that the first step of each episode starts at a state-action pair, and that every such pair has a nonzero probability of being selected as the start. This guarantees that all state-action pairs will be visited an infinite number of times in the limit of an infinite number of episodes. We call this the assumption of exploring starts."

Has the following been proved for Monte Carlo ES?

"Convergence to this optimal fixed point seems inevitable as the changes to the action-value function decrease over time, but has not yet been formally proved. In our opinion, this is one of the most fundamental open questions in reinforcement learning."
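As a toy illustration of the point that Monte Carlo evaluation is just averaging sampled returns (my own construction, not from the book): estimate the value of a single state whose episode return is $+1$ with probability $0.7$ and $-1$ otherwise, so the true value is $V = 0.7 - 0.3 = 0.4$.

```python
import random

random.seed(0)

def sample_return():
    # One "episode": terminal return +1 w.p. 0.7, else -1
    return 1 if random.random() < 0.7 else -1

# Incremental Monte Carlo estimate of V(s): V <- V + (G - V) / n
V, n = 0.0, 0
for _ in range(100_000):
    n += 1
    V += (sample_return() - V) / n

print(round(V, 2))  # close to the true value 0.4
```

With 100,000 sampled episodes the estimate lands within a couple of hundredths of the true value, which is the sample-average convergence the chapter relies on.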
##### Questions

**Question**: In 5.6, in Figure 5.7 (c), why is there a 1 in the numerator of the update to $w$?

It's because $\pi(s,a)$ is a deterministic policy!

Why do we take $\tau$ to be the latest time at which the action taken was non-greedy? Is it because our estimate of $Q^\pi$ only improves for non-greedy actions, as stated in the section?

**Question**: In 5.4, the conditions for the policy improvement theorem require optimal substructure, right? So even though Monte Carlo is more robust to violations of the Markov property, since it doesn't bootstrap $V$ or $Q$, we are still assuming that greedy updates in GPI (Generalized Policy Iteration) will allow us to arrive at the optimal action-value function due to the Markov property?

##### Exercise 5.3

What is the Monte Carlo estimate analogous to (5.3) for action values, given returns generated using $\pi'$?

I'm not sure, but I think it would be something like:

$$Q(s,a) = \frac{\sum_i^{n_s} \frac{p_i(s,a)}{p_i'(s,a)} R_i(s,a) }{ \sum_i^{n_s} \frac{p_i(s,a)}{p_i'(s,a)} }$$

where $R_i(s,a)$ is the return following state $s$ and action $a$, and $p_i(s_t, a_t) = P^{a_t}_{s_t, s_{t+1}} \prod_{k=t+1}^{T_i(s_t, a_t) - 1} \pi(s_k, a_k)P^{a_k}_{s_k, s_{k+1}}$.

##### Exercise 5.4

See the code in `chapter5_racetrack` for the solution!

##### Exercise 5.5

Modify the algorithm for first-visit MC policy evaluation (Figure 5.1) to use the incremental implementation for stationary averages described in Section 2.5.

We just need to update the algorithm so that $Returns(s)$ is a 1x1 array for each $s \in S$. Before part (b) of the algorithm, we need to initialize $k = 0$, and in the loop in part (b) we do:

- $Returns(s) \leftarrow Returns(s) + \frac{1}{k + 1}[R - Returns(s)]$
- $k \leftarrow k + 1$

##### Exercise 5.6

Derive the weighted-average update rule (5.5) from (5.4). Follow the pattern of the derivation of the unweighted rule (2.4) from (2.1).
We have

$
\begin{equation}
\begin{split}
V_{n+1} =& \frac{\sum_{k=1}^{n+1} w_k R_k }{\sum_{k=1}^{n+1} w_k}\\
=& \frac{w_{n+1} R_{n+1} + \sum_{k=1}^n w_k R_k }{W_{n+1}}
\end{split}
\end{equation}
$

where $W_{n+1} = W_n + w_{n+1}$ and $W_0 = 0$. Then we have:

$
\begin{equation}
\begin{split}
V_{n+1} =& \frac{1}{W_{n+1}} [w_{n+1}R_{n+1} + V_n W_n]\\
=& \frac{1}{W_{n+1}} [w_{n+1}R_{n+1} + V_n (W_{n+1} - w_{n+1})]\\
=& V_n + \frac{w_{n+1}}{W_{n+1}} [R_{n+1} - V_n ]
\end{split}
\end{equation}
$

##### Exercise 5.7

Modify the off-policy Monte Carlo control algorithm (Figure 5.7) to use the method described above for incrementally computing weighted averages.

Before the repeat loop, we need to initialize $W = 0$. Then in the repeat loop we do:

- get $w$ and $t$ as usual
- delete the lines for $N(s,a)$ and $D(s,a)$
- $W \leftarrow w + W$
- $Q(s,a) \leftarrow Q(s,a) + \frac{w}{W} [R_t - Q(s,a)]$
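The incremental weighted-average rule derived for Exercise 5.6 can be checked numerically against the direct weighted average (a quick NumPy sanity check, not part of the book):

```python
import numpy as np

rng = np.random.default_rng(42)
w = rng.random(50) + 0.01   # positive importance-sampling weights
R = rng.normal(size=50)     # sampled returns

# Incremental form: V <- V + (w_k / W_k) * (R_k - V), with W_k the running weight sum
V, W = 0.0, 0.0
for w_k, R_k in zip(w, R):
    W += w_k
    V += (w_k / W) * (R_k - V)

# Direct weighted average
V_direct = np.sum(w * R) / np.sum(w)
assert np.isclose(V, V_direct)
```

The same identity is what makes the Exercise 5.7 update $Q(s,a) \leftarrow Q(s,a) + \frac{w}{W}[R_t - Q(s,a)]$ equivalent to maintaining the $N(s,a)/D(s,a)$ ratio.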
``` %load_ext autoreload import numpy as np import os import matplotlib.pyplot as plt import pickle from enterprise import constants as const from enterprise.signals import parameter from enterprise.signals import selections from enterprise.signals import signal_base from enterprise.signals import white_signals from enterprise.signals import gp_signals from enterprise.signals import deterministic_signals from enterprise.signals import utils from utils import models from utils import hypermod from utils.sample_helpers import JumpProposal, get_parameter_groups from utils import sample_utils as su from PTMCMCSampler.PTMCMCSampler import PTSampler as ptmcmc from acor import acor %matplotlib inline %autoreload 2 ``` # Read in data ``` ephem = 'DE436' datadir = '/home/pbaker/nanograv/data/' # read in data pickles filename = datadir + 'nano9_{}.pkl'.format(ephem) with open(filename, "rb") as f: psrs = pickle.load(f) filename = datadir + 'nano9_setpars.pkl' with open(filename, "rb") as f: noise_dict = pickle.load(f) #psr_use = models.which_psrs(psrs, slice_yr, 3) # select pulsars psr_9yr = [ # the NG 9yr pulsars 'J1713+0747', 'J1909-3744', 'J1640+2224', 'J1600-3053', 'J2317+1439', 'J1918-0642', 'J1614-2230', 'J1744-1134', 'J0030+0451', 'J2145-0750', 'B1855+09', 'J1853+1303', 'J0613-0200', 'J1455-3330', 'J1741+1351', 'J2010-1323', 'J1024-0719', 'J1012+5307', ] psrs = [p for p in psrs if p.name in psr_9yr] ``` # setup models ## custom BWM w/ k param ``` @signal_base.function def bwm_delay(toas, pos, log10_h=-14.0, cos_gwtheta=0.0, gwphi=0.0, gwpol=0.0, t0=55000, psrk=1, antenna_pattern_fn=None): """ Function that calculates the earth-term gravitational-wave burst-with-memory signal, as described in: Seto et al, van haasteren and Levin, phsirkov et al, Cordes and Jenet. This version uses the F+/Fx polarization modes, as verified with the Continuous Wave and Anisotropy papers. 
:param toas: Time-of-arrival measurements [s] :param pos: Unit vector from Earth to pulsar :param log10_h: log10 of GW strain :param cos_gwtheta: Cosine of GW polar angle :param gwphi: GW azimuthal polar angle [rad] :param gwpol: GW polarization angle :param t0: Burst central time [day] :param antenna_pattern_fn: User defined function that takes `pos`, `gwtheta`, `gwphi` as arguments and returns (fplus, fcross) :return: the waveform as induced timing residuals (seconds) """ # convert h = 10**log10_h gwtheta = np.arccos(cos_gwtheta) t0 *= const.day # antenna patterns if antenna_pattern_fn is None: apc = utils.create_gw_antenna_pattern(pos, gwtheta, gwphi) else: apc = antenna_pattern_fn(pos, gwtheta, gwphi) # grab fplus, fcross fp, fc = apc[0], apc[1] # combined polarization pol = np.cos(2*gwpol)*fp + np.sin(2*gwpol)*fc # Define the heaviside function heaviside = lambda x: 0.5 * (np.sign(x) + 1) k = np.rint(psrk) # Return the time-series for the pulsar return k * pol * h * heaviside(toas-t0) * (toas-t0) ``` ## build PTA ``` outdir = '/home/pbaker/nanograv/bwm/dropout/' !mkdir -p $outdir # find the maximum time span to set frequency sampling tmin = np.min([p.toas.min() for p in psrs]) tmax = np.max([p.toas.max() for p in psrs]) Tspan = tmax - tmin print("Tspan = {:f} sec ~ {:.2f} yr".format(Tspan, Tspan/const.yr)) # find clipped prior range for bwm_t0 clip = 0.05 * Tspan t0min = (tmin + 2*clip)/const.day # don't search in first 10% t0max = (tmax - clip)/const.day # don't search in last 5% print("search for t0 in [{:.1f}, {:.1f}] MJD".format(t0min, t0max)) anomaly = { 'bwm_costheta':0.10, 'bwm_log10_A':-12.77, 'bwm_phi':1.15, 'bwm_pol':2.81, 'bwm_t0':55421.59, } # White Noise selection = selections.Selection(selections.by_backend) efac = parameter.Constant() equad = parameter.Constant() ecorr = parameter.Constant() ef = white_signals.MeasurementNoise(efac=efac, selection=selection) eq = white_signals.EquadNoise(log10_equad=equad, selection=selection) ec = 
white_signals.EcorrKernelNoise(log10_ecorr=ecorr, selection=selection) wn = ef + eq + ec # Red Noise rn_log10_A = parameter.Uniform(-20, -11) rn_gamma = parameter.Uniform(0, 7) rn_powlaw = utils.powerlaw(log10_A=rn_log10_A, gamma=rn_gamma) rn = gp_signals.FourierBasisGP(rn_powlaw, components=30, Tspan=Tspan) # BWM signal name = 'bwm' amp_name = '{}_log10_A'.format(name) bwm_log10_A = parameter.Uniform(-18, -12)(amp_name) pol_name = '{}_pol'.format(name) pol = parameter.Constant(anomaly[pol_name])(pol_name) t0_name = '{}_t0'.format(name) t0 = parameter.Constant(anomaly[t0_name])(t0_name) costh_name = '{}_costheta'.format(name) phi_name = '{}_phi'.format(name) costh = parameter.Constant(anomaly[costh_name])(costh_name) phi = parameter.Constant(anomaly[phi_name])(phi_name) k = parameter.Uniform(0,1) # not common, one per PSR bwm_wf = bwm_delay(log10_h=bwm_log10_A, t0=t0, cos_gwtheta=costh, gwphi=phi, gwpol=pol, psrk=k) bwm = deterministic_signals.Deterministic(bwm_wf, name=name) # Timing Model tm = gp_signals.TimingModel(use_svd=True) mod = tm + wn + rn + bwm pta = signal_base.PTA([mod(psr) for psr in psrs]) pta.set_default_params(noise_dict) summary = pta.summary() print(summary) ``` # Sample ## initial point and covariance matrix ``` x0 = np.hstack([noise_dict[p.name] if p.name in noise_dict.keys() else p.sample() for p in pta.params]) # initial point ndim = len(x0) # initial jump covariance matrix # set initial cov stdev to (starting order of magnitude)/10 stdev = np.array([10**np.floor(np.log10(abs(x)))/10 for x in x0]) cov = np.diag(stdev**2) ``` ## sampling groups ``` # generate custom sampling groups groups = [list(range(ndim))] # all params # pulsar noise groups (RN) for psr in psrs: this_group = [pta.param_names.index(par) for par in pta.param_names if psr.name in par] groups.append(this_group) # all k params this_group = [] for par in pta.param_names: if '_psrk' in par: this_group.append(pta.param_names.index(par)) groups.append(this_group) # bwm params 
this_group = [pta.param_names.index('bwm_log10_A')] for ii in range(2): groups.append(this_group) this_group = [pta.param_names.index(par) for par in pta.param_names if 'bwm_' in par] groups.append(this_group) ``` ## initialize sampler object ``` sampler = ptmcmc(ndim, pta.get_lnlikelihood, pta.get_lnprior, cov, groups=groups, outDir=outdir, resume=True) sumfile = os.path.join(outdir, 'summary.txt') with open(sumfile, 'w') as f: f.write(pta.summary()) outfile = os.path.join(outdir, 'params.txt') with open(outfile, 'w') as f: for pname in pta.param_names: f.write(pname+'\n') # additional proposals full_prior = su.build_prior_draw(pta, pta.param_names, name='full_prior') sampler.addProposalToCycle(full_prior, 5) # RN empirical #from utils.sample_utils import EmpiricalDistribution2D #with open("/home/pbaker/nanograv/data/nano11_RNdistr.pkl", "rb") as f: # distr = pickle.load(f) #Non4 = len(distr) // 4 #RN_emp = su.EmpDistrDraw(distr, pta.param_names, Nmax=Non4, name='RN_empirical') #sampler.addProposalToCycle(RN_emp, 10) ``` ## sample it! ``` thin = 50 Nsamp = 100000 * 50 sampler.sample(x0, Nsamp, SCAMweight=30, AMweight=20, DEweight=50, burn=int(5e4), thin=thin) ```
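Stripped of the antenna-pattern geometry, the signature that `bwm_delay` computes is a residual ramp that switches on at the burst epoch: zero before `t0`, then growing linearly in time scaled by the strain `h`. A minimal sketch of that core (polarization and sky-location factors omitted):

```python
import numpy as np

def bwm_ramp(toas, h, t0):
    """Memory ramp: 0 before t0, h * (t - t0) after (geometry factors omitted)."""
    heaviside = 0.5 * (np.sign(toas - t0) + 1)
    return h * heaviside * (toas - t0)

toas = np.array([0.0, 10.0, 20.0, 30.0])
res = bwm_ramp(toas, h=1e-14, t0=15.0)
print(res)  # [0, 0, 5e-14, 1.5e-13]
```

The per-pulsar `psrk` parameter sampled above simply multiplies this ramp by a rounded 0/1 factor, which is what lets the dropout analysis switch the signal off pulsar by pulsar.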
<a href="https://colab.research.google.com/github/unicamp-dl/IA025_2022S1/blob/main/ex02/Fernanda_Caldas/FernandaCaldas_Semana2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Reference notebook

Name: Fernanda Caldas

## Instructions

This exercise consists of writing code to train a linear model with SGD and visualizing how the loss function varies as a function of the network's weights. The implementation is considered correct only if it passes the 3 asserts throughout this notebook.

## Linear Regression Problem

Fitting a line to a set of points to check for a linear relationship is a very old, thoroughly studied problem that remains ubiquitous today. When the fit is approached as a **numerical optimization** problem, it underlies a good part of the **concepts behind neural networks**, and we will explore it here as an introduction to them.

The linear regression model we will use can be seen as a neural network with a single layer and a linear activation function.

## Dataset: Iris flowers

We will use two attributes from the Iris flower dataset [Wikipedia-Iris_flower_data_set](https://en.wikipedia.org/wiki/Iris_flower_data_set):

* sepal length, and
* petal length.

The idea is to predict petal length from sepal length. We use only a single feature of the object so that the parameter search space is easy to visualize. We will use the 50 samples of the versicolor variety.
![](https://raw.githubusercontent.com/robertoalotufo/files/master/figures/iris_petals_sepals.png) ## Dados: leitura e visualização ``` %matplotlib inline import matplotlib.pyplot as plt import ipywidgets as widgets from IPython import display import numpy as np import pandas as pd from sklearn.datasets import load_iris import time iris = load_iris() data = iris.data[iris.target==1,::2] # comprimento das sépalas e pétalas, indices 0 e 2 x_in = data[:,0:1] y_in = data[:,1:2] iris_pd = pd.DataFrame(x_in, columns=['x_in']) iris_pd['y_in'] = y_in iris_pd.head() ``` ## Visualização dos dados `x_in` e `y_in` e normalizados ``` x = x_in - x_in.min() x /= x.max() # normalização y = y_in - y_in.min() y /= y.max() fig = plt.figure(figsize=(16,5)) ax_in = fig.add_subplot(1,2,1) ax_in.scatter(x_in, y_in) ax_in.set_xlabel('Comprimento sepalas') ax_in.set_ylabel('Comprimento petalas') ax_n = fig.add_subplot(1,2,2) ax_n.scatter(x, y) ax_n.set_xlabel('Comprimento normalizado sepalas') ax_n.set_ylabel('Comprimento normalizado petalas'); ``` ## Reta de ajuste A equação da reta no plano necessita de dois parâmetros, aqui denominados $w_0$ (*bias*) e inclinação $w_1$. Veja figura: <img src="https://raw.githubusercontent.com/robertoalotufo/files/master/figures/linhareta.png" width="300pt"> A reta de ajuste será dada por: $$ \hat{y} = w_0 + w_1 x $$ onde * $w_1$ é o coeficiente angular da reta e * $w_0$ é a interseção do eixo vertical quando x é igual a zero, também denominado de *bias*. * $x$ é a variável de entrada (comprimento das sépalas) e * $\hat{y}$ é a predição (comprimento estimado das pétalas). 
## Representação gráfica da equação linear via neurônio $ \hat{y} = 1 w_0 + x_0 w_1 $ Temos: - 1 atributo de entrada: $x_0$ - 2 parâmetros para serem ajustados (treinados) $w_0$ e $w_1$ - 1 classe de saída $\hat{y}$ <img src="https://raw.githubusercontent.com/robertoalotufo/files/master/figures/RegressaoLinearNeuronio.png" width="300pt"> $$ \hat{y} = w_0 + w_1 x $$ $$ \mathbf{\hat{y}} = \mathbf{w} \mathbf{x} $$ ### Função Custo ou de Perda (MSE - Mean Square Error) <img src="https://raw.githubusercontent.com/robertoalotufo/files/master//figures/Loss_MSE.png" width = "600pt"> A função de custo depende do conjunto de treinamento ($y_i$) e dos valores de predição ($\hat{y_i}$): $$ J(\hat{y_i},y_i) = \frac{1}{M} \sum_{i=0}^{M-1} (\hat{y_i} - y_i)^2 $$ . ## Laço de minimização via gradiente descendente O código da próxima célula é a parte principal deste notebook. É aqui que a minimização é feita. É aqui que dizemos que estamos fazendo o *fit*, ou o treinamento do sistema para encontrar o parâmetro $\mathbf{W}$ que minimiza a função de perda $J$. Acompanhamos a convergência da minimização pelo valor da perda a cada iteração, plotando o vetor `J_history`. O esquema da otimização é representado pelo diagrama a seguir: <img src="https://raw.githubusercontent.com/robertoalotufo/files/master/figures/RegressaoLinear_Otimizacao.png" width = "600pt"> e é implementado pela próxima célula de código: ## Funções: Custo, Gradiente Descendente ``` # É importante fixar as seeds para passar nos asserts abaixo. import random import numpy as np import torch from torch import nn, optim random.seed(123) np.random.seed(123) class Model(): def __init__(self, n_in: int, n_out: int): self.w = torch.zeros((n_out,n_in), requires_grad=True) def forward(self, x): y_pred = torch.matmul(x, torch.t(model.w)) return y_pred def train(model, x, y, learning_rate: float, n_epochs: int): """Train a linear model with SGD. 
Returns: loss_history: a np.array of shape (n_epochs,) w_history: a np.array of shape (n_epochs, 2) """ n_samples = len(x) X = torch.cat([torch.ones(size=(n_samples,1)), torch.FloatTensor(x)],dim=1) Y = torch.FloatTensor(y) optimizer = torch.optim.SGD([model.w], lr=learning_rate) loss_func = torch.nn.MSELoss(reduction='mean') loss_history = [] w_history = [] for epoch in range(n_epochs): w_history.append([(model.w).detach().numpy()[0][0],(model.w).detach().numpy()[0][1]]) loss = loss_func(model.forward(X), Y) optimizer.zero_grad() (model.w).retain_grad() loss.backward() model.w = model.w - learning_rate*(model.w).grad optimizer.step() loss_history.append(loss.item()) return np.array(loss_history), np.array(w_history) ``` ### Testando as funções ``` model = Model(2, 1) # duas entradas (1 + x0) e uma saída y_pred loss_history, w_history = train(model=model, x=x, y=y, learning_rate=0.5, n_epochs=21) # Assert do histórico de losses target_loss_history = np.array( [0.40907029, 0.0559969 , 0.03208511, 0.02972902, 0.02885257, 0.02813922, 0.02749694, 0.02691416, 0.02638508, 0.02590473, 0.02546862, 0.02507267, 0.02471319, 0.02438681, 0.0240905 , 0.02382147, 0.02357722, 0.02335547, 0.02315414, 0.02297135, 0.0228054]) assert np.allclose(loss_history, target_loss_history, atol=1e-6) # Assert de histórico de pesos da rede target_w_history = np.array( [[0., 0. 
], [0.6, 0.336644 ], [0.4339223, 0.27542454], [0.4641239, 0.31466085], [0.44476733, 0.3271254 ], [0.43861815, 0.3453676 ], [0.42961866, 0.3611236 ], [0.4218457, 0.37655178], [0.41423446, 0.3911463 ], [0.40703452, 0.4050796 ], [0.40016073, 0.41834888], [0.39361456, 0.43099412], [0.38737625, 0.44304258], [0.38143232, 0.4545229 ], [0.3757687, 0.4654618 ], [0.37037218, 0.4758848 ], [0.36523017, 0.48581624], [0.36033067, 0.49527928], [0.35566223, 0.50429606], [0.35121396, 0.5128876 ], [0.34697545, 0.52107394]]) assert np.allclose(w_history, target_w_history, atol=1e-6) ``` # Função de cálculo do grid de custos ## 1) Laço ``` def compute_loss_grid(x, y, w_0_grid, w_1_grid): """Returns: loss_grid: an array with the same shape of w_0_grid (or w_1_grid). """ n_samples = len(x) w0, w1 = np.meshgrid(w_0_grid, w_1_grid) loss_grid = np.zeros((len(w_0_grid),len(w_1_grid))) for i in range(n_samples): y_pred_i = w0 + w1*x[i] loss_grid += (y_pred_i - y[i])**2 loss_grid /= n_samples return loss_grid wmin = w_history.min(axis=0) wmax = w_history.max(axis=0) D = wmax - wmin wmin -= D wmax += D w_0_grid = np.linspace(wmin[0], wmax[0], 100) w_1_grid = np.linspace(wmin[1], wmax[1], 100) %%timeit loss_grid = compute_loss_grid(x, y, w_0_grid, w_1_grid) ``` ### Testando a função ``` !gsutil cp gs://unicamp-dl/ia025a_2022s1/aula2/target_loss_grid.npy . target_loss_grid = np.load('target_loss_grid.npy') assert np.allclose(loss_grid, target_loss_grid, atol=1e-6) ``` ## 2) Matricial: Kronecker ``` def compute_loss_grid(x, y, w_0_grid, w_1_grid): """Returns: loss_grid: an array with the same shape of w_0_grid (or w_1_grid). """ # Escreva seu código aqui. 
n_samples = len(x) w0, w1 = np.meshgrid(w_0_grid, w_1_grid) W0 = np.kron(np.ones((n_samples,1)),w0) W1 = np.kron(np.ones((n_samples,1)),w1) X = np.kron(x,np.ones((w_0_grid.shape[0],w_1_grid.shape[0]))) Y = np.kron(y,np.ones((w_0_grid.shape[0],w_1_grid.shape[0]))) Y_pred = W0 + W1*X aux = ((Y_pred - Y)**2/n_samples).reshape((n_samples,w_0_grid.shape[0],w_1_grid.shape[0])) loss_grid = aux.sum(axis=0) return loss_grid %%timeit loss_grid = compute_loss_grid(x, y, w_0_grid, w_1_grid) assert np.allclose(loss_grid, target_loss_grid, atol=1e-6) ``` ## 3) Matricial: Numpy.Tile ``` def compute_loss_grid(x, y, w_0_grid, w_1_grid): """Returns: loss_grid: an array with the same shape of w_0_grid (or w_1_grid). """ # Escreva seu código aqui. n_samples = len(x) w0, w1 = np.meshgrid(w_0_grid, w_1_grid) W0 = np.tile(w0,(n_samples,1)) W1 = np.tile(w1,(n_samples,1)) X = np.tile(x,(len(w_0_grid)*len(w_1_grid))).reshape(len(x)*len(w_1_grid),len(w_0_grid)) Y = np.tile(y,(len(w_0_grid)*len(w_1_grid))).reshape(len(x)*len(w_1_grid),len(w_0_grid)) Y_pred = W0 + W1*X aux = ((Y_pred - Y)**2/n_samples).reshape((n_samples,w_0_grid.shape[0],w_1_grid.shape[0])) loss_grid = aux.sum(axis=0) return loss_grid %%timeit loss_grid = compute_loss_grid(x, y, w_0_grid, w_1_grid) assert np.allclose(loss_grid, target_loss_grid, atol=1e-6) ``` ## Funcão de Plot ``` def show_plots(x, y, w_0_grid, w_1_grid, loss_grid, loss_history, w_history, sleep=0.3): n_samples = y.shape[0] # valor ótimo, solução analítica # ------------------------------ x_bias = np.hstack([np.ones((n_samples, 1)), x]) w_opt = (np.linalg.inv((x_bias.T).dot(x_bias)).dot(x_bias.T)).dot(y) x_all = np.linspace(x.min(), x.max(), 100).reshape(100, 1) x_all_bias = np.hstack([np.ones((100, 1)), x_all]) result_opt = x_all_bias.dot(w_opt) # Predição do valor ótimo # Gráficos: # -------- fig = plt.figure(figsize=(18, 6)) ax_grid = fig.add_subplot(1, 3, 1) # Grid de losses ax_loss = fig.add_subplot(1, 3, 2) # Função perda ax_loss.plot(loss_history) 
ax_loss.set_title('Perda', fontsize=15) ax_loss.set_xlabel('epochs', fontsize=10) ax_loss.set_ylabel('MSE', fontsize=10) ax_grid.pcolormesh(w_0_grid, w_1_grid, loss_grid, cmap=plt.cm.coolwarm) ax_grid.contour(w_0_grid, w_1_grid, loss_grid, 20) ax_grid.scatter(w_opt[0], w_opt[1], marker='x', c='w') # Solução analítica. ax_grid.set_title('W', fontsize=15) ax_grid.set_xlabel('w0', fontsize=10) ax_grid.set_ylabel('w1', fontsize=10) # Plot dinâmico # ------------- for i, (loss, w) in enumerate(zip(loss_history, w_history)): ax_loss.scatter(i, loss) ax_grid.scatter(w[0], w[1], c='r', marker='o') display.display(fig) display.clear_output(wait=True) time.sleep(sleep) ``` ## Plotagem iterativa do gradiente descendente, reta ajuste, parâmetros, função perda ``` %matplotlib inline try: show_plots(x, y, w_0_grid, w_1_grid, loss_grid, loss_history, w_history, sleep=0.01) except KeyboardInterrupt: pass ```
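The analytic optimum marked with a white "x" in `show_plots` comes from the normal equations, $\mathbf{w}^* = (X^T X)^{-1} X^T y$. On noise-free synthetic data the recipe recovers the generating line exactly:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
y = 0.3 + 0.5 * x                       # line with bias 0.3 and slope 0.5

X = np.hstack([np.ones_like(x), x])     # prepend the bias column
w_opt = np.linalg.inv(X.T @ X) @ X.T @ y

print(w_opt.ravel())  # ~ [0.3, 0.5]
```

SGD converges toward this same point; the plots above show its trajectory across the loss grid approaching the analytic solution.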
# Subscriber with JSON export

__NOTE__: this is an __outdated__ notebook: some of the functions used here are considered __private__ to QCoDeS and are not intended for use by users (for example, `DataSet.subscribe`). This notebook will be re-written in the future.

```
import logging
import copy

import numpy as np
import json

from qcodes import load_or_create_experiment, new_data_set, ParamSpec
from qcodes.dataset.json_exporter import \
    json_template_heatmap, json_template_linear, \
    export_data_as_json_heatmap, export_data_as_json_linear

logging.basicConfig(level="INFO")

exp = load_or_create_experiment('json-export-subscriber-test', 'no-sample')

dataSet = new_data_set("test",
                       exp_id=exp.exp_id,
                       specs=[ParamSpec("x", "numeric"),
                              ParamSpec("y", "numeric")])
dataSet.mark_started()

mystate = {}
mystate['json'] = copy.deepcopy(json_template_linear)
mystate['json']['x']['name'] = 'xname'
mystate['json']['x']['unit'] = 'xunit'
mystate['json']['x']['full_name'] = 'xfullname'
mystate['json']['y']['name'] = 'yname'
mystate['json']['y']['unit'] = 'yunit'
mystate['json']['y']['full_name'] = 'yfullname'

sub_id = dataSet.subscribe(export_data_as_json_linear, min_wait=0, min_count=20,
                           state=mystate, callback_kwargs={'location': 'foo'})

s = dataSet.subscribers[sub_id]

mystate

for x in range(100):
    y = x
    dataSet.add_result({"x": x, "y": y})
dataSet.mark_completed()

mystate

mystate = {}
xlen = 5
ylen = 10
mystate['json'] = json_template_heatmap.copy()
mystate['data'] = {}
mystate['data']['xlen'] = xlen
mystate['data']['ylen'] = ylen
# NB: np.object was removed in recent NumPy releases; use the builtin `object`
mystate['data']['x'] = np.zeros((xlen*ylen), dtype=object)
mystate['data']['x'][:] = None
mystate['data']['y'] = np.zeros((xlen*ylen), dtype=object)
mystate['data']['y'][:] = None
mystate['data']['z'] = np.zeros((xlen*ylen), dtype=object)
mystate['data']['z'][:] = None
mystate['data']['location'] = 0

dataSet_hm = new_data_set("test",
                          exp_id=exp.exp_id,
                          specs=[ParamSpec("x", "numeric"),
                                 ParamSpec("y", "numeric"),
                                 ParamSpec("z", "numeric")])
dataSet_hm.mark_started()

sub_id = dataSet_hm.subscribe(export_data_as_json_heatmap, min_wait=0, min_count=20,
                              state=mystate, callback_kwargs={'location': './foo'})

for x in range(xlen):
    for y in range(ylen):
        z = x + y
        dataSet_hm.add_result({"x": x, "y": y, 'z': z})
dataSet_hm.mark_completed()

mystate['json']
```
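The QCoDeS-specific machinery above boils down to a publish/subscribe pattern: the data set buffers incoming rows and, once at least `min_count` new points have arrived, hands the fresh rows plus a mutable `state` dict to each callback. A minimal plain-Python sketch of that pattern (the names `MiniDataSet` and `collect_xy` are made up for illustration — this is not the QCoDeS API):

```python
# Minimal publish/subscribe sketch: batched delivery into a mutable state dict.
class MiniDataSet:
    def __init__(self):
        self._results = []
        # each subscriber: [callback, min_count, state, kwargs, rows_already_seen]
        self._subscribers = []

    def subscribe(self, callback, min_count=1, state=None, callback_kwargs=None):
        self._subscribers.append([callback, min_count, state, callback_kwargs or {}, 0])
        return len(self._subscribers) - 1

    def add_result(self, row):
        self._results.append(row)
        for sub in self._subscribers:
            callback, min_count, state, kwargs, seen = sub
            if len(self._results) - seen >= min_count:
                # deliver only the rows added since the last call
                callback(self._results[seen:], len(self._results), state, **kwargs)
                sub[4] = len(self._results)


def collect_xy(new_rows, total_length, state, location=None):
    # Mimics export_data_as_json_linear: accumulate points inside `state`.
    state.setdefault("x", []).extend(r["x"] for r in new_rows)
    state.setdefault("y", []).extend(r["y"] for r in new_rows)
    state["location"] = location


ds = MiniDataSet()
mystate = {}
ds.subscribe(collect_xy, min_count=20, state=mystate,
             callback_kwargs={"location": "foo"})
for x in range(100):
    ds.add_result({"x": x, "y": x})

print(len(mystate["x"]))  # → 100, delivered in five batches of 20
```

The key design point, shared with the notebook above, is that the subscriber never owns the data: it mutates the externally held `state`, so the exporter can serialize `state` to JSON at any time.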
##### Copyright 2020 The Cirq Developers

```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# Quantum simulation of electronic structure

<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://quantumai.google/cirq/tutorials/educators/chemistry"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/educators/chemistry.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/educators/chemistry.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/educators/chemistry.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
  </td>
</table>

The quantum simulation of electronic structure is one of the most promising applications of quantum computers. It has potential applications to materials and drug design. This tutorial provides an introduction to OpenFermion, a library for obtaining and manipulating representations of fermionic and qubit Hamiltonians as well as compiling quantum simulation circuits in Cirq.
```
try:
    import openfermion as of
    import openfermionpyscf as ofpyscf
except ImportError:
    print("Installing OpenFermion and OpenFermion-PySCF...")
    !pip install openfermion openfermionpyscf --quiet

import numpy as np
from scipy.sparse import linalg

import cirq
import openfermion as of
import openfermionpyscf as ofpyscf
```

## Background

A system of $N$ fermionic modes is described by a set of fermionic *annihilation operators* $\{a_p\}_{p=0}^{N-1}$ satisfying the *canonical anticommutation relations*

$$\begin{aligned}
\{a_p, a_q\} &= 0, \\
\{a_p, a^\dagger_q\} &= \delta_{pq},
\end{aligned}$$

where $\{A, B\} := AB + BA$. The adjoint $a^\dagger_p$ of an annihilation operator $a_p$ is called a *creation operator*, and we refer to creation and annihilation operators as fermionic *ladder operators*.

The canonical anticommutation relations impose a number of consequences on the structure of the vector space on which the ladder operators act; see [Michael Nielsen's notes](http://michaelnielsen.org/blog/archive/notes/fermions_and_jordan_wigner.pdf) for a good discussion.

The electronic structure Hamiltonian is commonly written in the form

$$
\sum_{pq} T_{pq} a_p^\dagger a_q + \sum_{pqrs} V_{pqrs} a_p^\dagger a_q^\dagger a_r a_s
$$

where the $T_{pq}$ and $V_{pqrs}$ are coefficients which depend on the physical system being described. We are interested in calculating the lowest eigenvalue of the Hamiltonian. This eigenvalue is also called the ground state energy.
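The anticommutation relations can also be checked numerically without any quantum-chemistry library, by building ladder operators explicitly as Kronecker products in the Jordan-Wigner convention (the operator ordering used here is an assumption for illustration; OpenFermion handles these conventions internally):

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sminus = np.array([[0.0, 1.0],
                   [0.0, 0.0]])  # single-mode annihilation operator

def jw_annihilation(p, n):
    """a_p on n modes: Z string on modes < p, sigma^- on mode p, identity after."""
    ops = [Z] * p + [sminus] + [I2] * (n - p - 1)
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 3
a0 = jw_annihilation(0, n)
a1 = jw_annihilation(1, n)

anti = lambda A, B: A @ B + B @ A

assert np.allclose(anti(a0, a1), 0)                      # {a_0, a_1} = 0
assert np.allclose(anti(a0, a1.conj().T), 0)             # {a_0, a_1^dagger} = 0
assert np.allclose(anti(a0, a0.conj().T), np.eye(2**n))  # {a_0, a_0^dagger} = 1
print("canonical anticommutation relations hold")
```

The $Z$ string in front of mode $p$ is exactly what supplies the minus signs that make operators on *different* modes anticommute rather than commute.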
## FermionOperator and QubitOperator

### `openfermion.FermionOperator`

- Stores a weighted sum (linear combination) of fermionic terms
- A fermionic term is a product of ladder operators
- Examples of things that can be represented by `FermionOperator`:

$$
\begin{align}
& a_1 \nonumber \\
& 1.7 a^\dagger_3 \nonumber \\
&-1.7 \, a^\dagger_3 a_1 \nonumber \\
&(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 \nonumber \\
&(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 - 1.7 \, a^\dagger_3 a_1 \nonumber
\end{align}
$$

- A fermionic term is internally represented as a tuple of tuples
- Each inner tuple represents a single ladder operator as (index, action)
- Examples of fermionic terms:

$$
\begin{align}
I & \mapsto () \nonumber \\
a_1 & \mapsto ((1, 0),) \nonumber \\
a^\dagger_3 & \mapsto ((3, 1),) \nonumber \\
a^\dagger_3 a_1 & \mapsto ((3, 1), (1, 0)) \nonumber \\
a^\dagger_4 a^\dagger_3 a_9 a_1 & \mapsto ((4, 1), (3, 1), (9, 0), (1, 0)) \nonumber
\end{align}
$$

- `FermionOperator` is a sum of terms, represented as a dictionary from term to coefficient

```
op = of.FermionOperator(((4, 1), (3, 1), (9, 0), (1, 0)), 1+2j) + of.FermionOperator(((3, 1), (1, 0)), -1.7)
print(op.terms)
```

Alternative notation, useful when playing around:

$$
\begin{align}
I & \mapsto \textrm{""} \nonumber \\
a_1 & \mapsto \textrm{"1"} \nonumber \\
a^\dagger_3 & \mapsto \textrm{"3^"} \nonumber \\
a^\dagger_3 a_1 & \mapsto \textrm{"3^}\;\textrm{1"} \nonumber \\
a^\dagger_4 a^\dagger_3 a_9 a_1 & \mapsto \textrm{"4^}\;\textrm{3^}\;\textrm{9}\;\textrm{1"} \nonumber
\end{align}
$$

```
op = of.FermionOperator('4^ 3^ 9 1', 1+2j) + of.FermionOperator('3^ 1', -1.7)
print(op.terms)
```

Just print the operator for a nice readable representation:

```
print(op)
```

### `openfermion.QubitOperator`

Same as `FermionOperator`, but the possible actions are 'X', 'Y', and 'Z' instead of 1 and 0.
```
op = of.QubitOperator(((1, 'X'), (2, 'Y'), (3, 'Z')))
op += of.QubitOperator('X3 Z4', 3.0)
print(op)
```

`FermionOperator` and `QubitOperator` actually inherit from the same parent class: `openfermion.SymbolicOperator`.

## The Jordan-Wigner and Bravyi-Kitaev transforms

A fermionic transform maps `FermionOperator`s to `QubitOperator`s in a way that preserves the canonical anticommutation relations. The most basic transforms are the Jordan-Wigner transform (JWT) and Bravyi-Kitaev transform (BKT). Note that the BKT requires the total number of qubits to be predetermined. Whenever a fermionic transform is being applied implicitly, it is the JWT.

```
op = of.FermionOperator('2^ 15')

print(of.jordan_wigner(op))
print()
print(of.bravyi_kitaev(op, n_qubits=16))
```

### Exercise

Below are some examples of how `FermionOperator`s are mapped to `QubitOperator`s by the Jordan-Wigner transform (the notation 'h.c.' stands for 'hermitian conjugate'):

$$
\begin{align*}
a_p^\dagger &\mapsto \frac12 (X_p - i Y_p) Z_0 \cdots Z_{p-1}\\
a_p^\dagger a_p &\mapsto \frac12 (I - Z_p)\\
(\beta a_p^\dagger a_q + \text{h.c.}) &\mapsto \frac12 [\text{Re}(\beta) (X_p ZZ \cdots ZZ X_q + Y_p ZZ \cdots ZZ Y_q) + \text{Im}(\beta) (Y_p ZZ \cdots ZZ X_q - X_p ZZ \cdots ZZ Y_q)]
\end{align*}
$$

Verify these mappings for $p=2$ and $q=7$. The `openfermion.hermitian_conjugated` function may be useful here.
```
a2 = of.FermionOperator('2')
print(of.jordan_wigner(a2))
print()
a2dag = of.FermionOperator('2^')
print(of.jordan_wigner(a2dag*a2))
print()
a7 = of.FermionOperator('7')
a7dag = of.FermionOperator('7^')
print(of.jordan_wigner((1+2j)*(a2dag*a7) + (1-2j)*(a7dag*a2)))
```

### Solution

```
a2 = of.FermionOperator('2')
a2dag = of.FermionOperator('2^')
a7 = of.FermionOperator('7')
a7dag = of.FermionOperator('7^')

print(of.jordan_wigner(a2dag))
print()
print(of.jordan_wigner(a2dag*a2))
print()
op = (2+3j)*a2dag*a7
op += of.hermitian_conjugated(op)
print(of.jordan_wigner(op))
```

### Exercise

Use the `+` and `*` operators to verify that after applying the JWT to ladder operators, the resulting `QubitOperator`s satisfy

$$
\begin{align}
a_2 a_7 + a_7 a_2 &= 0 \\
a_2 a_7^\dagger + a_7^\dagger a_2 &= 0 \\
a_2 a_2^\dagger + a_2^\dagger a_2 &= 1
\end{align}
$$

### Solution

```
a2_jw = of.jordan_wigner(a2)
a2dag_jw = of.jordan_wigner(a2dag)
a7_jw = of.jordan_wigner(a7)
a7dag_jw = of.jordan_wigner(a7dag)

print(a2_jw * a7_jw + a7_jw * a2_jw)
print(a2_jw * a7dag_jw + a7dag_jw * a2_jw)
print(a2_jw * a2dag_jw + a2dag_jw * a2_jw)
```

## Array data structures

- When `FermionOperator`s have specialized structure we can store coefficients in numpy arrays, enabling fast numerical manipulation.
- Array data structures can always be converted to `FermionOperator` using `openfermion.get_fermion_operator`.
### InteractionOperator

- Stores the one- and two-body tensors $T_{pq}$ and $V_{pqrs}$ of the molecular Hamiltonian

$$
\sum_{pq} T_{pq} a_p^\dagger a_q + \sum_{pqrs} V_{pqrs} a_p^\dagger a_q^\dagger a_r a_s
$$

- Default data structure for molecular Hamiltonians
- Convert from `FermionOperator` using `openfermion.get_interaction_operator`

### DiagonalCoulombHamiltonian

- Stores the one- and two-body coefficient matrices $T_{pq}$ and $V_{pq}$ of a Hamiltonian with a diagonal Coulomb term:

$$
\sum_{pq} T_{pq} a_p^\dagger a_q + \sum_{pq} V_{pq} a_p^\dagger a_p a_q^\dagger a_q
$$

- Leads to especially efficient algorithms for quantum simulation
- Convert from `FermionOperator` using `openfermion.get_diagonal_coulomb_hamiltonian`

### QuadraticHamiltonian

- Stores the Hermitian matrix $M_{pq}$ and antisymmetric matrix $\Delta_{pq}$ describing a general quadratic Hamiltonian

$$
\sum_{p, q} M_{pq} a^\dagger_p a_q + \frac12 \sum_{p, q} (\Delta_{pq} a^\dagger_p a^\dagger_q + \text{h.c.})
$$

- Routines included for efficient diagonalization (can handle thousands of fermionic modes)
- Convert from `FermionOperator` using `openfermion.get_quadratic_hamiltonian`

## Generating the Hamiltonian for a molecule

The cell below demonstrates using one of our electronic structure package plugins, OpenFermion-PySCF, to generate a molecular Hamiltonian for a hydrogen molecule. Note that the Hamiltonian is returned as an `InteractionOperator`. We'll convert it to a `FermionOperator` and print the result.
```
# Set molecule parameters
geometry = [('H', (0.0, 0.0, 0.0)), ('H', (0.0, 0.0, 0.8))]
basis = 'sto-3g'
multiplicity = 1
charge = 0

# Perform electronic structure calculations and
# obtain Hamiltonian as an InteractionOperator
hamiltonian = ofpyscf.generate_molecular_hamiltonian(
    geometry, basis, multiplicity, charge)

# Convert to a FermionOperator
hamiltonian_ferm_op = of.get_fermion_operator(hamiltonian)

print(hamiltonian_ferm_op)
```

Let's calculate the ground energy (lowest eigenvalue) of the Hamiltonian. First, we'll map the `FermionOperator` to a `QubitOperator` using the JWT. Then, we'll convert the `QubitOperator` to a SciPy sparse matrix and get its lowest eigenvalue.

```
# Map to QubitOperator using the JWT
hamiltonian_jw = of.jordan_wigner(hamiltonian_ferm_op)

# Convert to a SciPy sparse matrix
hamiltonian_jw_sparse = of.get_sparse_operator(hamiltonian_jw)

# Compute ground energy
eigs, _ = linalg.eigsh(hamiltonian_jw_sparse, k=1, which='SA')
ground_energy = eigs[0]

print('Ground energy: {}'.format(ground_energy))
print('JWT transformed Hamiltonian:')
print(hamiltonian_jw)
```

### Exercise

Compute the ground energy of the same Hamiltonian, but via the Bravyi-Kitaev transform. Verify that you get the same value.
```
# Map to QubitOperator using the BKT
hamiltonian_bk = of.bravyi_kitaev(hamiltonian_ferm_op)

# Convert to a SciPy sparse matrix
hamiltonian_bk_sparse = of.get_sparse_operator(hamiltonian_bk)

# Compute ground energy
eigs, _ = linalg.eigsh(hamiltonian_bk_sparse, k=1, which='SA')
ground_energy = eigs[0]

print('Ground energy: {}'.format(ground_energy))
print('BK transformed Hamiltonian:')
print(hamiltonian_bk)
```

### Solution

```
# Map to QubitOperator using the BKT
hamiltonian_bk = of.bravyi_kitaev(hamiltonian_ferm_op)

# Convert to a SciPy sparse matrix
hamiltonian_bk_sparse = of.get_sparse_operator(hamiltonian_bk)

# Compute ground state energy
eigs, _ = linalg.eigsh(hamiltonian_bk_sparse, k=1, which='SA')
ground_energy = eigs[0]

print('Ground energy: {}'.format(ground_energy))
print('BKT transformed Hamiltonian:')
print(hamiltonian_bk)
```

### Exercise

- The BCS mean-field d-wave model of superconductivity has the Hamiltonian

$$
H = - t \sum_{\langle i,j \rangle} \sum_\sigma (a^\dagger_{i, \sigma} a_{j, \sigma} + a^\dagger_{j, \sigma} a_{i, \sigma}) - \sum_{\langle i,j \rangle} \Delta_{ij} (a^\dagger_{i, \uparrow} a^\dagger_{j, \downarrow} - a^\dagger_{i, \downarrow} a^\dagger_{j, \uparrow} + a_{j, \downarrow} a_{i, \uparrow} - a_{j, \uparrow} a_{i, \downarrow})
$$

  Use the `mean_field_dwave` function to generate an instance of this model with dimensions 10x10.
- Convert the Hamiltonian to a `QubitOperator` with the JWT. What is the length of the longest Pauli string that appears?
- Convert the Hamiltonian to a `QubitOperator` with the BKT. What is the length of the longest Pauli string that appears?
- Convert the Hamiltonian to a `QuadraticHamiltonian`. Get its ground energy using the `ground_energy` method of `QuadraticHamiltonian`. What would happen if you tried to compute the ground energy by converting to a sparse matrix?
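The last bullet of the exercise turns on scaling: a sparse Fock-space matrix is $2^N$-dimensional, while for a particle-conserving quadratic Hamiltonian ($\Delta = 0$) the whole spectrum follows from the $N \times N$ single-particle matrix alone. A NumPy sketch of the cheap route — variable names are illustrative and no OpenFermion is involved:

```python
import numpy as np

rng = np.random.default_rng(7)
n_modes = 1000  # far beyond anything a 2**N sparse matrix could hold

# Random Hermitian single-particle matrix T (particle-conserving case)
M = rng.normal(size=(n_modes, n_modes)) + 1j * rng.normal(size=(n_modes, n_modes))
T = (M + M.conj().T) / 2

# Diagonalize the N x N matrix, not the 2**N-dimensional Fock-space operator
orbital_energies = np.linalg.eigvalsh(T)

# The ground state fills every orbital with negative energy
ground_energy = orbital_energies[orbital_energies < 0].sum()
print(ground_energy)
```

This is the same "fill the negative orbitals" logic that `QuadraticHamiltonian.ground_energy` relies on, which is why it can handle thousands of modes while `get_sparse_operator` cannot.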
## Hamiltonian simulation with Trotter formulas

- Goal: apply $\exp(-i H t)$ where $H = \sum_j H_j$
- Use an approximation such as $\exp(-i H t) \approx (\prod_{j} \exp(-i H_j t/r))^r$
- Exposed via the `openfermion.simulate_trotter` function
- Currently implemented algorithms are from [arXiv:1706.00023](https://arxiv.org/pdf/1706.00023.pdf), [arXiv:1711.04789](https://arxiv.org/pdf/1711.04789.pdf), and [arXiv:1808.02625](https://arxiv.org/pdf/1808.02625.pdf), and are based on the JWT
- Currently supported Hamiltonian types: `DiagonalCoulombHamiltonian` and `InteractionOperator`

As a demonstration, we'll simulate time evolution under the hydrogen molecule Hamiltonian we generated earlier. First, let's create a random initial state and apply the exact time evolution by matrix exponentiation:

$$
\lvert \psi \rangle \mapsto \exp(-i H t) \lvert \psi \rangle
$$

```
# Create a random initial state
n_qubits = of.count_qubits(hamiltonian)
initial_state = of.haar_random_vector(2**n_qubits, seed=7)

# Set evolution time
time = 1.0

# Apply exp(-i H t) to the state
exact_state = linalg.expm_multiply(-1j*hamiltonian_jw_sparse*time, initial_state)
```

Now, let's create a circuit to perform the evolution and compare the fidelity of the resulting state with the one from exact evolution. The fidelity can be increased by increasing the number of Trotter steps. Note that the Hamiltonian input to `openfermion.simulate_trotter` should be an `InteractionOperator`, not a `FermionOperator`.
```
# Initialize qubits
qubits = cirq.LineQubit.range(n_qubits)

# Create circuit
circuit = cirq.Circuit(
    of.simulate_trotter(
        qubits, hamiltonian, time, n_steps=10, order=0, algorithm=of.LOW_RANK)
)

# Apply the circuit to the initial state
result = circuit.final_state_vector(initial_state)

# Compute the fidelity with the final state from exact evolution
fidelity = abs(np.dot(exact_state, result.conj()))**2

print(fidelity)
print(circuit.to_text_diagram(transpose=True))
```

## Bogoliubov transformation

- Single-particle orbital basis change
- In the particle-conserving case, takes the form

$$
U a_p^\dagger U^\dagger = b_p^\dagger, \quad b_p^\dagger = \sum_{q} u_{pq} a_q^\dagger
$$

  and $u$ is unitary.
- Can be used to diagonalize any quadratic Hamiltonian:

$$
\sum_{p, q} T_{pq} a_p^\dagger a_q \mapsto \sum_{j} \varepsilon_j b_j^\dagger b_j + \text{constant}
$$

- Implementation from [arXiv:1711.05395](https://arxiv.org/pdf/1711.05395.pdf); uses linear depth and linear connectivity

As an example, we'll prepare the ground state of a random particle-conserving quadratic Hamiltonian.

```
n_qubits = 5
quad_ham = of.random_quadratic_hamiltonian(
    n_qubits, conserves_particle_number=True, seed=7)

print(of.get_fermion_operator(quad_ham))
```

Now we construct a circuit which maps computational basis states to eigenstates of the Hamiltonian.

```
_, basis_change_matrix, _ = quad_ham.diagonalizing_bogoliubov_transform()

qubits = cirq.LineQubit.range(n_qubits)
circuit = cirq.Circuit(
    of.bogoliubov_transform(
        qubits, basis_change_matrix))

print(circuit.to_text_diagram(transpose=True))
```

In the rotated basis, the quadratic Hamiltonian takes the form

$$
H = \sum_j \varepsilon_j b_j^\dagger b_j + \text{constant}
$$

We can get the $\varepsilon_j$ and the constant using the `orbital_energies` method of `QuadraticHamiltonian`.
```
orbital_energies, constant = quad_ham.orbital_energies()
print(orbital_energies)
print(constant)
```

The ground state of the Hamiltonian is prepared by filling in the orbitals with negative energy.

```
# Apply the circuit with initial state having the first two modes occupied.
result = circuit.final_state_vector(initial_state=0b11000)

# Compute the expectation value of the final state with the Hamiltonian
quad_ham_sparse = of.get_sparse_operator(quad_ham)
print(of.expectation(quad_ham_sparse, result))

# Print out the ground state energy; it should match
print(quad_ham.ground_energy())
```

Recall that the Jordan-Wigner transform of $b_j^\dagger b_j$ is $\frac12(I-Z)$. Therefore, $\exp(-i \varepsilon_j b_j^\dagger b_j)$ is equivalent to a single-qubit Z rotation under the JWT. Since the operators $b_j^\dagger b_j$ commute, we have

$$
\exp(-i H t) = \exp(-i \sum_j \varepsilon_j b_j^\dagger b_j t) = \prod_j \exp(-i \varepsilon_j b_j^\dagger b_j t)
$$

This gives a method for simulating time evolution under a quadratic Hamiltonian:

- Use a Bogoliubov transformation to change to the basis in which the Hamiltonian is diagonal (Note: this transformation might be the inverse of what you expect. In that case, use `cirq.inverse`)
- Apply single-qubit Z-rotations with angles proportional to the orbital energies
- Undo the basis change

The code cell below creates a random initial state and applies time evolution by direct matrix exponentiation.
```
# Create a random initial state
initial_state = of.haar_random_vector(2**n_qubits)

# Set evolution time
time = 1.0

# Apply exp(-i H t) to the state
final_state = linalg.expm_multiply(-1j*quad_ham_sparse*time, initial_state)
```

### Exercise

Fill in the code cell below to construct a circuit which applies $\exp(-i H t)$ using the method described above.

```
# Initialize qubits
qubits = cirq.LineQubit.range(n_qubits)

# Write code below to create the circuit
# You should define the `circuit` variable here
# ---------------------------------------------

# ---------------------------------------------

# Apply the circuit to the initial state
result = circuit.final_state_vector(initial_state)

# Compute the fidelity with the correct final state
fidelity = abs(np.dot(final_state, result.conj()))**2

# Print fidelity; it should be 1
print(fidelity)
```

### Solution

```
# Initialize qubits
qubits = cirq.LineQubit.range(n_qubits)

# Write code below to create the circuit
# You should define the `circuit` variable here
# ---------------------------------------------
def exponentiate_quad_ham(qubits, quad_ham):
    _, basis_change_matrix, _ = quad_ham.diagonalizing_bogoliubov_transform()
    orbital_energies, _ = quad_ham.orbital_energies()

    yield cirq.inverse(
        of.bogoliubov_transform(qubits, basis_change_matrix))
    for i in range(len(qubits)):
        yield cirq.rz(rads=-orbital_energies[i]).on(qubits[i])
    yield of.bogoliubov_transform(qubits, basis_change_matrix)

circuit = cirq.Circuit(exponentiate_quad_ham(qubits, quad_ham))
# ---------------------------------------------

# Apply the circuit to the initial state
result = circuit.final_state_vector(initial_state)

# Compute the fidelity with the correct final state
fidelity = abs(np.dot(final_state, result.conj()))**2

# Print fidelity; it should be 1
print(fidelity)
```
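The error of the first-order Trotter formula used throughout this notebook shrinks as the number of steps $r$ grows. A small SciPy check on a single qubit with the toy split $H = X + Z$ — unrelated to `openfermion.simulate_trotter`, just the bare formula:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0],
              [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
t = 1.0

# Exact evolution under H = X + Z
exact = expm(-1j * (X + Z) * t)

def trotter(r):
    """First-order Trotter: (exp(-iXt/r) exp(-iZt/r))**r."""
    step = expm(-1j * X * t / r) @ expm(-1j * Z * t / r)
    return np.linalg.matrix_power(step, r)

errors = [np.linalg.norm(trotter(r) - exact, 2) for r in (1, 10, 100)]
print(errors)  # decreases roughly as 1/r
```

Because $[X, Z] \neq 0$, a single step incurs an $O(t^2/r)$ error per step, so the total error falls off like $1/r$ — the same reason increasing `n_steps` raised the fidelity in the `simulate_trotter` demonstration above.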
# Analysis Report We report the following SageMaker analysis. ## Pre-training Bias Metrics We computed the bias metrics for the label `sentiment` using label value(s)/threshold `1`. * **product_category** The groups are represented in the dataset with the following proportions. <img src='data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAjIAAAGlCAYAAADgRxw/AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy86wFpkAAAACXBIWXMAAA9hAAAPYQGoP6dpAAEAAElEQVR4nOydd3gUVduH79ndZNN77yEJvfcOAQRUqoJiJSo2FEEs74eiFBuoIHYFVLCiAoL0IgFCB0PvhHTSey+78/0RsmTddFLJua8rF2TmzDnPTpLZ3z7nKZIsyzICgUAgEAgEzRBFYxsgEAgEAoFAUFuEkBEIBAKBQNBsEUJGIBAIBAJBs0UIGYFAIBAIBM0WIWQEAoFAIBA0W4SQEQgEAoFA0GwRQkYgEAgEAkGzRQgZgUAgEAgEzRYhZAQCgUAgEDRbhJARCBoZSZIYOnRoY5shEAgEzRIhZASCFsbQoUORJKmxzaiQoKAgJEkiIiKisU0RCATNACFkBAKBQCAQNFuEkBEIBAKBQNBsEUJG0CLYu3cvkiQxf/58Dhw4wNChQ7G0tMTGxob777+fa9eu6Y338fHBx8eH9PR0XnzxRTw9PVGpVKxatUo3ZtOmTQQGBmJtbY2pqSldunRh6dKlFBcXl2vDypUr6dixIyYmJnh6evL666+Tn59f7tjS9cujoq0hWZb54YcfGDRoEDY2NpiZmREQEMCzzz5LVFQUUBKPs2/fPt3/S7+CgoKquIPls3//fiZMmICzszNqtRpPT0/uu+8+Dhw4oBtz48YN5s2bR9++fXFyckKtVuPj48P06dNJTEw0eN2rV68GwNfXV2fff2OIwsPDmTZtGl5eXqjValxdXQkKCiIyMrJcO9evX0/Pnj0xNTXF2dmZp59+mrS0tArvc3JyMrNmzcLX1xe1Wo2TkxMPPPAA586dMxhbuhV2/fp1lixZQvv27VGr1QQFBTF37lwkSeKPP/4o167vv/8eSZL44IMPKrvNAoGgElSNbYBA0JAcOXKEDz74gNGjRzNjxgzOnz/PX3/9RUhICEeOHKFVq1a6sQUFBQwbNozs7GzGjRuHSqXC2dkZgKVLl/LKK69gZ2fHww8/jLm5OX///TevvPIKISEhrF+/Xk9svPPOO7z99tu6N1EjIyN+//13Ll68WCevS6vV8uCDD7J27Vrc3d156KGHsLKyIiIigj/++IO7774bLy8v5s2bx6pVq4iMjGTevHm667t27VrjNT/99FNefvllTE1NmThxIl5eXsTGxnLgwAHWrl3LwIEDgRKxs2TJEoYPH06fPn0wMjLi5MmTfP311+zYsYPQ0FCsra0BmDVrFqtWreL06dPMnDkTGxsbAD2xcfToUUaNGkVOTg5jxowhICCAiIgIfvnlF7Zt28bhw4f1fo7ff/89Tz31FFZWVjz++ONYW1uzdetW7rrrLoqKijAyMtJ7XUlJSfTr14+wsDCGDh3KlClTCA8PZ+3atWzZsoUdO3boXltZZsyYwZEjR7j33nsZO3YsTk5OTJo0iQ8++ICVK1fywAMPGFyzYsUKVCoVTzzxRI3vv0AguIksELQAgoODZUAG5G+++Ubv3DfffCMD8pgxY3THvL29ZUAeNWqUnJubqzf+2rVrskqlkp2cnOSoqCjd8fz8fHngwIEyIP/444+641evXpVVKpXs7u4uJ
yQk6I5nZGTIbdq0kQF5yJAhemt4e3vL3t7e5b6WIUOGyP/90/38889lQB4+fLiBvbm5uXJKSkql19eUU6dOyQqFQnZzc5PDw8P1zmm1Wjk2Nlb3fUJCgpyVlWUwx+rVq2VAfvfdd/WOT506VQYM5pVlWS4sLJR9fHxkS0tLOTQ0VO9cSEiIrFQq9X6OaWlpsoWFhWxubi5fuXJFd7yoqEgeNmyYDBjc5yeeeEIG5Dlz5ugd37JliwzI/v7+skajMbDXw8NDjoyMNLD57rvvliVJMng9586dkwF5woQJBtcIBILqI4SMoEVQKmRat26t9yYky7Ks0WjkgIAAWZIkOTExUZblW0Lm9OnTBnMtXLhQBuTFixcbnDt48KAMyMOGDdMdW7BggQzIS5YsMRj/008/1YmQadeunaxUKvXerCuiLoTM888/LwPy999/X+s5tFqtbGVlJQ8dOlTveGVCZv369TIgL1y4sNw577vvPlmhUMgZGRmyLMvyqlWrZEB+6aWXDMYeOnTIQMgUFBTIJiYmsr29vZyTk2NwzV133SUD8v79+w3s/fTTT8u1aePGjTIgz507V+/4rFmzZEDesmVLudcJBILqIbaWBC2KAQMGoFDoh4YpFAoGDBjA1atXOX36NCNGjADAxMSETp06Gcxx8uRJgHJrv/Tr1w8TExNOnTqlO3b69GkABg0aZDC+vGM1JTs7m4sXL+Lv709AQMBtz1cdjh07BsDIkSOrNX79+vV8++23hIaGkpaWhkaj0Z27ceNGtdc9cuQIAJcvX2b+/PkG5+Pj49FqtVy5coWePXvq7n15W0F9+vRBpdJ/BF66dIn8/HwCAwMxMzMzuCYwMJBdu3Zx6tQpg59d7969y7X53nvvxd3dnR9++IH58+ejVCopLCzkp59+wtPTk9GjR1frtQsEgvIRQkbQoiiNcanoeEZGhu6Yk5NTuUG1mZmZFc4lSRLOzs7ExsbqjpXO6eTkVG17akLp/O7u7rc9V03WlCQJV1fXKscuWbKEV199FUdHR0aOHImHhwempqYALFu2jIKCgmqvm5qaCsAvv/xS6bicnBzg1s+qvHuvUChwcHDQO1bZzxbQvd7ScWWp6BqlUsm0adNYsGAB27ZtY8yYMfz111+kpKTw4osvGghrgUBQM4SQEbQoEhISKj1eGnQKVFg0zsrKSneNt7e33jlZlklISNCNKTtnYmKiwfiK7FEoFBQWFpZ7rqzYKjt/WfFU39jY2CDLMnFxcZUKqOLiYt555x1cXV05deqUnqCQZZkPP/ywRuuW3tdNmzYxZsyYao//b3YUlARIJycn69lf9mdbHvHx8XrjylJZkcFp06bx7rvvsmLFCsaMGcPKlStRKBQ8+eSTVb4GgUBQOeKjgKBFcfDgQbRard4xrVbLoUOHkCSJLl26VDlHt27dgJKU7v9y9OhR8vPz9bKASucMCQkxGF/eMQBbW1sSExMNUrlzcnK4evWq3jELCwvat29PeHi4wbnyUCqVAHrbOzWldBtl586dlY5LTk4mIyODfv36GXhFTpw4QV5eXo3s69OnDwCHDx+ulp2l9/7gwYMG544dO2Zwf9u2bYuJiQnHjx8nNzfX4JrSn3lNs7w8PDy499572bp1K4cOHeKff/5h1KhReHl51WgegUBQDo0coyMQNAi1yVqqKNi2NGvJ2dlZLzunoKBAHjx4cLlZS0qlskZZS88++6wMyKtWrdId02q18owZM3SvoyxffvmlDMgjRowwyFrKy8vTy1qaNGlShcG01eXMmTOyUqmU3dzc5IiICL1zZbOWNBqNbGpqKvv4+OgFz6ampsp9+vQpN2vo1VdflQE5ODjYYN38/HzZy8tLNjExkfft22dwvrCwUA4JCdF9X5q1ZGFhIV+7dk13vKioSB4xYkSlWUv/Dc7dtm1bpVlLVd3P0qwnNzc3GZDXr19f6XiBQFA9JFmW5YYWTwJBQ7N3714CAwMZNWoUwcHBjB49mg4dO
nD+/Hk2bdqEvb09R48e1dUfKa1bUlG/n9I6Mvb29jzwwAOYm5uzadMmLl++zPjx4/nrr7/0thoWLlzIvHnzcHZ25oEHHkClUrFu3To6d+7M5s2bGTJkiJ6H59y5c/To0UNXH8bR0ZGQkBDS09OxsLDg9OnTlP3TlWWZKVOm8Mcff+Du7s64ceOwsrIiKiqKHTt28N133zFhwgQAvv76a6ZPn0737t25++67MTExoUuXLowdO7ZG9/SLL77gpZdewszMjAkTJuDt7U18fDz79+/n3nvvZdmyZQC8+uqrLFmyBH9/f8aOHUtmZibbtm3D29ubyMhIjIyM9O7ztm3buOeeewgICOD+++/H3Nwcb29vHnvsMQCOHz/O3XffTUpKCsOGDaNTp05IkkRkZCQhISHY29tz6dIl3XwrVqzgmWeewdramilTpujqyKjVauLi4lCr1Vy/fl03Pikpib59+3L9+nWGDRtGnz59iIiI4M8//8TY2NigjkxQUBCrV68mPDy8wiKGUOL5a9WqFZGRkbi4uBAdHW0QbCwQCGpB4+oogaBhKPXIzJs3Tw4JCZGHDBkim5uby1ZWVvLEiRPlq1ev6o2vzCNTysaNG+UhQ4bIlpaWslqtljt16iQvWbJELioqKnf8ihUr5Pbt28vGxsayh4eH/Oqrr8q5ubnlemRkWZb37Nkj9+nTR1ar1bK9vb382GOPyQkJCRWmT2u1WnnlypVy3759ZXNzc9nMzEwOCAiQn3vuOb16N0VFRfLrr78ue3l5ySqVSgbkqVOnVnkPyyM4OFgeM2aMbGdnp3td999/v3zw4EHdmMLCQvm9996TAwICZLVaLXt5ecmvvPKKnJWVVeF9/vDDD+WAgADZyMio3PsTExMjz5w5UzenlZWV3K5dO3natGnyP//8YzBf+/btZUBWq9Wyk5OTPG3aNDklJUW2sLCQu3TpohtXulZSUpL80ksvyd7e3rKRkZHs4OAgT5o0ST579qzB3NX1yMiyLM+dO1cG5P/7v/+rcqxAIKgewiMjaBGUemTmzZtXbtqu4PaoaTdtWZbJycnh008/Ze3atVy5coWioiIcHR3x9fVl4MCBTJs2DT8/vzqxb+jQoezbt0/Pi3Xt2jUCAgJ44IEH+P3333Wv47/esbpkzJgxbN26lStXruDv718vawgELQ3h1xQIBLdN2XYHpSxbtoyMjIxyz2VlZTFw4EDOnDmDv78/jz76KPb29iQnJ3Ps2DEWLVqEn59fnQiZtLQ0VqxYoRc8nJeXx8svvwyg23Krby5cuKBrjSBEjEBQdwghIxAIbpvyvFyrVq0iIyOj3HPvvPMOZ86cYdq0aSxfvtzAoxMeHl6j+jKVsW/fPp566ilGjhyJl5cXycnJ7Nmzh4iICIYNG8aDDz5YJ+tUxK+//srly5f58ccfgfJFn0AgqD1CyAgEAqB8MVIes2bN0jVzrCkRERH4+vrqareEhYXh4OBAamqqXrDsmTNn+OyzzwgNDSUzMxOtVsu8efN46623dOnZXbt21VXu/eGHH3Bzc2P+/Pn8+++/FBYWMmDAADZu3EiHDh1QKpWsWbMGExMTAOzt7bG0tGTPnj04ODgwdOhQoqOjgZLU7rfeeosFCxagUChYvXo1S5cu5cqVKzg6OjJjxgxee+21ar/m5cuXExISgre3N9999x39+/ev1b0TCAQV0LghOgKBoKnAzbTuqr6qm7Zd2q+qLOHh4TIgOzo6yoDcpk0befbs2fLUqVN1Kdv/93//JwOyu7u7/OSTT8pDhw7VrT1p0iRZlmU5OTlZliRJd3zixImysbGxfP/99+sCewF5wIABsizr95cq7ZVlZWUlA3KrVq300qJNTU1lQH7zzTflDz/8ULayspIfe+wx+aWXXpLd3d1lQF69enUd3XWBQHC7CCEjEAjqhcqETOmXpaWl/Morr8g7duyQk5OT5Z07d+q6jmdnZ8uyXFKzBtAJlLVr18rr1q2TAbldu3YyI
KtUKvnAgQOyLMuyp6en7OvrqxNAhw8f1gmZsp3IP/30UxmQjYyM5D179uhq+gwcOFB2cnKSzczMZBcXFzksLExnf1RUlGxsbCx36tSp4W6kQCCoFFHZVyAQNDguLi4sXrwYWZZZsmQJo0aNwsHBQRd4O2fOHMzNzQHo2LEjDg4OFBYWIkkSv/32G8HBwVhYWHDPPfcAMGTIEAYMGEBYWBjR0dEEBgYydepUoKTuTCm//vorxcXFzJ49W9dm4NFHHyUwMJC5c+cCJZWFx4wZQ25uLs8//7yuthCAp6cnAwcO5MKFCwZVgQUCQeMgYmQEAkGD06VLF15//XWef/55tm/fzqFDhzhx4gQHDhwAYPjw4UyePJk2bdoA4OjoyMWLFzE1NeXSpUtcunSJQYMG6c6XEhwcDJR0qXZxcQEgPT1dd75sJ/Lz588Dt9oNlO1mXdocsrxWBK6urmg0GhISEhq0UadAICgfIWQEAkGDU9op2tLSksmTJzN58mQAjIyMKC4uRqPRsGbNGoPr8vLyyMrKIioqiqCgIIyMjAB0lXnLCpnSvlNl067LdiIvFTKlnpmy3atLK+6W1xyy9FxRUVGtXrtAIKhbxNaSQCBocCrrLG5vb6/rEn7ixAlkWebChQtASRfpjz/+GCgRK6VERESQkZHB3r17CQgIqNBTUrYT+X+pqOM1lDTrfP/99+nevTu//PILAH379mXQoEHMmTOHsLCwql5yk0GSJIYOHdrYZggEdYYQMgKBoMnQp08fUlJSDHoQtWvXDhcXF/bs2UNwcDC2tra6LuRQUil45cqV3Lhxo9I36dp0Is/KyqJ///68+eabZGVl6WJmhg0bRnZ2NosWLdJ5ggQCQcMjhIxAIGhwLl++rBeEW8pLL70ElNSXsbKyomPHjrpzQ4cO5fr166xZs4YhQ4agUNx6fBkbG7N48WJA31PzXx5++GGUSiVLly4lMzNTdzwzM5N333233GuWLVumK9535coV+vXrB8D777/PyZMnuX79ul4TSYFA0LAIISMQCBqc2NhYevfuTUBAAEFBQbzxxhvMnDmT999/XzemuLiYoKAg/u///o+nn36a0NBQoKTlwH/FSkBAAElJSQCVemT8/f15++23iY2N5a233gLgt99+o1OnTgQEBJR7zeHDhwF44YUXyt0S8/X1pW3btgBMnDgRhUKhs6WUrl27IkmSLjOqlFWrViFJEqtXr9Y7npiYyMsvv4y/vz9qtRoHBwfuv/9+zp07Z7B+cHAwTz75JG3atMHCwgILCwt69uzJ8uXL9cbt3btXZ/++ffuQJEn3tWrVKr2xGzduZPjw4dja2mJiYkLHjh35+OOP9eKNytq/atUqNm3axIABA7C0tNTrAr5u3TqGDBmCk5MTJiYmuLm5MWLECNatW2fwWgSC2iCCfQUCQYPTo0cPZsyYwa5du9i/fz9xcXEAuLu7M3XqVHr37s327dv5559/SE9Px97eXpdJBCXbOmVp27Yt58+fp02bNnrjyuPtt9/Gzc2NefPmkZmZyfHjx3nqqadYuHAhZmZmBuPt7e0BuHLlSrlZTGUJDAxkw4YN7N27VxfAnJKSwpkzZwAMtqDKBieXEhYWxtChQ4mJiWHkyJFMmDCBxMRE1q1bx44dO/jnn3/o06ePbvzixYu5du0affv2ZeLEiaSnp7N9+3aeffZZLl++zJIlSwDw8fFh3rx5LFiwAG9vb4KCgnRzlH1dc+bMYdGiRbi7u3PfffdhbW1NSEgIr732GkePHuXPP/80eN1//vknO3fuZMyYMUyfPl3n7fr666+ZPn06rq6uTJw4EXt7e+Lj4zl27Bh//fUX999/f6X3UyCoFo1cx0bQxClbEVUgaIls3Lix3OJ95VFavO/555/XHSst3jd8+HDZyMhIV+hPlkuK97Vq1Upvjv79+8tKpVLevn273vHLly/LlpaWBsX4rl+/bmBHUVGRfNddd8lKpVKOjIzUOwfIQ4YMKdf+8
goSyrIsa7Va+bnnntMVJCzlhx9+kAFZoVDIu3btMpive/fusrGxsZyQkGBwrqJ7KBDUFLG11IKIiIjQcyeXfpmbm9O5c2cWLFhAdnZ2Y5spEDQpxo0bx5IlSwyK9/n7+/Piiy/q0rzhVvG+PXv26I6VFu97/fXXKSoq0gUVlxbvK7sVdvLkSQ4dOsTUqVMZNWqUnh2tW7fm6aef5uzZs3pbTL6+vgY2q1QqnnvuOTQaTY0Ckb/44gugpD9UaUFCKMl0WrRoka4g4X8ZP348I0aMKHdOIyMjXZp8WUo9XQLB7SK2llogfn5+PProo0BJtkdSUhLbtm1j/vz5bN++nQMHDuga8wkEApg9ezZPP/20XvG+o0eP8uWXX/Ldd9/x+++/M27cOF1q89q1a4mLi8PV1ZXg4GAGDRrE4MGDUavVBAcHM3r06HK3lY4cOQKUpIKX18Tz0qVLun9LA6GzsrL4+OOP2bBhA2FhYeTk5Ohdc+PGjWq/ziNHjmBubs73339f7vnSgoT/pXfv3uWOnzJlCq+//jodO3bk4YcfJjAwkIEDB5Zbn0cgqC1CyLRA/P39DR6SBQUF9OvXjyNHjrBv3z6DGASBoKXz3+J9GRkZvPHGG3z11Vc89dRTxMbGYmxsTGBgIGvXriU4OJi77rqL8+fPExQUhImJCf369dMJmPKETGpqKgBbtmxhy5YtFdpSKlYKCwsZOnQooaGhdOvWjcceewx7e3tUKhURERGsXr2agoKCar/G1NRUiouLWbBgQZVrl6VsMcGyvPrqq9jb2/P111+zZMkSPv74Y1QqFffeey+ffPJJud4kgaCmiK0lAQBqtVr3QE1OTq5yfHFxMUuXLqVLly6YmppibW1NYGAgmzZtMhg7f/58JEli7969BufKZj2UJTg4mLvvvhs3NzfUajXOzs4MGjTIIBMDIDw8nGnTpuHl5YVarcbV1ZWgoCAiIyMNxoaGhjJp0iTdWEdHR3r16sV7771X5WsWCMpibW3NF198gbe3N8nJyZw9exa4JUyCg4N1v/OlxwIDAwkNDa2weF+pp+Lzzz9HLmnqW+5XaR+pjRs3EhoaylNPPUVoaChff/017777LvPnz2f06NE1fk2lBQkrWzs8PNzguooKHEqSxJNPPsnx48dJSkrir7/+4r777mPjxo2MGTPGIAtKIKgNQsgIgJJPdqXpmVVlZsiyzKRJk3jllVfIz8/nhRde4OGHH+b06dOMGzeOTz755LZs2bJlC8OHD+fo0aOMGjWKV155hXHjxlFQUMBPP/2kN/bo0aN069aN1atX06NHD2bOnMmgQYP45Zdf6N27t650PcCpU6fo378/27ZtY+DAgcyePZtJkyZhZmZWrkASCKqiNMasLJUV7xs2bBgajabC4n2l2UilKd9VUVpRePz48QbnKirwp1AoKhQQpQUJy8b91BX29vZMmDCB33//nWHDhnHhwgWuXbtW5+sIWiANHl4saDTCw8NlQPbz85PnzZsnz5s3T3777bfl6dOny35+frKJiYn80Ucf6V1TXtbS6tWrdZkPBQUFuuORkZGyg4ODrFKp5LCwMN3xefPmyYAcHBxsYFNp1sMPP/ygO3bffffJgHzq1CmD8WUzHQoLC2UfHx/Z0tJSDg0N1RsXEhIiK5VKecyYMbpjs2fPlgF5w4YNlc4rEJTlm2++kY8dO1buub/++kuWJEm2sbGR8/PzdcenTJkiA7Kjo6M8YcIE3fGCggLZzMxMdnR0lAH5119/NZizT58+siRJ8po1awzOaTQaee/evbrvf/31VxmQX3/9db1xe/fulY2MjGRAnjdvnt45BwcH2cfHp9zXs23bNhmQBw4cWO7fRFxcnHzhwgXd9+X9/ZYlODhY1mq1escKCwvlrl27yoAcERFR7nUCQU0QMTItkLCwsHL3wMeMGVNh5kFZSot3ffjhhxgbG+uOe3l58fLLL/Pmm2/yyy+/6AqO1RZTU1ODY2UzHTZv3kxER
AQLFy7UK1cPMHDgQMaPH8+GDRvIzMzUCy6sal6BoCzbtm3jueeew9/fnwEDBuDm5kZOTg4nT54kJCQEhULBV199hVqt1l0TGBjImjVrSEpK0ouBMTY2ZsCAAezatQsov3jfb7/9RmBgIFOmTGHZsmV0794dU1NToqKiOHz4MElJSeTn5wMwduxYfHx8+PDDDzl37hwdO3bk8uXLbN68mYkTJ7J27VqD+YcNG8Yff/zBhAkT6NatG0qlknHjxtG5c2dGjx7NW2+9xTvvvIO/vz+jR4/G29ublJQUrl27RkhICO+++y7t2rWr1r2bMGECVlZW9O3bF29vb4qKiti1axcXLlxg0qRJup5aAsHtIIRMC2TUqFFs375d931KSgoHDx5k5syZDBgwgD179ugV3PovJ0+exMzMrNxMhdKH9qlTp2pt35QpU1i/fj19+/bl4YcfZvjw4QwaNAgHBwe9caUZHpcvXy43wyM+Ph6tVsuVK1fo2bMnDzzwAMuWLWPixIk8+OCD3HXXXQwePLjCBoMCAZQUnCsVH+UV75sxYwY9evTQu6asePlv4HxgYCC7du2qsHifr68vJ0+eZOnSpWzYsIEffvgBpVKJq6srgwcPZtKkSbqxFhYW7Nmzh9dee439+/ezd+9eOnTowC+//IKzs3O5QubTTz8FYM+ePWzatAmtVouHhwedO3cGYOHChQwePJjPPvtMryChr68v8+fP55FHHqn2vfvggw/Yvn07x44dY9OmTZibm+Pn58fXX3/NU089Ve15BIJKaWyXkKDhKN1aGjVqVLnnd+3aJQPyiBEjdMfK21pSKpUVuqZL1yg7R023lmRZljds2CAPHjxYViqVMiBLkiQPGzZMPnnypG7MtGnTZKDKr7Ku+P3798ujR4+W1Wq17nyvXr3kPXv2VHTbBAKBQNCEER4ZgY5SL0x5zfzKYmVlRWJiYrnn4uPjdWNKKW3uV1xcbDA+IyOj3HnGjx/P+PHjycrK4uDBg6xfv57vvvuO0aNHc+nSJWxsbHRrbNq0iTFjxlTx6koYNGgQ27ZtIy8vj6NHj7Jp0ya++uor7r33Xs6dO6frbNxSyC0sJjGzgKz8YrIKisgp0JBTUExWQTE5N7+y8ospKNag0cpoZdDKMl2t87HIuYEkSSgUChQKBSqVij7KNqiUKiQjBZKxEoWJEoWpCoWZEQozFQpzIxQm4rEjEAjqDvFEEehIS0sDQKvVVjquW7du7Nmzh2PHjhlsL5Wmm5bNfLK1tQVKGgX+l5MnT1a6lqWlJaNHj2b06NFoNBq+//57XTZT2QyP6gqZUkxNTRk6dChDhw7FxsaGt99+m127dvHss8/WaJ6mTF6hhrCkbGLS8kjIzCchM5/4zHwSMwuIz8wnISOfrAJDcVkdrPyLIOaMwfF2xWaoiqtIhlQpUJobobA0QmmlRmWjRmmrJt+6AK2FjI2LK2oz88rnENSIoUOHsm/fPmRZbmxTBII6RwgZgY6lS5cCMHjw4ErHTZ06lT179jBnzhy2b9+uKz8eHR3N0qVLUalUevvovXr1AuDHH3/kscce03loDh8+zC+//GIw//79+xkwYIBBdeFSL5CJiQlQ4rXx8vJi6dKljBo1ysDuoqIijh49ysCBA3XrdevWTXd9KQkJCXrzNjcSM/O5lpRNWFIOYYnZhCVlcz0phxsZeTT0+1aVIgagWIsmowBNRgFF3GqJEWsfyYETawAwMbfAxsUVew9vHLy8cfDywdHLB3Mb2/oyvdmRk5PDp59+ytq1a7ly5QpFRUU4Ojri6+vLwIEDmTZtGn5+fo1tpkBQ7wgh0wK5du2aXnBsamoqBw8eJDQ0FFtbWxYvXlzp9Y899hjr169n48aNdO7cmTFjxpCTk8Pvv/9OamoqS5Ys0dui6du3ry6IuF+/fgwePJjIyEg2btzI2LFj+euvv/Tmf+mll7hx4wYDBw7Ex8cHSZI4cOAAx44do2/fvjpholarWbt2LXfff
FtZoaIgsDtHvS1AAk/nj8fjyLBq3Ws4tYeiXavxGXQHar/wOtlRtPs3TKlH6PZ/84lqF8j6X1bj2603ni/8H+kSKDoEkv/QdA7u3oy259V12quxOWUwuWTfI0eO0K9fPzIzM7nuuuto3749+/bt44svvmD58uVs3LiRVq3qHsv2wQcfsGbNGsaOHcuQIUNYs2YNc+fOJSsri0WLFjnNveOOO/j6669p2bIl999/P0ajkf/+979s3ry50rWPHz/OoEGDOHPmDCNGjGDcuHFkZGSwdOlSfvvtN9atW0fv3nWLpWoohJBp6giPjKAGRJ1Zzt8tChh25s4Gb2lQdrRUZK5ekK3GYyBqs5LYv+ddcq7cuiWPjy3glDIPAJWk4v/cW3HbjuZ1lCRrvUkJ7Mdaazc+S21JSrKW/OwfaryOKd0eTJ2X+AV5iV9UuG7JOUPyG2OQtO5EPfJ9hevB3m6EBrljoYDfPvgK36l3c2ZUB84AuWeT0V07AcM5J4A6rjUA1lNJ0MyEzGkXCZl77rmHzMxMPvnkE+666y7H+Icffsj999/Pvffey7p16+q8z9q1a9m+fTutW9t/R6+++ipdunThu+++46233iIsLAyAdevW8fXXX9OlSxc2bdqEXm/PSHv22Wfp2rXyeLrbb7+d1NRU1qxZw8iRIx3jzz33HD169GDGjBns2bOnzo+hIRBCpqmTdczVFgiaGf6p69kQWsSojAc4Y9A22D7Kc/XmqlNDJjS+K7npvnTyPYkq+UCVc62dWvPg6DSyFMUA+Gt9ebtURY+9zSNrxOwdwyGvq/mlpBPfpIZjzK+7d0wd2AKPTiMqvVa053ckrTvurfsiqbX4umuICHbHzdeNYg8lSZrycv55zz8LIaGoptzutIZsNp33b7P9H033BOmiZJgslFpt6JSNFzVx6tQpEhISaNeuHTNmzHC6ds899zBv3jz+/PNPTp8+TWRkZJ32evjhhx0iBkCn03HTTTfx4osvsn37doeQ+eabbwB4/vnnHSIGcBxFPfOMc82hnTt38vfff3PHHXc4iRiAVq1aMWPGDN5991327dtHhw4d6vQYGgIhZJoyhgIwXp4ppYKGxSNjO+v85jCh8HH2N1BLA8W546vsvJSq5ymVmK1XERCkwn/pu1XONfXqwL2DT1KosMc6dPKK5d2kwwTXsE5NYyJLSooCu7JV25uvs9uSkO4H6bVfz1qSj620AIXOC6XeGwBddBd00V0qzHXXKjmw53c8/AMY+NqbnHGDVIVM+WFcecyU4a91GP9ej9//vkJSlr/0q6JiMG37B9lqQVKqMG7ZCIAyqmXtH4QLOW0w0cq98XqS7dq1C4CBAwdWiG1RKBQMGDCAQ4cOsWvXrjoLme7du1cYi4iIACAvL88xtnu3vc5Sv379Kszv27dvhbF//vkHgPT0dGbPnl3h+qFDhxzfhZAR1IzCtEvPEQgugjbnMMu8XmKa+hk25njX+/rKc0ImIzu5ynnhbfuTnaqnc8bXVbYhKB7QlXuuPoBRsrt6bvDtyMzdf6C2uua4oCpkjQdpgX1JsHXj07R4kk5V/cZZuPs3jGfsnihzZhIARbt/x3BqLwDaiHZ4drZ/Ei7csYL8TYvx7nsTPv1ucayhUSloEeSOj78Os7eaNK3EKaW9RUGJUmKr/uKB3raiQgrnzUE/8SbUrds7XdNPvJn8l58m9//uQhXbCsNvy1G1jEfTrVfNnpQmQmMLmYICe+mB4ODgSq+HhoY6zasLXl5eFcZUKvvbuNVa3pKjoKAAhUJBQEBAhfmV2ZmTkwPAypUrWbny4pWxi4uLa2xzYyCETFOm8NJZHQJBVagKTrNQP4tHg59jWT23NFBZ7TVkDKUXf4HW6PTk53SkVWAO2j83XnRezoge3N99N1ZktEotz2paMH5H02o1YPGM4KhPP341dGbh2UiKC6p/fGE8c4Difc4xEsaUAxhT7OLGknsWjw5DnTwlPu4aurcLBG8NmefK+
e8994Ffttko/eV7Slfa6/BYU06R9/LTeNz5AKqwCKd9ZKuV3CfuwZaXS8nKn7GcOIrn/U+gio4FwG3wSKzpaZQsXYT5wB6wyXjc/wSSonkmtTZ2nEyZuEhPr9wNl5aW5jQP7J4ak6lyO/Pz6+6F9/LywmazkZWVRWBgoNO1yuwss23evHk88MADdd6/sWmef6lXCkLICOoBRUkW7xlmcUf46fpd1ypdsoZMSPxQ3Nz0hPx68TYEZ8f24p7uu7AiE6oL5KtSN8YfWFuvttYGGYniwC5siLyHezznEZf5JqOPXs9Hp1tQbK3ZS2fAtY8S8eAiJJUGbWQHPDqNwOuqG/DoMhqlVxDGlIMYVr9O954hdJ/1NC3W7cTw9CNsitSwyQuOqGXM5z3Vhe++QuEHbyLLMvop03AbMhrjxj/Jue9WLGecPWQlPy7CcvgAmh590I8eh/nYYXKfvA9baXnml/uUqfh9vAjJTYfHjAfRdu1Zp+fOlaQbzY26X5cuXQD466+/kGVnr5gsy/z1119O8wB8fX3JyMjAYnH2UBYXF3P0aN0zVTt3tlfO3rRpU4Vrf/9dsWxBWTbSxTKamjpCyDRlhJAR1BOSsZBZec/zVIv6S+dXWqquIePhH0hWaizt0lehKC2qdM7Ryb14pJ29+WNvn1Z8n5xE+5S99WZjTZHV7qSHDWNJ+FOMVH5O+9NPctvRAazJ9K/z2gqdJ5GPfE+Hu//LkAdnM+LxmfR9fQ4tf1yFuksPMvZv4c8jf7NfY3NkEFWGaedWSlf9jLpTN/w/WYznXQ/j/cwr+Lz0LnJBPoVz33CaX7ryJzTde+P7+jw8738cnxffwZaVgWmzc6POwvfnoAyNQH/DrXV+rK4k3dS4QiYqKorBgwezf/9+vvjCOaPs008/5eDBgwwZMsQpPqZnz56YzWanlGlZlpk5c2a9HN/ccov9SPKll15yqjGTlpbG+++/X2F+r1696N27N4sXL+b77ytmvdlsNtavb5p9zEAcLTVtRIyMoB6RrEbuyXiJoNjHeOx4lzqvp7BIGKqoIeMbOhRP2Yz+218qMUZi5609eD3CLmKm+3Tk4V1rUMrWinMbGKtHKMd9+7HS2JkvzkZReKL+Xha9dWoig93R+7lR4qHiVIVy/jZAiVu/wZh3bcOacmmvWdlxksf0+5DU5cXztL37oe7SA9O2zVjTU1EG22MzrJnpaPsNdsxTt25nH88of31xBAJ/+LXT8VZzJN1Y/y1d9u7dW2kBOYA2bdrw0Ucf0a9fP2bMmMHy5ctp164d+/fv59dffyUwMJCPPvrI6Z4HHniAL7/8kv/85z/88ccfBAYGsmHDBvLy8ujcubMjWLe2DBs2jJtvvplvv/2Wjh07Mm7cOIxGI0uWLKF3794sX74cxQVHh4sXL2bw4MFMmTKF9957j27duqHT6Th16hSbN28mMzMTg8FQJ7saiub9F3u5U9B0MzUEzRNJtjIx5U384+9j2tGKGQ01QWmRyDXnVXrNPzKWwrxI2m+eXfGiSsVfUzvxQdBO9Co9LyuCGLGz8eJhZCRKAzqwW9eH7wra2WOHsuq+rk6jpEWQB57+bpg81aTo7KKlPCKh8mBc2WbD+K/d3a+Mib3kPqbd25DcdKg7dKlwTdujD+Zd2zDt3o5uxBj7moHBWI4ddswxH7VnoCiDQoDzAoEn3YK6VdvqPdgmTGYDeGTOnj3LV199Vem1gQMH8vTTT7Nt2zZefPFF1qxZw8qVKwkMDGT69Om88MILtGjRwumeDh06sGbNGmbOnMmPP/6Ih4cH11xzDW+//TaTJ0+uF5u/+uor2rZtyxdffMG8efOIiIjgkUceYejQoSxfvrxC4HBMTAw7d+7k3Xff5ZdffuHLL79EqVQSGhrKgAEDmDRpUr3Y1RBI8oWHeoKmw+fD4cy/rrZCcJmyN+pWrj86GlmuecGQaXFm7jjhy1HdHnbt+63C9dA2dxCbewKv3+c7jUtaLSunt2GB7
36i3cN4LyOH2IyGL/ooq9zIDrqKjYoezE9vzd46pqSrlRJRgR74BuiweqvJcJNIUtmwVqP4imw2U7xoPiBjy8/HtPNfrKdO4jZqLN5Pzq763tJSMq69GlVMHP7zKxbWM/y1lvzZT+B+2ww8pt8HQPF3X1H06XtoruqPKiKK0rWrkJQq/Bf+gkKnp+CdlzHt+Bf/L35A0jZetk9DEaxRsbtv00sRbip8/vnnzJgxgw8//JB7773X1ebUC8Ij05QpqkMxCoHgEnQ89Q2JsQWMODG5Fi0NJBQ2BTmV1JAJbdUdvdKnoojR6/nujhiWeu5nsG87XjvwNx6GuqekXgybPpAk//6sMnVh/tlocmt5ZKSQIMJfT2CgO3hryNYrOKmysd9Js8hUt4KcbDFTvPCT8gFJQj/5djxmPHjJe23FhfZb3D0qvS7pPc7NK49J0t9wK7LZROnqXzDv3Ym6dTs8H3gShU6Pafc2Slf9jM+bH4JKReFH71C65ldkgwFtr6vx+r9ZKHz9qvW4mgpZZgs2WUbRhHtCNQZpaWkEBwc71bZJSUnhlVdeQalUMmbMGBdaV78IIdOUMTbci7xAANDizK9sjC5k2Onp5NegpYHyXJ5ARlaS07hCqUK29SZ2w1ynccnHm89uD2atx1Ee9OzAjB2rkRqgc7LBry173fuwpLADP6YHI+fU/M0sxMeNkCB31D5a8tyVnFDbOCJBud/IVicbFTo9wX/uRLbZsGVnYvx7PUXzP8B8YA8+r89DcRGRUlskpRKP22bgcZtz1VnZZKTgnVdwGzEGbferKP52PiW/LMHzvsdQBgZTMHcO+W88j++cD+rVnobGKkOWyUKQtvLmm1cKc+bMYeXKlfTv35+goCBOnTrFihUrKCwsZPbs2XUuzteUEEKmKWOsPNNDIKhPAs8msDG0kFEZ95NSzZYGCklhryFjcP4bDW/bn7DiNFSny2MypMAA/nurJ/u8MvmfJZR+u+uvX5Ks1JAb2Iu/VT35IrMNO8561uh+Pw8NEUHuaP3cKHJXkqyBJIVMkmNG3URLVUgKBcrAYPRjJ6Pw9iX/pScpXjQfz7sevug9Cnf745OLK39tkEuKzs27tBgqWvgpcnEhnvc+BkDJ0sXoRoxBP9Yeo+FRUkzBa89iOZWEKiq6Jg/N5WSbhZAZNWoUBw4cYOXKleTm5uLm5kanTp247777uPnmm11tXr0ihExTxWIEW+OmEQquXDwztrHO703GFzzGwSL9JecrUFSoIaN190Ald8Tv1/9zjEnhobw8RYHRX8F3KUVEZu+vs602nT+n/Pvyu6Ubn6fGkJFUvTcsD62KqBAP3P3cMHiqOK2Bs0qZ8pB614ULanpcBYBp17Yq50k6HQr/AKypKchWK5JS6XTdeuYUAMrwqCrXMR8/Qsn3C/Ge+TIKL29sRYXYcrNRxZX38SlrHmk5fbLZCZlSW8MJ0ObCqFGjGDVqlKvNaBSEkGmqmJpmKWjB5YtbzkGWe73EbeqZbM6tuqWBAmWFGjIhcUOI3fYdku1cCnXLKJ6eaKBNYDCz9ybgZi6tZKXqYfKNZ7/H1fxY1JHv0kKw5lYd06NVKWgR7IG3vw6zl5pUNzilsJHliBdoWjkOtuxMACTVpV+S1Z26Y0z4DfO+XWg6O/feMW6zFzTTdOp20ftlq5WCt19C07MPbkOc3+hkUyXNI5th90iDtWn9fgUNixAyTRVjoastEFyBqApOsUj/PA8FzmJFZsU+LY55KDBQLrY9A4LxN3qiOWB/I5XbxvH49fnc4hnCLTtqfpQkK1QUBPbgH3VvFmS3YXPqxYWVSiERGeiOf4Ae2VtNhk7BSaWVPRVEi2vfkC1Jx1GGhCG56ZzGZUMphR+9A9hrwZRhy8/Flp+HwtsHhbevY1w/ZiLGhN8o+vJDfN/62FFLxrhlI+Zd29D06IMyJOyidpT89C3WUyfxefFtx5jCwxOFfwCmfzfifq4gXlnzSFWLmDo+8sbHIDwyV
xRCyDRVTCI+RuAaFCWZzNM+h3/YLL46G17pHKWspMhcXnwlJHIgwYvfAsDStS3PjyniZaOK7pWkZl8Mm5sPKQF9WWvtxuepLUlJrhivI0kQ7qcnKFCPwltDjt4ejHvQSaPYcLVoqQxD4h+U/PgN6g5dkLRumI8dwpaRDucaaSqjYtBPKm8SWfLz9xQv/AT32+/GY9o9jnFN156g02Hes4OMkRUbO5q2bSZ9SFd835vv8MzIVisl3y2g5NcfsGWmo4xsgVzi7PXVT7iZos/mknXnDViTT4IEmqsGoIpoUWGPpk59CpmkpCRiYmIYOXIka9asqbd1BfWHEDJNFXG0JHAhkrGA2dZZBLZ4hreT4ypcVyKRX5wBQEBUPBEH9yIZSzH06cTHYyx8cuoUQfmXbrFh9m7JIe+r+amoI9+mhWPMcz4yCvLSEhrkjsZXS767kiQNHJNkjjlmNJ9P3to+/e1ZStv/wZZ2LjJHrUHhF4xsMGA9dZKSZUtwn3z7Jddyn3Yf5m2bMR89iFxQAGo1ytBwVHFtMK5dieTphbpNeZfrkh8XUTT/AyRvHxR+AVjz88h98j78v/oZhc4eE6WffDvW9FRKf/0BlCq0Vw/A6/+ea5DnoqEx2sTR0pWEEDJNFXG0JHAxksXA/RkvEtjyCZ460cnpmlJWkH2uhkyEV1t0y/9L4eCuJIyQ+GDvOtQXCVSXJSWFgd3Ypu3NV9ltWZ/uS1npWx+9mvhgD/R+bhS7K0nWwimFzKnyuxvmgTYS6tbtUcW1xjR1Aqg1+P1voSOg1lZUSM59t1E0/wPcBgxDGRKGx7R7nDwx5+Nxw61QSU+k4iULMa5diduwa5A05R4tR7+ltz4GwLRnB7mP3Ilp81+OOBlJqcSWk40qrg1+HzXvVgUi2PfKQjSNbKrUITBSIKgvJNnKjWfn8EW8c8dclaQkIzuJiFZdCV61iNzRPUkfaOCp3SsqiBhZ68nZ8FEsCnuGwXxGp1OP8kByf9Ldo+nZOZhO/SPwHxZBWv8gtrXS81eAgu06mSxF8xYulWHasRXr2dO4DR3tEDFgj1Fxv+UOMJsp/X15rdcvXf0LALprxjmNWzPTUcWXtx+oqt+S1+PPN2sRA2CwukbIFBYW8sILL9C+fXt0Oh0+Pj6MHDmSjRs3Vjp/z549XHPNNXh6euLt7c0111zDvn37mDZtGpIkkZSUBMDu3buRJIkHHnjA6f5ffvkFSZLQarWUlDj3PYuOjiYmpmJ807Jlyxg6dCi+vr64ubnRoUMH3n77baxW5z5n+fn5vPHGGwwcOJCwsDA0Gg1hYWHcfvvtHD9+vMK6s2fPRpIkEhMTWbBgAd26dUOv1zNo0KAaPIO1o3n/tV7WXH4v4oLmy5DTH7AsPp+xR0cDYLWZsViMxGSbKRgch0/0Qa4+VJ5abfGK5Ih3P34t7cTijCgCS7zwCdDh202DUSeRrLCS00QziBoS0257erUyLILcpx/AvH83stWCOiYet9FjATDv3lHt9Wy5ORR/+wXGf/7CmpYKVguSmw7zvt2oY8uFkjIwCOOGdRjX/4GtIB9lhD09u7J+S+YjB8h5+A78P1+CW0QkKklCLUmoJCr5klEBSklGLckoke1jku3cv20osaHChlKyoZRt58asKLGiwori3Hf7mAUlFhTnvivlsu/m88ZM5342o5BNKDGjsJlQYEIpm1DIJtrLNwI31M8vrZrk5OQwYMAA9u/fT9++fbnnnnsoKChg2bJlDB48mB9++IFx48Y55u/evZv+/ftTXFzMhAkTiI+PZ9u2bfTr14/OnTs7rd2pUyf8/f1JSEhwGi/72WQysWnTJoYPHw7AyZMnSU5OZvr06U7zZ86cyZw5cwgPD2fChAl4e3uzYcMGnnjiCbZs2cIPP5S3vTh48CDPP/88gwcPZvz48bi7u3Po0CG+/fZbVq5cyY4dOyr0kAJ46623SEhIYOzYsYwYMQLlBSUCGgIhZJosTS9YUXBl0/n01
yTEFTLixA0YzUXExXVGKaXQSv8n3ql5FAV2ZafbVayWu3FYHQ0+GjJ1ErntZNIdf84yNSnn31yQ4Lw3+/I3ffW578pzb/qH0k5RAhR/9TEqrRtRI0ag0etISfiTwndfQaFWoz6bxPVeBagkK0qc3/SV573hZxw+zsL7X8dQUELbfh3I0dtIPZpCYLAnAVt/4qFb9PY3ednM3BYqdm86gaevG/7h7iQfOYCkgHd6/kWAegMffraLPdoiXh27i4fu+ob/TPVnStgjzSkEyQk32/BG3/PBBx9k//79fPbZZ/znP/9xjL/++uv06NGDu+66i1GjRuHmZu9n9cADD1BYWMiiRYucCtQ9//zzvPzyy05rS5LEwIED+emnn0hPTyc4OBiwC5n+/fuzZcsWEhISHEKmTOAMHlze9fyPP/5gzpw5jBw5kqVLl+Lubu83Jssy9913Hx9//DFLly5l4sSJALRt25bU1FT8/JxbVCQkJDBs2DBeeeUVPvvsswrPw/r169myZQsdO3as3RNZC4SQaapc4X1CBE2TmDO/sCm6kO9Kb6CFdzaBvgVs9HqI1eruHNZ6c0ItU1KhB5EzSs692Ss492n/3Js9ZW/+suPTvv1TPRd8wnf+pK+SbOc+1duc3vDLP+lbzr35m1HIZT+bz/ukb0YhnxuTTSjOjSllE0pMSDYzStmIAvunfYXNiEI2opSNSLIRJUYk2VItx9KTeWfJBFSSlQ/+60Nc3G4Aim5044H71Zw5Y8ZWkMmN+dOrXKe42MaMR86glm3896NQQsPymXzDWdzcJD6bp8fNzYSy8EvH/KykU4SGqpBlCxkpOcTFaTh61MTJf7eT76/kjxWpzHkjhC8+PIpaDd8tzuarL7Po2VPHo/8XiK9vw3+qrk9kuXEVWFZWFt9//z1DhgxxEjEAQUFBPPHEEzz00EOsXbuWMWPGkJyczMaNG+ncuXOFKrtPPfUUH3zwAbm5uU7jgwcP5qeffiIhIYEpU6aQlZXFvn37eO2115AkiT///NMxt0zInH+s88EH9lYTn376qUPEgF0kzZkzh08++YTFixc7hIy3d+UlDwYPHkz79u1Zu3ZtpdfvuuuuRhUxIIRME0YIGUHTJOjsOga1HEa6dBNJ7hKS2sxoZSnXSHlgM4DSiEQpkqIUSSpBkkuRKEUlGVBJBhQKC2BBkqyABbACZsCKLFtANiPLFmTZ5Phus5mRZTPN/RiqsND+BntVHz1xceXBuB4eCm6+2Yc338zEYrn0Y/z11wIyMiw89ngALWO1rF5VQGmpzIgRHri7Vwx9zMqyMm68FzNm+ANgNNq49pokzp4189VXuQwf7kFJiczff5eiVMJ99wcQGKBk3rxs3nozg9deD62nZ6CxaNzXz61bt2K1WjEajcyePbvC9aNHjwJw6NAhxowZw+7ddgHbt2/fCnPd3d3p0qVLhWOkMu9KmZBJTExElmWGDBmCwWDg1VdfpbCwEE9PTxISEoiNjXXqp/TPP//g7u7OF198AcCCBQtITk7mhRdeAECn03Ho0CGnPRMTE3nvvffYsmULWVlZWCzlRTA1Gk2lz0WvXhVLAjQ0Qsg0VSQRhy1omvwVezWGAnfanS771KsA9Oe+KkeWZMxaMGttmDUyZo0Nk8qKWWnDpLBgkiyYsGCUzZhsJoxWM0aLCYPZiNFoxGgyIssySiWoVPYvtVqBSiWjUuE0rlTKTl8KpYxSYUOhkFEobCiUMgrJikJhQ5KsSAobCsmGJFng3HcJK0hlIsvi+JJlM8gWZM4TWzYzNrlMaFWNwWAXKa3iK9bI6dFTV2HsYiQmFiFJ0L+/B6dPm1i0KA+AyCg1ZrOMWu38Rh4YqOLYsfKqvUeP2v99+LCRoiIbt93uyyMPn0WjlRg61IPrr/cCoKTExuuvZ3L6tInIyMrfuJok9ejRPnPmDADbtl28fUROTg4AmzZtYtOmTRedV1xsL6tRUGBvCBwUFFTpvLKjo/Np3749QUFBD
oGTkJCAl5cX3bt3p7S0lBdffJENGzYQHx9PSkpKBc9QTk4OFouFF1980Wn8/J/L7AP44YcfuPHGG/Hw8GDkyJFER0ej1+uRJMkhgqpre0MjhExTRRwtCZogue7+/EgAfbPOIunikUutl74JkGQJjQE0htodUciSjEUjY1aBRWPDpLZhVtkwKa2YlOeEkGzBZLRgtJkosZkwmk0YzCaM58SQrRFScsvElEoloVKDSglqtYRSJaNSgsUyD8impKQLubn9UCpkFEobSqWMyVgA/A9QgjyugsiSJLvwMltMJJ1chI+PlhUr3Jn/+X7kc06c+Z/nsnpVKa++1pqWLbXYbHav1jXXePPpp1k8+0waERFq1q0rxNtbwZYtJTz9dBCLF+ehUoHRIBMbWy5YyrxGp06Zm5WQkRrZI+PlZRd+jz32GG+//fYlZpfPz8jIqPR6enp6peODBg1iyZIlpKSkkJiYyIABA1AqlVx11VXodDoSEhJISbGXRTg/PqZsT0mSyMqyF7I8deoUJSUltGnTptK9Zs+ejZubG9u3byc+Pt7p2nfffXfRxya54L1LCJkmixAygqbHS6170S69DxZTCpYgG8pGqhIgyRJqo4TaCLWtGmHR2DBrZfuX2oZZLdtFkMJq9wphwYgZk83s8AgZzUYMJrtH6ML01Er3sNi/nI/Ayv8ty/Z2Art3p9G6ddnjsIu7Xbvsn/xtNtiw4eJdvIuKirBaZfLyTHwx/yCRkVGcOnWK/v37I0kSf/31F488nMQDDzyA6lzvpuBgG4MHb2Dnzl3s2lVKZGQUxcUlRER4U1ral9WrvuLBByczd+73rP3Dgy/mn8VkMtOqlb2ys9ncD5Mpzu7RkmznvFm287xa1nNeLKv9ZywgOXuzzvdkIVuwnXd0KNvMyFRPFFcHSdG4na979uyJJEls3ry5WvPLspL+/vvvCtdKSkocR08XMnjwYJYsWcLixYs5cOCAw+ui1Wq5+uqr+fPPPx1C5sK05969e7N69WqOHj1KfHw8UVFVNxY9fvw47du3ryBiUlNTOXHiRLUeZ2MhhExTpQl7ZFIKbPxwwMyqoxYOZdlIK5Lx00n0jVLy5NUaekc4/1lJLxZccs1Tj3gQ6V31G1RN9wU4kGnlodUGtp61EqCX+E9XDU/21aBUOD+/pWaZjh8VMSRGxafXVd/FfyWxrO1Q1ubup3/+HRQYD5MvZ+GHv6vNqjYqkwKVCXROtSYrHvFcDKtaLj8a08r2ozHVuaMxhRUTFkyYMdrMGK3njsfMRrsYMhqxWCzodPa/rSNHjpCWlkZIiD392WAwOGqNnJ+uWlhYiNFoxMPDw5HtIp9zv8iyTI8ePdi/fz8KhYLevXvj7u5OdnY2+/fv58CBA3TqZC9kqFAoGDBgIAMGDARg8+bNJCYmMmnSjXz11bd06tQJX982aDQaDh5M5pprrsHLy4uff/4ZgLzcULb841bzJ70GSJL9SEylks4dHUqoVOXHiUqljEoloVTaUCpBobShUnLuuNAuqsqOD03GyEtvWI+EhIQwfvx4fvrpJ1q2bElpaSnZ2dkEBAQwbNgwXnjhBbKysujYsSN6vd6R7bNr1y6+//57brzxRsdab731luOo6sMPP+TNN990XCsTFU899RQAr776Krt37+bFF19k8ODBPP/88yQnJ9OqVSvCw8MZOHAgixYtYubMmQ6RNXHiRBISEpg4cSLr1693/D2lpaXxxRdf8Oyzz/Lll18SGBjI7t270el06PV6xowZw2uvvcb999+P2ex8jPrJJ5/wv//9D4AbbriB6dOn89JLL6HT6Rg4cCCJiYkN8KyXI4RMU0XRdLME5v1r4o1NJmJ9JUbEqgjUSxzNsfHLIQu/HLLw7QQdN3Yo/0T0wsDKXdLHcmQW7TXTLlBxSRFTm30LjTLDFpZgscGdXTUczLLyzJ9GtCr4vz7Ob2AvJBopMcNbwxv2xbq5ctY3ijnWs3QwB1OQ4YaptIgz2Yfx4
2pXm9ZoKM0SSrOS8r+Qmn3qt6pktugTOclJkOGbr75mWI+BaHVurN+xiezsbFRKFZ4enrQMjcZoNfL5qtX8u/VfJkyY4MgE0WrL/3b1ej0lJSW0bdvWkYnSqlUr9u/fz9mzZx1C5nzy8vJISEhg6NCh7Ny5E4PBwMiRI4FykXT06FE8PT0xneuGXTbekMiyhMkEJlPZXpXteX7qvvKC7+X4X5AyXB8UFRUxbdq0Sq+1adOGu+66i59//pmTJ0/i5+dH27ZtKS4u5ptvvmHRokXYbDZSU1PR6/XMmDGD1157DYVCwS233MLSpUuJi4tjx44d/PPPP3h6elJYWMjkyZMde2zZssWRUWSz2dDpdAwePJhFixaxevVqPvjgA2w2G9nZ2UyaNIkjR46QnZ1Nnz598PPzY+rUqWzYsIEdO3YQFxeH+lyz0RkzZnDs2DE2bNjA+PHjAfj1119JTU3FarXi5uaGSqVi4cKFLFmyhPDwcDp37uzwGpWli5f9/ZUdf10YONyQCCHTVNFc3LXsanqFK0mcqmdgtPOfz4ZkC0MXlnDvylLGtVGhVdlfcGYPqlwcPLjKfi5xZ9fqvSHUdN8VRyykFslsnK6nb5T9nqELi/l0u9lJyOxMtfLff0x8P0mHt1vT9YS5Cpuk4JkW8RTlH2V46XCKCyxYSws4XrCdTlF9wdq8M4kaC6VFItbT7il49OrpbEvZy5///oXZZqFNYEseGXEbM39/hzaeLRhyMhaA30v9+Bfoa2jNJNMQLFoZk7fMfO/PyMjPIvd0JgC3DZ9Ml8jOmDBjyLEHbGpVGgL9AuyxQkaDQ5SsWLGC4OBgWrRowWeffcb48ePR6XQYDAbMZjMtW7YkJSUFk8lETEwMJ06cICsri4CAi3dDb2qUHanVJ0ajka+++qrSawMHDuTee+/l9OnTfPvtt3z//fccOnQIm81GSEgIqampDBw40PEctmjRglGjRrF69WoGDBjA6tWrkSSJfv36sWDBAoegiIuz9zkzm81MmTIFm83G6NGjWb16Nddccw0//PADGzduZNCgQSxYsAAPDw+KiooYNGgQn3zyCfv27WP69Ol89tlnDk/f2rVrmTt3rqMB5ooVK4iJiWH27NnodDp+/PFHli9fTmJiIvv372fevHkcP34cjUaDwWBg3rx5vPHGG4Dds/jaa68RHh7OTTfdxNtvv839999P9+7dueqqq+r9d3AxhJBpqmibrpCZ0LZy4dG/hYrBMUp+P25lb4aNHmEX9yoZLHZvjEYJt3WqnpCp6b6nC+zBnd3Ps6NHqJLNp8uzN6w2mTt/LeW6VqqLrn+ls7DjSLbn26v2hhg7kqyyYTCcC47xU0KmpYq7BedzVWQX/vfPN5zKP8vXk99yuvbD3tWOOWX899pn+O+1z9h/sIGmVEJTCv0jurM0/zfubDeBCTfY+ydhz/Bl6167kOmjbMvYs+UVYm0KGYsGbrpzEAaVhVs+eIA+7XrwyIg7MSms5JbmA9Cvx9WMGDoMg9XE8ZMneP2tOWg0GiRJahTPTH1Q5m2oDyIiIgCq1f3a29ubJ554gieeeMJpvFOnTiQnJzsJrHvuucchZNavX+8Yf/jhhwHw8fHBx8cHsIuNpKQkXnrpJWbNmuW0dr9+/Rg7diy//PILubm5jkDim266CY1Gw5tvvul0XDls2DCGDRvGoEGDWL9+Pamp5c1dFyxYAMDNN99Mv3796NevH3fffbfj2vTp0zl27JjjqOjFF1/EarXy2GOP8eijj/LWW+V/088991yFGjkNhRAyTRU3L1dbUCvU52JPVJc4KfrpoIVcA0xqpyKwkroX9bFvpJf9h52pVvpE2v/Ud6RZiTrvGOudzSZO5NpYcfPFU4evZI4Et2Fesf0dMt7sj6E4CK2ulLKuLsWaQtwRMUXVpV90N6J8wlh2YC13dJ9I+2B7zEOBsYgP/vkGjVLNxA4jHfPTi7IoNBYT5OGPl9bDMX5rl+tZuv83/vfPI
obG9sHbzf7BJ6Mom/nbfkQhKbim9UCnvRW28syxRf/+SFLaKf68diFhJ8rW9SbIw5+TWw4xMOYuAOb+8y8AM9xGEW2IsGeOuVWROYY9e8xoM9m/yjLHTEYMRkOjCaGL1ThpaGpSd2XkyJGEhITw5ZdfMnv2bJRKJSaTyRE/c8MN5S0W/vnnHwAOHz5caZ2atLQ0bDYbR44coUePHo7xmJiYWnnSunfvXmGsTNDl5eU5xsqOl/r161dhfmU1choKIWSaKtrmJ2RO5dtYe8JCqIdEx6Cqxcn8nXavyH+61v0F52L7XttKRYiHxPjvS7m5o5rD2TbWnrDyzgj7sdLxHBuzE438d6QbYZ6ibs+FmJUaZgYFYCqy958enx1PRqaMh095oF96YRItaXuxJQQXoFKoeGvUk9y65HEmffsg17cdirtGx+rDf3GmII3nBt9HpHd58bk56z/lx31reOeamUzuONox3iOiIzN6TuazrUsY8cV0hsVdjcVm5fejG8kqyeWpAXfR0q/ygNfT+am8s/ELnh54F2FezjU/7uw+idfXf8LUH54k1DOQ7/euYljs1cT4RYJMvWeOmdQ2zEprvWaOgXMcUWNR07orpaWlZGZmYrVamThxIpGRkaxevZrS0lI8PDyc6ruUBf8uWrSoShvOrwMDta/pUubVOZ8yb9L5v4Oq6uE0Zj0ZIWSaKloPe1G8Ri61XVvMVpnbfi7FaIU3hmkrZAWdz8lcGwknrUR5SwyPrVtQc1X7emkl/rhNz8NrDHy+w0SAXuLVIVoe7m0XT3evKKVHmJK7uqvZcsbC/asM7EqzEe4l8dIgLVO7NJ+6GQ3BvE7DOZK31/FzuKkDBy0yKvs7GQDHT2+npbcQMjXh6hbdWHrLB7y78UuWH/zTESMzc9DdXN92aLXXeX7IA7QJjOWrHT/zw741SEi0D47ntZGPMbrVgIve9/Sat2kT2JKp3cZXuHZ3rynklhawZO8q/j61k+FxfZkz8vFaPc7KqDxzrPrYM8fsBRWryhzzcHO/9GL1TE3rruj1em6++Wa+/vprVq5ciSRJjiOgdevWERpaLmjLhMXy5csZM2ZMtW1q6Jou59fDubCB5MVq4TQEQsg0ZbSeYMh3tRWXxCbLTFtWyl/JVmZ0U3Nb56oFwBc7TcjA9C5qFHX4j1adfTsEKVl3e8UXtS92mth4ysrue9wpMsG135bSJUTBmlv1rDhiYdoyA20CFJWmdF8JbI/qzlf55d2sIyzeWG0xAChUBsd4Tk4KUqQKuUDEydSErmHtKsTIVIZTjEwlTO442slTUx0W3fjORa8pFUqeHXwvzw6+t0ZrNhb2zDEJN4dHqPJYGE8XCJma1l3RaDQsXLiQ/Px8Vq1aRWJiIv369WP06NEVyvz37t0bsKfN10TINDSdO3fm559/ZtOmTfTs2dPpWmU1choK4U9vyjSD4yWbLHPHMgPf7rVwayc1H4+pOn3ZJsss2G1GIcEddThWqum+55NeZOPx3w08N0BL6wAli/aaySmVWTBOx7CWKt4b5Ua8n4L3tpguvdhlSLHWk2c9FdjO8wZOyYwh3egLgITBab7J48p8ngRNFAkU+sYP3G/RogXHjh1z8kQYDAbuvffeCnVXzufuu+/GYrFwww03IMsyM2bMqDBn7NixREVF8e677/LXX39VuG42mx11iBqTKVOmoFAoeOeddxwVg8F+xPXqq682mh01/riZlJRETEyM05hOp8PHx4e2bdvSt29fpk6dSmxsbL0ZecXSxIWMTZaZvszAwt1mbuqgYsFYt0t6WNYcs3CmQGZkrNIp6Lah9z2fB1cbiPBS8FRfu5A6nGUjQC8R4VVuT5cQBYeymsexXn0zp/0AUnL3Oo21ygxkR3HZi7FzOd9scxohhDWSdQJB1UhaFVIVR9u1Ze/evVXWkXnwwQd58MEH6dq1K5MmTcJisfDHH38gy7JT3ZULGTVqFC1atCA5OZmQkBCuu+66CnO0Wi0//
vgjo0ePZuDAgQwZMoSOHTsiSRLJycls2LABf3//Rq3dAtC6dWuefvppXnvtNTp27MjkyZNRqVT89NNPdOzYkX379qFQNLy/pNZ+89jYWG699VbAnl+fkZHBv//+y8svv8xrr73Gk08+yauvvuqSvguXDTofV1twUc4XEze2V/H1eF2VcTFlzN9pfzP8T7faeWNqu28Zyw+bWXrQwt936FEry+8zXlALxWi9MptErIvvzy8XiJggqwc2dXk/FpulxOn6qfR9hCiEkBE0DZTuDXMcfPbs2SrryCQkJKBWq5k3bx6fffYZPj4+XHvttbz++utOGUgXolAouO2223jllVeYNm3aRWvg9OzZk927d/PWW2+xatUqNm3ahFarJTw8nHHjxnHTTTfVy+OsKa+++ioRERHMmzePjz/+mKCgIKZMmcLDDz/M8uXLKw0crm8kuYb5cGUemYvl1G/cuJHbbruNpKQknnvuOV5++eV6M/aK44fpsP8nV1tRgbJjna92m7mhnYpvJ+pQVUNMZBbbCH+3CB83iTP/54FGWfk9ZqvM8VwbaoVErF+5mq/tvmUUGGXaf1jEhDZq3h9dfhT16XYTd68wOArnFRplWrxXyOh4FYsmXDlp2VkeQUyMCCHHmOc0/mBGJ0JK7+D0Wftz7RuQQOrRnY7rCoWSyfFPIpuuTA+WoGmhifQk6P4urjajRowZM4ZVq1Zx5MgRRxG85s7atWsZPnw4Tz75pKOAXkNR7z6ffv36sWbNGrRaLW+++SanT58G7MV0ytLQli9fTt++ffH09CQ6Otpxr8lk4t1336Vbt264u7vj6elJ//79+fXXXyvsk5+fz/PPP0+7du3w8PDAy8uLuLg4pk6d6pTmZjAYeOedd+jcuTPe3t64u7sTHR3N5MmTK3X1LVu2jKFDh+Lr64ubmxsdOnTg7bffrpD2Z7PZ+Pzzz+nVqxd+fn7odDoiIiK47rrr6q+vhGfopee4gJfWG/lqtxkPDbTyV/DKX0ZmJxqcvnalVUyTXLjbjNlmL4B3MREDkFIo0/Z/xQxd6JxKWNt9y3h6rQGFBK8OdU7NvLmjmkC9xIQlpTyyxkD/L4vJM8AjvRs/hdOVzI7vWkHEAHQ/KpGeU/4p0Wwscrpus1mx+l2J/itBU0Shb14B+gcOHGDVqlUMHz68WYqYshTy88nLy2PmzJkAjBs3rsFtaJDfeOvWrZk8eTJff/01v/zyCw8++KDj2g8//MDvv//OmDFjuO+++xx56EajkVGjRpGYmEiXLl248847MZvNrFy5krFjxzJv3jweeOABwN73Y+TIkWzZsoW+ffsyatQoFAoFycnJ/Prrr9x2222OVLCpU6eyZMkSOnXqxPTp09FqtZw+fZqEhAS2bt3q6EIKMHPmTObMmUN4eDgTJkzA29ubDRs28MQTT7BlyxZ++OEHp7lvvvkmsbGx3HzzzXh6epKSksLGjRtZu3Zthc6jtcKraQqZpDy7E6/IBK9uqDzQM9pHQZcQ59Tq8mOl2gXi1XZfgE2nLHy8zczKm3V4aJzfdD00Eitv1vPA6lI+2mYi3FPi6/E6eoY33X5X9c2P7YezPu9ghXFvmxuSLQKTofyFylRaMXe2UMrBG+8GtVEgqA4Kj+ZRNuHbb7/l8OHDLFy4EIAXXnjBxRbVjkWLFvH2228zZMgQwsLCSE1NZc2aNWRkZDBt2jT69OnT4DY0mHQdNGgQX3/9NVu3bnUaX7NmDb/99hvDhg1zGn/ppZdITExk1qxZvPjii47YmsLCQoYMGcJjjz3GhAkTCAsLY9++fWzZsoVx48Y5urOWYTQaHRHi+fn5/PDDD3Tv3p0tW7Y4lWm2Wq0UFpa/IP/xxx/MmTOHkSNHsnTpUkcDLFmWue+++/j4449ZunSpo2nX559/TlhYGHv27EGvdz5+KCteVGeaqEdmwTgdC8bVvJrrgfs9Lj0JuxiRX6h4rlrbfQH6RqmwVbJmGT3DlWz5T/Xsu9w47
R/NW6bTlV67MSeO/PDucF5JCENRxW7mZ/OO4U3FaqACQWOj8m8ejV8//fRTNmzYQIsWLZg/fz5XX908G7BeffXVdO/enbVr15KTk4NSqaRt27bMmjWL++67r1FsaLBw4rAwe/Df+SlZYE8ju1DE2Gw2PvroI2JjY51EDICnpyfPP/88JpOJn35yjhfR6Sq+qWm1Wjw87G9IZb1B3NzcKkROK5VKRx8LgA8++ACw/3GViZiyNebMmYMkSSxevNhpDY1G4ySOyvCrr86rXuH1s45AcBGskpJnImIouSCAt4yrjkC6rbxCp6S0YSqtOPdY0tYrMzpa0ORQBTSPlhmJiYlYrVZOnDjB9OnTXW1OrenVqxfLli3j7NmzGAwGiouL2bZtGw888ECjZCyBCwriXVjoB+z9I3JzcwkLC3Mqy1xGZqa9w2tZalnbtm3p1KkTixcv5syZM4wbN45BgwbRpUsXpyfOy8uLa665hlWrVtGtWzduuOEGBg0aRM+ePSs0Ffvnn39wd3fniy++qNRunU7nlNo2ZcoUPvzwQzp06MCUKVMYPHgwffr0qVRc1RqfykuMCwT1xRedRrKrYF+l1/Q2NfqkYrIV5cXu3NzMFyRf2ykpyUfyUyNnX7xehkDQGKj8m4eQEdQfDSZkzp49C0BgYKDTeGX9F8qOYvbv38/+/fsrXC+jrI+ESqXizz//ZPbs2SxdupTHHnvMsdcDDzzAs88+6/CU/PDDD7z22mt8++23PPvss4Bd4EyfPp3XXnvNcSyUk5ODxWKpVEhduD/A+++/T0xMDF9++SWvvPIKr7zyCm5ubkyePJl33nmnflree4aCQgU2UTVVUP8cDG3Hh0UXrzsxKb8VhW16QXlHAtRuFxcqJW7F6Gge8QmCyxdVQPM4WhLUHw3m9ynL3LmwbHFldWXK8swnTpyILMsX/fryyy8d9/j7+zNv3jxSUlI4cOAAH3zwAX5+frzwwgu8+eabjnl6vZ5XXnmFEydOcOLECebPn0/r1q15//33efTRR51s8Pf3r3L/kydPOuarVCoef/xx9u/fT0pKCt9++y39+/dn4cKF3HLLLfXyHKJQgqeozyGof4wqN2YG+GCpQiT3P6Yix7+d05hKffEqvpkllcfZCASNhcJDjULbvLKWBHWnQYTMkSNHWLJkCVqtlvHjKzYmu5C2bdvi5eXFtm3bqizlXBmSJNG2bVvuv/9+/vjjD4BK07XB3tL8jjvuYP369Xh4eDjN6927N9nZ2Rw9erRG+4M9Huimm25izZo1xMXFsXbtWkpLK3PA1wK/6PpZRyA4j/c6DuN40ZmLXtfISny2HSOtwDmQXakyXuQOOJmy86LXBILGQBwrXZnUu5DZtGkTI0eOxGg08vTTTxMefumAVZVKxb333ktycjKPP/54pWJm3759ZGRkAPaifElJSRXmlPW4cHOzuxYzMzPZt6/i+X9ubi5Go9ExD+Chhx4C4I477iA7O7vCPWlpaRw8aE9PNRqNlTbEKi4upqioCLVaXX9BTgGt62cdgeAcW2J6sihvb5VzxhXEYYjsRGmRs8dGki4u0NPSjyM1UFVVgaA6NJdAX0H9UutXnWPHjjF79mzAXsiurEXB3r17USqVPPfcczXKi3/xxRfZsWMHc+fOZeXKlQwYMICgoCBSUlLYu3cvu3fvZvPmzQQFBbFr1y4mTJhAr169aNeuHSEhIaSkpPDLL7+gUCgcR0YpKSl07dqVzp0706lTJ8LDw8nOzmbZsmWYzWYef7y8Pf2oUaOYNWsWL7/8MnFxcY7+F9nZ2Rw7dowNGzbwyiuv0LZtW0pLS+nbty+tWrWie/fuREVFUVRUxIoVK0hLS+Pxxx9Hq62nYmpBbS49RyCoJgU6b57T2ZBLqy7oPfiEjoJWA6CCpq/a02jxtqAsrnKKQNBgiPiYK5NaC5njx487AmPLmka2adOGWbNm1apppFarZfXq1cyfP5+FCxeydOlSjEYjwcHBtGvXj
nvuuYeOHTsC0KNHD5566ikSExNZuXIleXl5hISEMGzYMJ544gmuuuoqAKKjo5k9ezZ//vkna9euJTs7m4CAALp168bDDz/MqFGjnGx46aWXGDBgAHPnzmXdunXk5eXh7+9PTEwMs2fPdsS+uLu788Ybb7Bu3To2bNhARkYGvr6+tG7dmtdff50pU6bU9mmtSKAQMoL649W2fUnLrTxLqQwlEgFbT7B/eCRwQUVra9VCJteaQQBBdTVTIKgV4mjpyqTGvZYEjUxxNrzV0tVWCC4D1rQeyBOmk5ecd31hPDcvSOWv3q9hu6CZpm/gelKPbL/ova3irqKrdWCdbRUIakPQg13RhF+ZhS2vZBqnWo2g9rj7g74eUrkFVzQZ3qG8UvGcqFKGn/SgpOe1FUQMgMVYsT3B+ZxI2gEqURlP4BpEjMyViRAyzYGgtq62QNCMkZGY1bID+aaKrQUqI2R7MrkhnSu9ZiotqnS8DIvFBH5XTo8qQdNB4alGoRV/e1ciQsg0BwJF5pKg9nzXYQR/5x2u1twRxS2R0zJIK/Wp9Lqh+NJiqFCVXxPzBIJ6QcTHXLkIIdMcEAG/glpyMjCW/xouHRdTxqhkH0yxnSjMq1gCQZJkjCWXTklKLzxRIxsFgvpAxMZcuQgh0xwI6eRqCwTNEItCxTNhEZRaDdW+J3JHCkUdhlV6TaO3QDVyA44lXzwYWCBoKDQtLt7dXnB5I4RMcyC0MyjUl54nEJzHp51Gsq+g+t6YfoZI5NMpZOkrL52g0V68PcH55OenI/mIv1dB46IVQuaKRQiZ5oDaDUI6utoKQTNib0QnPis4WKN7rj8ViKxxIy278pcFtebi7QkuxOBefS+QQFBXlD5alN71VIRU0OwQQqa5ENnL1RYImgmlGj3P+OixyDXrmh69M53S7iOxmGyVXleqqueRAcg2pNRob4GgLohjpSsbIWSaCxE9Lz1HIADeaT+YpOKzNbqnhzEMTiST1+LigrmqPksXkpxWdS8ngaA+0UZ5utoEgQsRQqa5IISMoBpsjO3D95doCFkZ41JCAMiwVFV8sfrHRadS9iO5iZoegsZBeGSubISQaS74tgCPYFdbIWjC5Ot9eV5Tfa/J+cTvzsYaGkNO1sWPo2zWGnSDlGWsvqL7iaDhkTQK1KEi9fpKRgiZ5oTwygiq4MU2V5FpyKnxfe1NQUiHjlPYdVSV82yWkhqtmy9XryWCQFAXNBGeSErRFuNKRgiZ5oQQMoKLsLztEP7I3V+reyelRgCQ41N14UWzser2BBeSklO9asICQV0Qx0oCIWSaEzH9XW2BoAmS5hPB69a0Wt/fZm8eskJJWr5blfPMl+izdCHHkreBQnxSFjQsQsgIhJBpToR2BZ2fq60QNCFkJJ6NbkOhuWYio4yWFl+U+49h7DQQQ3HV6drGkuo1nXTMNxSLBpKChkUSGUsCIWSaFwoFxA52tRWCJsTXHUfyb/6RWt9/Y3o02Gzkx/e9xEwZQ1HNxVKJtnYCSyCoDqpAHQq9qCJ9pSOETHMjrvI+OIIrj2PBrZlberxOa3TcZ89EylSEVzlPo7Mgy5UXyquKjKLkWtklEFQHt3hfV5sgaAIIIdPciB3iagsETQCzUsPM4CCM1uq3DbiQcKsXqj1HsHkHkJVZtUjRuFXshl0djp/ZWav7BILqoGvv72oTBE0AIWSaG54hENzB1VYIXMz/Og3nUGHdvB03ZrYEi4XiHtdis1Vd80WtqX57gvPJykpG8lTV6l6BoCoU7io00d6uNkPQBBBCpjkivDJXNDsju/Jlfu1Src+n6wG7NycnuNMl56pUtff8mDxr580RCKrCrY0/ksiKEyCETPMkbqirLRC4iBKtB894qbHVIl7lfAJs7mh3HAIgvfjSVVElRe27Wedaap8aLhBcDF07cawksCOETHMkqg9oRMrhlcgb7QdypqTuwuDGrJZgNmNq1YOi/Op0ya5ZVd/zSc6ou/dIIDgfSa3ArZWPq
80QNBGEkGmOqLTQaqSrrRA0Mgnx/fgpt366Svc8bPfoFLav3jGlbKtdDyeA5FO7kdTipUZQf2jjfZHUokaRwI54dWmutB/vagsEjUiOewCzlYX1spa37Ib7dnv7gCy36GrdY7XUoGHkhfdaLdj8RCyDoP4Q2UqC8xFCprkSP1wcL11BzG7dgxxjbr2sdUNOHHKpAZubO+lZ1RMYFmPthQxAoaJ+bBcIUICurahwLihHCJnmikoLrUe72gpBI/Bzu2Ek5B6ot/X6HLGLl9Ieo7CYqxc0bDbUrUJvan7dCvcJBGVoo71FNV+BE0LINGfE8dJlzxm/KN4wn6m39dxkFV7b7C0N8iJ6VPs+Y3HN+ixdyLHkrSBOlwT1gJvIVhJcgBAyzZm4oaAVBaEuV2ySgmej4ii21D5j6EIm5bdCLrIfE6Wbq/eGICNjKK5bfE5RUS6Sr/gULag7Ij5GcCFCyDRnxPHSZc0XHUeyI/9Yva454JhdTFgi4snLrl6hOq2bFZvVWue9S3X1J8gEVybqMHdUvm6uNkPQxBBCprnTYYKrLRA0AIdD2vFhce27WleGRlbiu9UujAq7VD99X6Orn8q8WaX1d0QmuDJx7x7sahMETRDRBKW5EzsE3IOgOMPVlgjqCZNSy9OBvpiLTtfrumML4pDz7MXpsr1aQzXjd9Xq2vVZupCk1N1EaGPqZa26klqYycpDCfx54h+OZ58iszgHH50XPcI7cG/vm+ka1q7CPYXGYt7d+CWrj6wnsziHIHd/rm0ziEf7TsNdo6+1LTN/e4dvdi0DYPv9PxPk4Xx0svXMXl5J+JDDWScI9Qjkrl5TuKnzmArrZBbnMPjz27i71xQe7HNbre1pqkhqBfpuQsgIKiI8Ms0dpRq63OxqKwT1yPudhnGsnkUMwOCTOgBkpYq03OrHqyhUtW9PcD4pZw8j6ZtGEbMF25fy4p8fcCovlQExPbmr1430DO/I70c3Me6b+/j14Dqn+SWmUm749iE+37aEWL8o/tPjBlr6RfLJv99x43ePYLDUrhfVXye38s2uZejVukqvpxSkc8uSx8gqzuGWztfjo/PiyTVvsvrw+gpzn//jfcI8g7i39021sqWpo+sUiEInPnsLKiL+Ki4Hut0Om94Hqu5gLGj6bI3uydd5++p9XUmGoK0nsQGGLkMwlVY/5kWprH3DyAuxeNtQNoFQmS6hbVly01z6RHVxGt9yejc3ffcoz/z+LiPj+6NVaQD4aMti9mcc5b7eNzNz0D2O+a8nfsyHW77l860/8ECfW2tkQ4GxiMdXz+Ha1oPILsnjn9O7Ksz5ef8fGC0mltw8l3CvYKw2K0Pm386i3csZ3XqgY94fxzax+shf/HLbh6gUl+fLunvvEFebIGiiCI/M5YB/LMT0d7UVgjpS5ObFs3obcgMI0muL47BlZQOQH3t1De+ufXuCC8mzZdbbWnVhdOuBFUQMQO/IzvSJ6kq+oZBDmScAkGWZxXtW4K7R8fDVU53mP3z1VNw1OhbvWVFjG15YOxeDxcQrwx+96JyzhRn4630I97IfqSgVStoFxXG2IN0xp9BYzLO/v8v07hPpEtq2xnY0B9Qh7mijvFxthqCJIoTM5UK3qZeeI2jSvNauH6mlDfNGP+JkeRXoTEJrdK9sqz8XypnsQ/W2VkOhVto9GiqF/RjsZO4Z0ouy6BHeEb3G+QhIr9HRI7wjp/LOOomLS/HHsU38uG8NLw17mAB334vOC/MMIqckn9QCewycTbZxMOM4YV7lsSKvr/8ElULJE/3vrPb+zQ3hjRFUhRAylwttrwe9qK/QXPmj1QCW59b/kVIZoTvsMTc2vxCyMqvT7bocWz3WsTmetB2UTbcyXkpBOhuTthPk4U+bwJaAXcgAxPhGVHpP2XjZvEuRW5rPU2veYmR8f8a1G1bl3HHthqFWqrhh8cO8kvAhkxY9yPGcU9zS+TrAHgi8aNevvDbi8Qoi63JB0ijQdw1ytRmCJowQM
pcLKg10vjyD/C53sjyDeUlquF5EQ0ujkc+mAVDU4xrkGp5cWUx167N0PmazAfyaRsDvhZitFh5e8QpGq4lnBt6D8pxHpsBoT+/y1LpXep/HufGCavajeub3dzFZzbw24v8uOTfCO4RvJr+Nr86Lr3cuI6c0nzdHPcno1gMxWc08teZNxrUbxqCWvfjz+GYGf3Yr0W8OZtBnt5Jw/J9q2dPU0XUKROF2ecb9COoHIWQuJ8TxUrNkVlxn8kz5Dbb+NcnlDfZyAjrU+H6ToX66bpdRrK5bu4OGwCbb+L9Vr7Hl9G5u7nwdEztUv85OTfj14DpWHErgxaEPVUizvhi9Izuz/PZPOPx/v5E44xtH6vXcvxeSXZLHC0Me4Ex+GjN+fo52wfF8M/ltOgS3YsbPz5FSg+OuporHVTU7ChVceQghczkR2ApiBrjaCkEN+L7DCDbmNWzcSNT2FMe/0wor9ypUhamkfoVMWuHJel2vrthkG4+tmsMvB9Yyof0IXh/5mNN1L60HYA+qrYyic+NeF/HYlJFbWsBzf7zH0Ng+dRZKhzNP8uE/i3hh6IP46X34eucvaJUa3h79NP2iu/PW6KfQKNV8vfOXOu3jatThHmgiPC89UXBFI/x1lxtXPwQn/3K1FYJqkBzQkncMSQ26x9WGSORTduFgateHksKaxccAGIrq14Ny7NR2Yn3b1+uatcUuYl7nx32/MbbtMN69ZiYKyfnz3aViYC4VQ1PG2YJ0ckvzWXd8M5FvVP6Bo/v/7I1g10ybT/vg+Iva/OSaN+nbojsT2o8A4HjOaVr6R6JTawHQqbW09I/kWPapKm1q6rj3EkG+gksjhMzlRvxwCGoPGftdbYmgCqySkmfCoygtONGg+1x/JhCwC5n8NoMgp2b3qzUWDJaai5+qyMtLRYpWIefX77o15XwRc12bIbw/5llHXMz5xPhGEOwRwLaUvZSYSp2CaktMpWxL2UuUd6hTJlFl+Oq8mNLp2kqv/Xl8MxnFOYxrNww3lRZf3cVTjRds/4lDmSdYe+cCp3GTxVzhZ6npxlVfEkmrRN9FBPkKLo0QMpcjfR+Cn+92tRWCKvi000j2FDRcllIZLXeWt67I0kYBthrdr9Y1jNgwepjQ5LvuZLvsOOnHfb8xpvVg5l73XKUiBkCSJG7qNIb3/l7A+39/5VQQ7/2/v6LYVMoDVzm3BCg1G0gpSEendnPUgAnzCuat0U9VuscN3z5ERnEOswbfX2XsTEpBOm9u+IzH+99JpHd57EicfwvWHvubM/lpRHiHcCY/jSNZSQyNq2nNoKaDe49gFNqmGRguaFoIIXM50mEirHsZCkSTvqbI/vCOfFrY8PVUuppC4VgSADZ3LzIya15oT6Opv6q+55NtPEsoVR/FNCTvbVrAj/vW4K7REeMXwdy/F1aYMzK+v+N4597eN/H7sY18uOVb9qUfpWNIK/amHeGvpK10Dm3DnT1ucLp3V+pBJi9+mKsiu/DDzXPrze5nfnuHOP8W3NljktP4bV3H8vnWJdz43SMMj+vLH8c2oVIqub3L2Hrbu1FRKfAcGOlqKwTNBCFkLkeUarjqXvj9WVdbIrgAg1rHTF8PLMUNl6VUxoSUEMBeP6akx2islpoLGWU9NYy8kFPp+whVuk7InM63p6MXm0qZt/nrSudEeIc6hIxeo+OHm+fam0YeXs/mUzsJ8vDnrp438mjf6Y7YlIbk5wN/8FfSVlZN/bxCHE+4VzCfTXiVVxI+ZOGOn4nxi2T+hNcI9WqeRzMevUJQemlcbYagmSDJck2rSgiaBcYi+G87MDT8G6ag+rzW9VoW5+1tlL2+/yUa6eAxAM7c9AZHUj1qvEZQ5GlO7fmhvk1DkhTc2PppZGP1ez4JrhBUCkKf7IHSq+HFoeDyQKRfX65oPaDHHa62QnAef7e8iu8aoCFkZbQxByAdOu74Oc3gU8uV6q/P0vnIsg2rr/gMJaiI3RsjRIyg+gghcznT+15QXZ5ly
5sb+TofZmmNDdIQsjJuSI2irISvObo9Bbm1DNqVG0bIABRINUyhElz+qBR4DnLdkaOgeSKEzOWMZzD0vsvVVgiAV9r2IcOQ3Wj7tdtbfqRY2Gl4rdexWuqvPcGFnM092mBrC5onwhsjqA1CyFzu9H0EtN6utuKKZmWbQazJbby6PtEWH5T7ykVCtkdcrdeqzz5LF3I0aZt4BRKUo5KEN0ZQK8TLyOWO3g+uftDVVlyxpPmE86ot49IT65Eb02PAZq8XY9NoScuufS0Os6GovsyqgKG0AHxF4qTAjntP4Y0R1A4hZK4ErroX3ANdbcUVh4zErJh2FJobTgxURqf95V4UY5dhmI01K4J3PqbS+u2zdCGlbg3n8RE0I1QSXoNF3RhB7RBC5kpA6wH9H7v0PEG9sqjjCP7JO9yoe4ZYPVDvPuL4OS/6qjqtV999li4ko7h59wIS1A/CGyOoC0LIXCn0uAO8xSeexuJEUDzvlTZsH6XKmJIZC+f1RsqQa18QTam2YjE1TEG8Mk6m7GrQ9QXNAJWE1yDx2iSoPULIXCmotDCw8j4vgvrFrFAzMyQUo7VhyvtXRfeD5cLDGhRFdlbteyVpdeZLT6oj6RknkDxEnMyVjEefMJTewhsjqD1CyFxJdLkZAtu62orLno86jeBAYVKj7+tv0+O2o/woq6jbKOpStkatbVhvTBlmL9d2wRa4DqWXBq9hLVxthqCZIz4KXUkolDB6Dixspo3kmiBvbDTy9Dq752XznXrc+vTgi4IDVd5z4vUTlBwuqXJO+IxwfPv6On7OScwha00WljwL2nAtIVNCcI93d7pncnYsG3M3cveZ0yyMjMLXrx2creUDA1TqxvEo5VrTCSS4UfYSNC28r20pOlwL6owQMlcaLQdBmzFwaIWrLWn27Muw8kKiEXc1FJvBoNLxkpcGa0nV/YN8+/ni3sa94gUrZK7MBAk82pX3Rcrfms/ZBWfRx+vx7OxJwfYCkt9OJu61ODT+5Y31Ou03c1N6GpN9fOjq7s4/+Tqg9t4OhbJxhMzpjANCyFyBaON80HcW2ZSCuiOEzJXIyNfg2FqwGFxtSbPFbJWZ+kspXUKUxPsr+GaPmW9iu3G6JPWS9/r29610PH9rPsjg2dkTta/aMZ67PhdNiIaYmTFICgn/4f4ceeII+ZvzCRxjfyPwtGn5fOV6zLLM/wUEYurYn9Liuh3ZSDTO38eJ5F10ix0K5tqniQuaGUoJn+tjXW2F4DJBxMhcifi2gL4Pu9qKZs2rG4zsz7DxxVg3lJJ9LKHgeNU3XYLcDbkA+A5wFjrmHDO6KB2Swr6RJkCD0lOJOac8GLf3bj++ycxkVnAwHkol+a0G1MkWAFmu+virvrBaTeAnXoquJDz7haMO0rvaDMFlgnj1uFLp93/gG+1qK5olO1KtvLrBxAsDtbQLVGJS1T3jwpxjpmhvESofFZ6dPZ2uqf3UGE4bkG32yF1TtglroRW1n91rI1tlVnyzlaEengzxsN+bqQqvs002S+MIGYBCRV6j7SVwLUpvLZ5Do1xthuAyQgiZKxW1G4x+y9VWNDuMFpnbfy6lS4iCJ/va41N2eQfUed3cDbkgg09fH6QyF885fAf4Ykw1cnLOSVK/S+XknJMoNAq8+9h7aOWuyiYzr5Bng+1xJjZPPzIz695l22puvKq7qXX0ZgmaD95jWqLQiABfQf0hYmSuZFqNEIG/NeT5BCNHc2xsv8sdpULil7ZDSV2bUO37S06UkPFLBiVHS5CtMm4RbviP9Cd3Y+XHSgDevbyxFlnJ+j2L3IRctBFaQm8JJfu3bAq2F2DONqOTJB5KOYO/UsUu2xnydgwnNqQDN/Z7hN0nN/D3odUUGwtoEdiaSX3vJ7QSb9zGAytYuvlDnpn0OYHeYZiNjdda4dip7bQK6FyndHFB00fbyhd9x7oLf4HgfISQudK55i1I2gCGfFdb0uTZfNrC25tNzB6opUOQkrO+UbxhrX5+c9HBI
' width='500' height='500'>

Per-facet pre-training bias metrics (one table per `Value(s)/Threshold`, consolidated here):

| Value(s)/Threshold | CI | DPL | JS | KL | KS | LP | TVD |
| :-- | --: | --: | --: | --: | --: | --: | --: |
| Blouses | 0.736321 | 0.016356 | 0.000186 | 0.000737 | 0.016356 | 0.023131 | 0.016356 |
| Dresses | 0.45682 | 0.022482 | 0.000352 | 0.001392 | 0.022482 | 0.031795 | 0.022482 |
| Pants | 0.880668 | -0.026661 | 0.000522 | 0.002119 | 0.026661 | 0.037704 | 0.026661 |
| Knits | 0.59109 | 0.011213 | 0.000088 | 0.00035 | 0.011213 | 0.015857 | 0.011213 |
| Intimates | 0.987006 | -0.025599 | 0.000483 | 0.001959 | 0.025599 | 0.036203 | 0.025599 |
| Outerwear | 0.971802 | -0.026121 | 0.000503 | 0.00204 | 0.026121 | 0.036941 | 0.026121 |
| Lounge | 0.940864 | -0.045509 | 0.001573 | 0.006474 | 0.045509 | 0.06436 | 0.045509 |
| Sweaters | 0.878016 | 0.021044 | 0.000305 | 0.001207 | 0.021044 | 0.029761 | 0.021044 |
| Skirts | 0.92018 | -0.021053 | 0.000323 | 0.001308 | 0.021053 | 0.029773 | 0.021053 |
| Fine gauge | 0.906391 | -0.020859 | 0.000317 | 0.001283 | 0.020859 | 0.0295 | 0.020859 |
| Sleep | 0.981084 | -0.047723 | 0.001743 | 0.007185 | 0.047723 | 0.067491 | 0.047723 |
| Jackets | 0.939627 | -0.035868 | 0.000961 | 0.003928 | 0.035868 | 0.050725 | 0.035868 |
| Swim | 0.970653 | 0.01162 | 0.000094 | 0.000373 | 0.01162 | 0.016433 | 0.01162 |
| Trend | 0.98957 | 0.110042 | 0.00748 | 0.028876 | 0.110042 | 0.155623 | 0.110042 |
| Jeans | 0.902413 | -0.055597 | 0.002382 | 0.009875 | 0.055597 | 0.078626 | 0.055597 |
| Legwear | 0.986034 | -0.027173 | 0.000545 | 0.002215 | 0.027173 | 0.038428 | 0.027173 |
| Shorts | 0.973128 | -0.019247 | 0.00027 | 0.001091 | 0.019247 | 0.027219 | 0.019247 |
| Layering | 0.988332 | -0.086077 | 0.006138 | 0.026226 | 0.086077 | 0.121732 | 0.086077 |

Metric abbreviations: CI = Class Imbalance; DPL = Difference in Positive Proportions in Labels; JS = Jensen-Shannon Divergence; KL = Kullback-Leibler Divergence; KS = Kolmogorov-Smirnov Distance; LP = L-p Norm; TVD = Total Variation Distance.
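For context, the table above reports standard pre-training bias metrics per facet value. A minimal sketch of how a few of them can be recomputed from raw labels — the column names (`class_name`, `recommended`) and the sign conventions for CI/DPL are assumptions, not taken from this report:

```python
import pandas as pd

def pretraining_bias(df, facet_col, facet_value, label_col):
    """Recompute a few pre-training bias metrics for one facet value.

    Assumptions: `label_col` is a binary 0/1 label; sign conventions
    for CI and DPL are illustrative and may differ from the report above.
    """
    in_facet = df[facet_col] == facet_value
    n_a, n_d = in_facet.sum(), (~in_facet).sum()

    # CI: normalized difference in group sizes, in [-1, 1]
    ci = (n_d - n_a) / (n_d + n_a)

    # Positive-label proportion inside and outside the facet
    q_a = df.loc[in_facet, label_col].mean()
    q_d = df.loc[~in_facet, label_col].mean()

    # DPL: difference in positive proportions in labels
    dpl = q_d - q_a

    # For a binary label, KS and TVD both reduce to |DPL|
    ks = tvd = abs(dpl)
    return {"CI": ci, "DPL": dpl, "KS": ks, "TVD": tvd}
```

For a binary label, KS and TVD coincide with |DPL|, which matches the repeated values visible across the rows above.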
## Campaign 2, Day 1

***

#### STEP 1 (day1_step1_TransferCells.hso)

* Transfer 180uL cells from 12 channel reservoir - column 1 to BlackwClearBottomAssay - columns 1-6
* Transfer 180uL cells from 12 channel reservoir - column 2 to BlackwClearBottomAssay - columns 7-12

#### STEP 2 (day1_step2_DiluteMuconate.hso)

* Transfer specified volumes of buffer from 12 channel reservoir - column 7 to muconate dilution plate (PlateOne ConicalBottom) - all columns.
* Transfer specified volumes of muconate from 12 channel reservoir - column 6 to muconate dilution plate (PlateOne ConicalBottom) - all columns.
* Muconate stock concentration = 50nM
* Final muconate concentrations per column: [14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 0]

#### STEP 3 (day1_step3_CombineCellsGlucoseMuconate.hso)

* Transfer 10uL from each column in the glucose dilution plate to the matching column in the assay plate
* Transfer 10uL from each column in the muconate dilution plate to the matching column in the assay plate

#### Incubate in Hidex for 21 hours, then run Campaign 2, Day 2 protocol

***

#### DECK LAYOUT:

1. TipBox.200uL.Corning-4864.orangebox
2. Empty (HEAT NEST)
3. Reservoir.12col.Agilent-201256-100.BATSgroup
    * Columns 1, 2 -> cells
    * Column 6 -> Muconate stock (50nM)
    * Column 7 -> Buffer for muconate dilutions (eventually)
    * Column 12 -> lysis buffer (add just before running day 2 protocol)
4. Plate.96.Corning-3635.ClearUVAssay (same measurements as Corning Black UV)
    * Empty at start; will be the final assay plate
5. Plate.96.PlateOne-1833-9600.ConicalBottomStorage
    * Empty at start; muconate dilution plate
6. Plate.96.PlateOne-1833-9600.ConicalBottomStorage
    * Glucose dilution plate
7. TipBox.50uL.Axygen-EV-50-R-S.tealbox
8. Empty

***

#### REQUIRED VOLUMES (day 1 only):

NOTE: Min required volumes were only tested with water; add extra volume for more viscous liquids.
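For reference, the Step 2 muconate/buffer volume lists used in the code below follow a simple pattern. A sketch that reproduces them, assuming a 150 uL working volume per dilution well and 6 uL of muconate stock per concentration unit (both factors are inferred from the listed volumes, not stated in the protocol):

```python
# Derive the Step 2 dilution volumes from the target concentrations.
# Assumptions (inferred, not stated in the protocol): each dilution well
# holds 150 uL total, and 6 uL of 50nM muconate stock per concentration unit.
target_concentrations = [14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 0]
well_volume = 150
uL_per_unit = 6

# Skip the final 0-concentration column: the robot can't aspirate 0 uL
muconate_volumes = [uL_per_unit * c for c in target_concentrations if c > 0]
buffer_volumes = [well_volume - uL_per_unit * c for c in target_concentrations]

print(muconate_volumes)  # [84, 78, 72, 66, 60, 54, 48, 42, 36, 30, 24]
print(buffer_volumes)    # [66, 72, 78, 84, 90, 96, 102, 108, 114, 120, 126, 150]
```

Each muconate/buffer pair sums to the 150 uL working volume, and the last column receives buffer only.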
| Plate Type | Deck Position | Column | z-shift | Actual Volume Used | Min Volume Required |
| :-: | :-: | :-: | :-: | :-: | :-: |
| Reservoir.12col.Agilent-201256-100.BATSgroup | 3 | 1 | 0.5 | 8640 uL (1080 uL per tip) | 8800 uL minimum, 9000+ uL recommended |
| Reservoir.12col.Agilent-201256-100.BATSgroup | 3 | 2 | 0.5 | 8640 uL (1080 uL per tip) | 8800 uL minimum, 9000+ uL recommended |
| Reservoir.12col.Agilent-201256-100.BATSgroup | 3 | 6 | 0.5 | 4752 uL (594 uL per tip) | 4900 uL minimum, 5100+ uL recommended |
| Reservoir.12col.Agilent-201256-100.BATSgroup | 3 | 7 | 0.5 | 9024 uL (1128 uL per tip) | 9200 uL minimum, 9400+ uL recommended |
| Plate.96.PlateOne-1833-9600.ConicalBottomStorage | 6 | each well | 1 | 10 uL | 20 uL minimum, 30+ uL recommended |

```
from liquidhandling import SoloSoft, SoftLinx
from liquidhandling import Reservoir_12col_Agilent_201256_100_BATSgroup
from liquidhandling import Plate_96_PlateOne_1833_9600_ConicalBottomStorage
from liquidhandling import Plate_96_Corning_3635_ClearUVAssay

#* Program Variables ----------------------------------------------------------------
default_z_shift = 2
reservoir_z_shift = 0.5  # z shift for 12 channel reservoirs
lambda6_path = "/lambda_stor/data/hudson/instructions/"

# Step 1 variables
# mix before transfer? -> might be a good idea
cell_transfer_volume = 180
cell_aspirate_z_shift = 1
cell_blowoff = 0
cell_mix_volume = 150
cell_num_mixes = 3

# Step 2 variables
muconate_dilution_volumes = [84, 78, 72, 66, 60, 54, 48, 42, 36, 30, 24]  # last column excluded (can't aspirate 0uL)
muconate_12_channel_column = 6
muconate_blowoff = 0
buffer_dilution_volumes = [66, 72, 78, 84, 90, 96, 102, 108, 114, 120, 126, 150]
buffer_12_channel_column = 7
buffer_blowoff = 0
dilution_mix_volume = 80
dilution_num_mixes = 3

# Step 3 variables
glucose_transfer_volume = 10
glucose_blowoff = 0
glucose_z_shift = 1
muconate_transfer_volume = 10
muconate_blowoff = 0
step3_mix_volume = 50
step3_num_mixes = 3

#* Initialize solosoft and deck layout ------------------------------------------------
soloSoft = SoloSoft(
    filename=lambda6_path + "day1_step1_TransferCells.hso",
    plateList=[
        "TipBox.200uL.Corning-4864.orangebox",
        "Empty",
        "Reservoir.12col.Agilent-201256-100.BATSgroup",
        "Plate.96.Corning-3635.ClearUVAssay",
        "Plate.96.PlateOne-1833-9600.ConicalBottomStorage",
        "Plate.96.PlateOne-1833-9600.ConicalBottomStorage",
        "TipBox.50uL.Axygen-EV-50-R-S.tealbox",
        "Empty"
    ]
)

#* STEP 1: Transfer Cells -------------------------------------------------------------
soloSoft.getTip()  # 200uL tips -> all transfers are same cells, OK to keep same tips for all of step 1
for i in range(1, 3):
    for j in range(1, 7):
        soloSoft.aspirate(
            position="Position3",
            aspirate_volumes=Reservoir_12col_Agilent_201256_100_BATSgroup().setColumn(i, cell_transfer_volume),
            aspirate_shift=[0, 0, cell_aspirate_z_shift],
            pre_aspirate=cell_blowoff,
            mix_at_start=True,  # mix cells before aspirating them -> probably a good idea
            mix_volume=cell_mix_volume,
            mix_cycles=cell_num_mixes,
            dispense_height=cell_aspirate_z_shift,
        )
        soloSoft.dispense(
            position="Position4",
            dispense_volumes=Plate_96_Corning_3635_ClearUVAssay().setColumn((6 * (i - 1)) + j, cell_transfer_volume),
            dispense_shift=[0, 0, default_z_shift],
            blowoff=cell_blowoff,  # no need to mix because it will shake in the Hidex
        )
        # for testing
        # j_column = (6*(i-1))+j
        # print("Cell aspirate: 12 channel ( " + str(i) + " ) to BlackwClearBottomAssay ( " + str(j_column) + " )")
soloSoft.shuckTip()
soloSoft.savePipeline()

#* STEP 2: Create Muconate Dilution plate ---------------------------------------------
soloSoft = SoloSoft(
    filename=lambda6_path + "day1_step2_DiluteMuconate.hso",
    plateList=[
        "TipBox.200uL.Corning-4864.orangebox",
        "Empty",
        "Reservoir.12col.Agilent-201256-100.BATSgroup",
        "Plate.96.Corning-3635.ClearUVAssay",
        "Plate.96.PlateOne-1833-9600.ConicalBottomStorage",
        "Plate.96.PlateOne-1833-9600.ConicalBottomStorage",
        "TipBox.50uL.Axygen-EV-50-R-S.tealbox",
        "Empty"
    ]
)

# Dispense buffer into whole dilution plate
soloSoft.getTip()  # 200uL tips
for i in range(1, 13):
    soloSoft.aspirate(
        position="Position3",
        aspirate_volumes=Reservoir_12col_Agilent_201256_100_BATSgroup().setColumn(buffer_12_channel_column, buffer_dilution_volumes[i - 1]),
        aspirate_shift=[0, 0, reservoir_z_shift],
        pre_aspirate=buffer_blowoff,
    )
    soloSoft.dispense(
        position="Position5",
        dispense_volumes=Plate_96_PlateOne_1833_9600_ConicalBottomStorage().setColumn(i, buffer_dilution_volumes[i - 1]),
        dispense_shift=[0, 0, default_z_shift],
        blowoff=buffer_blowoff,
    )

# Dispense muconate into whole dilution plate; no need to get new tips here
for i in range(1, 12):
    soloSoft.aspirate(
        position="Position3",
        aspirate_volumes=Reservoir_12col_Agilent_201256_100_BATSgroup().setColumn(muconate_12_channel_column, muconate_dilution_volumes[i - 1]),
        aspirate_shift=[0, 0, reservoir_z_shift],
        pre_aspirate=buffer_blowoff,
    )
    soloSoft.dispense(
        position="Position5",
        dispense_volumes=Plate_96_PlateOne_1833_9600_ConicalBottomStorage().setColumn(i, muconate_dilution_volumes[i - 1]),
        dispense_shift=[0, 0, default_z_shift],
        blowoff=buffer_blowoff,
        mix_at_finish=True,  # mix after dispensing to combine muconate and buffer
        mix_volume=dilution_mix_volume,
        mix_cycles=dilution_num_mixes,
        aspirate_height=default_z_shift,
    )
soloSoft.shuckTip()
soloSoft.savePipeline()

#* STEP 3: Combine muconate and glucose with cell plate -> new tips for each column! -------------------
soloSoft = SoloSoft(
    filename=lambda6_path + "day1_step3_CombineCellsGlucoseMuconate.hso",
    plateList=[
        "TipBox.200uL.Corning-4864.orangebox",
        "Empty",
        "Reservoir.12col.Agilent-201256-100.BATSgroup",
        "Plate.96.Corning-3635.ClearUVAssay",
        "Plate.96.PlateOne-1833-9600.ConicalBottomStorage",
        "Plate.96.PlateOne-1833-9600.ConicalBottomStorage",
        "TipBox.50uL.Axygen-EV-50-R-S.tealbox",
        "Empty"
    ]
)

for i in range(1, 13):
    # dispense glucose into cell plate
    soloSoft.getTip("Position7")
    soloSoft.aspirate(
        position="Position6",
        aspirate_volumes=Plate_96_PlateOne_1833_9600_ConicalBottomStorage().setColumn(i, glucose_transfer_volume),
        aspirate_shift=[0, 0, glucose_z_shift],
        pre_aspirate=glucose_blowoff,
        mix_at_start=True,
        mix_volume=step3_mix_volume,
        mix_cycles=step3_num_mixes,
        dispense_height=glucose_z_shift,
    )
    soloSoft.dispense(
        position="Position4",
        dispense_volumes=Plate_96_Corning_3635_ClearUVAssay().setColumn(i, glucose_transfer_volume),
        dispense_shift=[0, 0, default_z_shift],
        blowoff=glucose_blowoff,
    )
    # dispense muconate into cell plate
    soloSoft.aspirate(
        position="Position5",
        aspirate_volumes=Plate_96_PlateOne_1833_9600_ConicalBottomStorage().setColumn(i, muconate_transfer_volume),
        aspirate_shift=[0, 0, default_z_shift],
        pre_aspirate=muconate_blowoff,
        mix_at_start=True,
        mix_volume=step3_mix_volume,
        mix_cycles=step3_num_mixes,
        dispense_height=default_z_shift,
    )
    soloSoft.dispense(
        position="Position4",
        dispense_volumes=Plate_96_Corning_3635_ClearUVAssay().setColumn(i, muconate_transfer_volume),
        dispense_shift=[0, 0, default_z_shift],
        blowoff=muconate_blowoff,  # no need to mix -> will shake in the Hidex
    )
soloSoft.shuckTip()
soloSoft.savePipeline()

#* Add Steps 1-3 .hso files to SoftLinx .slvp file (and generate .ahk and manifest .txt files)
# all .hso files must be in labautomation/instructions folder to run properly
softLinx = SoftLinx("day1_cells_glucose_muconate", lambda6_path + "day1_cells_glucose_muconate.slvp")
softLinx.soloSoftRun("C:\\labautomation\\instructions\\day1_step1_TransferCells.hso")  # add the correct paths of the .hso files
softLinx.soloSoftRun("C:\\labautomation\\instructions\\day1_step2_DiluteMuconate.hso")  # assume transferred from lambda 6 or run locally for prep on hudson01?
softLinx.soloSoftRun("C:\\labautomation\\instructions\\day1_step3_CombineCellsGlucoseMuconate.hso")
softLinx.saveProtocol()
```
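The REQUIRED VOLUMES table can be cross-checked against the program variables in the script above: each reservoir column is drawn by all 8 tips of the pipetting head, so the column total is 8x the per-tip sum. A quick stand-alone sanity check (plain Python, no `liquidhandling` dependency):

```python
# Per-tip volume = sum of volumes a single tip aspirates from that reservoir column;
# the 8-channel head multiplies this by 8 for the whole column.
cell_transfer_volume = 180
muconate_dilution_volumes = [84, 78, 72, 66, 60, 54, 48, 42, 36, 30, 24]

# Cells: 6 assay columns are filled from each cell column (loop j in range(1, 7))
cells_per_tip = cell_transfer_volume * 6           # uL per tip
cells_total = cells_per_tip * 8                    # uL per reservoir column

# Muconate stock (reservoir column 6): one aspirate per dilution column
muconate_per_tip = sum(muconate_dilution_volumes)  # uL per tip
muconate_total = muconate_per_tip * 8              # uL

print(cells_per_tip, cells_total)        # 1080 8640 -> matches the table
print(muconate_per_tip, muconate_total)  # 594 4752  -> matches the table
```

Note that the listed `buffer_dilution_volumes` sum to 1206 uL per tip (9648 uL total), a bit above the table's 1128 uL per tip (9024 uL), so it may be worth double-checking whether the table or the volume list has drifted.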
``` import json import re import sentencepiece as spm import os os.environ['CUDA_VISIBLE_DEVICES'] = '1' from prepro_utils import preprocess_text, encode_ids, encode_pieces sp_model = spm.SentencePieceProcessor() sp_model.Load('sp10m.cased.bert.model') with open('sp10m.cased.bert.vocab') as fopen: v = fopen.read().split('\n')[:-1] v = [i.split('\t') for i in v] v = {i[0]: i[1] for i in v} class Tokenizer: def __init__(self, v): self.vocab = v pass def tokenize(self, string): return encode_pieces(sp_model, string, return_unicode=False, sample=False) def convert_tokens_to_ids(self, tokens): return [sp_model.PieceToId(piece) for piece in tokens] def convert_ids_to_tokens(self, ids): return [sp_model.IdToPiece(i) for i in ids] tokenizer = Tokenizer(v) import bert from bert import run_classifier from bert import optimization from bert import tokenization from bert import modeling import numpy as np import json import tensorflow as tf import itertools from unidecode import unidecode import re BERT_INIT_CHKPNT = 'bert-base-v3/model.ckpt' BERT_CONFIG = 'bert-base-v3/config.json' # !wget https://raw.githubusercontent.com/huseinzol05/Malaya-Dataset/master/subjectivity/subjectivity-negative-bm.txt # !wget https://raw.githubusercontent.com/huseinzol05/Malaya-Dataset/master/subjectivity/subjectivity-positive-bm.txt with open('subjectivity-negative-bm.txt','r') as fopen: texts = fopen.read().split('\n') labels = [0] * len(texts) with open('subjectivity-positive-bm.txt','r') as fopen: positive_texts = fopen.read().split('\n') labels += [1] * len(positive_texts) texts += positive_texts assert len(labels) == len(texts) tokenizer.tokenize(texts[1]) list(v.keys())[:10] from tqdm import tqdm input_ids, input_masks = [], [] for text in tqdm(texts): tokens_a = tokenizer.tokenize(text) tokens = ["[CLS]"] + tokens_a + ["[SEP]"] input_id = tokenizer.convert_tokens_to_ids(tokens) input_mask = [1] * len(input_id) input_ids.append(input_id) input_masks.append(input_mask) bert_config = 
modeling.BertConfig.from_json_file(BERT_CONFIG) epoch = 10 batch_size = 60 warmup_proportion = 0.1 num_train_steps = int(len(texts) / batch_size * epoch) num_warmup_steps = int(num_train_steps * warmup_proportion) bert_config.hidden_size def create_initializer(initializer_range=0.02): return tf.truncated_normal_initializer(stddev=initializer_range) class Model: def __init__( self, dimension_output, learning_rate = 2e-5, training = True, ): self.X = tf.placeholder(tf.int32, [None, None]) self.MASK = tf.placeholder(tf.int32, [None, None]) self.Y = tf.placeholder(tf.int32, [None]) model = modeling.BertModel( config=bert_config, is_training=training, input_ids=self.X, input_mask=self.MASK, use_one_hot_embeddings=False) output_layer = model.get_sequence_output() output_layer = tf.layers.dense( output_layer, bert_config.hidden_size, activation=tf.tanh, kernel_initializer=create_initializer()) self.logits_seq = tf.layers.dense(output_layer, dimension_output, kernel_initializer=create_initializer()) self.logits_seq = tf.identity(self.logits_seq, name = 'logits_seq') self.logits = self.logits_seq[:, 0] self.logits = tf.identity(self.logits, name = 'logits') self.cost = tf.reduce_mean( tf.nn.sparse_softmax_cross_entropy_with_logits( logits = self.logits, labels = self.Y ) ) self.optimizer = optimization.create_optimizer(self.cost, learning_rate, num_train_steps, num_warmup_steps, False) correct_pred = tf.equal( tf.argmax(self.logits, 1, output_type = tf.int32), self.Y ) self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) dimension_output = 2 learning_rate = 2e-5 tf.reset_default_graph() sess = tf.InteractiveSession() model = Model( dimension_output, learning_rate ) sess.run(tf.global_variables_initializer()) var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope = 'bert') saver = tf.train.Saver(var_list = var_lists) saver.restore(sess, BERT_INIT_CHKPNT) from sklearn.model_selection import train_test_split train_input_ids, test_input_ids, train_Y, 
test_Y, train_mask, test_mask = train_test_split( input_ids, labels, input_masks, test_size = 0.2 ) pad_sequences = tf.keras.preprocessing.sequence.pad_sequences from tqdm import tqdm import time EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 1, 0, 0, 0 while True: lasttime = time.time() if CURRENT_CHECKPOINT == EARLY_STOPPING: print('break epoch:%d\n' % (EPOCH)) break train_acc, train_loss, test_acc, test_loss = [], [], [], [] pbar = tqdm( range(0, len(train_input_ids), batch_size), desc = 'train minibatch loop' ) for i in pbar: index = min(i + batch_size, len(train_input_ids)) batch_x = train_input_ids[i: index] batch_x = pad_sequences(batch_x, padding='post') batch_mask = train_mask[i: index] batch_mask = pad_sequences(batch_mask, padding='post') batch_y = train_Y[i: index] acc, cost, _ = sess.run( [model.accuracy, model.cost, model.optimizer], feed_dict = { model.Y: batch_y, model.X: batch_x, model.MASK: batch_mask }, ) train_loss.append(cost) train_acc.append(acc) pbar.set_postfix(cost = cost, accuracy = acc) pbar = tqdm(range(0, len(test_input_ids), batch_size), desc = 'test minibatch loop') for i in pbar: index = min(i + batch_size, len(test_input_ids)) batch_x = test_input_ids[i: index] batch_x = pad_sequences(batch_x, padding='post') batch_y = test_Y[i: index] batch_mask = test_mask[i: index] batch_mask = pad_sequences(batch_mask, padding='post') acc, cost = sess.run( [model.accuracy, model.cost], feed_dict = { model.Y: batch_y, model.X: batch_x, model.MASK: batch_mask }, ) test_loss.append(cost) test_acc.append(acc) pbar.set_postfix(cost = cost, accuracy = acc) train_loss = np.mean(train_loss) train_acc = np.mean(train_acc) test_loss = np.mean(test_loss) test_acc = np.mean(test_acc) if test_acc > CURRENT_ACC: print( 'epoch: %d, pass acc: %f, current acc: %f' % (EPOCH, CURRENT_ACC, test_acc) ) CURRENT_ACC = test_acc CURRENT_CHECKPOINT = 0 else: CURRENT_CHECKPOINT += 1 print('time taken:', time.time() - lasttime) print( 'epoch: %d, training loss: 
%f, training acc: %f, valid loss: %f, valid acc: %f\n' % (EPOCH, train_loss, train_acc, test_loss, test_acc) ) EPOCH += 1 saver = tf.train.Saver(tf.trainable_variables()) saver.save(sess, 'bert-base-subjectivity/model.ckpt') dimension_output = 2 learning_rate = 2e-5 tf.reset_default_graph() sess = tf.InteractiveSession() model = Model( dimension_output, learning_rate, training = False ) sess.run(tf.global_variables_initializer()) saver = tf.train.Saver(tf.trainable_variables()) saver.restore(sess, 'bert-base-subjectivity/model.ckpt') strings = ','.join( [ n.name for n in tf.get_default_graph().as_graph_def().node if ('Variable' in n.op or 'Placeholder' in n.name or 'logits' in n.name or 'alphas' in n.name or 'self/Softmax' in n.name) and 'adam' not in n.name and 'beta' not in n.name and 'global_step' not in n.name ] ) strings.split(',') real_Y, predict_Y = [], [] pbar = tqdm( range(0, len(test_input_ids), batch_size), desc = 'validation minibatch loop' ) for i in pbar: index = min(i + batch_size, len(test_input_ids)) batch_x = test_input_ids[i: index] batch_x = pad_sequences(batch_x, padding='post') batch_mask = test_mask[i: index] batch_mask = pad_sequences(batch_mask, padding='post') batch_y = test_Y[i: index] predict_Y += np.argmax(sess.run(model.logits, feed_dict = { model.X: batch_x, model.MASK: batch_mask }, ), 1, ).tolist() real_Y += batch_y from sklearn import metrics print( metrics.classification_report( real_Y, predict_Y, target_names = ['negative', 'positive'],digits = 5 ) ) def freeze_graph(model_dir, output_node_names): if not tf.gfile.Exists(model_dir): raise AssertionError( "Export directory doesn't exists. 
Please specify an export " 'directory: %s' % model_dir ) checkpoint = tf.train.get_checkpoint_state(model_dir) input_checkpoint = checkpoint.model_checkpoint_path absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1]) output_graph = absolute_model_dir + '/frozen_model.pb' clear_devices = True with tf.Session(graph = tf.Graph()) as sess: saver = tf.train.import_meta_graph( input_checkpoint + '.meta', clear_devices = clear_devices ) saver.restore(sess, input_checkpoint) output_graph_def = tf.graph_util.convert_variables_to_constants( sess, tf.get_default_graph().as_graph_def(), output_node_names.split(','), ) with tf.gfile.GFile(output_graph, 'wb') as f: f.write(output_graph_def.SerializeToString()) print('%d ops in the final graph.' % len(output_graph_def.node)) freeze_graph('bert-base-subjectivity', strings) def load_graph(frozen_graph_filename): with tf.gfile.GFile(frozen_graph_filename, 'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) with tf.Graph().as_default() as graph: tf.import_graph_def(graph_def) return graph # g = load_graph('bert-base-subjectivity/frozen_model.pb') # x = g.get_tensor_by_name('import/Placeholder:0') # logits = g.get_tensor_by_name('import/logits:0') # test_sess = tf.InteractiveSession(graph = g) # result = test_sess.run(tf.nn.softmax(logits), feed_dict = {x: [input_id]}) # result import boto3 bucketName = 'huseinhouse-storage' Key = 'bert-base-subjectivity/frozen_model.pb' outPutname = "v34/subjective/bert-base-subjective.pb" s3 = boto3.client('s3') s3.upload_file(Key,bucketName,outPutname) ```
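The input-preparation loop above (`[CLS]` + tokens + `[SEP]`, ids from the vocabulary, an all-ones mask, then post-padding at batch time) can be isolated and checked with a toy vocabulary. The ids below are made up for illustration; the notebook itself uses the SentencePiece model, not a hand-written dict:

```python
def to_bert_inputs(token_lists, vocab, pad_id=0):
    """Build padded input_ids and input_mask batches, BERT-style."""
    ids, masks = [], []
    for tokens in token_lists:
        wrapped = ['[CLS]'] + tokens + ['[SEP]']
        row = [vocab[t] for t in wrapped]
        ids.append(row)
        masks.append([1] * len(row))
    # pad to the longest sequence in the batch ('post' padding)
    max_len = max(len(r) for r in ids)
    ids = [r + [pad_id] * (max_len - len(r)) for r in ids]
    masks = [m + [0] * (max_len - len(m)) for m in masks]
    return ids, masks

# toy vocabulary; real ids come from sp_model.PieceToId
vocab = {'[CLS]': 2, '[SEP]': 3, 'ayat': 10, 'ini': 11, 'subjektif': 12}
ids, masks = to_bert_inputs([['ayat', 'ini'], ['subjektif']], vocab)
print(ids)    # [[2, 10, 11, 3], [2, 12, 3, 0]]
print(masks)  # [[1, 1, 1, 1], [1, 1, 1, 0]]
```

The mask matters: padded positions get mask 0 so attention ignores them, which is exactly what `input_mask = [1] * len(input_id)` plus `pad_sequences(..., padding='post')` produces in the training loop.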
# Visually Shaping Distributions with TrafPy This Notebook shows an example of how to shape distributions with `TrafPy`. We will save our shaped distributions, re-load them, and use them to generate custom flow-centric traffic data, which we will then save in .pickle format such that you'd be able to import the traffic into any simulation, emulation, or experimentation environment. We will also organise the demonstrated traffic into time slots and generate an sqlite data base which we can save to our disk and access during a simulation, thereby enabling us to scale to very large simulation sizes. Import the `trafpy.generator` module ``` import trafpy.generator as tpg ``` Set global path ``` from pathlib import Path import gzip import pickle PATH = 'data/visually_shape_and_generate_custom_traffic/' Path(PATH).mkdir(exist_ok=True, parents=True) ``` ## Generate Random Variables from 'Named' Distribution Generate a distribution of random variables using one of the following standard named distributions: - exponential - lognormal - weibull - pareto This might be e.g. interarrival times, sizes, number of nodes in a job, probability of job dependency/edge formation etc... Note that to turn on the interactive functionality of these plotting functions, we simply set `interactive_plot=True`. In the below example, try setting the arguments as `dist='weibull'`, `min_val=None`, `max_val=None`, `round_to_nearest=None`, and `size=150000`. Run the cell, and set the `TrafPy` parameters which pop up as `_alpha=5` and `_lambda=0.5` before clicking 'Run Interact'. You will see a print out of the distribution characteristics you've generated, a histogram, and the corresponding CDF. Feel free to play around with these parameters and to enter different named distribution names to shape your own distributions. 
``` rand_vars = tpg.gen_named_val_dist(dist='weibull', interactive_plot=True, xlim=None, # [1, 10000] None min_val=None, # 50 None max_val=None, # 200 None round_to_nearest=None, # None 25 num_decimal_places=2, rand_var_name='Random Variable', # prob_rand_var_less_than=[4847, 9431], num_bins=0, size=150000) ``` Note that to use our `TrafPy` parameters to re-generate a distribution we've visually shaped, we simply make a note of the parameters and enter them into the same function but now setting `interactive_plot=False`. This is a key feature of `TrafPy` which enables users to share and re-generate distributions and traffic data given only a handful of `TrafPy` parameters. E.g. Assuming we shaped a distribution with `TrafPy` parameters `dist='weibull'`, `min_val=None`, `max_val=None`, `round_to_nearest=None`, and named distribution parameters `_alpha=5` and `_lambda=0.5`, we would reproduce this distribution with: ``` dist = tpg.gen_named_val_dist(dist='weibull', interactive_plot=False, params={'_alpha': 5, '_lambda': 0.5}, round_to_nearest=None, min_val=None, max_val=None) # save filename = PATH+'random_variable.pickle' with gzip.open(filename, 'wb') as f: pickle.dump(rand_vars.result, f) ``` ## Generate Random Variables from Arbitrary 'Multimodal' Distribution In previous cells we considered standard distributions (exponential, lognormal, weibull, pareto...). These are common distributions which occur in many different scenarios. However, sometimes in real scenarios distributions might not fall into these well-defined distribution categories. Multimodal distributions are distributions with >= 2 different modes. A multimodal distribution with 2 modes is a special case called a 'bimodal distribution', which is very common. The traffic toolbox allows you to generate arbitrary multimodal distributions. 
This is very powerful because with access to the above standard distributions and the arbitrary multimodal distribution generator, any distribution can be generated if you are able to shape it sufficiently. Generating multimodal distributions is a little more involved than generating the standard distributions was, but it can still be done in a matter of seconds using `TrafPy`. There are a few simple steps to generating an arbitrary multimodal distribution: 1. Decide the number of modes (i.e. peaks) and other distribution characteristics 2. Shape each mode individually 3. Combine all of modes together and add some 'background noise' to the distribution such that the modes are 'joined' together to form a single multimodal distribution (background noise can be set to 0 if desired) 4. Use your multimodal distribution to generate demands 5. Save the generated demands ``` # 1. define distribution variables min_val=1 max_val=1e5 num_modes=2 xlim=None rand_var_name='Random Variable' round_to_nearest=1 num_decimal_places=1 # 2. shape each mode data_dict = tpg.gen_skew_dists(min_val=min_val, max_val=max_val, num_modes=num_modes, xlim=xlim, rand_var_name=rand_var_name, round_to_nearest=round_to_nearest, num_decimal_places=num_decimal_places) # 3. combine modes to form multimodal distribution multimodal_prob_dist = tpg.combine_multiple_mode_dists(data_dict, min_val=min_val, max_val=max_val, xlim=xlim, rand_var_name=rand_var_name, round_to_nearest=round_to_nearest, num_decimal_places=num_decimal_places) # 4. use dist to generate random variables rand_vars = tpg.gen_rand_vars_from_discretised_dist(unique_vars=list(multimodal_prob_dist.result.keys()), probabilities=list(multimodal_prob_dist.result.values()), num_demands=150000) # 5. save filename = PATH+'multimodal_random_variable.pickle' with gzip.open(filename, 'wb') as f: pickle.dump(rand_vars, f) ``` ## Generate Discrete Probability Distribution from Random Variables Previous cells generated random variable data. 
However, sometimes it might be desirable to have the probability distribution/probability mass function (PMF) of the generated data rather than all the original generated data. Using the PMF, anyone can sample randomly from the PMF to produce new data with similar characteristics to the original data which you generated. Run this cell to load your previously generated distribution data and convert it into a PMF ``` filename = 'random_variable.pickle' with gzip.open(PATH+'random_variable.pickle', 'rb') as f: rand_vars = pickle.load(f) xk, pmf = tpg.gen_discrete_prob_dist(rand_vars, round_to_nearest=None, num_decimal_places=2) prob_dist = {var: prob for var,prob in zip(xk, pmf)} # save filename = PATH+'prob_dist.pickle' with gzip.open(filename, 'wb') as f: pickle.dump(prob_dist, f) ``` ## Generate Random Variables from Discrete Probability Distribution Load a previously saved distribution and sample from it to generate any number of random variable data points. This function/cell does not plot the distribution, which avoids long delay times when trying to generate very large amounts of data. ``` with gzip.open(PATH+'prob_dist.pickle', 'rb') as f: prob_dist = pickle.load(f) rand_vars = tpg.gen_rand_vars_from_discretised_dist(unique_vars=list(prob_dist.keys()), probabilities=list(prob_dist.values()), num_demands=150000) # save filename = PATH+'random_variables_from_prob_dist.pickle' with gzip.open(filename, 'wb') as f: pickle.dump(rand_vars, f) ``` ## Generate Source-Destination Node Distribution Generate a matrix describing the traffic distribution of each source-node pair in a network. 
``` net = tpg.gen_arbitrary_network(num_eps=12, ep_label='ep') ENDPOINTS = net.graph['endpoints'] # comment out all except one below node_dist, _ = tpg.gen_uniform_node_dist(eps=ENDPOINTS, show_fig=True, print_data=False) # node_dist, _ = tpg.gen_multimodal_node_dist(eps=ENDPOINTS, # skewed_nodes=[], # skewed_node_probs=[], # num_skewed_nodes=None, # show_fig=True, # print_data=True) # node_dist, _ = tpg.gen_multimodal_node_pair_dist(eps=ENDPOINTS, # skewed_pairs=[], # skewed_pair_probs=[], # num_skewed_pairs=None, # show_fig=True, # print_data=True) # save filename = PATH+'node_dist.pickle' with gzip.open(filename, 'wb') as f: pickle.dump(node_dist, f) ``` ## Use Node Distribution to Generate Source-Destination Node Demands Sample from a previously generated source-destination matrix to generate source-destination node pair demands. ``` with gzip.open(PATH+'node_dist.pickle', 'rb') as f: node_dist = pickle.load(f) node_demands = tpg.gen_node_demands(eps=ENDPOINTS, node_dist=node_dist, num_demands=150000) # save filename = PATH+'node_demands.pickle' with gzip.open(filename, 'wb') as f: pickle.dump(node_demands, f) ``` ## Use Previously Generated Distributions to Create Flow-Centric 'Demand Data' Dictionary `TrafPy` can use your custom distributions to generate a `demand_data` traffic data set using the `trafpy.generator.create_demand_data()` function: ``` # flow-centric demand data with gzip.open(PATH+'node_dist.pickle', 'rb') as f: node_dist = pickle.load(f) with gzip.open(PATH+'prob_dist.pickle', 'rb') as f: flow_size_dist = pickle.load(f) with gzip.open(PATH+'prob_dist.pickle', 'rb') as f: interarrival_time_dist = pickle.load(f) network_load_config = {'network_rate_capacity': net.graph['max_nw_capacity'], 'ep_link_capacity': net.graph['ep_link_capacity'], 'target_load_fraction': 0.1} flow_centric_demand_data = tpg.create_demand_data(eps=ENDPOINTS, node_dist=node_dist, flow_size_dist=flow_size_dist, interarrival_time_dist=interarrival_time_dist, 
network_load_config=network_load_config, print_data=True) # save filename = PATH+'custom_demand_data.pickle' with gzip.open(filename, 'wb') as f: pickle.dump(flow_centric_demand_data, f) ``` `demand_data` is a dictionary storing the following information for each flow (where the values are a list of values corresponding to the values assigned to each flow): ``` print('Flow data keys:\n{}'.format(flow_centric_demand_data.keys())) ``` ## Analyse the Generated Traffic At this point, you could do your own analysis of the traffic you've generated by loading the saved data into your own scripts. However, `TrafPy` provides some useful tools for this. We can encode our saved `demand_data` files as `trafpy.generator.Demand()` objects, and then use the `trafpy.generator.DemandsAnalyser` and `trafpy.generator.DemandPlotter` objects to analyse them: ``` from trafpy.generator import Demand, DemandsAnalyser, DemandPlotter ``` First collect the demand objects from each demand_data file: ``` # collect demand objects demands = {} with gzip.open(PATH+'custom_demand_data.pickle', 'rb') as f: demand_data = pickle.load(f) demands['custom'] = Demand(demand_data, net.graph['endpoints'], name='custom') ``` Then use `trafpy.generator.DemandsAnalyser()` to print a summary table of all the demand data sets you generated: ``` # print summary table analyser = DemandsAnalyser(*list(demands.values()), jobcentric=False) analyser.compute_metrics(print_summary=True) ``` Finally, visualise your data: ``` # visualise distributions for name, demand in demands.items(): print(name) plotter = DemandPlotter(demand) plotter.plot_flow_size_dist(logscale=True, figsize=(12,6)) plotter.plot_interarrival_time_dist(logscale=True, figsize=(12,6)) plotter.plot_node_dist(eps=net.graph['endpoints'], chord_edge_width_range=[1,25], chord_edge_display_threshold=0.005) plotter.plot_node_load_dists(eps=net.graph['endpoints'], ep_link_bandwidth=net.graph['ep_link_capacity'], plot_extras=False) ``` ## Generate a slots dict 
data base Many network experiments are based on time slots. I.e. during a time slot of e.g. 10 time units, some number of flows arrive. The `trafpy.generator.Demand()` class has a useful `get_slots_dict()` method to automatically organise your generated traffic demands into time slots given the `slot_size` you want to use: ``` slots_dict = demand.get_slots_dict(slot_size=10) ``` The `slots_dict` dictionary contains indices 0-n for `n` slots, as well as some other useful information: ``` print(slots_dict.keys()) ``` E.g. To access the flows which arrived in the first time slot (with upper bound and lower bound times on the time slot also given since this is often useful): ``` print(slots_dict[0].keys()) ``` Next time slot flows: ``` print(slots_dict[1].keys()) ``` And so on. For large simulations, it is recommended to save the `slots_dict` as a database on your disk which you can query during your simulation. The `SqliteDict` library is particularly useful for this since it lets you save a database in .sqlite file format whilst still allowing you to query the database as if it were a normal Python dictionary. See [here](https://pypi.org/project/sqlitedict/) for more details. To save your `slots_dict` as a .sqlite database with `SqliteDict`, run: ``` from sqlitedict import SqliteDict import json with SqliteDict(PATH+'custom_demand_data_slots_dict.sqlite') as _slots_dict: for key, val in slots_dict.items(): if type(key) is not str: _slots_dict[json.dumps(key)] = val else: _slots_dict[key] = val _slots_dict.commit() _slots_dict.close() ```
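The slotting idea behind `get_slots_dict` can be sketched in a few lines: a flow arriving at time `t` belongs to slot `floor(t / slot_size)`, and each slot records its lower/upper time bounds plus the flows that arrived in it. This is only an illustration of the concept; the key names here are made up, and TrafPy's actual `slots_dict` layout may differ:

```python
from collections import defaultdict

def build_slots_dict(arrival_times, slot_size):
    """Bucket flow arrival times into fixed-width time slots."""
    slots = defaultdict(lambda: {'new_event_dicts': []})
    for flow_id, t in enumerate(arrival_times):
        slot = int(t // slot_size)
        slots[slot]['lb_time'] = slot * slot_size          # lower bound of slot
        slots[slot]['ub_time'] = (slot + 1) * slot_size    # upper bound of slot
        slots[slot]['new_event_dicts'].append({'flow_id': flow_id, 'time_arrived': t})
    return dict(slots)

slots = build_slots_dict([0.5, 3.2, 9.9, 14.0, 25.7], slot_size=10)
print(sorted(slots))                      # [0, 1, 2]
print(len(slots[0]['new_event_dicts']))   # 3
```

Note this sketch only creates slots that actually contain flows; an event-driven simulator that steps through every slot would instead pre-create all `n` slots, empty or not.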
``` import os from concurrent.futures import ProcessPoolExecutor from pathlib import Path import matplotlib.pyplot as plt from lhotse import CutSet, Fbank, LilcomFilesWriter from lhotse.augmentation import SoxEffectTransform, RandomValue from lhotse.dataset import K2SpeechRecognitionDataset from lhotse.dataset.sampling import SingleCutSampler from lhotse.recipes.gigaspeech import download_gigaspeech, prepare_gigaspeech ``` # Settings for paths ``` root_dir = Path('data') corpus_dir = root_dir / 'GigaSpeech' output_dir = root_dir / 'gigaspeech_nb' ``` # Select data parts ``` train_set = 'XS' dataset_parts = (train_set, 'TEST') ``` # Download the data ``` password = ''# You need to fill out the Google Form to get the password# https://forms.gle/UuGQAPyscGRrUMLq6 download_gigaspeech(password, corpus_dir, dataset_parts) ``` # Prepare audio and supervision manifests ``` num_jobs = os.cpu_count() gigaspeech_manifests = prepare_gigaspeech(corpus_dir, dataset_parts, output_dir, num_jobs=num_jobs) ``` # [Optional] Data augmentation ``` use_data_augmentation = False augment_fn = SoxEffectTransform(effects=[ ['reverb', 50, 50, RandomValue(0, 100)], ['remix', '-'], # Merge all channels (reverb changes mono to stereo) ['rate', 16000], ]) if use_data_augmentation else None ``` # Extract features ``` for partition, manifests in gigaspeech_manifests.items(): manifest_path = output_dir / f'cuts_{partition}.jsonl.gz' if not manifest_path.is_file(): with ProcessPoolExecutor(num_jobs) as ex: cut_set = CutSet.from_manifests( recordings=manifests['recordings'], supervisions=manifests['supervisions'] ) if use_data_augmentation: cut_set = cut_set + cut_set.perturb_speed(0.9) + cut_set.perturb_speed(1.1) cut_set = cut_set.compute_and_store_features( extractor=Fbank(), storage_path=f'{output_dir}/feats_{partition}', storage_type=LilcomFilesWriter, augment_fn=augment_fn, num_jobs=num_jobs, executor=ex ) gigaspeech_manifests[partition]['cuts'] = cut_set cut_set.to_json(manifest_path) 
gigaspeech_manifests[partition] = CutSet.from_jsonl(manifest_path) ``` # Make PyTorch Dataset ``` dataset_test = K2SpeechRecognitionDataset(gigaspeech_manifests['TEST']) dataset_train = K2SpeechRecognitionDataset(gigaspeech_manifests[train_set]) ``` # Illustration of an example ``` sampler = SingleCutSampler(dataset_test.cuts, shuffle=False, max_cuts=4) cut_id = next(iter(sampler))[0] sample = dataset_test[[cut_id]] seg_id = 1 text = sample['supervisions']['text'][seg_id] start_frame = int(sample['supervisions']['start_frame'][seg_id]) end_frame = start_frame + int(sample['supervisions']['num_frames'][seg_id]) - 1 feats = sample['inputs'][0][start_frame:end_frame, :] print('Transcript:', text) print('Supervisions start frame:', start_frame) print('Supervisions end frame:', end_frame) print('Feature matrix:') plt.matshow(feats.transpose(0, 1).flip(0)); ```
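The `start_frame`/`num_frames` fields used to slice `sample['inputs']` are just the supervision's start time and duration expressed in feature frames. Assuming the 10 ms frame shift of the default `Fbank` config (an assumption; check your `FbankConfig`), the mapping looks like this (Lhotse's internal rounding may differ slightly):

```python
def supervision_to_frames(start_sec, duration_sec, frame_shift=0.01):
    """Map a supervision segment in seconds to feature-frame indices."""
    start_frame = int(round(start_sec / frame_shift))
    num_frames = int(round(duration_sec / frame_shift))
    return start_frame, num_frames

# a 2-second segment starting at 1.5 s
start, num = supervision_to_frames(1.5, 2.0)
print(start, num)  # 150 200
```

This is why the illustration cell computes `end_frame = start_frame + num_frames - 1`: the segment occupies frames `[start_frame, start_frame + num_frames)`.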
reference: [Google Colab Python API](https://worldbank.github.io/OpenNightLights/tutorials/mod2_5_GEE_PythonAPI_and_geemap.html#google-colab-python-api) ``` import geemap, ee ``` `True` if run in Colab; `False` if local ``` 'google.colab' in str(get_ipython()) try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() # set map parameters for Ireland center_lat = 53.5 center_lon = -9 zoomlevel = 7 ``` ### Query Sentinel-2A data ### Get image ID For selected years ``` # ee.ImageCollection("NOAA/DMSP-OLS/NIGHTTIME_LIGHTS") img_collection = "COPERNICUS/S2_SR" ``` ##### JavaScript Map = ee.ImageCollection('COPERNICUS/S2_SR').filterDate('2020-01-01', '2020-01-30') # Pre-filter to get less cloudy granules. # .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE',20)) # .map(maskS2clouds); asdf = geemap.Map(center=[center_lat,center_lon], zoom=zoomlevel) # Map.setCenter(center_lat,center_lon,zoomlevel) Map.addLayer(nighttimeLights, nighttimeLightsVis, 'Nighttime Lights'); ``` # TODO # s2a_id = img_collection + 'F182013' s2a_id = img_collection + '_________' # TODO # create the ee object s2a = ee.Image(s2a_id) # initialize a map object, centered on Abuja Map6 = geemap.Map(center=[center_lat,center_lon], zoom=zoomlevel) # name it "DMSP NTL 2013", create a mask, and give it an opacity of 75%. Map6.addLayer(s2a.mask(s2a), name='DMSP NTL 2013 masked', opacity=.75) Map6.addLayerControl() Map6 ``` ### 6.3.1. Changing opacity ``` Map3 = geemap.Map(center=[center_lat,center_lon], zoom=zoomlevel) Map3.addLayer(dmsp92, name='DMSP NTL 1992', opacity=0.75) Map3.addLayerControl() Map3 ``` ## 6.4. Creating a mask ``` Map4 = geemap.Map(center=[center_lat,center_lon], zoom=zoomlevel) Map4.addLayer(dmsp92.mask(dmsp92), name='DMSP NTL 1992 masked', opacity=0.75) Map4.addLayerControl() Map4 ``` ## 6.5. 
Change the basemap ``` # initial map object centered on Abuja Map5 = geemap.Map(center=[center_lat,center_lon], zoom=zoomlevel) # add our alternate basemap Map5.add_basemap("SATELLITE") # add our 1992 (and remember to create a mask and change opacity to 75%) Map5.addLayer(dmsp92.mask(dmsp92), name='DMSP NTL 1992 masked', opacity=0.75) Map5.addLayerControl() Map5 ``` # (local PC only) Create a split panel view Warning, this is based on `ipyleaflet` a Python library that does not play well with Google Colab, so this code will not work in the Google Colab environment but should on your local machine. ``` # generate tile layers from the ee image objects, masking and changing opacity to 75% dmsp92_tile = geemap.ee_tile_layer(dmsp92.mask(dmsp92), {}, 'DMSP NTL 1992', opacity=0.75) dmsp2013_tile = geemap.ee_tile_layer(dmsp13.mask(dmsp13), {}, 'DMSP NTL 2013', opacity=0.75) # initial map object centered on Abuja Map7 = geemap.Map(center=[center_lat,center_lon], zoom=zoomlevel) # use .split_map function to create split panels Map7.split_map(left_layer=dmsp92_tile, right_layer=dmsp2013_tile) Map7.addLayerControl() Map7 ```
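The `'google.colab' in str(get_ipython())` check near the top of this notebook only works inside an IPython session. A sketch of an equivalent check that also works in a plain Python process (`in_colab` is a hypothetical helper, not part of geemap):

```python
def in_colab():
    """True when the google.colab package is importable,
    i.e. when the code is running in a Colab runtime."""
    try:
        import google.colab  # noqa: F401  (only present on Colab)
        return True
    except ImportError:
        return False
```

This is useful for guarding the `ipyleaflet`-based split-panel cell above, which should only run locally.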
### Working with Avro files Here are some examples of working with ZTF alerts stored as avro files. ``` import os import io import gzip import numpy as np import pandas as pd import matplotlib.pyplot as plt from avro.datafile import DataFileReader, DataFileWriter from avro.io import DatumReader, DatumWriter import fastavro from astropy.time import Time from astropy.io import fits import aplpy %matplotlib inline ``` A handful of sample alerts are available in the [ztf-avro-alert](https://github.com/ZwickyTransientFacility/ztf-avro-alert) repository, which also [documents](https://zwickytransientfacility.github.io/ztf-avro-alert/schema.html) the packet contents. ``` DATA_DIR = '../../ztf-avro-alert/data/' ``` Let's count packets. Just for fun let's make it a generator--we could have millions of these alerts to look at! ``` def find_files(root_dir): for dir_name, subdir_list, file_list in os.walk(root_dir, followlinks=True): for fname in file_list: if fname.endswith('.avro'): yield dir_name+'/'+fname print('{} has {} avro files'.format(DATA_DIR, len(list(find_files(DATA_DIR))))) ``` Let's grab the first file and look at it ``` fname = next(find_files(DATA_DIR)) fname ``` Let's use the python `avro` library to see what's in the file. ``` %%time with open(fname,'rb') as f: freader = DataFileReader(f,DatumReader()) for packet in freader: print(packet.keys()) ``` Now let's compare the call syntax of the faster `fastavro` package: ``` %%time with open(fname,'rb') as f: freader = fastavro.reader(f) schema = freader.schema for packet in freader: print(packet.keys()) ``` Basically the same, and the latter is faster. Here's the schema that was stored in the packet: ``` schema ``` Once we have the packet in python data structures our downstream processing should be independent of how we got the packet (from files or from a Kafka stream). ### Playing with packet contents Once these are in memory they are just a python dictionary, so the top-level attributes are easy to access. 
``` type(packet) packet print('JD: {} Filter: {} Mag: {:.2f}+/-{:.2f}'.format( packet['candidate']['jd'],packet['candidate']['fid'], packet['candidate']['magpsf'],packet['candidate']['sigmapsf'])) ``` **NOTE ESPECIALLY**: the magnitudes here do not include the magnitude of the underlying reference source (if present), so if this is a variable star further adjustment is needed. Example to come... Record access like this is a little verbose; let's wrap things up in a dataframe for ease of access (and faster loading). Now let's extract the lightcurves. The alert packet formats are nested, so the historical detections (if present) have the same structure as the candidate triggering the alert (minus a couple fields). ``` def make_dataframe(packet): df = pd.DataFrame(packet['candidate'], index=[0]) df_prv = pd.DataFrame(packet['prv_candidates']) return pd.concat([df,df_prv], ignore_index=True) dflc = make_dataframe(packet) dflc dflc.columns ``` We see that some of the historical detections are upper limits, signified by the NaNs. Note that the most recent candidate has a few fields that are not present for the `prv_candidates`. Let's plot it! ``` def plot_lightcurve(dflc, days_ago=True): filter_color = {1:'green', 2:'red', 3:'pink'} if days_ago: now = Time.now().jd t = dflc.jd - now xlabel = 'Days Ago' else: t = dflc.jd xlabel = 'Time (JD)' plt.figure() for fid, color in filter_color.items(): # plot detections in this filter: w = (dflc.fid == fid) & ~dflc.magpsf.isnull() if np.sum(w): plt.errorbar(t[w],dflc.loc[w,'magpsf'], dflc.loc[w,'sigmapsf'],fmt='.',color=color) wnodet = (dflc.fid == fid) & dflc.magpsf.isnull() if np.sum(wnodet): plt.scatter(t[wnodet],dflc.loc[wnodet,'diffmaglim'], marker='v',color=color,alpha=0.25) plt.gca().invert_yaxis() plt.xlabel(xlabel) plt.ylabel('Magnitude') plot_lightcurve(dflc) ``` Now let's figure out how to display the cutout images.
These are gzip-compressed fits files stored as bytes: ``` packet['cutoutScience'] stamp = packet['cutoutScience']['stampData'] type(stamp) with open('tmp.fits.gz', 'wb') as f: f.write(stamp) def plot_cutout(stamp, fig=None, subplot=None, **kwargs): with gzip.open(io.BytesIO(stamp), 'rb') as f: with fits.open(io.BytesIO(f.read())) as hdul: if fig is None: fig = plt.figure(figsize=(4,4)) if subplot is None: subplot = (1,1,1) ffig = aplpy.FITSFigure(hdul[0],figure=fig, subplot=subplot, **kwargs) ffig.show_grayscale(stretch='arcsinh') return ffig plot_cutout(stamp) ``` Now let's make a nice helper function: ``` def show_stamps(packet): #fig, axes = plt.subplots(1,3, figsize=(12,4)) fig = plt.figure(figsize=(12,4)) for i, cutout in enumerate(['Science','Template','Difference']): stamp = packet['cutout{}'.format(cutout)]['stampData'] ffig = plot_cutout(stamp, fig=fig, subplot = (1,3,i+1)) ffig.set_title(cutout) show_stamps(packet) ```
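On the reference-source caveat noted above: `magpsf` is a difference-image magnitude, so for a variable star the total brightness must be recovered by adding fluxes, not magnitudes. A sketch of that standard conversion for the case where the difference flux is positive, i.e. the source brightened relative to the reference (`combine_mags` is an illustrative helper, not part of the packet API):

```python
import math

def combine_mags(m_ref, m_diff):
    """Total apparent magnitude of reference + positive difference flux.

    Fluxes add linearly, so convert each magnitude to a (relative)
    flux, sum, and convert back to a magnitude.
    """
    f_ref = 10 ** (-0.4 * m_ref)
    f_diff = 10 ** (-0.4 * m_diff)
    return -2.5 * math.log10(f_ref + f_diff)

# Two equal fluxes combine to 2.5*log10(2) ~ 0.75 mag brighter:
print(combine_mags(18.0, 18.0))  # ~ 17.25
```

For a negative difference flux (the source fainter than the reference, as flagged by the packet's `isdiffpos` field) the difference flux is subtracted instead of added.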
``` %matplotlib inline import numpy as np import pylab as plt import cv2 data_root = '/diskmnt/a/makov/yaivan/2016-02-11_Pin/' ``` List of files: * empty - the image from the tomograph with no corrections applied * corr - the same image as empty, but with the correction applied * tomo - the same as empty, but acquired during the actual experiment * white - the open beam used to normalize the images (acquired the same day, during calibration) * black_1, black_2 - dark currents, acquired at different times ``` empty = plt.imread(data_root+'first_projection.tif').astype('float32') corr = plt.imread(data_root+'first_projection_corr.tif').astype('float32') tomo = plt.imread(data_root+'Raw/pin_2.24um_0000.tif').astype('float32') white = np.fromfile(data_root+'white0202_2016-02-11.ffr',dtype='<u2').astype('float32').reshape((2096, 4000)) black_1 = np.fromfile(data_root+'black0101_2016-02-09.ffr',dtype='<u2').astype('float32').reshape((2096, 4000)) black_2 = np.fromfile(data_root+'black0201_2016-02-16.ffr',dtype='<u2').astype('float32').reshape((2096, 4000)) def show_frame(data, label): data_filtered = cv2.medianBlur(data, 5) plt.figure(figsize=(12,10)) plt.imshow(data) plt.title(label) plt.colorbar(orientation='horizontal') plt.show() plt.figure(figsize=(12,8)) plt.plot(data[1000]) plt.grid(True) plt.title(label+': central cut') plt.show() plt.figure(figsize=(12,10)) plt.imshow(data_filtered) plt.title(label+' filtered') plt.colorbar(orientation='horizontal') plt.show() plt.figure(figsize=(12,8)) plt.plot(data_filtered[1000]) plt.grid(True) plt.title(label+' filtered: central cut') plt.show() ``` ## The beam with no object in it. Axes are detector counts. Here and below, the first image and its central cross-section are shown as-is; the second image has median filtering applied (to suppress scintillator noise).
``` show_frame(white, 'White') ``` ## Dark current 1. Axes are detector counts ``` show_frame(black_1, 'Black_1') ``` ## Dark current 2. Axes are detector counts ``` show_frame(black_2, 'Black_2') ``` ## The difference between the two dark currents ``` show_frame(black_1 - black_2, 'Black_1 - Black_2') ``` ## The completely uncorrected image ``` show_frame(empty, 'Empty') ``` ## The image normalized by the tomograph itself Oddly, the maximum of the central cut is not at 65535 (2^16 - 1) but at roughly 65535\*__0.8. Does this mean that during reconstruction we should normalize not by 65535 when taking the logarithm, but by the maximum over the sinogram?__ ``` show_frame(corr, 'Corr') ``` ## The image from the tomographic experiment ``` show_frame(tomo, 'tomo image') ``` ## The difference between the image we normalized manually and the one normalized by the tomograph They appear to be slightly shifted relative to each other ``` show_frame(corr - tomo, 'corr - tomo image') ``` ## My attempt at normalizing the image Traces of the open beam are visible (the grid in the background), but this is probably because the open beam depends on the source-to-detector distance (spherical falloff of intensity), and the open beam was measured at a different distance. Moreover, the open-beam intensity was apparently lower (by a factor of 16?) than during the experiment. (__this needs to be checked__) ``` white_norm = (white - black_1) white_norm[white_norm<1] = 1 empty_norm = (empty/16 - black_1) empty_norm[empty_norm<1] = 1 my_corr = empty_norm/white_norm my_corr[my_corr>1.1] = 1.1 show_frame(my_corr, 'my_corr image') ``` ## Our corrected beam divided by the one corrected by SkyScan They appear to agree, up to noise. It follows that the normalization is performed according to the formula $$Signal=k\times 2^{16}\frac{I_1-dark}{I_0-dark},\quad k=0.87$$ ``` show_frame(my_corr*65535*0.87/corr, 'my_corr/corr image') ```
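The normalization inferred above can be wrapped in a small helper. A sketch only: the factor k = 0.87 was fitted to this particular scanner and data set, and the clipping mirrors the `white_norm[white_norm<1] = 1` step in the cell above.

```python
import numpy as np

def flat_field_correct(raw, white, dark, k=0.87, full_scale=65535):
    """Flat-field (open-beam) normalization:
    Signal = k * full_scale * (I1 - dark) / (I0 - dark).

    The denominator is clipped at 1 count to avoid division by zero
    in dead-pixel regions.
    """
    num = np.asarray(raw, dtype='float32') - dark
    den = np.asarray(white, dtype='float32') - dark
    den = np.clip(den, 1.0, None)
    return k * full_scale * num / den

# With raw=150, white=250, dark=50 the transmission is 0.5:
corrected = flat_field_correct([150.0], [250.0], 50.0)
```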
``` %load_ext autoreload %autoreload 2 import ray import ray.rllib.agents.ppo as ppo from ray.tune.logger import pretty_print from ray import tune from ray.rllib.agents.ppo import PPOTrainer from ray.rllib.models import FullyConnectedNetwork, Model, ModelCatalog import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' import sys sys.path.append("../../../") sys.path.append("/home/ctripp/src/cavs-environments") print(sys.path) import pygame import pymunk import numpy as np import math import tensorflow as tf import gym import cavs_environments import matplotlib from stable_baselines.common.policies import MlpPolicy from stable_baselines.common.vec_env import DummyVecEnv from stable_baselines.common.vec_env import SubprocVecEnv from stable_baselines import PPO2 from stable_baselines import A2C from stable_baselines import TRPO from stable_baselines.common.policies import FeedForwardPolicy, register_policy # from cavs_environments.vehicle.deep_road.deep_road import DeepRoad import cavs_environments.framework as framework import cavs_environments.vehicle.k_road.targeting as targeting import cavs_environments.vehicle.k_road.road as road # import cavs_environments.vehicle.k_road.targeting as ray.init() import pprint pp = pprint.PrettyPrinter(indent=4) # ModelCatalog.register_custom_model("ThisRoadEnv", ThisRoadEnv) # register_env("ThisRoadEnv", lambda config: ThisRoadEnv()) import pygame pygame.display.quit() from IPython.core.display import HTML HTML("<script>Jupyter.notebook.kernel.restart()</script>") # def make_target_env_with_baseline( # observation_scaling = 1.0, # action_scaling = 1.0 / 10.0, # max_distance_from_target = 125, # time_limit = 60): # return framework.FactoredGym( # targeting.TargetProcess(time_limit, max_distance_from_target), # targeting.TargetObserver(observation_scaling), # targeting.TargetTerminator(), # targeting.TargetRewarder(), # [framework.ActionScaler(action_scaling), targeting.TargetBaseline()] # ) class ThisRoadEnv(framework.FactoredGym): def 
__init__(self, env_config): observation_scaling = 1.0 action_scaling = 1.0 / 10.0 super().__init__( road.RoadProcess(), road.RoadObserver(observation_scaling), road.RoadTerminator(), road.RoadGoalRewarder(), [framework.ActionScaler(action_scaling)] ) def make_road_env(): return ThisRoadEnv({}) class CustomPolicy(FeedForwardPolicy): def __init__(self, *args, **kwargs): super(CustomPolicy, self).__init__(*args, **kwargs, net_arch=[64, 64, dict(pi=[64], vf=[64])], feature_extraction="mlp") # class CustomPolicy(MlpPolicy): # def __init__(self, *args, **kwargs): # super(MlpPolicy, self).__init__(*args, act_fun=tf.nn.tanh, net_arch=[32, 32]) # register_policy('LargeMLP', LargeMLP) tune.run( "PPO", stop={ "timesteps_total": 10000 }, config={ "env": ThisRoadEnv, "model":{ "fcnet_hiddens":[64,64] }, "num_workers": 1 }) # env = make_road_env() # config = ppo.DEFAULT_CONFIG.copy() # config. config = { "env_config": {}, # config to pass to env class } config["num_gpus"] = 1 config["num_workers"] = 1 # config['use_eager'] = True # config['stop'] = 10e3 # config['model'] = {"custom_model":"my_model"} trainer = ppo.PPOTrainer(config=config, env=ThisRoadEnv) for i in range(1000): # Perform one iteration of training the policy with PPO result = trainer.train() print(pretty_print(result)) if i % 100 == 0: checkpoint = trainer.save() print("checkpoint saved at", checkpoint) ego_starting_distance = 10.0 target_reward = 90 iter = 0 step = 1e3 inner_env = make_road_env() inner_env.process.ego_starting_distance = ego_starting_distance inner_env.reset() env = DummyVecEnv([lambda: inner_env]) # The algorithms require a vectorized environment to run model = PPO2(CustomPolicy, env, verbose=0, tensorboard_log='/tmp/k_road_0/', gamma=.999, learning_rate=.001) while iter < 250e3: inner_env.process.ego_starting_distance = ego_starting_distance model.learn(total_timesteps=int(step), reset_num_timesteps
= False) print('er: ', model.episode_reward) mean_reward = np.mean(model.episode_reward) if mean_reward >= target_reward: ego_starting_distance = min(600.0, ego_starting_distance + 10.0) print('iter: ', iter, 'reward: ', mean_reward, 'starting_distance: ', inner_env.process.ego_starting_distance) iter = iter + step model.save('k_road_0') print('done!') env = make_road_env() env.process.ego_starting_distance = 10 env = DummyVecEnv([lambda: env]) # The algorithms require a vectorized environment to run model = PPO2.load('k_road_0') for i in range(50): obs = env.reset() for i in range(100000): action, _states = model.predict(obs) obs, rewards, terminal, info = env.step(action) env.render() if terminal: break env.close() env = make_road_env() env.ego_starting_distance = 10 print(env.process.ego_vehicle, env.process.road_length, env.process.ego_starting_distance) obs = None for _ in range(20): env.reset() env.render() for _ in range(2000): action = np.empty(2) action[0] = .5 # np.random.normal(.5, .001) action[1] = 0 # np.random.normal(0, .01) result = env.step(action) obs = result[0] env.render() if result[2]: break env.close() n_cpu = 10 env = SubprocVecEnv([lambda: make_road_env() for i in range(n_cpu)]) model = PPO2(MlpPolicy, env, verbose=1, tensorboard_log='/tmp/k_road_0/', gamma=.99999, learning_rate=.001) model.learn(total_timesteps=int(200e3)) model.save('k_road_0') print('done!') env = make_road_env() env = DummyVecEnv([lambda: env]) # The algorithms require a vectorized environment to run model = PPO2.load('k_road_0') obs = env.reset() for i in range(100000): action, _states = model.predict(obs) obs, rewards, terminal, info = env.step(action) env.render() if terminal: break env.close() import numpy as np import matplotlib.pyplot as plt import matplotlib.lines as lines def two_sided_offset_exponential(gain, offset, x): y = 0 s = 1 if x < offset: y = 1 - (x + 1) / (offset + 1) s = -1 else: y = (x - offset) / (1 - offset) return s * (math.exp(gain * y) - 1) / 
(math.exp(gain) - 1) fig = plt.figure() x = np.arange(-1.0, 1.0, .01) y = np.array([two_sided_offset_exponential(.1, -.5, xi) for xi in x]) plt.plot(x,y) plt.show() import ray from ray import tune from ray.rllib.agents.ppo import PPOTrainer def train(config, reporter): trainer = PPOTrainer(config=config, env=YourEnv) while True: result = trainer.train() reporter(**result) if result["episode_reward_mean"] > 200: phase = 2 elif result["episode_reward_mean"] > 100: phase = 1 else: phase = 0 trainer.workers.foreach_worker( lambda ev: ev.foreach_env( lambda env: env.set_phase(phase))) ray.init() tune.run( train, config={ "num_gpus": 0, "num_workers": 2, }, resources_per_trial={ "cpu": 1, "gpu": lambda spec: spec.config.num_gpus, "extra_cpu": lambda spec: spec.config.num_workers, }, ) ```
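The curriculum logic inside the `train` function above maps the running mean episode reward to a phase before broadcasting it to every environment. Factored out as a plain helper (thresholds 100 and 200 taken from that snippet):

```python
def reward_to_phase(mean_reward):
    """Pick a curriculum phase from the mean episode reward:
    phase 2 above 200, phase 1 above 100, phase 0 otherwise."""
    if mean_reward > 200:
        return 2
    if mean_reward > 100:
        return 1
    return 0
```

Each worker then applies the chosen phase via `trainer.workers.foreach_worker(...)`, as shown in the training loop above.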
``` # !pip install kfp # !pip install google-cloud-aiplatform # !pip install google-cloud-pipeline-components import kfp from kfp.v2 import compiler from kfp.v2.google.client import AIPlatformClient from google.cloud import aiplatform # from google.cloud.aiplatform import pipeline_jobs from google_cloud_pipeline_components import aiplatform as gcc_aip from kfp import components as comp import kfp.dsl as dsl project_id = 'project-id' region = 'us-central1' pipeline_name = 'earthquake-prediction' main_bucket_name = 'bucket-name' pipeline_root_path = 'gs://' + main_bucket_name + '/pipelines/' model_name = '5390f2dc404cbe0cd01427a938160354' problem_statement_file = 'problem_statement/weekly.json' raw_location = 'data/quakes/raw/' silver_location = 'data/quakes/silver/' gold_location = 'data/quakes/gold/' + model_name + '/' artifact_location = 'models/' + model_name + '/' predictions_location = 'data/predictions/' + model_name + '/' metrics_location = 'data/metrics/' + model_name + '/' import google def save_pipeline(pipeline, bucket_name, files_path): #### Get the bucket that the file will be uploaded to storage_client = google.cloud.storage.Client() bucket = storage_client.get_bucket(bucket_name) #### Create a new blob my_file = bucket.blob(files_path + pipeline) #### Upload from file my_file.upload_from_filename(pipeline, content_type = 'application/json') def get_data_from_url( url: str, delta_days: int, downloaded_data_path: comp.OutputPath('csv') ) -> str: import requests from datetime import datetime, timedelta # get time request_time = datetime.now().astimezone().strftime('%Y-%m-%dT%H-%M-%S-%Z') + '/all_quakes.csv' # get resquest params end_range = datetime.now() + timedelta(days = 1) begin_range = end_range - timedelta(days = delta_days) end_range = end_range.strftime('%Y-%m-%d') begin_range = begin_range.strftime('%Y-%m-%d') query = {'format': 'csv', 'starttime': begin_range, 'endtime': end_range} response_content = None # make request try: with 
requests.get(url, params = query) as response: response.raise_for_status() response_content = response.content except requests.exceptions.Timeout: print('Timeout Exception') return '' except requests.exceptions.TooManyRedirects: print('Too Many Redirects Exception') return '' except requests.exceptions.HTTPError: print('Http Exception') return '' except requests.exceptions.ConnectionError: print('Error Connecting') return '' except requests.exceptions.RequestException: print('Request Exception') return '' except: print('Error requesting file') return '' # dump for next component with open(downloaded_data_path, 'w') as text_file: text_file.write(response_content.decode('utf-8')) return request_time get_data_from_url_op = comp.create_component_from_func( get_data_from_url, base_image = 'python:3.7', packages_to_install = [ 'requests', ], ) def save_data_to_gcp( file_to_save_path: comp.InputPath('csv'), file_name: str, bucket_name: str, bucket_folder: str, ): from google.cloud import storage # read from last step with open(file_to_save_path, 'r') as text_file: input_file = text_file.read() # if None exit if input_file is None: print('No response') return # get the bucket that the file will be uploaded to storage_client = storage.Client() bucket = storage_client.get_bucket(bucket_name) # create a new blob my_file = bucket.blob(bucket_folder + file_name) # upload from csv my_file.upload_from_string(input_file, content_type = 'text/csv') save_data_to_gcp_op = comp.create_component_from_func( save_data_to_gcp, base_image = 'path-to-artifact-registry/python-gcp:basic', packages_to_install = [], ) def clean_data( csv_file_path: comp.InputPath('csv'), cleaned_data_path: comp.OutputPath('csv'), ): import joblib import pandas as pd # convert to DataFrame df = pd.read_csv(csv_file_path) # select info to be used df = df.loc[:, ['time', 'id', 'latitude', 'longitude', 'depth', 'mag']].copy() # weird values z_0 = df['depth'] < 0 print(f'Depth above sup: {sum(z_0):,d} ({sum(z_0) / 
len(df):.2%})') df.loc[z_0, 'depth'] = 0 # data parsing date_col = 'time' datetimes = df[date_col].str.split('T', expand = True) dates = pd.to_datetime(datetimes.loc[:, 0], format = '%Y-%m-%d') df = pd.concat((df, dates.rename('date')), axis = 1) df = df.drop(date_col, axis = 1) # drop NA len_before = len(df) df = df.dropna() len_after = len(df) dropped_events = len_before - len_after if (dropped_events) == 0: print('No dropped events') else: print(f'Dropped events: {dropped_events:,d} ({dropped_events / len_before:.2%})') # dump df for next component joblib.dump(df, cleaned_data_path) clean_data_op = comp.create_component_from_func( clean_data, base_image = 'path-to-artifact-registry/python-pandas:basic', packages_to_install = [], ) def merge_new_data( new_data_path: comp.InputPath('csv'), bucket_name: str, file_path: str, file_name: str, silver_data_path: comp.OutputPath('csv'), ) -> bool: import joblib import pandas as pd # read from last step df_new = joblib.load(new_data_path) if len(df_new) == 0: return False # read source file df_hist = pd.read_csv('gs://' + bucket_name + '/' + file_path + file_name) print(f'Silver events: {len(df_hist):,d}') # check if there is new information # TODO: check if there is change, not only new records new_events = df_new[~df_new['id'].isin(df_hist['id'])].copy() print(f'New events: {len(new_events):,d}') if (len(new_events) > 0): df = pd.concat([df_hist, new_events]) len_df, nunique_ids = len(df), df['id'].nunique() print(f'Total events: {len_df:,d} | Unique: {nunique_ids:,d}') assert(len_df == nunique_ids) # dump df for next component joblib.dump(df, silver_data_path) return True else: # dump df for next component joblib.dump(df_hist, silver_data_path) return False merge_new_data_op = comp.create_component_from_func( merge_new_data, base_image = 'path-to-artifact-registry/python-pandas:basic', packages_to_install = [], ) def save_df_to_gcp( file_to_save_path: comp.InputPath('csv'), file_name: str, suffix: str, bucket_name: 
str, bucket_folder: str, ) -> str: import joblib # read from last step df = joblib.load(file_to_save_path) # if None exit if df is None: print('No response') return # save to GCS file_and_suffix = file_name file_and_suffix += '_' + suffix + '.csv' if suffix != '' else '.csv' df.to_csv('gs://' + bucket_name + '/' + bucket_folder + file_and_suffix, index = False) return 'saved' save_df_to_gcp_op = comp.create_component_from_func( save_df_to_gcp, base_image = 'path-to-artifact-registry/python-pandas:basic', packages_to_install = [], ) from typing import NamedTuple def get_problem_statement( bucket_name: str, source_file: str, ) -> NamedTuple( 'OpOutputs', [ ('main_id', str), ('time_ref', str), ('time_frequency', str), ('target_raw', str), ('early_warning_number', int), ('range_warning_number', int), ('pad_df', int), ('event_reference', float), ('degrees_latitude_grid', int), ('km_depth_grid', int), ('min_latitude', int), ('max_latitude', int), ('min_longitude', int), ('max_longitude', int), ('time_cut', str), ] ): import json from google.cloud import storage from collections import namedtuple # get bucket storage_client = storage.Client() bucket = storage_client.get_bucket(bucket_name) # get blob my_file = bucket.get_blob(source_file) # download from json problem_statement_config = json.loads(my_file.download_as_text()) config = namedtuple( 'OpOutputs', [ 'main_id', 'time_ref', 'time_frequency', 'target_raw', 'early_warning_number', 'range_warning_number', 'pad_df', 'event_reference', 'degrees_latitude_grid', 'km_depth_grid', 'min_latitude', 'max_latitude', 'min_longitude', 'max_longitude', 'time_cut', ] ) return config( problem_statement_config['main_id'], problem_statement_config['time_ref'], problem_statement_config['time_frequency'], problem_statement_config['target_raw'], problem_statement_config['early_warning_number'], problem_statement_config['range_warning_number'], problem_statement_config['pad_df'], problem_statement_config['event_reference'], 
problem_statement_config['degrees_latitude_grid'], problem_statement_config['km_depth_grid'], problem_statement_config['min_latitude'], problem_statement_config['max_latitude'], problem_statement_config['min_longitude'], problem_statement_config['max_longitude'], problem_statement_config['time_cut'], ) get_problem_statement_op = comp.create_component_from_func( get_problem_statement, base_image = 'path-to-artifact-registry/python-gcp:basic', packages_to_install = [], ) def filter_and_preprocess( silver_path: comp.InputPath('csv'), event_reference: float, min_latitude: int, max_latitude: int, min_longitude: int, max_longitude: int, time_cut: str, filtered_path: comp.OutputPath('csv'), ): import joblib import pandas as pd import numpy as np from datetime import datetime import zucaml.zucaml as ml # read from last step df = joblib.load(silver_path) df['date'] = pd.to_datetime(df['date'], format = '%Y-%m-%d', exact = True) # events df['event'] = ((df['mag'] >= event_reference) * 1).astype(np.uint8) # filter regions df['keep'] = (df['latitude'] >= min_latitude) & (df['latitude'] <= max_latitude) & (df['longitude'] >= min_longitude) & (df['longitude'] <= max_longitude) df = df.loc[df['keep']].copy().reset_index() df = df.drop(['keep', 'index'], axis = 1) # filter time time_cut = datetime.strptime(time_cut, '%Y-%m-%d') df = df[df['date'] > time_cut] df = df.reset_index().drop(['index',], axis = 1) # energy df['energy'] = 5.24 df['energy'] += 1.44 * df['mag'] df['energy'] = np.power(10, df['energy']) # print info number_events = df['event'].sum() min_date = df['date'].min() max_date = df['date'].max() print(f'Min date:\t{min_date}') print(f'Max date:\t{max_date}') print(f'Number of events:\t{number_events:,d}') ml.print_memory(df) # dump df for next component joblib.dump(df, filtered_path) filter_and_preprocess_op = comp.create_component_from_func( filter_and_preprocess, base_image = 'path-to-artifact-registry/zucaml:basic', packages_to_install = [], ) from typing import 
NamedTuple def reindex_df( filtered_path: comp.InputPath('csv'), degrees_latitude_grid: int, km_depth_grid: int, pad_df: int, main_id: str, time_ref: str, time_frequency: str, date_offset: int, is_training: str, grid_path: comp.OutputPath('csv'), ) -> NamedTuple( 'OpOutputs', [ ('x_degre_km', float), ('y_degre_km', float), ('dx', int), ('dy', int), ('range_x_min', int), ('range_x_max', int), ('range_x_step', int), ('range_y_min', int), ('range_y_max', int), ('range_y_step', int), ('range_z_min', int), ('range_z_max', int), ('range_z_step', int), ] ): import joblib from collections import namedtuple import pandas as pd import numpy as np from datetime import datetime, timedelta import zucaml.zucaml as ml # read from last step df = joblib.load(filtered_path) # grid x_degre_km = 94.2 y_degre_km = 111.2 dy = degrees_latitude_grid dx = int(round(dy * y_degre_km / x_degre_km)) dz = km_depth_grid grid_values = [ ('y', 'latitude', dy), ('x', 'longitude', dx), ('z', 'depth', dz) ] for new_feature, old_feature, increment in grid_values: old_feature_min = int(round(df[old_feature].min())) df[new_feature] = df[old_feature] - old_feature_min df[new_feature] = df[new_feature] / increment df[new_feature] = df[new_feature].round().astype(int) df[new_feature] = df[new_feature] * increment df[new_feature] = df[new_feature] + old_feature_min assert(sum(df['z'] < 0) == 0) df['zone_frame'] = df['x'].astype(str) + '|' + df['y'].astype(str) + '|' + df['z'].astype(str) min_x = df['x'].min() max_x = df['x'].max() min_y = df['y'].min() max_y = df['y'].max() min_z = df['z'].min() max_z = df['z'].max() range_x = range(min_x, max_x + dx, dx) range_y = range(min_y, max_y + dy, dy) range_z = range(min_z, max_z + dz, dz) all_zone_frames = [str(x) + '|' + str(y) + '|' + str(z) for x in range_x for y in range_y for z in range_z] used_x = df['x'].nunique() used_y = df['y'].nunique() used_z = df['z'].nunique() used_time = df['date'].nunique() print(f'Unique x:\t\t{used_x:,d}') print(f'Unique 
y:\t\t{used_y:,d}') print(f'Unique z:\t\t{used_z:,d}') print(f'Unique time:\t\t{used_time:,d}') print(f'All zones:\t\t{len(all_zone_frames):,d}') df = df.drop(['longitude', 'latitude', 'depth'], axis = 1) # offset date min_date = df['date'].min() print(f'Offset date. Min before: {str(min_date)} Records: {len(df):,d}') min_date = min_date + timedelta(days = date_offset) df = df.loc[df['date'] >= min_date].copy().reset_index().drop('index', axis = 1) min_date = df['date'].min() print(f'Offset date. Min after: {str(min_date)} Records: {len(df):,d}') # pad if is_training == 'True': pad_df = bool(pad_df) else: pad_df = True this_max_date = df['date'].max() ref_day = datetime.now() print(f'date: {str(this_max_date)} - reference day: {str(ref_day)}') if (this_max_date < ref_day): print('Adding dummy') len_before = len(df) record_dummy = df.iloc[0].copy() record_dummy['date'] = ref_day record_dummy['id'] = 'non_existant' for feature in ['mag', 'event', 'energy']: record_dummy[feature] = 0 df = df.append(record_dummy, ignore_index = True) len_after = len(df) print(f'Rows before: {len_before:,d} - Rows after: {len_after:,d}') assert(len_after - len_before == 1) new_max_date = df['date'].max() print(f'New max date: {str(new_max_date)}') if pad_df: zero_fill = ['mag', 'event', 'energy'] other_fill = {'id': 'non_existant'} df = ml.pad(df, 'zone_frame', 'date', all_zone_frames, 'min', zero_fill, other_fill) df = ml.pad(df, 'zone_frame', 'date', all_zone_frames, 'max', zero_fill, other_fill) df['x'] = df['zone_frame'].str.split('|').str[0].astype(int) df['y'] = df['zone_frame'].str.split('|').str[1].astype(int) df['z'] = df['zone_frame'].str.split('|').str[2].astype(int) assert(df.isna().sum().sum() == 0) # reindex df = ml.reindex_by_minmax( df = df.drop(['mag', 'x', 'y', 'z'], axis = 1), item = main_id, time_ref = time_ref, time_freq = time_frequency, forwardfill_features = [], backfill_features = [], zerofill_features = ['energy', 'event'], ) assert(df.isna().sum().sum() == 0) 
df['event'] = ((df['event'] > 0) * 1).astype(np.uint8) df['x'] = df['zone_frame'].str.split('|').str[0].astype(int) df['y'] = df['zone_frame'].str.split('|').str[1].astype(int) df['z'] = df['zone_frame'].str.split('|').str[2].astype(int) # print info ml.print_memory(df) # dump df for next component joblib.dump(df, grid_path) # return variables variables = namedtuple( 'OpOutputs', [ 'x_degre_km', 'y_degre_km', 'dx', 'dy', 'range_x_min', 'range_x_max', 'range_x_step', 'range_y_min', 'range_y_max', 'range_y_step', 'range_z_min', 'range_z_max', 'range_z_step', ] ) return variables( x_degre_km, y_degre_km, dx, dy, range_x.start, range_x.stop, range_x.step, range_y.start, range_y.stop, range_y.step, range_z.start, range_z.stop, range_z.step, ) reindex_df_op = comp.create_component_from_func( reindex_df, base_image = 'path-to-artifact-registry/zucaml:basic', packages_to_install = [], ) def get_neighbours_df( grid_path: comp.InputPath('csv'), x_degre_km: float, y_degre_km: float, dx: int, dy: int, range_x_min: int, range_x_max: int, range_x_step: int, range_y_min: int, range_y_max: int, range_y_step: int, range_z_min: int, range_z_max: int, range_z_step: int, full_path: comp.OutputPath('csv'), ): import joblib import pandas as pd import numpy as np from datetime import datetime import zucaml.zucaml as ml # read from last step df = joblib.load(grid_path) #### aux func def get_xyz(zone_frame): x, y, z = zone_frame.split('|') x = int(x) y = int(y) z = int(z) return x, y, z #### get neighbours def get_neighbours(zone_frame, neighbours, used_zone_frames): this_neighbours = [] this_x, this_y, this_z = get_xyz(zone_frame) for zf in used_zone_frames: x, y, z = get_xyz(zf) if zone_frame != zf and x in neighbours['x'][this_x] and y in neighbours['y'][this_y] and z in neighbours['z'][this_z]: this_neighbours.append(zf) return this_neighbours #### calculate distance in xy plane distance = max(x_degre_km * dx, y_degre_km * dy) #### aux variable range_x = range(range_x_min, range_x_max, 
range_x_step) range_y = range(range_y_min, range_y_max, range_y_step) range_z = range(range_z_min, range_z_max, range_z_step) ranges = {'x': range_x, 'y': range_y} #### neighbours coordinates neighbours = {} #### neighbours coordinates - xy for dim in ['x', 'y']: neighbours[dim] = {} ordered = {} for i, d in enumerate(ranges[dim]): ordered[i] = d for i, d in ordered.items(): neighbours[dim][d] = [d] if i > 0: neighbours[dim][d].append(ordered[i - 1]) if i < len(ordered) - 1: neighbours[dim][d].append(ordered[i + 1]) #### neighbours coordinates - z neighbours['z'] = {} for z in range_z: neighbours['z'][z] = [z] for z2 in range_z: if abs(z - z2) <= distance and z != z2: neighbours['z'][z].append(z2) #### neighbours per zone frame zone_frames_neighbours = {} used_zone_frames = df['zone_frame'].unique() for zone_frame in used_zone_frames: zone_frames_neighbours[zone_frame] = get_neighbours(zone_frame, neighbours, used_zone_frames) def get_energy_neighbours(df, used_zone_frames, zone_frames_neighbours): dfs = [] for zone_frame in used_zone_frames: df_zone = df.loc[df['zone_frame'] == zone_frame].copy() df_zone_neighbours = df.loc[df['zone_frame'].isin(zone_frames_neighbours[zone_frame])].copy() df_zone_neighbours = df_zone_neighbours.groupby(['date']).agg({'energy': np.sum}).reset_index() new_feature = 'neighbours_' + zone_frame df_zone_neighbours = df_zone_neighbours.rename({'energy': new_feature}, axis = 1) df_zone_neighbours = df_zone_neighbours.loc[df_zone_neighbours[new_feature] != 0].copy() df_zone = pd.merge( df_zone, df_zone_neighbours, how = 'left', on = ['date'], suffixes = ['_repeated_left', 'repeated_right'], ) dfs.append(df_zone) dfs = pd.concat(dfs) for feat in dfs: if 'repeated' in feat: print(f'Warning: repeated features') assert(len(dfs) == len(df)) neighbours_features = [feat for feat in dfs if feat.startswith('neighbours_')] dfs['energy_neighbours'] = dfs[neighbours_features].T.sum().T dfs = dfs.drop(neighbours_features, axis = 1) 
assert(dfs.isna().sum().sum() == 0) return dfs df = get_energy_neighbours(df, used_zone_frames, zone_frames_neighbours) # print info ml.print_memory(df) # dump df for next component joblib.dump(df, full_path) get_neighbours_df_op = comp.create_component_from_func( get_neighbours_df, base_image = 'path-to-artifact-registry/zucaml:basic', packages_to_install = [], ) def set_problem_statement( full_path: comp.InputPath('csv'), main_id: str, time_ref: str, target_raw: str, early_warning_number: int, range_warning_number: int, is_training: str, ps_path: comp.OutputPath('csv'), ): import joblib import pandas as pd import numpy as np from datetime import datetime import zucaml.zucaml as ml # read from last step df = joblib.load(full_path) # set target drop_na_target = is_training == 'True' df = ml.set_target( df = df, item = main_id, time_ref = time_ref, target = target_raw, early_warning = early_warning_number, range_warning = range_warning_number, drop_na_target = drop_na_target, ) balance = df['target'].sum() / len(df) print(f'Balance: {balance:.4%}') # print info ml.print_memory(df) # dump df for next component joblib.dump(df, ps_path) set_problem_statement_op = comp.create_component_from_func( set_problem_statement, base_image = 'path-to-artifact-registry/zucaml:basic', packages_to_install = [], ) def feature_engineering( ps_path: comp.InputPath('csv'), main_id: str, time_ref: str, target_raw: str, gold_data_path: comp.OutputPath('csv'), ): import joblib import pandas as pd import numpy as np from datetime import datetime import zucaml.zucaml as ml # read from last step df = joblib.load(ps_path) # create reset df = ml.create_reset( df = df, item = main_id, time_ref = time_ref, order = None ) # M.A. 
for window_rolling_mean in [30, 90, 180, 330, 360]: df = ml.ts_feature( df = df, feature_base = 'energy', func = 'rolling.mean', func_val = window_rolling_mean, label = None, ) for window_rolling_mean in [30, 90, 180, 330, 360]: df = ml.ts_feature( df = df, feature_base = 'energy_neighbours', func = 'rolling.mean', func_val = window_rolling_mean, label = None, ) # ratios df = ml.math_feature( df = df, feature_1 = 'energy|rolling.mean#30', feature_2 = 'energy|rolling.mean#360', func = 'ratio', label = None, ) df = ml.math_feature( df = df, feature_1 = 'energy|rolling.mean#90', feature_2 = 'energy|rolling.mean#360', func = 'ratio', label = None, ) df = ml.math_feature( df = df, feature_1 = 'energy|rolling.mean#180', feature_2 = 'energy|rolling.mean#360', func = 'ratio', label = None, ) df = ml.math_feature( df = df, feature_1 = 'energy|rolling.mean#330', feature_2 = 'energy|rolling.mean#360', func = 'ratio', label = None, ) # track last event df = ml.track_feature( df = df, feature_base = time_ref, condition = df[target_raw] > 0, track_window = 0, track_function = 'diff.days', label = 'days.since.last' ) # clean and order df = df.drop('reset', axis = 1) df = df.sort_values(['zone_frame', 'date']).reset_index().drop('index', axis = 1) # print info balance = df['target'].sum() / len(df) print(f'{balance:.6%}') ml.print_memory(df) # dump df for next component joblib.dump(df, gold_data_path) feature_engineering_op = comp.create_component_from_func( feature_engineering, base_image = 'path-to-artifact-registry/zucaml:basic', packages_to_install = [], ) def make_predictions( gold_data_path: comp.InputPath('csv'), bucket_name: str, artifact_folder: str, step_x: int, step_y: int, step_z: int, time_frequency: str, early_warning_number: int, range_warning_number: int, date_offset: int, predictions_data_path: comp.OutputPath('csv'), ) -> str: import joblib import pandas as pd import numpy as np import json from datetime import datetime, timedelta from pickle import loads from 
google.cloud import storage import zucaml.zucaml as ml # read from last step df = joblib.load(gold_data_path) # get bucket storage_client = storage.Client() bucket = storage_client.get_bucket(bucket_name) # get blobs config_file = bucket.get_blob(artifact_folder + 'notes.txt') model_file = bucket.get_blob(artifact_folder + 'model.pkl') # download model_config = json.loads(config_file.download_as_text()) model = loads(model_file.download_as_string()) # select max date df = df.loc[df['date'] == df['date'].max()].copy().reset_index().drop('index', axis = 1) # make predictions df['probability'] = model.predict_proba(df[model_config['features']])[:, 1] df['prediction'] = (df['probability'] > model_config['threshold']) * 1 # select fields df = df.loc[:, ['x', 'y', 'z', 'date', 'probability', 'prediction']].copy() # transform fields for location, step in [('x', step_x), ('y', step_y), ('z', step_z)]: df[location + '_min'] = df[location] - int(step / 2) df[location + '_max'] = df[location] + int(step / 2) df.loc[df['z_min'] < 0, 'z_min'] = 0 if 'D' in time_frequency: time_frequency_value = int(time_frequency.replace('D', '')) df['date_min'] = df['date'] + timedelta(days = early_warning_number * time_frequency_value) df['date_max'] = df['date_min'] + timedelta(days = range_warning_number * time_frequency_value) elif 'W' in time_frequency: time_frequency_value = int(time_frequency.replace('W', '')) df['date_min'] = df['date'] + timedelta(weeks = early_warning_number * time_frequency_value) df['date_max'] = df['date_min'] + timedelta(weeks = range_warning_number * time_frequency_value) else: print(f'Unknown timefrequency: {str(time_frequency)}') df['timestamp'] = datetime.now() df['offset'] = date_offset # check if is to predict dataframe_date = df['date'].iloc[0] this_day = datetime.now().date() print(f'date: {str(dataframe_date)} - today: {str(this_day)}') if dataframe_date >= this_day: df = df.drop(['x', 'y', 'z', 'date'], axis = 1) # print info number_predictions = 
df['prediction'].sum() print(f'Prediction true: {number_predictions:,d}') ml.print_memory(df) # dump df for next component joblib.dump(df, predictions_data_path) return str(date_offset) + '_' + datetime.now().astimezone().strftime('%Y-%m-%dT%H-%M-%S-%Z') else: return '' make_predictions_op = comp.create_component_from_func( make_predictions, base_image = 'path-to-artifact-registry/zucaml:basic', packages_to_install = ['google-cloud-storage', 'gcsfs', 'fsspec'], ) def calculate_metrics( dummy_input: str, main_bucket_name: str, silver_location: str, predictions_location: str, metrics_location: str, event_reference: float, min_latitude: int, max_latitude: int, min_longitude: int, max_longitude: int, ): from os import listdir from datetime import datetime, timedelta import pandas as pd import numpy as np from google.cloud import storage # read predictions storage_client = storage.Client() onlyfiles = [f.name for f in storage_client.list_blobs(main_bucket_name, prefix = predictions_location) if f.name.endswith('.csv')] dfs = [] for f in onlyfiles: dfs.append(pd.read_csv('gs://' + main_bucket_name + '/' + f)) predictions = pd.concat(dfs) predictions['date_min'] = pd.to_datetime(predictions['date_min']) predictions['date_max'] = pd.to_datetime(predictions['date_max']) first_prediction = predictions['date_min'].min() # filter predictions predictions = predictions.loc[predictions['prediction'] == 1].copy().reset_index().drop('index', axis = 1) # id prediction predictions = predictions.reset_index().rename({'index': 'id_pred'}, axis = 1) total_predictions = predictions[predictions['date_min'] <= datetime.now()]['id_pred'].nunique() # read events events = pd.read_csv('gs://' + main_bucket_name + '/' + silver_location + 'silver.csv') events['date'] = pd.to_datetime(events['date']) assert(events['id'].nunique() == len(events)) # filter events study = events['mag'] >= event_reference study = study & (events['longitude'] >= min_longitude) study = study & (events['longitude'] <= 
max_longitude) study = study & (events['latitude'] >= min_latitude) study = study & (events['latitude'] <= max_latitude) events = events.loc[(study) & (events['date'] >= first_prediction)].copy().reset_index().drop('index', axis = 1) for dimension in ['longitude', 'latitude', 'depth']: events[dimension] = events[dimension].round() # get tp, fp and fn events_values = events['date'].values predictions_min = predictions['date_min'].values predictions_max = predictions['date_max'].values i, j = np.where((events_values[:, None] >= predictions_min) & (events_values[:, None] <= predictions_max)) joined = pd.DataFrame( np.column_stack([events.values[i], predictions.values[j]]), columns = events.columns.append(predictions.columns) ) joined['keep'] = True joined['keep'] = joined['keep'] & (joined['longitude'] >= joined['x_min']) joined['keep'] = joined['keep'] & (joined['longitude'] <= joined['x_max']) joined['keep'] = joined['keep'] & (joined['latitude'] >= joined['y_min']) joined['keep'] = joined['keep'] & (joined['latitude'] <= joined['y_max']) joined['keep'] = joined['keep'] & (joined['depth'] >= joined['z_min']) joined['keep'] = joined['keep'] & (joined['depth'] <= joined['z_max']) joined = joined.loc[joined['keep']].copy().reset_index().drop('index', axis = 1) events['predicted'] = 'Missed' events.loc[events['id'].isin(joined['id']), 'predicted'] = 'Predicted' predictions['correct'] = 'False alarm' predictions.loc[predictions['id_pred'].isin(joined['id_pred']), 'correct'] = 'Correct' predictions.loc[(predictions['date_max'] > datetime.now() - timedelta(days = 1)) & (predictions['correct'] != 'Correct'), 'correct'] = '' # get metrics number_earthquakes_predicted = sum(events['predicted'] == 'Predicted') number_earthquakes_missed = sum(events['predicted'] == 'Missed') number_predictions_correct = sum(predictions['correct'] == 'Correct') number_predictions_false = sum(predictions['correct'] == 'False alarm') epsilon = np.finfo(float).eps precision = 
number_predictions_correct / (number_predictions_correct + number_predictions_false + epsilon) recall = number_earthquakes_predicted / (number_earthquakes_predicted + number_earthquakes_missed + epsilon) beta = 0.5 f05 = (1.0 + beta ** 2) * (precision * recall) / ((beta ** 2 * precision) + recall + epsilon) beta = 1.0 f1 = (1.0 + beta ** 2) * (precision * recall) / ((beta ** 2 * precision) + recall + epsilon) metrics = pd.DataFrame({ 'Predicted': [number_earthquakes_predicted], 'Missed': [number_earthquakes_missed], 'Correct': [number_predictions_correct], 'False alarm': [number_predictions_false], 'Precision': [precision], 'Recall': [recall], 'F0.5': [f05], 'F1': [f1] }) # dump files events.to_csv('gs://' + main_bucket_name + '/' + metrics_location + 'events.csv', index = False) predictions.to_csv('gs://' + main_bucket_name + '/' + metrics_location + 'predictions.csv', index = False) metrics.to_csv('gs://' + main_bucket_name + '/' + metrics_location + 'metrics.csv', index = False) calculate_metrics_op = comp.create_component_from_func( calculate_metrics, base_image = 'path-to-artifact-registry/python-pandas:basic', packages_to_install = ['google-cloud-storage'], ) @kfp.dsl.pipeline( name = pipeline_name, pipeline_root = pipeline_root_path ) def pipeline( is_training: str, maint_bucket: str, problem_statement_file: str, date_offset: int, raw_location: str, silver_location: str, gold_location: str, artifact_location: str, predictions_location: str, metrics_location: str, ): raw_data = get_data_from_url_op( url = 'https://earthquake.usgs.gov/fdsnws/event/1/query', delta_days = 3, ) with dsl.Condition(raw_data.outputs['Output'] != '', name = 'Download ok'): save_data_to_gcp_op( file_to_save = raw_data.outputs['downloaded_data'], file_name = raw_data.outputs['Output'], bucket_name = maint_bucket, bucket_folder = raw_location, ) cleaned_data = clean_data_op( raw_data.outputs['downloaded_data'], ) merged_data = merge_new_data_op( new_data = 
cleaned_data.outputs['cleaned_data'], bucket_name = maint_bucket, file_path = silver_location, file_name = 'silver.csv', ) with dsl.Condition(merged_data.outputs['Output'] == 'True', name = 'New info'): save_df_to_gcp_op( file_to_save = merged_data.outputs['silver_data'], file_name = 'silver', suffix = '', bucket_name = maint_bucket, bucket_folder = silver_location, ) problem_statement_config = get_problem_statement_op( bucket_name = maint_bucket, source_file = problem_statement_file, ) filtered_data = filter_and_preprocess_op( silver = merged_data.outputs['silver_data'], event_reference = problem_statement_config.outputs['event_reference'], min_latitude = problem_statement_config.outputs['min_latitude'], max_latitude = problem_statement_config.outputs['max_latitude'], min_longitude = problem_statement_config.outputs['min_longitude'], max_longitude = problem_statement_config.outputs['max_longitude'], time_cut = problem_statement_config.outputs['time_cut'], ) grid_data = reindex_df_op( filtered = filtered_data.outputs['filtered'], degrees_latitude_grid = problem_statement_config.outputs['degrees_latitude_grid'], km_depth_grid = problem_statement_config.outputs['km_depth_grid'], pad_df = problem_statement_config.outputs['pad_df'], main_id = problem_statement_config.outputs['main_id'], time_ref = problem_statement_config.outputs['time_ref'], time_frequency = problem_statement_config.outputs['time_frequency'], date_offset = date_offset, is_training = is_training, ) full_data = get_neighbours_df_op( grid = grid_data.outputs['grid'], x_degre_km = grid_data.outputs['x_degre_km'], y_degre_km = grid_data.outputs['y_degre_km'], dx = grid_data.outputs['dx'], dy = grid_data.outputs['dy'], range_x_min = grid_data.outputs['range_x_min'], range_x_max = grid_data.outputs['range_x_max'], range_x_step = grid_data.outputs['range_x_step'], range_y_min = grid_data.outputs['range_y_min'], range_y_max = grid_data.outputs['range_y_max'], range_y_step = grid_data.outputs['range_y_step'], 
range_z_min = grid_data.outputs['range_z_min'], range_z_max = grid_data.outputs['range_z_max'], range_z_step = grid_data.outputs['range_z_step'], ) ps_data = set_problem_statement_op( full = full_data.outputs['full'], main_id = problem_statement_config.outputs['main_id'], time_ref = problem_statement_config.outputs['time_ref'], target_raw = problem_statement_config.outputs['target_raw'], early_warning_number = problem_statement_config.outputs['early_warning_number'], range_warning_number = problem_statement_config.outputs['range_warning_number'], is_training = is_training, ) gold_data = feature_engineering_op( ps = ps_data.outputs['ps'], main_id = problem_statement_config.outputs['main_id'], time_ref = problem_statement_config.outputs['time_ref'], target_raw = problem_statement_config.outputs['target_raw'], ) save_df_to_gcp_op( file_to_save = gold_data.outputs['gold_data'], file_name = 'gold', suffix = str(date_offset), bucket_name = maint_bucket, bucket_folder = gold_location, ) predictions = make_predictions_op( gold_data = gold_data.outputs['gold_data'], bucket_name = maint_bucket, artifact_folder = artifact_location, step_x = grid_data.outputs['range_x_step'], step_y = grid_data.outputs['range_y_step'], step_z = grid_data.outputs['range_z_step'], time_frequency = problem_statement_config.outputs['time_frequency'], early_warning_number = problem_statement_config.outputs['early_warning_number'], range_warning_number = problem_statement_config.outputs['range_warning_number'], date_offset = date_offset, ) with dsl.Condition(predictions.outputs['Output'] != '', name = 'New prediction'): df_saved = save_df_to_gcp_op( file_to_save = predictions.outputs['predictions_data'], file_name = 'predictions', suffix = predictions.outputs['Output'], bucket_name = maint_bucket, bucket_folder = predictions_location, ) calculate_metrics_op( dummy_input = df_saved.outputs['Output'], main_bucket_name = maint_bucket, silver_location = silver_location, predictions_location = 
predictions_location, metrics_location = metrics_location, event_reference = problem_statement_config.outputs['event_reference'], min_latitude = problem_statement_config.outputs['min_latitude'], max_latitude = problem_statement_config.outputs['max_latitude'], min_longitude = problem_statement_config.outputs['min_longitude'], max_longitude = problem_statement_config.outputs['max_longitude'], ) return compiler.Compiler().compile( pipeline_func = pipeline, package_path = pipeline_name.replace('-', '_') + '.json' ) save_pipeline(pipeline_name.replace('-', '_') + '.json', main_bucket_name, 'pipelines/json/') # from datetime import datetime # api_client = AIPlatformClient( # project_id = project_id, # region = region # ) # parameter_values = { # 'is_training': 'False', # 'maint_bucket': main_bucket_name, # 'problem_statement_file': problem_statement_file, # 'date_offset': 5, # 'raw_location': raw_location, # 'silver_location': silver_location, # 'gold_location': gold_location, # 'artifact_location': artifact_location, # 'predictions_location': predictions_location, # 'metrics_location': metrics_location, # } # run_time = datetime.now().strftime('%Y%m%d%H%m%S%f') # api_client.create_run_from_job_spec( # job_spec_path = pipeline_root_path + 'json/' + pipeline_name.replace('-', '_') + '.json', # job_id = pipeline_name.replace('-', '') + '{0}'.format(run_time), # pipeline_root = pipeline_root_path, # enable_caching = False, # parameter_values = parameter_values # ) # def local_get_data_from_url(url, delta_days, end_date): # import requests # from datetime import datetime, timedelta # import pandas as pd # # get resquest params # if end_date is None: # end_range = datetime.now() + timedelta(days = 1) # else: # end_range = end_date - timedelta(days = 1) # begin_range = end_range - timedelta(days = delta_days) # end_range = end_range.strftime('%Y-%m-%d') # begin_range = begin_range.strftime('%Y-%m-%d') # query = {'format': 'csv', 'starttime': begin_range, 'endtime': end_range} 
# # get time # request_name = begin_range + '_' + end_range + '_.csv' # response_content = None # # make request # try: # with requests.get(url, params = query) as response: # response.raise_for_status() # response_content = response.content # except requests.exceptions.Timeout: # print('Timeout Exception') # return '' # except requests.exceptions.TooManyRedirects: # print('Too Many Redirects Exception') # return '' # except requests.exceptions.HTTPError: # print(query) # print('Http Exception') # return '' # except requests.exceptions.ConnectionError: # print('Error Connecting') # return '' # except requests.exceptions.RequestException: # print('Request Exception') # return '' # except: # print('Error requesting file') # return '' # # dump for next component # with open('./temp/' + request_name, 'w') as text_file: # text_file.write(response_content.decode('utf-8')) # df = pd.read_csv('./temp/' + request_name) # df['date'] = pd.to_datetime(df['time'].str.split('T').str[0], format = '%Y-%m-%d', exact = True) # assert((df['date'].max() - df['date'].min()).days + 1 == delta_days == df['date'].nunique()) # return df['date'].min() # min_date = None # for i in range(25): # min_date = local_get_data_from_url('https://earthquake.usgs.gov/fdsnws/event/1/query', 29, min_date) # from os import listdir # from os.path import isfile, join # import pandas as pd # onlyfiles = [f for f in listdir('./temp/') if f.endswith('.csv')] # dfs = [] # for f in onlyfiles: # dfs.append(pd.read_csv('./temp/' + f)) # df = pd.concat(dfs) # assert(df['id'].nunique() == len(df)) # def local_clean_data(df): # import joblib # import pandas as pd # # select info to be used # df = df.loc[:, ['time', 'id', 'latitude', 'longitude', 'depth', 'mag']].copy() # # weird values # z_0 = df['depth'] < 0 # print(f'Depth above sup: {sum(z_0):,d} ({sum(z_0) / len(df):.2%})') # df.loc[z_0, 'depth'] = 0 # # data parsing # date_col = 'time' # datetimes = df[date_col].str.split('T', expand = True) # dates = 
pd.to_datetime(datetimes.loc[:, 0], format = '%Y-%m-%d') # df = pd.concat((df, dates.rename('date')), axis = 1) # df = df.drop(date_col, axis = 1) # # drop NA # len_before = len(df) # df = df.dropna() # len_after = len(df) # dropped_events = len_before - len_after # if (dropped_events) == 0: # print('No dropped events') # else: # print(f'Dropped events: {dropped_events:,d} ({dropped_events / len_before:.2%})') # return df # df = local_clean_data(df) # df.to_csv('silver.csv', index = False) # df[:5] ```
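The epsilon-guarded precision/recall/F-beta arithmetic used in the `calculate_metrics` component above can be checked in isolation. A small sketch — the helper names `fbeta` and `confusion_to_metrics` are mine, not part of the pipeline — mirroring the component's formulas:

```python
import numpy as np

# Hypothetical helpers (names are mine) isolating the metric arithmetic
# used in the calculate_metrics component above.
def fbeta(precision, recall, beta):
    # Machine epsilon guards against division by zero, as in the component.
    epsilon = np.finfo(float).eps
    return (1.0 + beta ** 2) * (precision * recall) / ((beta ** 2 * precision) + recall + epsilon)

def confusion_to_metrics(predicted, missed, correct, false_alarm):
    epsilon = np.finfo(float).eps
    precision = correct / (correct + false_alarm + epsilon)
    recall = predicted / (predicted + missed + epsilon)
    return precision, recall, fbeta(precision, recall, 0.5), fbeta(precision, recall, 1.0)

# Example: 8 earthquakes predicted, 2 missed; 8 correct alerts, 4 false alarms.
precision, recall, f05, f1 = confusion_to_metrics(8, 2, 8, 4)
print(f'precision={precision:.3f} recall={recall:.3f} f0.5={f05:.3f} f1={f1:.3f}')
```

With no events and no alerts at all, every value degrades gracefully to 0 instead of raising `ZeroDivisionError`, which is the point of adding epsilon to each denominator.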
# 1.Loading libraries and Dataset ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns import warnings warnings.filterwarnings('ignore') from sklearn.model_selection import KFold from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LinearRegression from sklearn.linear_model import Lasso from sklearn.linear_model import Ridge from sklearn.ensemble import RandomForestRegressor from sklearn.linear_model import ElasticNet from sklearn import metrics from sklearn.metrics import mean_squared_error from sklearn.model_selection import cross_val_score from scipy import stats #Reading Dataset df = pd.read_csv('../input/nyc-rolling-sales.csv') # Little peek into the dataset df.head() #Dropping column as it is empty del df['EASE-MENT'] #Dropping as it looks like an iterator del df['Unnamed: 0'] del df['SALE DATE'] #Checking for duplicated entries sum(df.duplicated(df.columns)) #Delete the duplicates and check that it worked df = df.drop_duplicates(df.columns, keep='last') sum(df.duplicated(df.columns)) ``` # 2.Data Inspection & Visualization ``` #shape of dataset df.shape #Description of every column df.info() #Let's convert some of the columns to appropriate datatype df['TAX CLASS AT TIME OF SALE'] = df['TAX CLASS AT TIME OF SALE'].astype('category') df['TAX CLASS AT PRESENT'] = df['TAX CLASS AT PRESENT'].astype('category') df['LAND SQUARE FEET'] = pd.to_numeric(df['LAND SQUARE FEET'], errors='coerce') df['GROSS SQUARE FEET']= pd.to_numeric(df['GROSS SQUARE FEET'], errors='coerce') #df['SALE DATE'] = pd.to_datetime(df['SALE DATE'], errors='coerce') df['SALE PRICE'] = pd.to_numeric(df['SALE PRICE'], errors='coerce') df['BOROUGH'] = df['BOROUGH'].astype('category') #checking missing values df.columns[df.isnull().any()] miss=df.isnull().sum()/len(df) miss=miss[miss>0] 
miss.sort_values(inplace=True) miss miss=miss.to_frame() miss.columns=['count'] miss.index.names=['Name'] miss['Name']=miss.index miss #plot the missing values sns.set(style='whitegrid',color_codes=True) sns.barplot(x='Name', y='count',data=miss) plt.xticks(rotation=90) ``` There are many missing values in the columns: * LAND SQUARE FEET * GROSS SQUARE FEET * SALE PRICE We can drop the rows with missing values or we can fill them up with their mean, median or any other relation. For the time being, let's fill these up with mean values.<br> Further, we will try to predict the value of SALE PRICE as test data. ``` # For the time being, let's fill these up with mean values. df['LAND SQUARE FEET']=df['LAND SQUARE FEET'].fillna(df['LAND SQUARE FEET'].mean()) df['GROSS SQUARE FEET']=df['GROSS SQUARE FEET'].fillna(df['GROSS SQUARE FEET'].mean()) # Splitting dataset test=df[df['SALE PRICE'].isna()] data=df[~df['SALE PRICE'].isna()] test = test.drop(columns='SALE PRICE') # Print first 5 rows of test print(test.shape) test.head() # Print first rows of our data print(data.shape) data.head(10) # correlation between the features corr = data.corr() sns.heatmap(corr) ``` The last row represents the correlation of the different features with SALE PRICE ``` # numeric correlation corr['SALE PRICE'].sort_values(ascending=False) numeric_data=data.select_dtypes(include=[np.number]) numeric_data.describe() ``` **SALE PRICE** ``` plt.figure(figsize=(15,6)) sns.boxplot(x='SALE PRICE', data=data) plt.ticklabel_format(style='plain', axis='x') plt.title('Boxplot of SALE PRICE in USD') plt.show() sns.distplot(data['SALE PRICE']) # Remove observations that fall outside those caps data = data[(data['SALE PRICE'] > 100000) & (data['SALE PRICE'] < 5000000)] ``` Let's check again ``` sns.distplot(data['SALE PRICE']) # skewness of SALE PRICE data['SALE PRICE'].skew() ``` SALE PRICE is highly right skewed, so we will log transform it so that it gives better results. 
``` sales=np.log(data['SALE PRICE']) print(sales.skew()) sns.distplot(sales) ``` Well now we can see the symmetry and thus it is normalised. **Let's Visualize Numerical data** **SQUARE FEET** ``` plt.figure(figsize=(10,6)) sns.boxplot(x='GROSS SQUARE FEET', data=data,showfliers=False) plt.figure(figsize=(10,6)) sns.boxplot(x='LAND SQUARE FEET', data=data,showfliers=False) data = data[data['GROSS SQUARE FEET'] < 10000] data = data[data['LAND SQUARE FEET'] < 10000] plt.figure(figsize=(10,6)) sns.regplot(x='GROSS SQUARE FEET', y='SALE PRICE', data=data, fit_reg=False, scatter_kws={'alpha':0.3}) plt.figure(figsize=(10,6)) sns.regplot(x='LAND SQUARE FEET', y='SALE PRICE', data=data, fit_reg=False, scatter_kws={'alpha':0.3}) ``` **Total Units, Commercial Units, Residential Units** ``` data[["TOTAL UNITS", "SALE PRICE"]].groupby(['TOTAL UNITS'], as_index=False).count().sort_values(by='SALE PRICE', ascending=False) ``` Removing rows with TOTAL UNITS == 0 and one outlier with 2261 units ``` data = data[(data['TOTAL UNITS'] > 0) & (data['TOTAL UNITS'] != 2261)] plt.figure(figsize=(10,6)) sns.boxplot(x='TOTAL UNITS', y='SALE PRICE', data=data) plt.title('Total Units vs Sale Price') plt.show() plt.figure(figsize=(10,6)) sns.boxplot(x='COMMERCIAL UNITS', y='SALE PRICE', data=data) plt.title('Commercial Units vs Sale Price') plt.show() plt.figure(figsize=(10,6)) sns.boxplot(x='RESIDENTIAL UNITS', y='SALE PRICE', data=data) plt.title('Residential Units vs Sale Price') plt.show() ``` **Let's Visualize categorical data** ``` cat_data=data.select_dtypes(exclude=[np.number]) cat_data.describe() ``` **TAX CLASS AT PRESENT** ``` # Starting with TAX CLASS AT PRESENT data['TAX CLASS AT PRESENT'].unique() pivot=data.pivot_table(index='TAX CLASS AT PRESENT', values='SALE PRICE', aggfunc=np.median) pivot pivot.plot(kind='bar', color='black') ``` **TAX CLASS AT TIME OF SALE** ``` # TAX CLASS AT TIME OF SALE data['TAX CLASS AT TIME OF SALE'].unique() pivot=data.pivot_table(index='TAX CLASS AT 
TIME OF SALE', values='SALE PRICE', aggfunc=np.median) pivot pivot.plot(kind='bar', color='red') ``` **BOROUGH** ``` # BOROUGH data['BOROUGH'].unique() pivot=data.pivot_table(index='BOROUGH', values='SALE PRICE', aggfunc=np.median) pivot pivot.plot(kind='bar', color='blue') ``` ***The highest median sale price occurs in BOROUGH == 1, i.e. Manhattan.*** **BUILDING CLASS CATEGORY** ``` # BUILDING CLASS CATEGORY print(data['BUILDING CLASS CATEGORY'].nunique()) pivot=data.pivot_table(index='BUILDING CLASS CATEGORY', values='SALE PRICE', aggfunc=np.median) pivot pivot.plot(kind='bar', color='Green') ``` # 3. Data Pre-Processing **Let's see our dataset again** ``` del data['ADDRESS'] del data['APARTMENT NUMBER'] data.info() ``` **Normalising and Transforming Numerical Columns** ``` numeric_data.columns # transform the numeric features using log(x + 1) from scipy.stats import skew skewed = data[numeric_data.columns].apply(lambda x: skew(x.dropna().astype(float))) skewed = skewed[skewed > 0.75] skewed = skewed.index data[skewed] = np.log1p(data[skewed]) scaler = StandardScaler() scaler.fit(data[numeric_data.columns]) scaled = scaler.transform(data[numeric_data.columns]) for i, col in enumerate(numeric_data.columns): data[col] = scaled[:,i] data.head() # Dropping a few columns del data['BUILDING CLASS AT PRESENT'] del data['BUILDING CLASS AT TIME OF SALE'] del data['NEIGHBORHOOD'] ``` **One-hot encoding categorical columns** ``` # Select the variables to be one-hot encoded one_hot_features = ['BOROUGH', 'BUILDING CLASS CATEGORY','TAX CLASS AT PRESENT','TAX CLASS AT TIME OF SALE'] # Convert categorical variables into dummy/indicator variables (i.e. one-hot encoding). 
one_hot_encoded = pd.get_dummies(data[one_hot_features]) one_hot_encoded.info(verbose=True, memory_usage=True, null_counts=True) # Replacing categorical columns with dummies fdf = data.drop(one_hot_features, axis=1) fdf = pd.concat([fdf, one_hot_encoded], axis=1) fdf.info() ``` ## Train/Test Split ``` Y_fdf = fdf['SALE PRICE'] X_fdf = fdf.drop('SALE PRICE', axis=1) X_fdf.shape, Y_fdf.shape X_train, X_test, Y_train, Y_test = train_test_split(X_fdf, Y_fdf, test_size=0.3, random_state=34) # Training set X_train.shape, Y_train.shape # Testing set X_test.shape, Y_test.shape ``` # 4. Modelling ``` # RMSE def rmse(y_test, y_pred): return np.sqrt(mean_squared_error(y_test, y_pred)) ``` ### 4.1 Linear Regression ``` linreg = LinearRegression() linreg.fit(X_train, Y_train) Y_pred_lin = linreg.predict(X_test) rmse(Y_test, Y_pred_lin) ``` ### 4.2 Lasso Regression ``` alpha = 0.00099 lasso_regr = Lasso(alpha=alpha, max_iter=50000) lasso_regr.fit(X_train, Y_train) Y_pred_lasso = lasso_regr.predict(X_test) rmse(Y_test, Y_pred_lasso) ``` ### 4.3 Ridge Regression ``` ridge = Ridge(alpha=0.01, normalize=True) ridge.fit(X_train, Y_train) Y_pred_ridge = ridge.predict(X_test) rmse(Y_test, Y_pred_ridge) ``` ### 4.4 RandomForest Regressor ``` rf_regr = RandomForestRegressor() rf_regr.fit(X_train, Y_train) Y_pred_rf = rf_regr.predict(X_test) rmse(Y_test, Y_pred_rf) ``` # 5. Conclusion **We can see that the Random Forest Regressor works best for this dataset, with an RMSE score of 0.588**
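A note on the comparison above: `KFold` and `cross_val_score` are imported at the top of the notebook but never used, and a single 70/30 split can make the RMSE ranking unstable. A sketch of a cross-validated comparison — shown here on synthetic data from `make_regression` (my substitution, so the cell runs standalone); in the notebook you would pass `X_fdf` and `Y_fdf` instead:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in for X_fdf / Y_fdf so this sketch runs on its own.
X, y = make_regression(n_samples=300, n_features=12, noise=15.0, random_state=34)

cv = KFold(n_splits=5, shuffle=True, random_state=34)
for name, model in [('Linear', LinearRegression()), ('Ridge', Ridge(alpha=0.01))]:
    # The scorer returns negated MSE, so negate it back before the square root.
    scores = cross_val_score(model, X, y, cv=cv, scoring='neg_mean_squared_error')
    fold_rmse = np.sqrt(-scores)
    print(f'{name}: RMSE = {fold_rmse.mean():.3f} +/- {fold_rmse.std():.3f}')
```

Reporting the mean and standard deviation across folds shows whether the gap between two models is larger than the split-to-split noise.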
github_jupyter
<img align="center" style="max-width: 1000px" src="banner.png"> <img align="right" style="max-width: 200px; height: auto" src="hsg_logo.png"> ## Lab 03 - "Supervised Machine Learning" Assignments GSERM'21 course "Deep Learning: Fundamentals and Applications", University of St. Gallen The lab environment of the "Deep Learning: Fundamentals and Applications" GSERM course at the University of St. Gallen (HSG) is based on Jupyter Notebooks (https://jupyter.org), which allow us to perform a variety of statistical evaluations and data analyses. In the last lab we learned how to implement, train, and apply our first **Machine Learning** models, namely the Gaussian **Naive-Bayes (NB)** and the **Support Vector Machine (SVM)** classifiers. In this lab, we aim to leverage that knowledge by applying it to a set of self-coding assignments. As always, please don't hesitate to ask all your questions either during the lab, post them in our CANVAS (StudyNet) forum (https://learning.unisg.ch), or send us an email (using the course email). ## 1. Assignment Objectives: Similar to today's lab session, after today's self-coding assignments you should be able to: > 1. Know how to set up a **notebook or "pipeline"** that solves a simple supervised classification task. > 2. Recognize the **data elements** needed to train and evaluate a supervised machine learning classifier. > 3. Understand how a Gaussian **Naive-Bayes (NB)** classifier can be trained and evaluated. > 4. Understand how a **Support Vector Machine (SVM)** classifier can be trained and evaluated. > 5. Train and evaluate **machine learning models** using Python's `scikit-learn` library. > 6. Understand how to **evaluate** and **interpret** the classification results. Before we start let's watch a motivational video: ``` from IPython.display import YouTubeVideo # OpenAI: "Solving Rubik's Cube with a Robot Hand" # YouTubeVideo('x4O8pojMF0w', width=800, height=600) ``` ## 2.
Setup of the Analysis Environment Similarly to the previous labs, we need to import a couple of Python libraries that allow for data analysis and data visualization. In this lab we will use the `Pandas`, `Numpy`, `Scikit-Learn`, `Matplotlib` and the `Seaborn` library. Let's import the libraries by executing the statements below: ``` # import the numpy, scipy and pandas data science library import pandas as pd import numpy as np import scipy as sp from scipy.stats import norm # import sklearn data and data pre-processing libraries from sklearn import datasets from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split # import sklearn naive.bayes classifier library from sklearn.naive_bayes import GaussianNB # import sklearn support vector classifier (svc) library from sklearn.svm import SVC # import sklearn classification evaluation library from sklearn import metrics from sklearn.metrics import classification_report, confusion_matrix # import matplotlib data visualization library import matplotlib.pyplot as plt import seaborn as sns ``` Enable inline Jupyter notebook plotting: ``` %matplotlib inline ``` Ignore potential library warnings: ``` import warnings warnings.filterwarnings('ignore') ``` Use the 'Seaborn' plotting style in all subsequent visualizations: ``` plt.style.use('seaborn') ``` Set the random seed of all our experiments - this ensures reproducibility. ``` random_seed = 42 ``` ## 3. Data Download, Assessment and Pre-processing ### 3.1 Dataset Download and Data Assessment The **Iris Dataset** is a classic and straightforward dataset often used as a "Hello World" example in multi-class classification. This data set consists of measurements taken from three different types of iris flowers (referred to as **Classes**), namely the Iris Setosa, the Iris Versicolour and the Iris Virginica, and their respective measured petal and sepal dimensions (referred to as **Features**).
<img align="center" style="max-width: 700px; height: auto" src="iris_dataset.png"> (Source: http://www.lac.inpe.br/~rafael.santos/Docs/R/CAP394/WholeStory-Iris.html) In total, the dataset consists of **150 samples** (50 samples taken per class) as well as their corresponding **4 different measurements** taken for each sample. Please find below the list of the individual measurements: >- `Sepal length (cm)` >- `Sepal width (cm)` >- `Petal length (cm)` >- `Petal width (cm)` Further details of the dataset can be obtained from the following publication: *Fisher, R.A. "The use of multiple measurements in taxonomic problems" Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to Mathematical Statistics" (John Wiley, NY, 1950)."* Let's load the dataset and conduct a preliminary data assessment: ``` iris = datasets.load_iris() ``` Print and inspect the names of the four features contained in the dataset: ``` iris.feature_names ``` Determine and print the feature dimensionality of the dataset: ``` iris.data.shape ``` Determine and print the class label dimensionality of the dataset: ``` iris.target.shape ``` Print and inspect the names of the three classes contained in the dataset: ``` iris.target_names ``` Let's briefly envision how the feature information of the dataset is collected and presented in the data: <img align="center" style="max-width: 900px; height: auto" src="feature_collection.png"> Let's inspect the top five feature rows of the Iris Dataset: ``` pd.DataFrame(iris.data, columns=iris.feature_names).head(5) ``` Let's also inspect the top five class labels of the Iris Dataset: ``` pd.DataFrame(iris.target, columns=["class"]).head(5) ``` Let's now conduct a more in-depth data assessment. Therefore, we plot the feature distributions of the Iris dataset according to their respective class memberships as well as the features' pairwise relationships. Please note that we use Python's **Seaborn** library to create such a plot, referred to as a **Pairplot**.
The Seaborn library is a powerful data visualization library based on Matplotlib. It provides a great interface for drawing informative statistical graphics (https://seaborn.pydata.org). ``` # init the plot plt.figure(figsize=(10, 10)) # load the dataset also available in seaborn iris_plot = sns.load_dataset("iris") # plot a pairplot of the distinct feature distributions sns.pairplot(iris_plot, diag_kind='hist', hue='species'); ``` It can be observed from the created Pairplot that most of the feature measurements that correspond to flower class "setosa" exhibit a nice **linear separability** from the feature measurements of the remaining flower classes. In addition, the flower classes "versicolor" and "virginica" exhibit a commingled and **non-linear separability** across all the measured feature distributions of the Iris Dataset. ### 3.2 Dataset Pre-processing To understand and evaluate the performance of any trained **supervised machine learning** model, it is good practice to divide the dataset into a **training set** (the fraction of data records solely used for training purposes) and an **evaluation set** (the fraction of data records solely used for evaluation purposes). Please note that the **evaluation set** will never be shown to the model as part of the training process.
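Returning briefly to the Pairplot above: the linear separability of "setosa" can also be verified numerically on a single feature — a minimal sketch:

```python
from sklearn import datasets

iris = datasets.load_iris()
petal_length = iris.data[:, 2]  # third column: petal length (cm)

setosa_max = petal_length[iris.target == 0].max()
others_min = petal_length[iris.target != 0].min()

# Every setosa petal length lies strictly below every
# versicolor/virginica petal length, so one threshold separates them
print(setosa_max, others_min)
```

Any threshold between the two printed values separates "setosa" perfectly, which is exactly the linear separability visible in the Pairplot.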
<img align="center" style="max-width: 500px; height: auto" src="train_eval_dataset.png"> We set the fraction of evaluation records to **30%** of the original dataset: ``` eval_fraction = 0.3 ``` Randomly split the dataset into training set and evaluation set using sklearn's `train_test_split` function: ``` # 70% training and 30% evaluation x_train, x_eval, y_train, y_eval = train_test_split(iris.data, iris.target, test_size=eval_fraction, random_state=random_seed, stratify=None) ``` Evaluate the dimensionality of the training dataset $x^{train}$: ``` x_train.shape, y_train.shape ``` Evaluate the dimensionality of the evaluation dataset $x^{eval}$: ``` x_eval.shape, y_eval.shape ``` ## 4. Gaussian "Naive-Bayes" (NB) Classification Assignments We recommend you to try the following exercises as part of the lab: **1. Train and evaluate the prediction accuracy of different train- vs. eval-data ratios.** > Change the ratio of training data vs. evaluation data to 30%/70% (currently 70%/30%), fit your model and calculate the new classification accuracy. Subsequently, repeat the experiment a second time using a 10%/90% fraction of training data/evaluation data. What can be observed in both experiments in terms of classification accuracy? ``` # *************************************************** # INSERT YOUR CODE HERE # *************************************************** ``` **2. Calculate the true-positive as well as the false-positive rate of the Iris versicolor vs. virginica.** > Calculate the true-positive rate as well as false-positive rate of (1) the experiment exhibiting a 30%/70% ratio of training data vs. evaluation data and (2) the experiment exhibiting a 10%/90% ratio of training data vs. evaluation data. ``` # *************************************************** # INSERT YOUR CODE HERE # *************************************************** ``` ## 5. 
Support Vector Machine (SVM) Classification Assignments We recommend you to try the following exercises as part of the lab: **1. Train and evaluate the prediction accuracy of SVM models trained with different hyperparameters.** > Change the kernel function $\phi$ of the SVM to a polynomial kernel, fit your model and calculate the new classification accuracy on the IRIS dataset. Subsequently, repeat similar experiments with different SVM hyperparameter setups by changing the values of $C$, $\gamma$ and the kernel function $\phi$. What pattern can be observed for the distinct hyperparameter setups in terms of classification accuracy? ``` # *************************************************** # INSERT YOUR CODE HERE # *************************************************** ``` **2. Train and evaluate the prediction accuracy of SVM models using different or additional features.** > Fix the hyperparameters of the SVM and evaluate the classification accuracy on the FashionMNIST dataset using different features. For example, evaluate the prediction accuracy that can be derived based on a set of Scale-Invariant Feature Transform (SIFT) features. Or the combination of HOG and SIFT features. Will the consideration of additional features improve your classification accuracy? More information on the FashionMNIST dataset: visit Zalando research's [github page](https://github.com/zalandoresearch/fashion-mnist). ``` # *************************************************** # INSERT YOUR CODE HERE # *************************************************** ```
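A possible sketch for the first SVM assignment — sweeping a few kernels and values of $C$ and comparing evaluation accuracies (the exact numbers will depend on the split):

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

iris = datasets.load_iris()
x_train, x_eval, y_train, y_eval = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

# Collect evaluation accuracy for each (kernel, C) combination
scores = {}
for kernel in ['linear', 'poly', 'rbf']:
    for C in [0.1, 1.0, 10.0]:
        clf = SVC(kernel=kernel, C=C, gamma='scale')
        clf.fit(x_train, y_train)
        scores[(kernel, C)] = clf.score(x_eval, y_eval)

for setup, acc in sorted(scores.items()):
    print(setup, round(acc, 3))
```

The same loop structure extends naturally to sweeping explicit $\gamma$ values for the `rbf` and `poly` kernels.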
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-use-adla-as-compute-target.png) # AML Pipeline with AdlaStep This notebook is used to demonstrate the use of AdlaStep in AML Pipelines. [AdlaStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.adla_step.adlastep?view=azure-ml-py) is used to run U-SQL scripts using Azure Data Lake Analytics service. ## AML and Pipeline SDK-specific imports ``` import os from msrest.exceptions import HttpOperationError import azureml.core from azureml.exceptions import ComputeTargetException from azureml.core import Workspace, Experiment from azureml.core.compute import ComputeTarget, AdlaCompute from azureml.core.datastore import Datastore from azureml.data.data_reference import DataReference from azureml.pipeline.core import Pipeline, PipelineData from azureml.pipeline.steps import AdlaStep # Check core SDK version number print("SDK version:", azureml.core.VERSION) ``` ## Initialize Workspace Initialize a workspace object from persisted configuration. If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure the config file is present at .\config.json ``` ws = Workspace.from_config() print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n') ``` ## Attach ADLA account to workspace To submit jobs to Azure Data Lake Analytics service, you must first attach your ADLA account to the workspace. You'll need to provide the account name and resource group of ADLA account to complete this part. 
``` adla_compute_name = 'testadl' # Name to associate with new compute in workspace # ADLA account details needed to attach as compute to workspace adla_account_name = "<adla_account_name>" # Name of the Azure Data Lake Analytics account adla_resource_group = "<adla_resource_group>" # Name of the resource group which contains this account try: # check if already attached adla_compute = AdlaCompute(ws, adla_compute_name) except ComputeTargetException: print('attaching adla compute...') attach_config = AdlaCompute.attach_configuration(resource_group=adla_resource_group, account_name=adla_account_name) adla_compute = ComputeTarget.attach(ws, adla_compute_name, attach_config) adla_compute.wait_for_completion() print("Using ADLA compute:{}".format(adla_compute.cluster_resource_id)) print("Provisioning state:{}".format(adla_compute.provisioning_state)) print("Provisioning errors:{}".format(adla_compute.provisioning_errors)) ``` ## Register Data Lake Storage as Datastore To register Data Lake Storage as Datastore in workspace, you'll need account information like account name, resource group and subscription Id. > AdlaStep can only work with data stored in the **default** Data Lake Storage of the Data Lake Analytics account provided above. If the data you need to work with is in a non-default storage, you can use a DataTransferStep to copy the data before training. You can find the default storage by opening your Data Lake Analytics account in Azure portal and then navigating to 'Data sources' item under Settings in the left pane. ### Grant Azure AD application access to Data Lake Storage You'll also need to provide an Active Directory application which can access Data Lake Storage. [This document](https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory) contains step-by-step instructions on how to create an AAD application and assign to Data Lake Storage. 
A couple of important notes when assigning permissions to the AAD app: - Access should be provided at root folder level. - In the 'Assign permissions' pane, select Read, Write, and Execute permissions for 'This folder and all children'. Add as 'An access permission entry and a default permission entry' to make sure the application can access any new files created in the future. ``` datastore_name = 'MyAdlsDatastore' # Name to associate with data store in workspace # ADLS storage account details needed to register as a Datastore subscription_id = os.getenv("ADL_SUBSCRIPTION_62", "<my-subscription-id>") # subscription id of ADLS account resource_group = os.getenv("ADL_RESOURCE_GROUP_62", "<my-resource-group>") # resource group of ADLS account store_name = os.getenv("ADL_STORENAME_62", "<my-datastore-name>") # ADLS account name tenant_id = os.getenv("ADL_TENANT_62", "<my-tenant-id>") # tenant id of service principal client_id = os.getenv("ADL_CLIENTID_62", "<my-client-id>") # client id of service principal client_secret = os.getenv("ADL_CLIENT_62_SECRET", "<my-client-secret>") # the secret of service principal try: adls_datastore = Datastore.get(ws, datastore_name) print("found datastore with name: %s" % datastore_name) except HttpOperationError: adls_datastore = Datastore.register_azure_data_lake( workspace=ws, datastore_name=datastore_name, subscription_id=subscription_id, # subscription id of ADLS account resource_group=resource_group, # resource group of ADLS account store_name=store_name, # ADLS account name tenant_id=tenant_id, # tenant id of service principal client_id=client_id, # client id of service principal client_secret=client_secret) # the secret of service principal print("registered datastore with name: %s" % datastore_name) ``` ## Setup inputs and outputs For the purpose of this demo, we're going to execute a simple U-SQL script that reads a CSV file and writes a portion of its content to a new text file.
First, let's create our sample input, which contains 3 columns: employee Id, name and department Id. ``` # create a folder to store files for our job sample_folder = "adla_sample" if not os.path.isdir(sample_folder): os.mkdir(sample_folder) %%writefile $sample_folder/sample_input.csv 1, Noah, 100 3, Liam, 100 4, Emma, 100 5, Jacob, 100 7, Jennie, 100 ``` Upload this file to Data Lake Storage at location `adla_sample/sample_input.csv` and create a DataReference to refer to this file. ``` sample_input = DataReference( datastore=adls_datastore, data_reference_name="employee_data", path_on_datastore="adla_sample/sample_input.csv") ``` Create a PipelineData object to store output produced by the AdlaStep. ``` sample_output = PipelineData("sample_output", datastore=adls_datastore) ``` ## Write your U-SQL script Now let's write a U-SQL script that reads the above CSV file and writes the name column to a new file. Instead of hard-coding paths in your script, you can use `@@name@@` syntax to refer to inputs, outputs, and parameters. - If `name` is the name of an input or output port binding, any occurrences of `@@name@@` in the script are replaced with the actual data path of the corresponding port binding. - If `name` matches any key in the `params` dictionary, any occurrences of `@@name@@` will be replaced with the corresponding value in the dictionary. Note the use of the @@ syntax in the script below. Before submitting the job to the Data Lake Analytics service, `@@employee_data@@` will be replaced with the actual path of `sample_input.csv` in Data Lake Storage. Similarly, `@@sample_output@@` will be replaced with a path in Data Lake Storage which will be used to store intermediate output produced by the step.
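The `@@name@@` substitution that AdlaStep performs before submission can be mimicked with a small helper — an illustrative sketch of the mechanism, not the actual Azure ML implementation (the output path below is hypothetical):

```python
import re

def resolve_placeholders(script, bindings):
    # Replace each @@name@@ token with its bound value; unknown names are left as-is
    return re.sub(r'@@(\w+)@@',
                  lambda m: bindings.get(m.group(1), m.group(0)),
                  script)

usql = 'EXTRACT ... FROM "@@employee_data@@" ... OUTPUT ... TO "@@sample_output@@"'
resolved = resolve_placeholders(usql, {
    'employee_data': 'adla_sample/sample_input.csv',  # input port binding
    'sample_output': 'adla_sample/output.txt',        # hypothetical output path
})
print(resolved)
```

After resolution, no `@@...@@` tokens remain and the script references concrete Data Lake Storage paths.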
``` %%writefile $sample_folder/sample_script.usql // Read employee information from csv file @employees = EXTRACT EmpId int, EmpName string, DeptId int FROM "@@employee_data@@" USING Extractors.Csv(); // Export employee names to text file OUTPUT ( SELECT EmpName FROM @employees ) TO "@@sample_output@@" USING Outputters.Text(); ``` ## Create an AdlaStep **[AdlaStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.adla_step.adlastep?view=azure-ml-py)** is used to run U-SQL script using Azure Data Lake Analytics. - **name:** Name of module - **script_name:** name of U-SQL script file - **inputs:** List of input port bindings - **outputs:** List of output port bindings - **compute_target:** the ADLA compute to use for this job - **params:** Dictionary of name-value pairs to pass to U-SQL job *(optional)* - **degree_of_parallelism:** the degree of parallelism to use for this job *(optional)* - **priority:** the priority value to use for the current job *(optional)* - **runtime_version:** the runtime version of the Data Lake Analytics engine *(optional)* - **source_directory:** folder that contains the script, assemblies etc. *(optional)* - **hash_paths:** list of paths to hash to detect a change (script file is always hashed) *(optional)* The best practice is to use separate folders for scripts and its dependent files for each step and specify that folder as the `source_directory` for the step. This helps reduce the size of the snapshot created for the step (only the specific folder is snapshotted). Since changes in any files in the `source_directory` would trigger a re-upload of the snapshot, this helps keep the reuse of the step when there are no changes in the `source_directory` of the step. 
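The change-detection behind snapshot reuse can be illustrated with a simple content fingerprint over a folder — only a sketch of the idea, not Azure ML's actual hashing scheme:

```python
import hashlib
import os
import tempfile

def folder_fingerprint(folder):
    # Hash file names and contents so that any edit changes the digest
    digest = hashlib.sha256()
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if os.path.isfile(path):
            digest.update(name.encode('utf-8'))
            with open(path, 'rb') as fh:
                digest.update(fh.read())
    return digest.hexdigest()

folder = tempfile.mkdtemp()
script_path = os.path.join(folder, 'sample_script.usql')

with open(script_path, 'w') as fh:
    fh.write('OUTPUT ...;')
before = folder_fingerprint(folder)

with open(script_path, 'w') as fh:
    fh.write('OUTPUT ...; // edited')
after = folder_fingerprint(folder)

print(before != after)  # an edited script yields a new fingerprint
```

An unchanged `source_directory` produces the same fingerprint, which is why keeping each step's files in their own folder preserves step reuse.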
``` adla_step = AdlaStep( name='extract_employee_names', script_name='sample_script.usql', source_directory=sample_folder, inputs=[sample_input], outputs=[sample_output], compute_target=adla_compute) ``` ## Build and Submit the Experiment ``` pipeline = Pipeline(workspace=ws, steps=[adla_step]) pipeline_run = Experiment(ws, 'adla_sample').submit(pipeline) pipeline_run.wait_for_completion() ``` ### View Run Details ``` from azureml.widgets import RunDetails RunDetails(pipeline_run).show() ```
## WaMDaM Directions and Use Cases #### By Adel M. Abdallah, Utah State University, August 2018 The Water Management Data Model (WaMDaM) is a database design with companion software that uses contextual metadata and controlled vocabularies to organize water management data from multiple sources and models. The design addressed the problem of using multiple methods to query and analyze water management data to identify input data to develop or extend a water management model. The consistent design allows modelers to query, plot, compare data, and choose input data and serve it to run models. Active development of WaMDaM software continues at the WaMDaM project on GitHub at https://github.com/WamdamProject ### The instructions here will help you to replicate the process to * Use the WaMDaM Wizard to load 13 different water management datasets into a WaMDaM SQLite database file. * Execute SQL and Python to query, compare, and plot example data analysis in four use cases to choose it as input to a model in the Bear River Watershed, USA. * Serve the selected data for a fifth use case into a Water Evaluation and Planning system (WEAP) model in the Bear River Watershed Many WaMDaM steps can be executed within this Notebook, other steps require actions on a local machine. The Notebooks use the Python 2.7 libraries pandas and SQLite that are already installed as part of the Anaconda Distribution. If you want to share this Notebook with others, please share this URL. https://github.com/WamdamProject/WaMDaM_JupyteNotebooks/blob/master/1_QuerySelect/00_WaMDaM_Directions_and_Use_Cases.ipynb **Note**, URLs that contain (https://hub.mybinder.org.....) will become invalid after few minutes of inactivity. ### Required Software * Windows 7 or 10 64-bit operating systems, both up to date. * Internet connection and Google Chrome v. 69 or FireFox Quantum v. 62.0 browsers * Microsoft Excel (versions after 2007) * Anaconda distribution for Python 2.7. 
Having the Anaconda distribution for Python 3.5 installed on the same machine may cause confusion in the used Python libraries. It is recommended to have only the 2.7 distribution installed for a smoother experience. * Water Evaluation and Planning System (WEAP) (WEAP requires a license to run the model. If you don’t have access to a WEAP license, you will still be able to replicate all the results except for the last use case). ### Difficulty level No coding or database experience is needed to replicate the work, but some coding knowledge, e.g. in Python, and awareness of Structured Query Language (SQL) are a plus. **Options and required time** Please expect to spend a few hours up to 2 days to complete these directions. This work is 6 years in the making so spending a few hours or 2 days to learn and replicate the results is quite amazing! Choose one of 2 replication options: ### Option 1: faster as it skips replicating loading datasets to WaMDaM and starts with a populated database Up to ~ 3.5 hours Steps| Activity | Expected time to complete :-- | :-- | :-- Step 1.1 | Install WaMDaM Wizard | ~ 30 min Step 1.3 | Install and set up a local Jupyter Notebook server | ~ 30 min Step 2.1 | Run Use Cases 1-3 on cloud | ~ 1 hour Step 2.2 | Use Wizard to compare scenarios for use case 4 | ~ 30 min Step 2.3 | Use WEAP and local Jupyter Notebook for use case 5 | ~ 1 hour ### Option 2: slower, mostly computer time to load large datasets into WaMDaM SQLite Up to ~2 days Options | Activity | Expected time to complete :-- | :-- | :-- Step 1 | Install WaMDaM Wizard,<br> load datasets into a new sqlite file, <br>and set up a local Jupyter Notebook server | ~ 1.5-2 days Step 2 | Execute use cases 1-5 using <br>the new populated sqlite file | ~ 3 hours ### How to use the Notebook The Notebook reads data from a pre-populated WaMDaM SQLite file, runs a SQL script for each use case, and then uses Plotly to visualize the results.
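The read-query-plot pattern the use-case Notebooks rely on — `pandas.read_sql_query` against the WaMDaM SQLite file — looks roughly as follows. The table and column names below are illustrative stand-ins, not the actual WaMDaM schema:

```python
import sqlite3
import pandas as pd

# In-memory database standing in for a populated WaMDaM SQLite file
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE Datasets (DatasetName TEXT, RecordCount INTEGER)')
conn.executemany('INSERT INTO Datasets VALUES (?, ?)',
                 [('Bear River WEAP Model 2017', 120), ('UDWR dataset', 300)])
conn.commit()

# Each Notebook runs a SQL script and loads the result into a DataFrame for plotting
df = pd.read_sql_query('SELECT * FROM Datasets ORDER BY RecordCount DESC', conn)
print(df)
```

In the real use cases, the connection points at the populated `.sqlite` file and the SQL scripts are fetched from the WaMDaM_UseCases repository linked above.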
The SQL queries used in the use cases are referenced directly from GitHub to maintain a manageable and simple Notebook at https://github.com/WamdamProject/WaMDaM_UseCases/tree/master/4_Queries_SQL Execute the Notebook cells that contain Python code by pressing `Shift-Enter`, or by pressing the play button <img style='display:inline;padding-bottom:15px' src='play-button.png'> on the toolbar above. **Note:** Any changes you make to the live Notebooks are temporary and will be lost once you close it. You always can start a new live Notebook that has the original content. <h1><center>Step 1: Setup / Installation on a local machine</center></h1> ## Step 1.1: Install WaMDaM Wizard [01_Step1.1_Install_WaMDaM_Wizard directions Notebook](https://github.com/WamdamProject/WaMDaM_JupyteNotebooks/blob/master/1_QuerySelect/01_Step1.1_Install_WaMDaM_Wizard.ipynb) <br> ## Step 1.2: Use the Wizard to load datasets to WaMDaM <font color=red>If you intend to run all the use cases #1-5 using the provided WaMDaM database file that is preloaded with data, skip this step! </font> [02_Step1.2_Load_Datasets_Wizard directions Notebook](https://github.com/WamdamProject/WaMDaM_JupyteNotebooks/blob/master/1_QuerySelect/02_Step1.2_Load_Datasets_Wizard.ipynb) <br> ## Step 1.3: Set up a local Jupyter Notebook server on a local machine [03_Step1.3_Setup_local_Jupyter directions Notebook](https://github.com/WamdamProject/WaMDaM_JupyteNotebooks/blob/master/1_QuerySelect/03_Step1.3_Setup_local_Jupyter.ipynb) <br> <h1><center> Step 2: Replicate use cases</center></h1> <font color=green> This step is ready for Option 1: Skip data loading and run Use Cases #1-3 live on the fly, with no setup at all </font> ## Step 2.1: Execute Jupyter Notebook in the cloud <a name="Use Case 1"></a> ### Use Case 1: What data entered by others can be used to develop a WEAP model for the entire Bear River basin? 
[UseCase 1 live Notebook](https://mybinder.org/v2/gh/WamdamProject/WaMDaM_JupyteNotebooks/master?filepath=1_QuerySelect/04_UseCase1_DataAvailabilityForModels.ipynb) <a name="Use Case 2"></a> ### Use Case 2: What network connectivity to use in a model? [UseCase 2 live Notebook](https://mybinder.org/v2/gh/WamdamProject/JupyteNotebooks/master?filepath=1_QuerySelect/05_UseCase2_Choose_Network.ipynb) <a name="Use Case 3"></a> ### Use Case 3: How do data values differ across datasets and which value to choose for a model? #### Use Case 3.1: What differences are there across datasets in flow data values at a site? a. [Use Case 3.1-a Flow TimeSeries plot live Notebook](https://mybinder.org/v2/gh/WamdamProject/JupyteNotebooks/master?filepath=1_QuerySelect/06_Use_Case_3.1_a_Flow_TimeSeries.ipynb) <br> b. [Use Case 3.1-b Flow Seasonal plot live Notebook](https://mybinder.org/v2/gh/WamdamProject/JupyteNotebooks/master?filepath=1_QuerySelect/07_Use_Case_3.1_b_FlowSeasonal.ipynb) ================================================================================================================ #### Use Case 3.2: What differences are there across datasets in agriculture water use in an area? [Use Case_3.2_Seasonal_Demand Live Notebook](https://mybinder.org/v2/gh/WamdamProject/JupyteNotebooks/master?filepath=1_QuerySelect/08_Use_Case_3.2_Seasonal_Demand.ipynb) ================================================================================================================ #### Use Case 3.3: What differences are there across datasets in volume and elevation curves of a reservoir? 
[UseCase_3.3_Array_reservoir live Notebook](https://mybinder.org/v2/gh/WamdamProject/JupyteNotebooks/master?filepath=1_QuerySelect/09_Use_Case_3.3_array_reservoir.ipynb) ================================================================================================================ #### Use Case 3.4: What differences are there across datasets in dam heights, installed hydropower capacity, and number of generators for two reservoirs? <font color=red> This is a bonus use case not presented in the WaMDaM paper</font> [UseCase_3.4_Hydropower live Notebook](https://mybinder.org/v2/gh/WamdamProject/JupyteNotebooks/master?filepath=1_QuerySelect/10_Use_Case_3.4_Hydropower.ipynb) # Step 2.2: Compare Scenarios using WaMDaM Wizard <a name="Use Case 4"></a> ### Use Case 4: What is the difference between two scenarios and which one to use in a model? This use case uses the WaMDaM Wizard. If you did not already install the WaMDaM Wizard, return to [01_Step1.1_Install_WaMDaM_Wizard Notebook](https://github.com/WamdamProject/WaMDaM_JupyteNotebooks/blob/master/1_QuerySelect/01_Step1.1_Install_WaMDaM_Wizard.ipynb) Follow the instructions in the [UseCase4_Compare_Scenarios Notebook](https://github.com/WamdamProject/WaMDaM_JupyteNotebooks/blob/master/1_QuerySelect/11_UseCase_4_Compare_Scenarios.ipynb) # Step 2.3: Serve Data to WEAP ### Use Case 5: How do annual water shortages at the Bear River Migratory Bird Refuge in the Bear River basin change after updating the Bear River WEAP Model 2017 with new bathymetry, flow, and demand data as selected in use cases 2-3? <font color=red> This use case only works on a local Jupyter Notebook server installed on your machine along with WEAP.
So it does not work on the online Notebooks.</font> <br> If you did not already install the Jupyter Notebook Server, return to [Step 1.3: Set up a local Jupyter Notebook server on a local machine notebook](https://github.com/WamdamProject/WaMDaM_JupyteNotebooks/blob/master/1_QuerySelect/03_Step1.3_Setup_local_Jupyter.ipynb) Then, follow these instructions on GitHub to set up the WEAP API connection and run the Notebook on the local server. <br> [UseCase5_Serve Data to Run WEAP](https://github.com/WamdamProject/WaMDaM_JupyteNotebooks/blob/master/1_QuerySelect/12_Use_Case_5_ServeDataRunWEAP.ipynb) # Info ### Sponsors and Credit This material is based upon work [supported](http://docs.wamdam.org/SponsorsCredit/) by the National Science Foundation (NSF) under Grants 1135482 (CI-Water) and 1208732 (iUtah). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF. ### License WaMDaM and its products are distributed under a BSD 3-Clause [license](http://docs.wamdam.org/License/) ### Authors [Adel M. Abdallah](http://adelmabdallah.com/) has been the lead in WaMDaM development as part of his PhD dissertation at Utah State University under the advising of Dr. David Rosenberg. If you have questions, feel free to email me at: [amabdallah@aggiemail.usu.edu](mailto:amabdallah@aggiemail.usu.edu) ### Citation Adel M. Abdallah and David E. Rosenberg (In review). A Data Model to Manage Data for Water Resources Systems Modeling. Environmental Modelling & Software # The End :) Congratulations!
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline


class Interpolation:

    def bilinear(self, img, scale=1.5):
        H, W = img.shape[:2]
        H_big, W_big = int(H * scale), int(W * scale)
        if len(img.shape) == 2:
            ch = 1
            output_img = np.zeros((H_big, W_big))
        elif len(img.shape) == 3:
            ch = img.shape[2]
            output_img = np.zeros((H_big, W_big, ch))
        else:
            raise ValueError("invalid image shape: {}".format(img.shape))

        for i in range(H_big):
            for j in range(W_big):
                # top-left source pixel and fractional offsets
                y, x = min(H-2, int(i/scale)), min(W-2, int(j/scale))
                dy, dx = i/scale - y, j/scale - x
                # bilinear weights for the four neighbouring pixels
                D = [(1-dy)*(1-dx), dy*(1-dx), (1-dy)*dx, dy*dx]
                if len(img.shape) == 3:
                    I = [img[y, x, :], img[y+1, x, :], img[y, x+1, :], img[y+1, x+1, :]]
                    output_img[i, j, :] = sum(d*z for (d, z) in zip(D, I))
                elif len(img.shape) == 2:
                    I = [img[y, x], img[y+1, x], img[y, x+1], img[y+1, x+1]]
                    output_img[i, j] = sum(d*z for (d, z) in zip(D, I))

        output_img = np.clip(output_img, 0, 255).astype("uint8")
        return output_img


class Solver:

    def problem_73(self, img):
        img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ip = Interpolation()
        output_img = ip.bilinear(img_gray, 0.5)
        output_img = ip.bilinear(output_img, 2.0)
        return output_img

    def problem_74(self, img):
        img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.int32)
        img_ip = self.problem_73(img).astype(np.int32)
        img_diff = np.abs(img_gray - img_ip)
        img_diff = (img_diff / img_diff.max() * 255).astype(np.uint8)
        return img_diff

    def problem_75(self, img):
        img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ip = Interpolation()
        images = []
        for i in range(6):
            scale = 1.0 / (2**i)
            scaled_image = ip.bilinear(img_gray, scale)
            images.append(scaled_image)
            text = "1/{}".format(2**i)
            plt.subplot(2, 3, i+1)
            plt.imshow(scaled_image, cmap="gray")
            plt.title(text)
        plt.show()
        return images

    def problem_76(self, img):
        images = self.problem_75(img)
        ip = Interpolation()
        resized_images = np.zeros((6, *img.shape[:2]))
        for i in range(6):
            scale = (2**5) / (2**(5-i))
            if scale == 1.0:
                continue
            resized_images[i] = ip.bilinear(images[i], scale).astype(np.int32)
        u, v = 1, 4
        output_img = np.abs(resized_images[u] - resized_images[v])
        output_img = np.clip(output_img, 0, 255)
        output_img = (output_img / output_img.max() * 255).astype(np.uint8)
        return output_img


input_img = cv2.imread("../imori.jpg")
solver = Solver()
output_img = solver.problem_76(input_img)
plt.imshow(output_img, cmap="gray")
plt.show()
```
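Before running the full pipeline on real images, the weighting scheme in `bilinear` can be sanity-checked in isolation. The helper below is a hypothetical, stripped-down 2-D-only rewrite of the same loop (no clipping or `uint8` cast), applied to a tiny array whose interpolated values are easy to verify by hand:

```python
import numpy as np

def bilinear_resize(img, scale):
    """Minimal bilinear resize for a 2-D float array, mirroring the loop above."""
    H, W = img.shape
    H2, W2 = int(H * scale), int(W * scale)
    out = np.zeros((H2, W2))
    for i in range(H2):
        for j in range(W2):
            y, x = min(H - 2, int(i / scale)), min(W - 2, int(j / scale))
            dy, dx = i / scale - y, j / scale - x
            out[i, j] = ((1 - dy) * (1 - dx) * img[y, x]
                         + dy * (1 - dx) * img[y + 1, x]
                         + (1 - dy) * dx * img[y, x + 1]
                         + dy * dx * img[y + 1, x + 1])
    return out

src = np.array([[0.0, 10.0], [20.0, 30.0]])
up = bilinear_resize(src, 2.0)
print(up[1, 1])  # 15.0 — the midpoint averages all four corners equally
```

The corner `up[0, 0]` reproduces the source pixel exactly, and `up[1, 1]` (offsets `dy = dx = 0.5`) weights all four corners by 0.25.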
# Stablecoin Billionaires<br>Descriptive Analysis of the Ethereum-based Stablecoin ecosystem

## by Anton Wahrstätter, 01.07.2020

# Part II - USDC

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime
from collections import Counter
from matplotlib import rc
import re
import random

rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
rc('text', usetex=True)

# decimals
dec = 6

# plots
tx_over_date = '../plots/usdc/usdc_txs_over_date.csv'
unique_senders_over_date = '../plots/usdc/usdc_unique_senders_over_date.csv'
unique_recipients_over_date = '../plots/usdc/usdc_unique_recipients_over_date.csv'
tx_count_to = '../plots/usdc/usdc_tx_count_to.csv'
tx_count_from = '../plots/usdc/usdc_tx_count_from.csv'
balances = '../plots/usdc/usdc_balances.csv'
avg_gas_over_date = '../plots/usdc/usdc_avg_gas_over_date.csv'
avg_value_over_date = '../plots/usdc/usdc_avg_value_over_date.csv'
positive_cumulated_balances = '../plots/usdc/usdc_positive_cumulated_balances.csv'
circulating_supply = '../plots/usdc/usdc_circulating_supply.csv'
unique_recipients_per_day_over_date = '../plots/usdc/usdc_unique_recipients_per_day_over_date.csv'
unique_senders_per_day_over_date = '../plots/usdc/usdc_unique_senders_per_day_over_date.csv'
exchanges = '../plots/exchanges.csv'

# data
transfer = '../data/usdc/transfer/0_usdc_transfer_6082465-10370273.csv'
mint = '../data/usdc/mint/usdc_mint.csv'
burn = '../data/usdc/burn/usdc_burn.csv'
```

# Data

```
df = pd.read_csv(transfer)
pd.set_option('display.float_format', lambda x: '%.3f' % x)
```

## Basics

```
df['txvalue'] = df['txvalue'].astype(float)/10**dec
df.describe()
(df['txvalue']/10**6).describe()
```

## Dataset

```
print('Start:')
print('Block: {:^30}\nTimestamp: {:^20}\nUTC Time: {:^25}\n'.format(
    df['blocknumber'].iloc[0],
    df['timestamp'].iloc[0],
    str(datetime.fromtimestamp(df['timestamp'].iloc[0]))))

print('End:')
print('Block: {:^30}\nTimestamp: {:^20}\nUTC Time: {:^25}\n'.format(
    df['blocknumber'].iloc[-1],
    df['timestamp'].iloc[-1],
    str(datetime.fromtimestamp(df['timestamp'].iloc[-1]))))
```

## Total Nr. of Blocks

```
print('Total Nr. of Blocks: {}'.format(df['blocknumber'].iloc[-1]-df['blocknumber'].iloc[0]))
```

## Total Nr. of Transfer Events

```
print('Total Nr. of Events: {:,.0f}'.format(df.describe().loc['count','timestamp']))
```

## Total Nr. of Addresses

```
print('Total Nr. of Addresses: {}'.format(len(df['txto'].unique())))
```

## Addresses with funds

```
bal = pd.read_csv(balances)
print('Total Nr. of Addresses with funds: {}'.format(len(bal[bal['txvalue']>0])))
```

## Avg. Transaction Value

```
print('Avg. Transaction Value: {:,.0f} USDC'.format(np.mean(df['txvalue']/10**6)))
```

## Total Gas Costs

```
df['costs'] = (df['gas_price']/10**18) * df['gas_used']
print('Total Gas spent for Transfers: {:,.3f} ether'.format(sum(df['costs'])))
```

## Initial USDC Supply

```
# first mint event
# 0xdc6bb2a1aff2dbb2613113984b5fbd560e582c0a4369149402d7ea83b0f5983e
```

## Total USDC Supply

```
sum(pd.read_csv(mint)['txvalue']/10**6) - sum(pd.read_csv(burn)['txvalue']/10**6)

df = pd.read_csv(transfer)
sum(df[df['txfrom'] == '0x0000000000000000000000000000000000000000']["txvalue"].astype(float))/(10**6) \
    - sum(df[df['txto'] == '0x0000000000000000000000000000000000000000']["txvalue"].astype(float))/(10**6)
```

# I. Event analysis

## I.I. Mint Event

## Plot new issued tokens over date

```
print('\n\n')
fig = plt.figure(figsize=(40,25), dpi=250)
ax = fig.subplots()
plt.grid()
plt.title(r'I s s u e d \ \ U S D C'+'\n', size=120)
ax.yaxis.get_offset_text().set_fontsize(50)
plt.xlabel('\n'+r'D a t e ', size=120)
plt.ylabel(r'U S D C'+'\n', size=120)
plt.yticks(fontsize=60)
plt.xticks(labels=["Oct '18","\nJan '19","Apr '19","\nJul '19","Oct '19","\nJan '20","Apr '20","\nJul '20"],
           ticks=[21,113,203,294,386,478,569,660], fontsize=60)

def plot_issue_over_date():
    _issue = pd.read_csv(mint)
    iss = _issue.loc[:, ['timestamp', 'txvalue']]
    iss['utc'] = iss['timestamp'].apply(lambda x: str(datetime.utcfromtimestamp(x))[0:10])
    iss = iss.groupby('utc', as_index=False)['txvalue'].sum()
    a = iss['utc'].iloc[0]
    b = iss['utc'].iloc[-1]
    idx = pd.date_range(a, b)
    iss = iss.set_index('utc')
    iss.index = pd.DatetimeIndex(iss.index)
    iss = iss.reindex(idx, fill_value=0)
    counter = 0
    for i in range(0, len(iss)):
        plt.plot([counter, counter], [0, iss['txvalue'].iloc[counter]/(10**6)], color='black', linewidth=3)
        counter += 1
    return plt.tight_layout(pad=5)

dfis = plot_issue_over_date()
plt.savefig('../pics/usdc/usdc_issued_usdc_over_date.pdf')
```

## Further info

```
df = pd.read_csv(mint)
#df[df['txvalue'] == max(df['txvalue'])]
print('Issue Events: {}\nIssued USDC: {:,.0f}\n'.format(len(df), sum(df['txvalue'])/10**6))
print('Largest issue: {:,.0f} USDC\n . . . to address: {}\n'.format(
    df.loc[3274, 'txvalue']//10**6, '0x55fe002aeff02f77364de339a1292923a15844b8'))
```

## I.II. Burn Event

## Plot burned tokens over date

```
print('\n\n')
fig = plt.figure(figsize=(40,25))
ax = fig.subplots()
plt.grid()
plt.title(r'B u r n e d \ \ U S D C \ \ o v e r \ \ D a t e'+'\n', size=120)
ax.yaxis.get_offset_text().set_fontsize(50)
plt.xlabel('\n'+r'D a t e', size=120)
plt.ylabel(r'U S D C'+'\n', size=120)
plt.yticks(fontsize=60)
plt.xticks(labels=["Oct '18","\nJan '19","Apr '19","\nJul '19","Oct '19","\nJan '20","Apr '20","\nJul '20"],
           ticks=[6,98,188,279,371,463,554,645], fontsize=60)

def plot_burn_over_date():
    _dbf = pd.read_csv(burn)
    dbf = _dbf.loc[:, ['timestamp', 'txvalue']]
    dbf['utc'] = dbf['timestamp'].apply(lambda x: str(datetime.utcfromtimestamp(x))[0:10])
    dbf = dbf.groupby('utc', as_index=False)['txvalue'].sum()
    a = dbf['utc'].iloc[0]
    b = dbf['utc'].iloc[-1]
    idx = pd.date_range(a, b)
    dbf = dbf.set_index('utc')
    dbf.index = pd.DatetimeIndex(dbf.index)
    dbf = dbf.reindex(idx, fill_value=0)
    counter = 0
    for i in range(0, len(dbf)):
        plt.plot([counter, counter], [0, dbf['txvalue'].iloc[counter]/(10**6)], color='black', linewidth=3)
        counter += 1
    return plt.tight_layout(pad=5)

plot_burn_over_date()
plt.savefig('../pics/usdc/usdc_burned_usdc_over_date.pdf')
```

## Further info

```
df = pd.read_csv(burn)
print('Burn Events: {}\nBurned USDC: {:,.0f}'.format(len(df), sum(df['txvalue'])/10**6))
print('. . . from {} addresses\n'.format(len(df['address'].unique())))
print('Largest burn: {:,.0f} USDC\n . . . from address: {}\n'.format(
    df.groupby('address')['txvalue'].sum()[0]/10**6,
    df.groupby("address")["txvalue"].sum().index[0]))
```

## Plot circulating supply

```
print('\n\n')
fig = plt.figure(figsize=(20,12), dpi=500)
ax = fig.subplots()
plt.grid(True)
plt.title(r'C i r c u l a t i n g \ \ U S D C \ \ S u p p l y'+'\n', size=60)
plt.xlabel('\n'+r'D a t e', size=60)
plt.ylabel(r'U S D C'+'\n', size=60)
ax.yaxis.get_offset_text().set_fontsize(25)
plt.yticks(fontsize=30)
plt.xticks(labels=["Oct '18","\nJan '19","Apr '19","\nJul '19","Oct '19","\nJan '20","Apr '20","\nJul '20"],
           ticks=[21,113,203,294,386,478,569,660], fontsize=30)

circ = pd.read_csv(circulating_supply, index_col='Unnamed: 0')
plt.plot(range(0, 660), circ['txvalue'].cumsum()/10**6, color='black', linewidth=4, label='USDC supply')
plt.fill_between(range(0, 660), 0, circ['txvalue'].cumsum()/10**6, alpha=0.2, facecolor='#2D728F')
lgnd = plt.legend(loc='upper left', fontsize=40)
plt.tight_layout(pad=5)
plt.savefig('../pics/usdc/usdc_cirulating_supply.pdf')
```

## I.III. Transfer Event

## Plot transfers over date

```
print('\n\n')
fig = plt.figure(figsize=(20,12), dpi=500)
ax = fig.subplots()
plt.grid(True)
plt.title(r'U S D C \ \ T r a n s f e r s'+'\n', size=60)
plt.xlabel('\n'+r'D a t e', size=50)
plt.ylabel(r'U S D C'+'\n', size=50)
plt.yticks(np.arange(0, 30001, 5000), np.vectorize(lambda x: f'{x:,.0f}')(np.arange(0, 30001, 5000)), fontsize=30)
plt.xticks(labels=["Oct '18","\nJan '19","Apr '19","\nJul '19","Oct '19","\nJan '20","Apr '20","\nJul '20"],
           ticks=[21,113,203,294,386,478,569,660], fontsize=30)

def plot_txs_over_date(df, lwd, label, col='#2D728F', plusbetween=False):
    plt.plot(np.arange(0, len(df['txs'])), df['txs'], color=col, linewidth=lwd, label=label)
    if plusbetween:
        plt.fill_between(np.arange(0, len(df['txs'])), 0, df['txs'], alpha=0.1, facecolor='#2D728F')

plot_txs_over_date(df=pd.read_csv(tx_over_date, index_col=0), col='black', lwd=2,
                   label='Transfers', plusbetween=True)
plot_txs_over_date(pd.read_csv(unique_senders_per_day_over_date, index_col='Unnamed: 0'),
                   col='#9DB469', lwd=2, label='Unique Senders per day')
plot_txs_over_date(pd.read_csv(unique_recipients_per_day_over_date, index_col='Unnamed: 0'),
                   lwd=2, label='Unique Recipients per day')

lgnd = ax.legend(loc='upper left', fontsize=35)
lgnd.legendHandles[0].set_linewidth(5.0)
lgnd.legendHandles[1].set_linewidth(5.0)
lgnd.legendHandles[2].set_linewidth(5.0)
plt.tight_layout(pad=5)
plt.savefig('../pics/usdc/usdc_tx_over_date.pdf')
plt.show()
```

## Most active addresses

From:

```
fr = pd.read_csv(tx_count_from, index_col='Unnamed: 0').sort_values('txs', ascending=False)
to = pd.read_csv(tx_count_to, index_col='Unnamed: 0').sort_values('txs', ascending=False)

fr = pd.DataFrame(fr.loc[:fr.index[10], 'txs'])
fr['tag'] = ['Disperse.app', 'Uniswap: USDC', 'Binance', 'Kyber: Contract', 'Compound USD Coin',
             'Binance 2', 'Binance 4', 'Binance 3', '1inch.exchange', 'Poloniex 4', '-']
fr
```

To:

```
to = pd.DataFrame(to.loc[:to.index[10], 'txs'])
to['tag'] = ['Binance', 'Uniswap: USDC', 'Kyber: Contract', 'Compound USD Coin', '1inch.exchange',
             'Kyber: Old Contract', 'BlockFi', 'Nexo: Wallet', 'Celsius Network: Contract',
             '-', 'Nuo Network: Kernel Escrow']
to
```

## Activity distribution

```
df_from = pd.read_csv(tx_count_from, index_col=0)
df_to = pd.read_csv(tx_count_to, index_col=0)
df_all = pd.concat([df_from, df_to])
df = df_all.groupby(df_all.index).sum()
print('{} addresses in total'.format(len(df)))

df = df.sort_values('txs')
gr0 = len(df.loc[df['txs'] >= 500000])
gra = len(df.loc[df['txs'] >= 100000]) - gr0
grb = len(df.loc[df['txs'] >= 50000]) - gr0 - gra
grc = len(df.loc[df['txs'] >= 10000]) - gr0 - gra - grb
grd = len(df.loc[df['txs'] >= 1000]) - gr0 - gra - grb - grc
gre = len(df.loc[df['txs'] >= 100]) - gr0 - gra - grb - grc - grd
grf = len(df.loc[df['txs'] >= 10]) - gr0 - gra - grb - grc - grd - gre
grg = len(df.loc[df['txs'] <= 10])
grh = len(df.loc[df['txs'] == 1])

pd.DataFrame({'Transactions': ['> 500.000', '100.000-500.000', '50.000-100.000', '10.000-50.000',
                               '1.000-10.000', '100-1.000', '10-100', '< 10', '1'],
              'Addresses': [gr0, gra, grb, grc, grd, gre, grf, grg, grh]})
```

## Plot average transfer amount

## Jan '20 - Jul '20

```
print('\n\n')
df = pd.read_csv(avg_value_over_date, index_col=0)
df = df.loc[df.index[478]:, :]
plt.figure(figsize=(12, 7), dpi=800)
plt.grid(True)
plt.plot(np.arange(0, len(df.index.tolist())), df['txvalue'], color='black',
         label='Avg. Amount/Day', linewidth=2)
plt.fill_between(np.arange(0, len(df.index.tolist())), 0, df['txvalue'], alpha=0.2, facecolor='#2D728F')
plt.xlabel('\n'+'D a t e', fontsize=45)
plt.ylabel('U S D C'+'\n', fontsize=35)
plt.title("A v e r a g e \ \ T r a n s f e r \ \ A m o u n t"+"\n"+"J a n \ \ ' 2 0 \ \ - \ \ J u l \ \ ' 2 0\n", size=30)
plt.legend(loc='upper right', fontsize=20, shadow=True)
plt.ticklabel_format(style='plain')
plt.xticks(labels=["\nJan '20","Feb '20","\nMar '20","Apr '20","\nMay '20","Jun '20","\nJul '20"],
           ticks=[0,31,60,90,121,152,182], fontsize=23)
plt.yticks(np.arange(0, 50001, 10000), np.vectorize(lambda x: f'{x:,.0f}')(np.arange(0, 50001, 10000)), fontsize=15)
plt.tight_layout(pad=1)
plt.savefig('../pics/usdc/usdc_avgtxvalue_jan20.pdf')
```

## Further Info

```
df.describe()
```

## Plot average gas costs

## Jan '20 - Jul '20

```
print('\n\n')
df = pd.read_csv(avg_gas_over_date)
df = df.loc[df.index[478]:, :]
plt.figure(figsize=(12, 7), dpi=800)
plt.grid(True)
plt.plot(np.arange(0, len(df.index.tolist())), df['gas'], color='black',
         label='Avg. Gas Costs/Day', linewidth=2)
plt.fill_between(np.arange(0, len(df.index.tolist())), 0, df['gas'], alpha=0.2, facecolor='#2D728F')
plt.xlabel('\n'+'D a t e', fontsize=35)
plt.ylabel('E t h e r'+'\n', fontsize=30)
plt.title("U S D C\nA v g. \ \ G a s \ \ C o s t s\n", size=30)
lgnd = plt.legend(loc='upper left', fontsize=20, shadow=True)
plt.ticklabel_format(style='plain')
plt.xticks(labels=["\nJan '20","Feb '20","\nMar '20","Apr '20","\nMay '20","Jun '20","\nJul '20"],
           ticks=[0,31,60,90,121,152,182], fontsize=20)
plt.yticks(np.arange(0, 0.05, 0.01), np.vectorize(lambda x: f'{x:,.3f}')(np.arange(0, 0.05, 0.01)), fontsize=15)
plt.tight_layout(pad=1)
lgnd.legendHandles[0].set_linewidth(3.0)
plt.savefig('../pics/usdc/usdc_avggascosts_jan20.pdf')
df.describe()
```

# II. Balances Analysis

```
df = pd.read_csv(positive_cumulated_balances, index_col='Unnamed: 0')
df
```

## II.I. Quick Summary

```
(df[df['balance']>0]['balance']).describe().apply(lambda x: format(x, 'f'))
print('{}/{} with less than 1 USDC'.format(len(df[df['balance']<1]['balance']), len(df['balance'])))
```

## II.II. Balance Table

```
df = pd.read_csv(positive_cumulated_balances, index_col=0)

def get_distribution(perc):
    per = round(df.index[-1]*perc)
    entities = df.index[-1] - per
    upper = df.loc[per:, :]
    lower = df.loc[:per, :]
    lower_ = lower['cum'].iloc[-1]
    upper_ = (upper['cum'].iloc[-1] - upper['cum'].iloc[0])
    return entities, lower_, upper_, lower_/upper['cum'].iloc[-1], upper_/(upper['cum'].iloc[-1])

idx90, lower90, upper90, per10, per90 = get_distribution(0.90)
idx95, lower95, upper95, per05, per95 = get_distribution(0.95)
idx99, lower99, upper99, per01, per99 = get_distribution(0.99)
idx999, lower999, upper999, per001, per999 = get_distribution(0.999)

df = pd.DataFrame([[f'{idx999:,.0f}', round(per999*100, 2), f'{upper999:,.0f}'],
                   [f'{idx99:,.0f}', round(per99*100, 2), f'{upper99:,.0f}'],
                   [f'{idx95:,.0f}', round(per95*100, 2), f'{upper95:,.0f}'],
                   [f'{idx90:,.0f}', round(per90*100, 2), f'{upper90:,.0f}']],
                  index=['0.1% of the richest accounts', '1% of the richest accounts',
                         '5% of the richest accounts', '10% of the richest accounts'],
                  columns=['Accounts in total', '% of total supply', 'USDC amount'])
df
```

## II.III. Rich list

```
pd.options.mode.chained_assignment = None
df = pd.read_csv(positive_cumulated_balances)
balance = df
rich = df.loc[df.index[-10]:, :]
ex = pd.read_csv(exchanges, header=None)
loop = rich.iterrows()
for i, j in loop:
    if j['address'] in ex[0].tolist():
        rich.loc[i, 'nametag'] = ex[ex[0] == j['address']][1].values[0]
rich.loc[177493, 'nametag'] = 'Binance/FTX'
rich.loc[177492, 'nametag'] = 'Binance?'
rich
```

## Huobi

```
ex = pd.read_csv(exchanges, header=None)
df = ex.loc[0:73, :]
bal = 0
for i in df[0]:
    val = balance['balance'][balance['address'] == i]
    if not val.empty:
        bal += val.values[0]
print('Huobi Total Balance: {:.0f}\n{:.2f}% of Total'.format(bal, bal/balance.loc[balance.index[-1], 'cum']*100))
```

## Binance

```
df = ex.loc[74:88, :]
bal = 0
for i in df[0]:
    val = balance['balance'][balance['address'] == i]
    if not val.empty:
        bal += val.values[0]
print('Binance Total Balance: {:.0f}\n{:.2f}% of Total'.format(bal, bal/balance.loc[balance.index[-1], 'cum']*100))
```

## Bitfinex

```
df = ex.loc[89:110, :]
bal = 0
for i in df[0]:
    val = balance['balance'][balance['address'] == i]
    if not val.empty:
        bal += val.values[0]
print('Bitfinex Total Balance: {:.0f}\n{:.2f}% of Total'.format(bal, bal/balance.loc[balance.index[-1], 'cum']*100))
```

## OKEx

```
df = ex.loc[111:115, :]
bal = 0
for i in df[0]:
    val = balance['balance'][balance['address'] == i]
    if not val.empty:
        bal += val.values[0]
print('OKEx Total Balance: {:.0f}\n{:.2f}% of Total'.format(bal, bal/balance.loc[balance.index[-1], 'cum']*100))
```

## Bittrex

```
df = ex.loc[116:119, :]
bal = 0
for i in df[0]:
    val = balance['balance'][balance['address'] == i]
    if not val.empty:
        bal += val.values[0]
print('Bittrex Total Balance: {:.0f}\n{:.2f}% of Total'.format(bal, bal/balance.loc[balance.index[-1], 'cum']*100))
```

## Compound

```
df = ex.loc[151:179, :]
bal = 0
for i in df[0]:
    val = balance['balance'][balance['address'] == i]
    if not val.empty:
        bal += val.values[0]
print('Compound Total Balance: {:.0f}\n{:.2f}% of Total'.format(bal, bal/balance.loc[balance.index[-1], 'cum']*100))
```

## Poloniex

```
df = ex.loc[247:266, :]
bal = 0
for i in df[0]:
    val = balance['balance'][balance['address'] == i]
    if not val.empty:
        bal += val.values[0]
print('Poloniex Total Balance: {:.0f}\n{:.2f}% of Total'.format(bal, bal/balance.loc[balance.index[-1], 'cum']*100))
```

## II.IV. Pie Chart

```
df = pd.read_csv(positive_cumulated_balances, index_col='Unnamed: 0')
aa = df.iloc[df.index[-1]-80:]
bb = df['balance'].iloc[:df.index[-1]-80]
df = aa.append(pd.DataFrame({'address': 'others', 'balance': sum(bb)}, index=[0]))

label = []
counter = 0

def getlabel(i):
    global counter
    if i:
        if not i == 'others':
            label.append(i + '...')
        else:
            label.append(i)
    else:
        label.append('')
    counter += 1

[getlabel(i[:6]) if counter >= len(df)-12 else getlabel('') for i in df['address']]
print()
print('\n\n')

# Colorspace colors by: https://colorspace.r-forge.r-project.org/index.html
colorspace_set3 = ['#EEBD92','#FFB3B5','#85D0F2','#BCC3FE','#E7B5F5','#FEAFDA','#61D8D6','#76D9B1','#A4D390','#CFC982']
colorsp_dynamic = ['#DB9D85','#87AEDF','#9DB469','#6DBC86','#3DBEAB','#4CB9CC','#C2A968','#BB9FE0','#DA95CC','#E494AB']
colorspa_dark_3 = ['#B675E0','#5991E4','#00AA5A','#6F9F00','#CE7D3B']
colorspa_dyna_5 = ['#9DB469','#87AEDF','#DA95CC','#DB9D85','#3DBEAB']

fig = plt.figure(figsize=(25,15), dpi=400)
ax = fig.add_subplot()
aa = plt.pie(df['balance'], colors=colorsp_dynamic, labels=label,
             autopct=lambda x: r'{:.1f}\%'.format(x) if x > 1.5 else r'{:.0f}\%'.format(x) if x > 5 else '',
             pctdistance=0.8, labeldistance=1.05, radius=1,
             explode=[0.05 for i in range(0, len(df['balance']))],
             wedgeprops={'linewidth': 0.8, 'edgecolor': 'k'}, startangle=220)

# Custom Modifications
aa[-1][-1].set_x(-0.7268917458682129)
aa[-1][-1].set_fontsize(35)
aa[-1][-2].set_fontsize(30)
aa[-1][-2].set_x(0.19977073082370535)
aa[-1][-2].set_y(0.8000006952023211)
aa[-1][-3].set_fontsize(27)
aa[-1][-4].set_fontsize(23)
aa[-1][-5].set_fontsize(20)
aa[-1][-6].set_fontsize(16)
aa[-1][-7].set_fontsize(13)
aa[-1][-8].set_fontsize(9)
aa[-1][-9].set_fontsize(9)
aa[-1][-10].set_fontsize(9)
aa[-1][-11].set_fontsize(8)

fontsize = -43
for i in aa[1]:
    i.set_fontsize(fontsize)
    fontsize += 1
aa[1][-1].set_fontsize(55)

plt.tight_layout(pad=5)
plt.title('U S D C \ \ D i s t r i b u t i o n', fontsize=50)
circ = plt.Circle((0,0), 0.5, color='black', fc='white', linewidth=1.25)
ax.add_artist(circ)
plt.savefig('../pics/usdc/usdc_distribution_pie.pdf')
```

## II.V. Lorenz curve

```
df = pd.read_csv(positive_cumulated_balances, index_col='Unnamed: 0')
df

y_all = df['cum']/df['cum'].iloc[-1]
x_all = (np.arange(start=0, stop=len(df['cum']), step=1)/(len(df['cum'])))

y_25_75 = df['cum'].iloc[int(df.index[-1]*0.25):int(df.index[-1]*0.75)]
y_25_75 = y_25_75/max(y_25_75)
x_25_75 = np.arange(start=0, stop=len(y_25_75), step=1)/(len(y_25_75))

print('Q3-Q1 (in USDC):')
df['balance'].iloc[int(df.index[-1]*0.25):int(df.index[-1]*0.75)].describe().apply(lambda x: format(x/(10**0), 'f'))

print('\n\n')
fig = plt.figure(figsize=(15,15))
ax = fig.add_subplot()
plt.grid()
plt.title(r'L o r e n z \ \ C u r v e'+'\n', fontsize=50)
plt.xlabel('\n'+r'\% \ \ of \ \ A d d r e s s e s', fontsize=30)
plt.ylabel(r'\% \ o f \ \ t o t a l \ \ U S D C \ \ s u p p l y'+'\n', fontsize=30)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
ax.plot(x_all, y_all, linewidth=5, color='#2D728F', label=r'$\ All$')
ax.plot(x_25_75, y_25_75, linewidth=5, color='#87AEDF', label=r'$\ Q_3 - Q_1$')
plt.legend(fontsize=35)
plt.plot([0, 1], [0, 1], transform=ax.transAxes, linewidth=4, ls=(0, (5, 10)), color='black')
ax.set_xlim([0, 1.05])
plt.savefig('../pics/usdc/uscd_lorenzcurve.pdf')
```
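The Lorenz curve invites a one-number summary of concentration. A Gini coefficient could be computed directly from the balance column; the function below is a generic sketch that is not part of the original notebook and only assumes a 1-D array of non-negative balances:

```python
import numpy as np

def gini(balances):
    """Gini coefficient of a 1-D array of non-negative balances."""
    x = np.sort(np.asarray(balances, dtype=float))
    n = x.size
    # standard rank formula: G = 2 * sum_i(i * x_i) / (n * sum(x)) - (n + 1) / n
    return (2.0 * np.sum(np.arange(1, n + 1) * x)) / (n * x.sum()) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))  # 0.0  (perfect equality)
print(gini([0, 0, 0, 4]))  # 0.75 (one holder owns everything: (n-1)/n)
```

Applied to the notebook's data, this would take something like `df['balance'].values` from the positive-balances frame.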
# Data collection

In this notebook, I'll use the **GitHub API** to extract various information from my user profile, such as repositories, commits and more. I'll also save this data to **.csv** files so that I can draw insights from it.

## Importing libraries and defining constants

I'll import the various libraries needed for fetching the data.

```
import json
import requests
import numpy as np
import pandas as pd
from requests.auth import HTTPBasicAuth
```

I'll fetch the credentials from the json file and create an `authentication` variable.

```
credentials = json.loads(open('credentials.json').read())
authentication = HTTPBasicAuth(credentials['username'], credentials['password'])
```

## User information

I'll first extract the user information such as name and related URLs, which will be useful ahead.

```
data = requests.get('https://api.github.com/users/' + credentials['username'], auth=authentication)
data = data.json()
data
```

From the json output above, I'll extract basic information such as `name`, `location`, `email`, `bio`, `public_repos`, and `public_gists`. I'll also keep some of the URLs handy, including `repos_url`, `gists_url` and `blog`.

```
print("Information about user {}:\n".format(credentials['username']))
print("Name: {}".format(data['name']))
print("Email: {}".format(data['email']))
print("Location: {}".format(data['location']))
print("Public repos: {}".format(data['public_repos']))
print("Public gists: {}".format(data['public_gists']))
print("About: {}".format(data['bio']))
```

## Repositories

Next, I'll fetch the repositories for the user. By default, only 30 repositories are fetched in one go, so I'll iterate over the API until all repositories are fetched.

```
url = data['repos_url']
page_no = 1
repos_data = []

while (True):
    response = requests.get(url, auth=authentication)
    response = response.json()
    repos_data = repos_data + response
    repos_fetched = len(response)
    print("Total repositories fetched: {}".format(repos_fetched))
    if (repos_fetched == 30):
        page_no = page_no + 1
        url = data['repos_url'] + '?page=' + str(page_no)
    else:
        break
```

I'll first explore just one repository's information and take a look at all the fields I can keep.

```
repos_data[0]
```

There are a number of things that we can keep track of here. I'll select the following:

1. id: Unique id for the repository.
2. name: The name of the repository.
3. description: The description of the repository.
4. created_at: The time and date when the repository was first created.
5. updated_at: The time and date when the repository was last updated.
6. login: Username of the owner of the repository.
7. license: The license type (if any).
8. has_wiki: A boolean that signifies if the repository has a wiki document.
9. forks_count: Total forks of the repository.
10. open_issues_count: Total issues opened in the repository.
11. stargazers_count: The total stars on the repository.
12. watchers_count: Total users watching the repository.

I'll also keep track of some URLs for further analysis, including:

1. url: The url of the repository.
2. commits_url: The url for all commits in the repository.
3. languages_url: The url for all languages in the repository.

For the commits URL, I'll remove the trailing template part inside the braces.

```
repos_information = []
for i, repo in enumerate(repos_data):
    data = []
    data.append(repo['id'])
    data.append(repo['name'])
    data.append(repo['description'])
    data.append(repo['created_at'])
    data.append(repo['updated_at'])
    data.append(repo['owner']['login'])
    data.append(repo['license']['name'] if repo['license'] != None else None)
    data.append(repo['has_wiki'])
    data.append(repo['forks_count'])
    data.append(repo['open_issues_count'])
    data.append(repo['stargazers_count'])
    data.append(repo['watchers_count'])
    data.append(repo['url'])
    data.append(repo['commits_url'].split("{")[0])
    data.append(repo['url'] + '/languages')
    repos_information.append(data)

repos_df = pd.DataFrame(repos_information,
                        columns=['Id', 'Name', 'Description', 'Created on', 'Updated on', 'Owner',
                                 'License', 'Includes wiki', 'Forks count', 'Issues count',
                                 'Stars count', 'Watchers count', 'Repo URL', 'Commits URL',
                                 'Languages URL'])
repos_df.head(10)
```

## Languages

For the languages of each repository, I'll iterate through each repo's `Languages URL` and fetch the corresponding data. I'll also store the result back in the dataframe.

```
for i in range(repos_df.shape[0]):
    response = requests.get(repos_df.loc[i, 'Languages URL'], auth=authentication)
    response = response.json()
    print(i, response)
    if response != {}:
        languages = []
        for key, value in response.items():
            languages.append(key)
        languages = ', '.join(languages)
        repos_df.loc[i, 'Languages'] = languages
    else:
        repos_df.loc[i, 'Languages'] = ""
```

I'll publish this data into a .csv file called **repos_info.csv**.

```
repos_df.to_csv('repos_info.csv', index=False)
```

## Commits

I'll now also create a dataset of all commits made so far.

```
response = requests.get(repos_df.loc[0, 'Commits URL'], auth=authentication)
response.json()
```

I'll save the id, date and message of each commit.

```
commits_information = []
for i in range(repos_df.shape[0]):
    url = repos_df.loc[i, 'Commits URL']
    page_no = 1
    while (True):
        response = requests.get(url, auth=authentication)
        response = response.json()
        print("URL: {}, commits: {}".format(url, len(response)))
        for commit in response:
            commit_data = []
            commit_data.append(repos_df.loc[i, 'Id'])
            commit_data.append(commit['sha'])
            commit_data.append(commit['commit']['committer']['date'])
            commit_data.append(commit['commit']['message'])
            commits_information.append(commit_data)
        if (len(response) == 30):
            page_no = page_no + 1
            url = repos_df.loc[i, 'Commits URL'] + '?page=' + str(page_no)
        else:
            break

commits_df = pd.DataFrame(commits_information, columns=['Repo Id', 'Commit Id', 'Date', 'Message'])
commits_df.head(5)
```

I'll publish this data into a .csv file called **commits_info.csv**.

```
commits_df.to_csv('commits_info.csv', index=False)
```
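Both the repositories and commits cells above repeat the same fetch-until-a-short-page loop. It could be factored into a small helper; this is only a sketch, not part of the notebook, and the `fetch` callable stands in for `requests.get(url, auth=...).json()`, which also makes the logic testable against a fake endpoint:

```python
def fetch_all(fetch, base_url, page_size=30):
    """Collect items from a paged endpoint until a short page is returned."""
    items, page_no = [], 1
    while True:
        batch = fetch(base_url + '?page=' + str(page_no))
        items += batch
        if len(batch) < page_size:
            return items
        page_no += 1

# fake endpoint: 70 items -> pages of 30, 30, 10
data = list(range(70))

def fake_fetch(url):
    page = int(url.split('=')[1])
    return data[(page - 1) * 30 : page * 30]

print(len(fetch_all(fake_fetch, 'https://api.example/repos')))  # 70
```

Like the original loop, if the item count is an exact multiple of the page size, one extra request returns an empty page and the loop stops cleanly.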
# Bigram

https://towardsdatascience.com/text-analysis-basics-in-python-443282942ec5

# Loading data

```
import pandas as pd
import numpy as np

# for convenience, put the data file in the same folder as this Jupyter notebook
filedata = 'discussion'
dataSB = pd.read_excel(filedata + ".xlsx", sheet_name="Sheet1")
dataSB.head()
```

# Lower case

```
# ------ Case Folding --------
# use the Pandas Series.str.lower() function
dataSB['textdata'] = dataSB['message'].str.lower()
print('Case Folding Result : \n')
print(dataSB['textdata'].head(5))

# in case of: AttributeError: 'float' object has no attribute 'replace'
#import ast
dataSB['textdata'] = dataSB['textdata'].astype(str)
```

# Tokenizing

Split the text into words and clean up symbols.

```
import string
import re  # regex library

# import word_tokenize & FreqDist from NLTK
from nltk.tokenize import word_tokenize
from nltk.probability import FreqDist

# ------ Tokenizing ---------
def remove_tweet_special(text):
    # remove tab, newline and backslash
    text = text.replace('\\t', " ").replace('\\n', " ").replace('\\u', " ").replace('\\', "")
    # remove non-ASCII (emoticons, Chinese words, etc.)
    text = text.encode('ascii', 'replace').decode('ascii')
    # remove mentions, links, hashtags
    text = ' '.join(re.sub(r"([@#][A-Za-z0-9]+)|(\w+:\/\/\S+)", " ", text).split())
    # remove incomplete URLs
    return text.replace("http://", " ").replace("https://", " ")

dataSB['textdata'] = dataSB['textdata'].apply(remove_tweet_special)

# remove numbers
def remove_number(text):
    return re.sub(r"\d+", "", text)

dataSB['textdata'] = dataSB['textdata'].apply(remove_number)

# remove leading & trailing whitespace
def remove_whitespace_LT(text):
    return text.strip()

dataSB['textdata'] = dataSB['textdata'].apply(remove_whitespace_LT)

# collapse multiple whitespace into a single space
def remove_whitespace_multiple(text):
    return re.sub(r'\s+', ' ', text)

dataSB['textdata'] = dataSB['textdata'].apply(remove_whitespace_multiple)

# remove single characters
def remove_singl_char(text):
    return re.sub(r"\b[a-zA-Z]\b", "", text)

dataSB['textdata'] = dataSB['textdata'].apply(remove_singl_char)
#dataSB['textdata']

def remove_special_bigram(text):
    text = text.replace("kolom komentar", " ").replace("jutaan rupiah", " ").replace("rp", " ")
    text = text.replace("apple amazon", " ")
    text = text.replace("pendiri apple", " ")
    return text

dataSB['textdataClean'] = dataSB['textdata'].apply(remove_special_bigram)

def remove_special_bigram2(text):
    text = text.replace("selamat malam", " ")
    text = text.replace("konsep prinsip", " ")
    text = text.replace("terima kasih", " ")
    text = text.replace("kesamaan konsep", " ")
    text = text.replace("dimiliki amazon", " ")
    text = text.replace("apple amazon", " ")
    text = text.replace("prinsip dimiliki", " ")
    return text

dataSB['textdataClean'] = dataSB['textdataClean'].apply(remove_special_bigram2)
```

# Processing

```
from textblob import TextBlob

dataSB['polarity'] = dataSB['textdataClean'].apply(lambda x: TextBlob(x).polarity)
dataSB['subjective'] = dataSB['textdataClean'].apply(lambda x: TextBlob(x).subjectivity)
dataSB[0:5]

from nltk.corpus import stopwords
stoplist = stopwords.words('bahasa') + ['though']  # stoplist

from sklearn.feature_extraction.text import CountVectorizer
c_vec = CountVectorizer(stop_words=stoplist, ngram_range=(2, 2))

# matrix of ngrams
ngrams = c_vec.fit_transform(dataSB['textdataClean'])
# count frequency of ngrams
count_values = ngrams.toarray().sum(axis=0)
# list of ngrams
vocab = c_vec.vocabulary_
df_ngram = pd.DataFrame(sorted([(count_values[i], k) for k, i in vocab.items()], reverse=True)
                        ).rename(columns={0: 'frequency', 1: 'bigram/trigram'})
df_ngram[0:40]
#list(df_ngram('bigram/trigram'))
```

# Save to Excel

```
filedisimpan = 'data_bigram.xlsx'
df_ngram[0:100].to_excel(filedisimpan, index=False, header=True)
```
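As a lightweight cross-check of the `CountVectorizer` bigram frequencies, the raw counts can also be reproduced with `collections.Counter` alone. This sketch uses toy documents and skips the stop-word filtering done above:

```python
from collections import Counter

def bigram_counts(texts):
    """Count word bigrams across a list of documents."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()
        # pair each token with its successor to form bigrams
        counts.update(zip(tokens, tokens[1:]))
    return counts

docs = ["apple amazon apple amazon", "pendiri apple amazon"]
print(bigram_counts(docs).most_common(1))  # [(('apple', 'amazon'), 3)]
```

This mirrors what `ngram_range=(2, 2)` counts, minus tokenization details and stop-word removal.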
# Chapter 8: Planning and Learning with Tabular Methods

## 1. Models and Planning

- **Model-based** methods:
  - require a model of the environment (DP, HS)
  - rely on **planning**
- **Model-free** methods:
  - do not require a model of the environment (MC, TD)
  - rely on **learning**
- The heart of both kinds of methods is the computation of value functions
- **Model**: anything that an agent can use to predict how the environment will respond to its actions
  - **Distribution models**: a description of all possibilities and their probabilities, $p(s',r \mid s,a) ~~~ \forall s,a,s',r$
  - **Sample models**: produce just one of the possibilities (sample experiences for a given $s,a$)
  - A model is used to *simulate* the environment and produce *simulated experience*
  - Distribution models are stronger; however, sample models are easier to obtain
- **Planning**: any computational process that takes a model as input and produces or improves a policy for interacting with the modeled environment
  - **state-space** planning
  - **plan-space** planning: difficult to apply to stochastic sequential decision problems

![planning](assets/8.1.planning.png)

- The state-space planning methods view:
  - compute value functions in order to improve the policy
  - apply backup operations to simulated experience

![state-space planning](assets/8.1.state-space-planning.png)

- *Planning* methods use simulated experience generated by a model
- *Learning* methods use real experience generated by the environment
- *Random-sample one-step tabular Q-planning*

![Q-planning](assets/8.1.q-planning.png)

## 2. Dyna: Integrated Planning, Acting, and Learning

- 2 roles of real experience:
  - **model learning**: improve the model
    - also called **indirect RL**
  - **direct RL**: directly improve the value function and policy

![Experience Relationship](assets/8.2.exp-relation.png)

- *Indirect RL* (model-based)
  - fuller use of a limited amount of experience, thus achieves a better policy with fewer environmental interactions
- *Direct RL* (model-free)
  - much simpler, not affected by biases of the designed model
- Planning, acting, model learning, and direct RL occur simultaneously and in parallel in Dyna agents
- Dyna architecture

![Dyna Architecture](assets/8.2.dyna-architecture.png)

- **Dyna-Q** algorithm
  - direct RL: step (d)
  - model learning and planning: steps (e) and (f)

![Dyna-Q](assets/8.2.dyna-q.png)

## 3. When the Model is Wrong

- The model may be incorrect because:
  - the environment is stochastic and only a limited number of samples have been observed
  - the model-learning function has generalized imperfectly
  - the environment has changed and its new behavior has not yet been observed
- When the model is incorrect, a suboptimal policy is computed
  - in some cases, this leads to discovery and correction of the modeling error
- The general problem is the conflict between exploration and exploitation
  - probably no solution is perfect
  - in practice, simple heuristics are often effective
- **Dyna-Q+** method:
  - if a state–action pair has not been visited in $\tau$ time steps
  - add a bonus reward: $r + \kappa\sqrt{\tau}$, for some small $\kappa$

## 4. Prioritized Sweeping

- Selecting start state–action pairs uniformly is usually not the best; planning should focus on particular state–action pairs
- Work back from any state whose value has changed
- Use a queue to maintain every state–action pair whose estimated value would change nontrivially if updated
  - prioritized by the size of the change
- Can waste lots of computation on low-probability transitions

![Prioritized Sweeping](assets/8.4.prioritized_sweeping.png)

## 5. Expected vs Sample Updates

- One-step updates vary primarily along 3 binary dimensions:
  - state values or action values
  - optimal policy or arbitrary given policy
  - expected updates or sample updates

![Backup Diagrams for one-step](assets/8.5.backup-diagrams.png)

- Expected updates are better but require more computation
- Let $b$ be the *branching factor*; an expected update requires roughly $b$ times as much computation as a sample update
- In large problems, sample updates are preferable

![Expected vs Sample updates](assets/8.5.expected_vs_sample.png)

## 6. Trajectory Sampling

- **Trajectory sampling**: simulate explicit individual trajectories and perform updates at the states or state–action pairs encountered along the way
- Seems both efficient and elegant
- Sampling according to the **on-policy** distribution:
  - faster planning initially but retarded planning in the long run
  - in the long run it may hurt, since sampling other states may be useful
  - for large problems it can be a great advantage

## 7. Real-time Dynamic Programming

- *RTDP*
- An on-policy trajectory-sampling version of the value-iteration algorithm of DP
- An example of an asynchronous DP algorithm
- Allows completely skipping states that cannot be reached by the given policy from any of the start states (*irrelevant* states)
- Can find an optimal policy on the relevant states without visiting every state, unlike *Sarsa*
  - a great advantage for very large state sets
- Selects a greedy action
  - the value function approaches the optimal value function $v_*$
  - the policy used by the agent to generate trajectories approaches an optimal policy
- Strongly focused on subsets of the states that are relevant to the problem's objective
- Reduced the computation required by sweep-based value iteration by about 50%

## 8. Planning at Decision Time

- 2 ways of planning:
  - **Background planning**:
    - used to gradually improve a policy or value function on the basis of simulated experience obtained from a model (such as DP and Dyna)
    - not focused on the current state
  - **Decision-time planning**:
    - begun, and completed, after encountering each new state $S_t$
    - focused on a particular state
- In general, the two can be mixed
- Decision-time planning is most useful in applications in which fast responses are not required

## 9. Heuristic Search

- A decision-time planning method
- For each state encountered, a large tree of possible continuations is considered
- An approximate value function is applied to the leaf nodes, then backed up toward the current state at the root
- Backing up is just the same as in the expected updates with maxes ($v_*$, $q_*$)
- Backing up stops at the state–action nodes for the current state
- Once the backed-up values of these nodes are computed:
  - the best of them is chosen as the current action
  - all backed-up values are discarded
- Can be viewed as an extension of the idea of a greedy policy beyond a single step
- Searching deeper than one step is done to obtain better action selections
  - the deeper the search, the more computation is required, and the slower the response time
- Can be very effective because of its smart focusing on the states and actions that might immediately follow the current state
- Method of heuristic search:
  - construct a search tree
  - perform the individual one-step updates from the bottom up

![Heuristic Search](assets/8.9.heuristic-search.png)

## 10. Rollout Algorithms

- A decision-time planning algorithm based on MC control
- Simulates trajectories that all begin at the current environment state
- Estimates action values $q_\pi$ by averaging the returns of many simulated trajectories
- The action with the highest estimated value is executed
- The goal:
  - not to estimate a complete optimal action-value function $q_*$ or a complete action-value function $q_\pi$ for a given policy $\pi$
  - produce MC estimates of action values only for the current state and for a given policy — the **rollout policy**
  - improve upon the rollout policy, not to find an optimal policy
- The better the rollout policy and the more accurate the value estimates, the better the policy produced
- There is an important tradeoff: better rollout policies require more time to simulate enough trajectories
  - run many trials in parallel on separate processors
  - truncate the simulated trajectories, correcting the truncated returns by means of a stored evaluation function
- Not a learning algorithm: does not maintain long-term memories of values or policies

## 11. Monte Carlo Tree Search

- A successful example of decision-time planning
- A rollout algorithm enhanced by the addition of a means for accumulating value estimates from MC simulations
- Used in games, and in single-agent problems with a simple model allowing fast multistep simulation
- Executed after encountering each new state, to select an action for that state
- Each execution is an iterative process that simulates many trajectories starting from the current state
- The core idea is to focus multiple simulations starting at the current state
- Benefits from online, incremental, sample-based value estimation and policy improvement
- Avoids the problem of globally approximating an action-value function while retaining the benefit of using past experience to guide exploration

## 12. Summary

- 3 key ideas in common:
  - estimate value functions
  - operate by backing up values along actual or possible state trajectories
  - follow the general strategy of GPI (*generalized policy iteration*)

![Space of RL](assets/8.11.space-of-rl.png)

- 3rd dimension: *on-policy* or *off-policy*
- Most important dimension: **function approximation**
  - covered in Part II of the book
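The Dyna-Q loop from Section 2 — direct RL update (d), model learning (e), and planning (f) — can be sketched in tabular form. This is a minimal sketch on a hypothetical deterministic corridor environment; all names and parameter values are illustrative, not taken from the figures:

```python
import random

random.seed(0)

# Hypothetical corridor: states 0..4, reward 1 on reaching the goal state 4
N_STATES = 5
GOAL = N_STATES - 1
ACTIONS = (-1, +1)                      # move left / move right
ALPHA, GAMMA, EPS, N_PLAN = 0.1, 0.95, 0.1, 10

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                              # (s, a) -> (r, s'); assumes determinism

def step(s, a):
    """Environment step: clip movement to the corridor, reward 1 at the goal."""
    s2 = min(max(s + a, 0), GOAL)
    return (1.0 if s2 == GOAL else 0.0), s2

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection (random tie-break)
        q_left, q_right = Q[(s, -1)], Q[(s, +1)]
        if random.random() < EPS or q_left == q_right:
            a = random.choice(ACTIONS)
        else:
            a = +1 if q_right > q_left else -1
        r, s2 = step(s, a)
        # (d) direct RL: one-step tabular Q-learning update
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        # (e) model learning (deterministic environment assumed)
        model[(s, a)] = (r, s2)
        # (f) planning: n updates on randomly sampled remembered pairs
        for _ in range(N_PLAN):
            (ps, pa), (pr, ps2) = random.choice(list(model.items()))
            Q[(ps, pa)] += ALPHA * (pr + GAMMA * max(Q[(ps2, b)] for b in ACTIONS)
                                    - Q[(ps, pa)])
        s = s2

# Greedy policy in each non-goal state: should be 'move right' (+1) everywhere
policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(GOAL)]
print(policy)   # → [1, 1, 1, 1]
```

Setting `N_PLAN = 0` recovers plain one-step Q-learning; increasing it lets the learned model propagate the goal reward back through the corridor with far fewer real environment steps.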
**This notebook is an exercise in the [Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/data-leakage).**

---

Most people find target leakage very tricky until they've thought about it for a long time.

So, before trying to think about leakage in the housing price example, we'll go through a few examples in other applications. Things will feel more familiar once you come back to a question about house prices.

# Setup

The questions below will give you feedback on your answers. Run the following cell to set up the feedback system.

```
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex7 import *
print("Setup Complete")
```

# Step 1: The Data Science of Shoelaces

Nike has hired you as a data science consultant to help them save money on shoe materials. Your first assignment is to review a model one of their employees built to predict how many shoelaces they'll need each month. The features going into the machine learning model include:

- The current month (January, February, etc)
- Advertising expenditures in the previous month
- Various macroeconomic features (like the unemployment rate) as of the beginning of the current month
- The amount of leather they ended up using in the current month

The results show the model is almost perfectly accurate if you include the feature about how much leather they used. But it is only moderately accurate if you leave that feature out. You realize this is because the amount of leather they use is a perfect indicator of how many shoes they produce, which in turn tells you how many shoelaces they need.

Do you think the _leather used_ feature constitutes a source of data leakage? If your answer is "it depends," what does it depend on?

After you have thought about your answer, check it against the solution below.
```
# Check your answer (Run this code cell to receive credit!)
q_1.check()
```

# Step 2: Return of the Shoelaces

You have a new idea. You could use the amount of leather Nike ordered (rather than the amount they actually used) leading up to a given month as a predictor in your shoelace model.

Does this change your answer about whether there is a leakage problem? If you answer "it depends," what does it depend on?

```
# Check your answer (Run this code cell to receive credit!)
q_2.check()
```

# Step 3: Getting Rich With Cryptocurrencies?

You saved Nike so much money that they gave you a bonus. Congratulations.

Your friend, who is also a data scientist, says he has built a model that will let you turn your bonus into millions of dollars. Specifically, his model predicts the price of a new cryptocurrency (like Bitcoin, but a newer one) one day ahead of the moment of prediction. His plan is to purchase the cryptocurrency whenever the model says the price of the currency (in dollars) is about to go up.

The most important features in his model are:

- Current price of the currency
- Amount of the currency sold in the last 24 hours
- Change in the currency price in the last 24 hours
- Change in the currency price in the last 1 hour
- Number of new tweets in the last 24 hours that mention the currency

The value of the cryptocurrency in dollars has fluctuated up and down by over $\$$100 in the last year, and yet his model's average error is less than $\$$1. He says this is proof his model is accurate, and you should invest with him, buying the currency whenever the model says it is about to go up.

Is he right? If there is a problem with his model, what is it?

```
# Check your answer (Run this code cell to receive credit!)
q_3.check()
```

# Step 4: Preventing Infections

An agency that provides healthcare wants to predict which patients from a rare surgery are at risk of infection, so it can alert the nurses to be especially careful when following up with those patients.
You want to build a model. Each row in the modeling dataset will be a single patient who received the surgery, and the prediction target will be whether they got an infection.

Some surgeons may do the procedure in a manner that raises or lowers the risk of infection. But how can you best incorporate the surgeon information into the model?

You have a clever idea.

1. Take all surgeries by each surgeon and calculate the infection rate among those surgeons.
2. For each patient in the data, find out who the surgeon was and plug in that surgeon's average infection rate as a feature.

Does this pose any target leakage issues? Does it pose any train-test contamination issues?

```
# Check your answer (Run this code cell to receive credit!)
q_4.check()
```

# Step 5: Housing Prices

You will build a model to predict housing prices. The model will be deployed on an ongoing basis, to predict the price of a new house when a description is added to a website. Here are four features that could be used as predictors.

1. Size of the house (in square meters)
2. Average sales price of homes in the same neighborhood
3. Latitude and longitude of the house
4. Whether the house has a basement

You have historic data to train and validate the model. Which of the features is most likely to be a source of leakage?

```
# Fill in the line below with one of 1, 2, 3 or 4.
potential_leakage_feature = 2

# Check your answer
q_5.check()

#q_5.hint()
#q_5.solution()
```

# Conclusion

Leakage is a hard and subtle issue. You should be proud if you picked up on the issues in these examples.

Now you have the tools to make highly accurate models, and pick up on the most difficult practical problems that arise with applying these models to solve real problems.

There is still a lot of room to build knowledge and experience. Try out a [Competition](https://www.kaggle.com/competitions) or look through our [Datasets](https://kaggle.com/datasets) to practice your new skills.

Again, Congratulations!
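The surgeon idea from Step 4 can be made concrete: if the per-surgeon infection rate is computed over *all* rows, including the rows later held out for validation, then each held-out row's own label leaks into its feature. A minimal sketch with hypothetical toy data (the surgeons and labels below are made up for illustration):

```python
# Toy rows: (surgeon, infected) pairs — hypothetical values for illustration
rows = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)]
train, test = rows[:4], rows[4:]

def surgeon_rates(data):
    """Mean infection rate per surgeon over the given rows."""
    totals = {}
    for surgeon, infected in data:
        n, k = totals.get(surgeon, (0, 0))
        totals[surgeon] = (n + 1, k + infected)
    return {s: k / n for s, (n, k) in totals.items()}

leaky = surgeon_rates(rows)    # computed on ALL rows -> test labels leak in
clean = surgeon_rates(train)   # computed on training rows only

print(leaky["B"], clean["B"])
```

Surgeon B's "leaky" rate differs from the "clean" one precisely because the test rows' labels were used to compute it — that is both target leakage (the feature encodes the row's own label) and train-test contamination (validation data influenced a training-time statistic).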
---

*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161289) to chat with other Learners.*
# Bagging Double Deep Q Learning - A simple ambulance dispatch point allocation model

## Reinforcement learning introduction

### RL involves:

* Trial and error search
* Receiving and maximising reward (often delayed)
* Linking state -> action -> reward
* Must be able to sense something of their environment
* Involves uncertainty in sensing and linking action to reward
* Learning -> improved choice of actions over time
* All models find a way to balance best predicted action vs. exploration

### Elements of RL

* *Environment*: all observable and unobservable information relevant to us
* *Observation*: sensing the environment
* *State*: the perceived (or perceivable) environment
* *Agent*: senses environment, decides on action, receives and monitors rewards
* *Action*: may be discrete (e.g. turn left) or continuous (accelerator pedal)
* *Policy* (how to link state to action; often based on probabilities)
* *Reward signal*: aim is to accumulate maximum reward over time
* *Value function* of a state: prediction of likely/possible long-term reward
* *Q*: prediction of likely/possible long-term reward of an *action*
* *Advantage*: the difference in Q between actions in a given state (sums to zero for all actions)
* *Model* (optional): a simulation of the environment

### Types of model

* *Model-based*: have model of environment (e.g. a board game)
* *Model-free*: used when environment not fully known
* *Policy-based*: identify best policy directly
* *Value-based*: estimate value of a decision
* *Off-policy*: can learn from historic data from other agent
* *On-policy*: requires active learning from current decisions

## Key DQN components

<img src="./images/dqn_components.png" width="700"/>

## General method for Q learning:

Overall aim is to create a neural network that predicts Q. Improvement comes from improved accuracy in predicting 'current' understood Q, and in revealing more about Q as knowledge is gained (some rewards only discovered after time).
<img src="./images/dqn_process.png" width="600"/>

Target networks are used to stabilise models, and are only updated at intervals. Changes to Q values may lead to changes in closely related states (i.e. states close to the one we are in at the time) and, as the network tries to correct for errors, it can become unstable and suddenly lose significant performance. Target networks (e.g. to assess Q) are updated only infrequently (or gradually), so do not have this instability problem.

## Training networks

Double DQN contains two networks. This amendment, from simple DQN, is to decouple training of Q for the current state from the target Q derived from the next state, which are closely correlated when comparing input features.

The *policy network* is used to select the action (the action with best predicted Q) when playing the game.

When training, the predicted best *action* (best predicted Q) is taken from the *policy network*, but the *policy network* is updated using the predicted Q value of the next state from the *target network* (which is updated from the policy network less frequently). So, when training, the action is selected using Q values from the *policy network*, but the *policy network* is updated to better predict the Q value of that action from the *target network*. The *policy network* is copied across to the *target network* every *n* steps (e.g. 1000).

<img src="./images/dqn_training.png" width="700"/>

## Bagging (Bootstrap Aggregation)

Each network is trained from the same memory, but the networks have different starting weights and are trained on different bootstrap samples from that memory. In this example actions are chosen randomly from each of the networks (an alternative could be to take the most common action recommended by the networks, or an average output). This bagging method may also be used to obtain some measure of uncertainty of action by looking at the distribution of actions recommended by the different nets.
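The read-out options just described — pick one net's action at random, or take a majority vote and treat the level of agreement as a crude uncertainty measure — can be sketched in plain Python. The `actions` list below is a hypothetical stand-in for the per-network greedy choices, not output from the trained networks:

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical stand-ins for argmax-Q choices from 5 bagged policy nets
actions = [2, 2, 3, 2, 2]

# Option used in this notebook: pick one net's action at random
chosen = random.choice(actions)

# Alternative: majority vote, with agreement as a crude uncertainty measure
votes = Counter(actions)
majority_action, count = votes.most_common(1)[0]
agreement = count / len(actions)    # 1.0 = all nets agree

print(majority_action, agreement)   # → 2 0.8
```

Low agreement flags states where the ensemble is uncertain, which is exactly where extra exploration is most likely to pay off.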
Bagging may also be used to aid exploration during stages where the networks are providing different suggested actions.

<img src="./images/bagging.png" width="800"/>

## References

Double DQN: van Hasselt H, Guez A, Silver D. (2015) Deep Reinforcement Learning with Double Q-learning. arXiv:150906461 http://arxiv.org/abs/1509.06461

Bagging: Osband I, Blundell C, Pritzel A, et al. (2016) Deep Exploration via Bootstrapped DQN. arXiv:160204621 http://arxiv.org/abs/1602.04621

## Code structure

<img src="./images/dqn_program_structure.png" width="700"/>

```
################################################################################
#                              1 Import packages                               #
################################################################################

from amboworld.environment import Env

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
import torch
import torch.nn as nn
import torch.optim as optim

# Use a double ended queue (deque) for memory
# When memory is full, this will replace the oldest value with the new one
from collections import deque

# Suppress all warnings (e.g. deprecation warnings) for regular use
import warnings
warnings.filterwarnings("ignore")

################################################################################
#                           2 Define model parameters                          #
################################################################################

# Set whether to display on screen (slows model)
DISPLAY_ON_SCREEN = False
# Discount rate of future rewards
GAMMA = 0.99
# Learning rate for neural network
LEARNING_RATE = 0.003
# Maximum number of game steps (state, action, reward, next state) to keep
MEMORY_SIZE = 10000000
# Sample batch size for policy network update
BATCH_SIZE = 5
# Number of game steps to play before starting training (all random actions)
REPLAY_START_SIZE = 50000
# Number of steps between policy -> target network update
SYNC_TARGET_STEPS = 1000
# Exploration rate (epsilon) is probability of choosing a random action
EXPLORATION_MAX = 1.0
EXPLORATION_MIN = 0.05
# Reduction in epsilon with each game step
EXPLORATION_DECAY = 0.9999
# Training episodes
TRAINING_EPISODES = 50
# Set number of parallel networks
NUMBER_OF_NETS = 5

# Results filename
RESULTS_NAME = 'bagging_ddqn'

# SIM PARAMETERS
RANDOM_SEED = 42
SIM_DURATION = 5000
NUMBER_AMBULANCES = 9
NUMBER_INCIDENT_POINTS = 3
INCIDENT_RADIUS = 2
NUMBER_DISPTACH_POINTS = 25
AMBOWORLD_SIZE = 50
INCIDENT_INTERVAL = 20
EPOCHS = 2
AMBO_SPEED = 60
AMBO_FREE_FROM_HOSPITAL = False

################################################################################
#                     3 Define DQN (Deep Q Network) class                      #
#                  (Used for both policy and target nets)                      #
################################################################################

class DQN(nn.Module):
    """Deep Q Network. Used for both policy (action) and target (Q) networks."""

    def __init__(self, observation_space, action_space):
        """Constructor method. Set up neural nets."""
        # Neurons per hidden layer = 2 * max of observations or actions
        neurons_per_layer = 2 * max(observation_space, action_space)
        # Set starting exploration rate
        self.exploration_rate = EXPLORATION_MAX
        # Set up action space (choice of possible actions)
        self.action_space = action_space

        super(DQN, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(observation_space, neurons_per_layer),
            nn.ReLU(),
            nn.Linear(neurons_per_layer, neurons_per_layer),
            nn.ReLU(),
            nn.Linear(neurons_per_layer, neurons_per_layer),
            nn.ReLU(),
            nn.Linear(neurons_per_layer, action_space)
        )

    def act(self, state):
        """Act either randomly or by predicting action that gives max Q"""
        # Act randomly if random number < exploration rate
        if np.random.rand() < self.exploration_rate:
            action = random.randrange(self.action_space)
        else:
            # Otherwise get predicted Q values of actions
            q_values = self.net(torch.FloatTensor(state))
            # Get index of action with best Q
            action = np.argmax(q_values.detach().numpy()[0])
        return action

    def forward(self, x):
        """Forward pass through network"""
        return self.net(x)

################################################################################
#                   4 Define policy net training function                      #
################################################################################

def optimize(policy_net, target_net, memory):
    """
    Update model by sampling from memory.
    Uses policy network to predict best action (best Q).
    Uses target network to provide target of Q for the selected next action.
    """

    # Do not try to train model if memory is less than required batch size
    if len(memory) < BATCH_SIZE:
        return

    # Reduce exploration rate (exploration rate is stored in policy net)
    policy_net.exploration_rate *= EXPLORATION_DECAY
    policy_net.exploration_rate = max(EXPLORATION_MIN,
                                      policy_net.exploration_rate)

    # Sample a random batch from memory
    batch = random.sample(memory, BATCH_SIZE)

    for state, action, reward, state_next, terminal in batch:
        state_action_values = policy_net(torch.FloatTensor(state))
        # Get target Q for policy net update
        if not terminal:
            # For non-terminal actions get Q from policy net
            expected_state_action_values = policy_net(torch.FloatTensor(state))
            # Detach next state values from gradients to prevent updates
            expected_state_action_values = expected_state_action_values.detach()
            # Get next state action with best Q from the policy net (double DQN)
            policy_next_state_values = policy_net(torch.FloatTensor(state_next))
            policy_next_state_values = policy_next_state_values.detach()
            best_action = np.argmax(policy_next_state_values[0].numpy())
            # Get target net next state
            next_state_action_values = target_net(torch.FloatTensor(state_next))
            # Use detach again to prevent target net gradients being updated
            next_state_action_values = next_state_action_values.detach()
            best_next_q = next_state_action_values[0][best_action].numpy()
            updated_q = reward + (GAMMA * best_next_q)
            expected_state_action_values[0][action] = updated_q
        else:
            # For terminal actions Q = reward (-1)
            expected_state_action_values = policy_net(torch.FloatTensor(state))
            # Detach values from gradients to prevent gradient update
            expected_state_action_values = expected_state_action_values.detach()
            # Set Q for all actions to reward (-1)
            expected_state_action_values[0] = reward

        # Set net to training mode
        policy_net.train()
        # Reset net gradients
        policy_net.optimizer.zero_grad()
        # Calculate loss
        loss_v = nn.MSELoss()(state_action_values, expected_state_action_values)
        # Backpropagate loss
        loss_v.backward()
        # Update network gradients
        policy_net.optimizer.step()

    return

################################################################################
#                           5 Define memory class                              #
################################################################################

class Memory():
    """
    Replay memory used to train model.
    Limited length memory (using deque, double ended queue from collections).
    - When memory full deque replaces oldest data with newest.
    Holds state, action, reward, next state, and episode done.
    """

    def __init__(self):
        """Constructor method to initialise replay memory"""
        self.memory = deque(maxlen=MEMORY_SIZE)

    def remember(self, state, action, reward, next_state, done):
        """state/action/reward/next_state/done"""
        self.memory.append((state, action, reward, next_state, done))

################################################################################
#                     6 Define results plotting function                       #
################################################################################

def plot_results(run, exploration, score, mean_call_to_arrival,
                 mean_assignment_to_arrival):
    """Plot and report results at end of run"""

    # Set up chart (ax1 and ax2 share x-axis to combine two plots on one graph)
    fig = plt.figure(figsize=(6, 6))
    ax1 = fig.add_subplot(111)
    ax2 = ax1.twinx()

    # Plot results
    lns1 = ax1.plot(
        run, exploration, label='exploration', color='g', linestyle=':')
    lns2 = ax2.plot(
        run, mean_call_to_arrival, label='call to arrival', color='r')
    lns3 = ax2.plot(
        run, mean_assignment_to_arrival, label='assignment to arrival',
        color='b', linestyle='--')

    # Get combined legend
    lns = lns1 + lns2 + lns3
    labs = [l.get_label() for l in lns]
    ax1.legend(lns, labs, loc='upper center', bbox_to_anchor=(0.5, -0.1),
               ncol=3)

    # Set axes
    ax1.set_xlabel('run')
    ax1.set_ylabel('exploration')
    ax2.set_ylabel('Response time')

    filename = 'output/' + RESULTS_NAME + '.png'
    plt.savefig(filename, dpi=300)
    plt.show()

################################################################################
#                              7 Main program                                  #
################################################################################

def qambo():
    """Main program loop"""

    ############################################################################
    #                          8 Set up environment                            #
    ############################################################################

    # Set up game environment
    sim = Env(
        random_seed=RANDOM_SEED,
        duration_incidents=SIM_DURATION,
        number_ambulances=NUMBER_AMBULANCES,
        number_incident_points=NUMBER_INCIDENT_POINTS,
        incident_interval=INCIDENT_INTERVAL,
        number_epochs=EPOCHS,
        number_dispatch_points=NUMBER_DISPTACH_POINTS,
        incident_range=INCIDENT_RADIUS,
        max_size=AMBOWORLD_SIZE,
        ambo_kph=AMBO_SPEED,
        ambo_free_from_hospital=AMBO_FREE_FROM_HOSPITAL
    )

    # Get number of observations returned for state
    observation_space = sim.observation_size
    # Get number of actions possible
    action_space = sim.action_number

    ############################################################################
    #                     9 Set up policy and target nets                      #
    ############################################################################

    # Set up policy, target, and best-so-far neural nets
    policy_nets = [DQN(observation_space, action_space)
                   for i in range(NUMBER_OF_NETS)]
    target_nets = [DQN(observation_space, action_space)
                   for i in range(NUMBER_OF_NETS)]
    best_nets = [DQN(observation_space, action_space)
                 for i in range(NUMBER_OF_NETS)]

    # Set optimizer and copy weights from policy_net to target
    for i in range(NUMBER_OF_NETS):
        # Set optimizer
        policy_nets[i].optimizer = optim.Adam(
            params=policy_nets[i].parameters(), lr=LEARNING_RATE)
        # Copy weights from policy -> target
        target_nets[i].load_state_dict(policy_nets[i].state_dict())
        # Set target net to eval rather than training mode
        target_nets[i].eval()

    ############################################################################
    #                            10 Set up memory                              #
    ############################################################################

    # Set up memory
    memory = Memory()
############################################################################ # 11 Set up + start training loop # ############################################################################ # Set up run counter and learning loop run = 0 all_steps = 0 continue_learning = True best_reward = -np.inf # Set up list for results results_run = [] results_exploration = [] results_score = [] results_mean_call_to_arrival = [] results_mean_assignment_to_arrival = [] # Continue repeating games (episodes) until target complete while continue_learning: ######################################################################## # 12 Play episode # ######################################################################## # Increment run (episode) counter run += 1 ######################################################################## # 13 Reset game # ######################################################################## # Reset game environment and get first state observations state = sim.reset() # Reset total reward and rewards list total_reward = 0 rewards = [] # Reshape state into 2D array with state obsverations as first 'row' state = np.reshape(state, [1, observation_space]) # Continue loop until episode complete while True: #################################################################### # 14 Game episode loop # #################################################################### #################################################################### # 15 Get action # #################################################################### # Get actions to take (use evalulation mode) actions = [] for i in range(NUMBER_OF_NETS): policy_nets[i].eval() actions.append(policy_nets[i].act(state)) # Randomly choose an action from net actions random_index = random.randint(0, NUMBER_OF_NETS - 1) action = actions[random_index] #################################################################### # 16 Play action (get S', R, T) # 
#################################################################### # Act state_next, reward, terminal, info = sim.step(action) total_reward += reward # Update trackers rewards.append(reward) # Reshape state into 2D array with state observations as first 'row' state_next = np.reshape(state_next, [1, observation_space]) # Update display if needed if DISPLAY_ON_SCREEN: sim.render() #################################################################### # 17 Add S/A/R/S/T to memory # #################################################################### # Record state, action, reward, new state & terminal memory.remember(state, action, reward, state_next, terminal) # Update state state = state_next #################################################################### # 18 Check for end of episode # #################################################################### # Actions to take if end of game episode if terminal: # Get exploration rate exploration = policy_nets[0].exploration_rate # Clear print row content clear_row = '\r' + ' ' * 79 + '\r' print(clear_row, end='') print(f'Run: {run}, ', end='') print(f'Exploration: {exploration: .3f}, ', end='') average_reward = np.mean(rewards) print(f'Average reward: {average_reward:4.1f}, ', end='') mean_assignment_to_arrival = np.mean(info['assignment_to_arrival']) print(f'Mean assignment to arrival: {mean_assignment_to_arrival:4.1f}, ', end='') mean_call_to_arrival = np.mean(info['call_to_arrival']) print(f'Mean call to arrival: {mean_call_to_arrival:4.1f}, ', end='') demand_met = info['fraction_demand_met'] print(f'Demand met {demand_met:0.3f}') # Add to results lists results_run.append(run) results_exploration.append(exploration) results_score.append(total_reward) results_mean_call_to_arrival.append(mean_call_to_arrival) results_mean_assignment_to_arrival.append(mean_assignment_to_arrival) # Save model if best reward total_reward = np.sum(rewards) if total_reward > best_reward: best_reward = total_reward # Copy weights to 
best net for i in range(NUMBER_OF_NETS): best_nets[i].load_state_dict(policy_nets[i].state_dict()) ################################################################ # 18b Check for end of learning # ################################################################ if run == TRAINING_EPISODES: continue_learning = False # End episode loop break #################################################################### # 19 Update policy net # #################################################################### # Avoid training model if memory is not of sufficient length if len(memory.memory) > REPLAY_START_SIZE: # Update policy net for i in range(NUMBER_OF_NETS): optimize(policy_nets[i], target_nets[i], memory.memory) ################################################################ # 20 Update target net periodically # ################################################################ # Use load_state_dict method to copy weights from policy net if all_steps % SYNC_TARGET_STEPS == 0: for i in range(NUMBER_OF_NETS): target_nets[i].load_state_dict( policy_nets[i].state_dict()) ############################################################################ # 21 Learning complete - plot and save results # ############################################################################ # Target reached. 
Plot results plot_results(results_run, results_exploration, results_score, results_mean_call_to_arrival, results_mean_assignment_to_arrival) # SAVE RESULTS run_details = pd.DataFrame() run_details['run'] = results_run run_details['exploration '] = results_exploration run_details['mean_call_to_arrival'] = results_mean_call_to_arrival run_details['mean_assignment_to_arrival'] = results_mean_assignment_to_arrival filename = 'output/' + RESULTS_NAME + '.csv' run_details.to_csv(filename, index=False) ############################################################################ # Test best model # ############################################################################ print() print('Test Model') print('----------') for i in range(NUMBER_OF_NETS): best_nets[i].eval() best_nets[i].exploration_rate = 0 # Set up results dictionary results = dict() results['call_to_arrival'] = [] results['assign_to_arrival'] = [] results['demand_met'] = [] # Replicate model runs for run in range(30): # Reset game environment and get first state observations state = sim.reset() state = np.reshape(state, [1, observation_space]) # Continue loop until episode complete while True: # Get actions to take (use evalulation mode) actions = [] for i in range(NUMBER_OF_NETS): actions.append(best_nets[i].act(state)) # Randomly choose an action from net actions random_index = random.randint(0, NUMBER_OF_NETS - 1) action = actions[random_index] # Act state_next, reward, terminal, info = sim.step(action) # Reshape state into 2D array with state observations as first 'row' state_next = np.reshape(state_next, [1, observation_space]) # Update state state = state_next if terminal: print(f'Run: {run}, ', end='') mean_assignment_to_arrival = np.mean(info['assignment_to_arrival']) print(f'Mean assignment to arrival: {mean_assignment_to_arrival:4.1f}, ', end='') mean_call_to_arrival = np.mean(info['call_to_arrival']) print(f'Mean call to arrival: {mean_call_to_arrival:4.1f}, ', end='') demand_met = 
info['fraction_demand_met'] print(f'Demand met: {demand_met:0.3f}') # Add to results results['call_to_arrival'].append(mean_call_to_arrival) results['assign_to_arrival'].append(mean_assignment_to_arrival) results['demand_met'].append(demand_met) # End episode loop break results = pd.DataFrame(results) filename = './output/results_' + RESULTS_NAME +'.csv' results.to_csv(filename, index=False) print() print(results.describe()) return run_details ######################## MODEL ENTRY POINT ##################################### # Run model and return last run results last_run = qambo() ```
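The action selection in the loops above delegates to the policy networks' `act` method together with an `exploration_rate`. The underlying epsilon-greedy idea can be sketched in plain Python — the function below is illustrative only, not the project's actual API:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    # With probability epsilon explore with a uniformly random action,
    # otherwise exploit the action with the highest Q-value
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon = 0 the choice is purely greedy
print(epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.0))  # → 1
```

During training, `epsilon` is typically decayed toward a small floor, which is why the printout above reports the current `Exploration` value each episode.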
![title](vw.png)

https://github.com/VowpalWabbit/vowpal_wabbit

webspam: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html

An implementation of stochastic gradient descent for linear models that can run on large volumes of data, because examples are loaded and processed sequentially. The main working interface is the shell.

```
!python phraug/split.py webspam_wc_normalized_trigram.svm \
    train_v.txt test_v.txt 0.8 dat_seed
```

### Input data format

[Label] [Importance] [Tag]|Namespace Features |Namespace Features ... |Namespace Features

Namespace=String[:Value]

Features=(String[:Value] )*

where [] denotes optional elements and (...)* means repetition an arbitrary number of times.

- Label is the target variable
- Importance is the weight of the example
- Tag is an arbitrary string without spaces that acts as a "name" for the example and is preserved when predictions are written out
- Namespace serves to group features. When used in command-line arguments, namespaces are referred to by their first letter
- Features are the features of the example. If a feature is a string, it is hashed. The default feature value is 1, but it can be overridden by appending a value after a colon. All features not present in the line are treated as 0.
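The input format above can also be generated programmatically. Below is a minimal, hypothetical helper (not part of the original pipeline) that renders a label, an optional importance and tag, and per-namespace feature dicts into one VW line:

```python
def to_vw_line(label, namespaces, importance=None, tag=None):
    """Render one example in Vowpal Wabbit input format.

    namespaces: dict mapping namespace name -> dict of feature -> value.
    A feature value of 1 is the VW default, so it is omitted.
    """
    head = [str(label)]
    if importance is not None:
        head.append(str(importance))
    if tag is not None:
        head.append(tag)
    parts = [" ".join(head)]
    for ns, feats in namespaces.items():
        toks = [ns] + [f if v == 1 else f"{f}:{v}" for f, v in feats.items()]
        parts.append(" ".join(toks))
    return "|".join(parts)

print(to_vw_line(1, {"text": {"hello": 1, "length": 5}}, importance=2.0, tag="ex1"))
# → 1 2.0 ex1|text hello length:5
```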
To bring the data into the required format, we can either create a separate file or write directly to stdin.

- `-b 29` — use 29 bits for hashing, i.e. the feature space is limited to 2^29 features
- `--cache_file` — build a binary cache file directly from the input data, to be reused later during training

```
!python phraug/libsvm2vw.py train_v.txt /dev/stdout \
    | vw -b 29 --cache_file train_v.cache
!python phraug/libsvm2vw.py test_v.txt /dev/stdout \
    | vw -b 29 --cache_file test_v.cache
```

### Running the training

Supported loss functions (--loss_function):

* squared: 1/2(y-a(x))^2
* classic — squared loss without the importance-weight-aware update
* quantile: \tau(a(x)-y)[y \leqslant a(x)]+(1-\tau)(y-a(x))[y \geqslant a(x)]
* logistic: log(1 + exp(-ya(x)))
* hinge: max(0, 1 - ya(x))

Regularization can also be added:

* --l1 value
* --l2 value

```
!vw --cache_file train_v.cache --loss_function logistic -b 29 -P 10000 --passes 10
```

The parameter controlling whether a validation (holdout) set is used:

* --holdout_off

```
!vw --cache_file train_v.cache --loss_function logistic -b 29 -P 10000 --passes 5 --holdout_off -f model_v_logistic
!vw --cache_file test_v.cache -i model_v_logistic -t -p preds.txt
```

Let's compute the metrics:

```
import sys
import numpy as np
from sklearn.metrics import accuracy_score as accuracy
from sklearn.metrics import roc_auc_score as AUC
from sklearn.metrics import confusion_matrix

y_file = "test_v.txt"
p_file = "preds.txt"

print("loading p...")
p = np.loadtxt(p_file)

y_predicted = np.ones((p.shape[0]))
y_predicted[p < 0] = -1

print("loading y...")
y = np.loadtxt(y_file, usecols=[0])

print("accuracy:", accuracy(y, y_predicted))
print("AUC:", AUC(y, p))
print()
print("confusion matrix:")
print(confusion_matrix(y, y_predicted))
```

Additional parameters:

* -q AB — adds all pairwise feature interactions, where the first feature is taken from namespace A and the second from namespace B
* --cubic ABC — the same for feature triples
* --ngram AN — generates n-grams for namespace A
* --skips AK — allows skips of length k inside the n-grams of namespace A

Automatic hyperparameter tuning can be done with **vw-hyperopt**

### Inspecting the model

* --readable_model produces the model in a human-readable format
* --invert_hash cannot work with a cache, but outputs readable feature names instead of hash indices
* --audit_regressor produces a description of the model with the features it uses, but requires a second run.

```
! vw -d test.vw -t -i small_model.vw --audit_regressor audit_regr

features = {}
with open("audit_regr", "r") as audit:
    for l in audit:
        features[l.split(':')[0][5:]] = float(l.split(':')[2].strip())
list(reversed(sorted(features, key=features.get)))
```

### Additional options

* --progressive_loss computes the loss for the current example before updating the coefficients on it. Loses its meaning when multiple passes are made.
* --bs enables bootstrapping

### Links

https://habr.com/ru/company/ods/blog/326418/

http://fastml.com/vowpal-wabbit-liblinear-sbm-and-streamsvm-compared/

on the meaning of importance weights: https://arxiv.org/abs/1011.1576

how bootstrapping is implemented: https://arxiv.org/abs/1312.5021
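The `-b 29` hashing discussed above can be sketched in pure Python. This is an illustrative approximation only — VW uses murmurhash internally, while the sketch below uses `hashlib` for determinism:

```python
import hashlib

def feature_index(name, b=29):
    # Map a feature name into a bucket in a 2**b-sized feature space.
    # Illustrative only: VW's real implementation uses murmurhash.
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h % (1 << b)

idx = feature_index("hello", b=29)
assert 0 <= idx < 2**29
```

Collisions between feature names are possible by construction; a larger `-b` reduces their frequency at the cost of a bigger model.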
``` from keras.models import Sequential from keras.layers import Dense from keras.wrappers.scikit_learn import KerasRegressor from sklearn.model_selection import cross_val_score from sklearn.model_selection import KFold from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline from keras.optimizers import Adam from keras.callbacks import EarlyStopping from sklearn.metrics import r2_score from matplotlib import pyplot as plt # load dataset # l= ["f1","f2","f3","f4","f5","f6","f7","f8","label"] df= pd.read_csv("C:/Users/mkahs/REPOSITORY/DeepLearning---Natural-Language-Processing/Arup.csv", header=None, names=["f1","f2","f3","f4","f5","f6","f7","f8","label"]) # df = pd.ExcelFile(r"C:/Users/mkahs/REPOSITORY/DeepLearning---Natural-Language-Processing/Residential-Building-Data-Set.xlsx") df.head() train = df.loc[0:629] # print(len(train)) # train.head() Xtrain = train.iloc[:,0:8] Ytrain = train.iloc[:,8] Xtrain.reset_index(drop=True, inplace=True) # Ytrain.head() test = df.loc[630:944] # print(len(test)) test.tail() Xtest = test.iloc[:,0:8] Ytest = test.iloc[:,8] Xtest.reset_index(drop=True, inplace=True) # Ytest.head() print(Xtest) scaler = StandardScaler() X_train = scaler.fit_transform(Xtrain.as_matrix()) y_train = scaler.fit_transform(Ytrain.as_matrix().reshape(-1, 1)) X_test = scaler.fit_transform(Xtest.as_matrix()) y_test = scaler.fit_transform(Ytest.as_matrix().reshape(-1, 1)) # Defines "deep" model and its structure model = Sequential() model.add(Dense(8, input_shape=(8,), activation='relu')) model.add(Dense(6, activation='relu')) model.add(Dense(4, activation='relu')) model.add(Dense(2, activation='linear')) # model.add(Dense(2, activation='linear')) model.add(Dense(1,)) model.compile(Adam(lr=0.003), 'mean_squared_error') # Pass several parameters to 'EarlyStopping' function and assigns it to 'earlystopper' earlystopper = EarlyStopping(monitor='val_loss', min_delta=0, patience=15, verbose=1, mode='auto') # Fits model over 2000 
iterations with 'earlystopper' callback, and assigns it to history history = model.fit(X_train, y_train, epochs = 2000, validation_split = 0.2,shuffle = True, verbose = 0, callbacks = [earlystopper]) y_test_pred = model.predict(X_test) # Calculates and prints r2 score of training and testing data # print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred))) print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred))) plt.plot(y_test,'bo',label='Actual') plt.plot(y_test_pred,'r',label='Predicted') print(Ytest) print(scaler.inverse_transform(y_test_pred)) import pandas as pd import matplotlib.pyplot as plt import numpy as np from keras.models import Sequential from keras.layers import Dense, Activation, Embedding, Flatten, LeakyReLU, BatchNormalization, Dropout from keras.activations import relu, sigmoid from keras.layers import LeakyReLU from sklearn.preprocessing import StandardScaler df= pd.read_csv("C:/Users/mkahs/REPOSITORY/DeepLearning---Natural-Language-Processing/Arup.csv", header=None, names=["f1","f2","f3","f4","f5","f6","f7","f8","label"]) df.head() train = df.loc[0:629] # print(len(train)) # train.head() Xtrain = train.iloc[:,0:8] Ytrain = train.iloc[:,8] Xtrain.reset_index(drop=True, inplace=True) # Ytrain.head() test = df.loc[630:944] # print(len(test)) test.tail() Xtest = test.iloc[:,0:8] Ytest = test.iloc[:,8] Xtest.reset_index(drop=True, inplace=True) # Ytest.head() print(Xtest) scaler = StandardScaler() X_train = scaler.fit_transform(Xtrain.as_matrix()) y_train = scaler.fit_transform(Ytrain.as_matrix().reshape(-1, 1)) X_test = scaler.fit_transform(Xtest.as_matrix()) y_test = scaler.fit_transform(Ytest.as_matrix().reshape(-1, 1)) from keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import GridSearchCV from keras.wrappers.scikit_learn import KerasRegressor def create_model(layers, activation): model = Sequential() for i, nodes in enumerate(layers): if i==0: 
model.add(Dense(nodes, input_dim=X_train.shape[1])) model.add(Activation(activation)) else: model.add(Dense(nodes)) model.add(Activation(activation)) model.add(Dense(2, activation='linear')) model.add(Dense(1,)) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) # model.compile(Adam(lr=0.003), 'mean_squared_error') return model # model=KerasClassifier(build_fn=create_model, verbose=0) # Pass several parameters to 'EarlyStopping' function and assigns it to 'earlystopper' # earlystopper = EarlyStopping(monitor='val_loss', min_delta=0, patience=15, verbose=1, mode='auto') model = KerasRegressor(build_fn=create_model, batch_size=32,epochs=120) # Fits model over 2000 iterations with 'earlystopper' callback, and assigns it to history # history = model.fit(X_train, y_train, epochs = 2000, validation_split = 0.2,shuffle = True, verbose = 0, # callbacks = [earlystopper]) model # layers = [[8,6,4],[8,8,6,6,4,4],[8,7,6,5,4,3,2],[8,8,8,4,4,4,2,2],[7,6,5,4,3],[8,8,8,8,8],[5,5,5,5,5]] layers = [[8,6,4],[8,8,6,6,4,4]] # activations = ['sigmoid', 'relu', 'tanh'] activations = ['relu', 'tanh'] param_grid = dict(layers=layers,activation=activations, batch_size=[20,50],epochs=[30]) grid = GridSearchCV(estimator=model,param_grid=param_grid) # Grid = GirdSearchCV() grid_result =grid.fit(X_train, y_train) [grid_result.best_score_, grid_result.best_params_] pred_y = grid.predict(X_test) from sklearn.metrics import r2_score print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, pred_y))) plt.plot(y_test,'bo',label='Actual') plt.plot(pred_y,'r',label='Predicted') print(Ytest) print(scaler.inverse_transform(pred_y)) ```
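One thing worth noting about the cells above: `fit_transform` is called on both the train and test splits, so the test set is standardized with its own statistics. The usual convention is to fit the scaler on the training split only and reuse those statistics. A minimal numpy sketch of that idea (values here are made up for illustration):

```python
import numpy as np

X_train = np.array([[1.0], [2.0], [3.0]])
X_test = np.array([[4.0]])

# Fit the standardization statistics on the training split only...
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)

# ...then apply the same statistics to both splits
X_train_s = (X_train - mu) / sigma
X_test_s = (X_test - mu) / sigma
```

With `sklearn.preprocessing.StandardScaler`, the equivalent pattern is `scaler.fit_transform(X_train)` followed by `scaler.transform(X_test)`.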
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

# Automated Machine Learning
_**Train Test Split and Handling Sparse Data**_

## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Train](#Train)
1. [Results](#Results)
1. [Test](#Test)

## Introduction
In this example we use scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) dataset to showcase how you can use AutoML to handle sparse data and specify custom cross-validation splits.

Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.

In this notebook you will learn how to:
1. Create an `Experiment` in an existing `Workspace`.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model.
4. Explore the results.
5. Test the best fitted model.

In addition this notebook showcases the following features
- Explicit train test splits
- Handling **sparse data** in the input

## Setup
As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
``` import logging import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.train.automl import AutoMLConfig ws = Workspace.from_config() # choose a name for the experiment experiment_name = 'sparse-data-train-test-split' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ``` ## Data ``` from sklearn.datasets import fetch_20newsgroups from sklearn.feature_extraction.text import HashingVectorizer from sklearn.model_selection import train_test_split remove = ('headers', 'footers', 'quotes') categories = [ 'alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space', ] data_train = fetch_20newsgroups(subset = 'train', categories = categories, shuffle = True, random_state = 42, remove = remove) X_train, X_valid, y_train, y_valid = train_test_split(data_train.data, data_train.target, test_size = 0.33, random_state = 42) vectorizer = HashingVectorizer(stop_words = 'english', alternate_sign = False, n_features = 2**16) X_train = vectorizer.transform(X_train) X_valid = vectorizer.transform(X_valid) summary_df = pd.DataFrame(index = ['No of Samples', 'No of Features']) summary_df['Train Set'] = [X_train.shape[0], X_train.shape[1]] summary_df['Validation Set'] = [X_valid.shape[0], X_valid.shape[1]] summary_df ``` ## Train Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment. |Property|Description| |-|-| |**task**|classification or regression| |**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>| |**iteration_timeout_minutes**|Time limit in minutes for each iteration.| |**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.| |**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.<br>**Note:** If input data is sparse, you cannot use *True*.| |**X**|(sparse) array-like, shape = [n_samples, n_features]| |**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.| |**X_valid**|(sparse) array-like, shape = [n_samples, n_features] for the custom validation set.| |**y_valid**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.| ``` automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', primary_metric = 'AUC_weighted', iteration_timeout_minutes = 60, iterations = 5, preprocess = False, verbosity = logging.INFO, X = X_train, y = y_train, X_valid = X_valid, y_valid = y_valid) ``` Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while. In this example, we specify `show_output = True` to print currently running iterations to the console. ``` local_run = experiment.submit(automl_config, show_output=True) local_run ``` ## Results #### Widget for Monitoring Runs The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete. **Note:** The widget displays a link at the bottom. 
Use this link to open a web interface to explore the individual run details. ``` from azureml.widgets import RunDetails RunDetails(local_run).show() ``` #### Retrieve All Child Runs You can also use SDK methods to fetch all the child runs and see individual metrics that we log. ``` children = list(local_run.get_children()) metricslist = {} for run in children: properties = run.get_properties() metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} metricslist[int(properties['iteration'])] = metrics rundata = pd.DataFrame(metricslist).sort_index(1) rundata ``` ### Retrieve the Best Model Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ``` best_run, fitted_model = local_run.get_output() ``` #### Best Model Based on Any Other Metric Show the run and the model which has the smallest `accuracy` value: ``` # lookup_metric = "accuracy" # best_run, fitted_model = local_run.get_output(metric = lookup_metric) ``` #### Model from a Specific Iteration Show the run and the model from the third iteration: ``` # iteration = 3 # best_run, fitted_model = local_run.get_output(iteration = iteration) ``` ## Test ``` # Load test data. from pandas_ml import ConfusionMatrix data_test = fetch_20newsgroups(subset = 'test', categories = categories, shuffle = True, random_state = 42, remove = remove) X_test = vectorizer.transform(data_test.data) y_test = data_test.target # Test our best pipeline. y_pred = fitted_model.predict(X_test) y_pred_strings = [data_test.target_names[i] for i in y_pred] y_test_strings = [data_test.target_names[i] for i in y_test] cm = ConfusionMatrix(y_test_strings, y_pred_strings) print(cm) cm.plot() ```
``` # # This MaterialsAutomated example shows how to overlay simulated Laue spots on # experimental data in an automated fashion. # # It utilizes an existing open-source Laue diffraction analysis toolkit, LaueTools: # https://gitlab.esrf.fr/micha/lauetools # # If the goal is to analyze a single or small number of data sets in isolation, # and manually, the LaueTools GUI can be used directly. # # # This cell defines the main function that does most of the work: calcLaue # It takes as input: # detdimxy: a 2-tuple of the number of pixels along the x and y axis of the detector # latticeparam: a list containing the unit cell size and shape [a,b,c,alpha,beta,gamma] # (in units of Angstrom and degrees) # Extinc: extinction conditions, i.e. which spots should be removed due to glide planes, # centering operators, etc. See full details in ApplyExtinctionrules in # https://gitlab.esrf.fr/micha/lauetools/-/blob/master/LaueTools/CrystalParameters.py # UB: Orientation matrix for the crystal grain. This tool assumes that the incident # beam is parallel to the x-axis. # dd: Shortest Sample-Detector distance in mm (default: 30 mm) # betgam: 2-tuple defining the beta and gamma angles for the detector; see # https://gitlab.esrf.fr/micha/lauetools/-/blob/master/LaueTools/LaueGeometry.py # for details; the most common need is to adjust gamma to reflect a rotation in the # plane of the detector. # Ecut: Minimum and maximum energies of allowed reflections in KeV [assumes x-rays]. # If data is neutron or electrons, give the range of X-ray energies that correspond # to the wavelength range of the neutron or electron source. 
# pixsize: Size of each pixel (including dead area) in mm (default: 0.15 mm) # trans: If True, transmission mode, if False, reflection mode (default: False) # # The basic algorithm is to compute the position on the detector for every possible hkl # in the range (-30,30) for each h,k,l component that is not extinct by symmetry, and then # only keep those that fall on the detector and fit the given energy window. # # Returns a 5-tuple of h,k,l,x,y (indices and detector pixel positions) # import numpy as np from LaueTools.lauecore import calcSpots_fromHKLlist import LaueTools.CrystalParameters as CP def calcLaue(detdimxy,latticeparam,Extinc=None,UB=np.identity(3),dd=30,betgam=(0.0,0.0),Ecut=(5.0,15.0),pixsize=0.15,trans=False): B0=CP.calc_B_RR(latticeparam) HKLs=[] for i in range(-30,30): for j in range(-30,30): for k in range(-30,30): HKLs.append([i,j,k]) HKLs=np.array(HKLs) HKLs=CP.ApplyExtinctionrules(HKLs,Extinc) dictCCD = {} dictCCD["dim"] = detdimxy # pixels x pixels dictCCD["CCDparam"] = [dd,detdimxy[0]/2,detdimxy[1]/2,*betgam] dictCCD["pixelsize"] = pixsize # pixel size if trans: dictCCD["kf_direction"] = 'X>0' # transmission else: dictCCD["kf_direction"] = 'X<0' # back-reflection spots = calcSpots_fromHKLlist(UB, B0, HKLs, dictCCD) (H, K, L, Qx, Qy, Qz, X, Y, twthe, chi, Energy) = spots subset = [] for i in range(0,len(X)): if X[i] >= 0 and X[i] < detdimxy[0] and Y[i] >= 0 and Y[i] < detdimxy[1] and Energy[i]>=Ecut[0] and Energy[i]<=Ecut[1]: subset.append([H[i],K[i],L[i],X[i],Y[i]]) HKLss = np.array(subset,dtype=np.int64)[:,0:3].T XYss = np.array(subset)[:,3:5].T return (HKLss[0],HKLss[1],HKLss[2],XYss[0],XYss[1]) # # Given an image and a set of laue spots, returns a new image # with the predicted spots overlaid on the image. 
# from PIL import ImageDraw def overlayLaue(iminp,spots,size=1,color="red"): im = iminp.copy() spots = np.array(spots).T draw = ImageDraw.Draw(im) for i in range(0,len(spots)): draw.ellipse((spots[i][3]-size,spots[i][4]-size,spots[i][3]+size,spots[i][4]+size),fill=color,outline=color) return im # # Example use of the above functions for HoFeO3 from the literature. # M. Shao, et al. J. Cryst. Growth 318, 947-50 (2011) # from PIL import Image import ase.io HoFeO3Laue = Image.open("HoFeO3.PNG") hfo = ase.io.read("HoFeO3.cif") # Align the incident beam perpindicular to the # b axis as proposed UB = np.array([[0,1,0],[1,0,0],[0,0,1]]) spots = calcLaue(HoFeO3Laue.size,hfo.get_cell_lengths_and_angles(),dd=32,UB=UB,betgam=(0,90)) overlayLaue(HoFeO3Laue,spots) # # Example use of the above functions for Nb3Br8 as provided # in issue #9. In this case, the raw Laue image is a flat # file containing 256x256 16-bit unsigned ints in big # endian format, and the crystal has c oriented along the # beam source axis, instead of a. 
# from PIL import Image import ase.io # Read 16-bit Laue image with open("Nb3Br8.hs2","r") as f: Nb3Br8Laue = np.array(np.fromfile(f,dtype='>u2',count=256*256,sep='').reshape((256,256)),dtype=np.uint16) Nb3Br8Laue = Image.fromarray(Nb3Br8Laue/20).convert("RGB") Nb3Br8 = ase.io.read("Nb3Br8.cif") # Orient c along incident beam UB=np.array([[0,0,1],[0,1,0],[1,0,0]]) spots = calcLaue(Nb3Br8Laue.size,Nb3Br8.get_cell_lengths_and_angles(),dd=12,UB=UB,betgam=(0,60),Ecut=(5,10)) overlayLaue(Nb3Br8Laue,spots) # # Example use of LaueTools to do automatic peak finding/extraction # from an 8-bit, RGB png input datafile # from PIL import Image import LaueTools.readmccd as rmccd # Convert to grayscale on load pic = Image.open("HoFeO3-PP.PNG").convert('L') # Invert image so bright points are spots nppic = np.array(pic) nppic = np.full(nppic.shape,255,dtype=np.uint8) - nppic nppic = np.array(nppic,dtype=np.uint16)*256 # Do peak search rv = rmccd.PeakSearch("",Data_for_localMaxima=nppic,fit_peaks_gaussian=0,PixelNearRadius=2,local_maxima_search_method=0,IntensityThreshold=0.7*np.amax(nppic)) overlayLaue(pic.convert("RGB"),np.vstack((rv[0][:,0],rv[0][:,0],rv[0][:,0],rv[0][:,0],rv[0][:,1]))) # # Example use of LaueTools to do automatic peak finding/extraction # from a 16-bit, measured counts input datafile # from PIL import Image import LaueTools.readmccd as rmccd # Read 16-bit Laue image with open("Nb3Br8.hs2","r") as f: nppic = np.array(np.fromfile(f,dtype='>u2',count=256*256,sep='').reshape((256,256)),dtype=np.uint16) # Do peak search rv = rmccd.PeakSearch("",Data_for_localMaxima=nppic,fit_peaks_gaussian=0,PixelNearRadius=2,local_maxima_search_method=0,IntensityThreshold=0.15*np.amax(nppic)) overlayLaue(Image.fromarray(nppic/20).convert("RGB"),np.vstack((rv[0][:,0],rv[0][:,0],rv[0][:,0],rv[0][:,0],rv[0][:,1]))) ```
# Python Comments

Comments are lines in computer programs that are ignored by compilers and interpreters. Including comments in programs makes code more readable for humans, as they provide some information or explanation about what each part of a program is doing.

In general, it is a good idea to write comments while you are writing or updating a program, as it is easy to forget your thought process later on, and comments written later may be less useful in the long term.

In Python, we use the hash (#) symbol to start writing a comment.

```
#Print Hello, world to console
print("Hello, world")
```

# Multi Line Comments

If we have comments that extend over multiple lines, one way of writing them is to use a hash (#) at the beginning of each line.

```
#This is a long comment
#and it extends
#Multiple lines
```

Another way of doing this is to use triple quotes, either ''' or """.

```
"""This is also a
perfect example of
multi-line comments"""
```

# DocString in python

Docstring is short for documentation string. It is a string that occurs as the first statement in a module, function, class, or method definition.

```
def double(num):
    """
    function to double the number
    """
    return 2 * num

print(double(10))
print(double.__doc__) #Docstring is available to us as the attribute __doc__ of the function
```

# Python Indentation

1. Most programming languages, like C, C++, and Java, use braces { } to define a block of code. Python uses indentation.
2. A code block (body of a function, loop, etc.) starts with indentation and ends with the first unindented line. The amount of indentation is up to you, but it must be consistent throughout that block.
3. Generally, four whitespaces are used for indentation and are preferred over tabs.

```
for i in range(10):
    print(i)
```

Indentation can be ignored in line continuation. But it's a good idea to always indent. It makes the code more readable.
```
if True:
    print("Machine Learning")
    c = "AAIC"

if True: print("Machine Learning"); c = "AAIC" #always add parentheses
```

# Python Statement

Instructions that a Python interpreter can execute are called statements. Examples:

```
a = 1 #single statement
```

# Multi-Line Statement

In Python, the end of a statement is marked by a newline character. But we can make a statement extend over multiple lines with the line continuation character (\).

```
a = 1 + 2 + 3 + \
    4 + 5 + 6 + \
    7 + 8
print(a)

#another way is
a = (1 + 2 + 3 +
     4 + 5 + 6 +
     7 + 8)
print(a)

a = 10; b = 20; c = 30 #put multiple statements in a single line using ;
```
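As noted in the docstring section above, docstrings can appear as the first statement of a class or method as well as a function; a short illustrative example:

```python
class Doubler:
    """Utility class that doubles numbers."""

    def double(self, num):
        """Return twice the given number."""
        return 2 * num

# Docstrings are available via the __doc__ attribute
print(Doubler.__doc__)         # → Utility class that doubles numbers.
print(Doubler.double.__doc__)  # → Return twice the given number.
```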
``` import pickle from misc import * import SYCLOP_env as syc from RL_brain_b import DeepQNetwork import matplotlib.pyplot as plt %matplotlib notebook import cv2 from scipy import misc import glob datapath='/home/bnapp/arivkindNet/video_datasets/dataset-corridor1_512_16/mav0/cam0/data/' images=[] max_image = 2 image_cnt = 0 for image_path in sorted(glob.glob(datapath+"*.png")): images.append( misc.imread(image_path)) image_cnt += 1 if image_cnt > max_image: break hp = HP() hp.mem_depth=1 hp.logmode = False # recorder = Recorder(n=4) images = read_images_from_path('../video_datasets/liron_images/*.jpg') images = [np.sum(1.0*uu, axis=2) for uu in images] images = [cv2.resize(uu, dsize=(256, 256-64), interpolation=cv2.INTER_AREA) for uu in images] scene = syc.Scene(frame_list=images) sensor = syc.Sensor() agent = syc.Agent(max_q = [scene.maxx-sensor.hp.winx,scene.maxy-sensor.hp.winy]) reward = syc.Rewards() observation_size = 256*4 RL = DeepQNetwork(len(agent.hp.action_space), observation_size*hp.mem_depth,#sensor.frame_size+2, reward_decay=0.99, e_greedy=0.99, e_greedy0=0.99, replace_target_iter=10, memory_size=100000, e_greedy_increment=0.0001, learning_rate=0.0025, double_q=False, dqn_mode=True, state_table=np.zeros([1,observation_size*hp.mem_depth]) ) RL.dqn.load_nwk_param('best_liron.nwk') def local_observer(sensor,agent): if hp.logmode: normfactor=1.0 else: normfactor = 1.0/600.0 # return np.concatenate([1.0/65000*(sensor.dvs_view.reshape([-1]))]) # return 1.0/65000*np.concatenate([relu_up_and_down(sensor.central_dvs_view), # relu_up_and_down(sensor.dvs_view, downsample_fun=lambda x: cv2.resize(x, dsize=(16, 16), interpolation=cv2.INTER_AREA))]) return normfactor*np.concatenate([relu_up_and_down(sensor.central_dvs_view), relu_up_and_down(cv2.resize(1.0*sensor.dvs_view, dsize=(16, 16), interpolation=cv2.INTER_AREA))]) observation = np.random.uniform(0,1,size=[hp.mem_depth, observation_size]) hp.fading_mem = 0.5 recorders=[] for image_num,image in 
enumerate(images): recorder = Recorder(n=5) step = 0 episode = 0 observation = np.random.uniform(0,1,size=[hp.mem_depth, observation_size]) observation_ = np.random.uniform(0,1,size=[hp.mem_depth, observation_size]) scene.current_frame = image_num scene.image = scene.frame_list[scene.current_frame] agent.reset() agent.q_ana[1]=256./2.-32 agent.q_ana[0]=192./2-32 agent.q = np.int32(np.floor(agent.q_ana)) sensor.reset() sensor.update(scene, agent) sensor.update(scene, agent) for step_prime in range(1000): action = RL.choose_action(observation.reshape([-1])) reward.update_rewards(sensor = sensor, agent = agent) intensity_l1=np.mean(np.abs(sensor.central_dvs_view)) recorder.record([agent.q_ana[0],agent.q_ana[1],reward.reward,RL.epsilon,intensity_l1]) agent.act(action) sensor.update(scene,agent) observation *= hp.fading_mem observation += local_observer(sensor, agent) # todo: generalize if step%1000 ==0: print(episode,step) # print('frame:', scene.current_frame) step += 1 recorders.append(recorder) for image,recorder in zip(images,recorders): plt.figure() plt.imshow(image) plt.plot(32+np.array(recorder.records[0]),256-64-32-np.array(recorder.records[1]),'-') plt.figure() plt.subplot(2,2,1) plt.hist(recorder.records[2],bins=50) plt.title('intensity l2') plt.subplot(2,2,2) plt.hist(recorder.records[4],bins=50) plt.title('intensity l1') plt.subplot(2,2,3) plt.hist(np.log10(np.array(recorder.records[2])+1.0),bins=50) plt.title('log intensity l2') plt.subplot(2,2,4) plt.hist(np.log10(np.array(recorder.records[4])+1.0),bins=50) plt.title('log intensity l1') # plt.title('10 syclop trajectories 10,000 timesteps each') # RL.dqn.save_nwk_param('liron_random_ic02.nwk') # import pickle # images = read_images_from_path('../video_datasets/liron_images/*.jpg') # images = [np.sum(1.0*uu, axis=2) for uu in images] # images = [cv2.resize(uu, dsize=(256, 256-64), interpolation=cv2.INTER_AREA) for uu in images] # _= [np.random.shuffle(np.asarray(uu.reshape([-1]))) for uu in images] # with 
open('../video_datasets/liron_images/shuffled_images.pkl','wb') as f: # pickle.dump(images,f) ```
# Neural machine translation with attention This notebook trains a sequence-to-sequence (seq2seq) model that translates Burmese to English. This is an advanced example that assumes some familiarity with sequence-to-sequence models. After training the model in this notebook, you will be able to input a Burmese sentence, such as *"ဘာကိစ္စ မဖြစ်ရ မှာ လဲ?"*, and get back its English translation, *"Why not?"* The translation quality is reasonable for a simple example, but the generated attention plot is perhaps more interesting: it shows which parts of the input sentence received the model's attention during translation. <img src="https://tensorflow.google.cn/images/spanish-english.png" alt="spanish-english attention plot"> Note: running this example takes about 10 minutes on a single P100 GPU. ``` import tensorflow as tf import matplotlib.pyplot as plt import matplotlib.ticker as ticker from sklearn.model_selection import train_test_split import unicodedata import re import numpy as np import os import io import time ``` ## Download and prepare the dataset We will use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the following format: ``` May I borrow this book? ¿Puedo tomar prestado este libro? ``` Many languages are available in this dataset. We will use the English–Burmese dataset. For convenience, a copy of this dataset is hosted on Google Cloud, but you can also download your own copy. After downloading the dataset, we take the following steps to prepare the data: 1. Add a *start* and an *end* token to each sentence. 2. Clean the sentences by removing special characters. 3. Create a word index and a reverse word index (i.e. a dictionary mapping words to ids and a dictionary mapping ids to words). 4. Pad each sentence to a maximum length. ``` ''' # Download the file path_to_zip = tf.keras.utils.get_file( 'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip', extract=True) path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt" ''' path_to_file = "./lan/mya.txt" # Convert the unicode file to ascii def unicode_to_ascii(s): return ''.join(c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn') def preprocess_sentence(w): w = unicode_to_ascii(w.lower().strip()) # Insert a space between each word and the punctuation following it # e.g.: "he is a boy." => "he is a boy ." # Reference: https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation w = re.sub(r"([?.!,¿])", r" \1 ", w) w = re.sub(r'[" "]+', " ", w) # Replace everything with a space except (a-z, A-Z, ".", "?", "!", ",") w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w) w = w.rstrip().strip() # Add a start and an end token to the sentence # so the model knows when to start and stop predicting w = '<start> ' + w + ' <end>' return w en_sentence = u"May I borrow this book?" sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence)) print(preprocess_sentence(sp_sentence).encode('utf-8')) # 1. Remove the accents # 2. Clean the sentences # 3. Return word pairs in the format: [ENGLISH, SPANISH] def create_dataset(path, num_examples): lines = io.open(path, encoding='UTF-8').read().strip().split('\n') word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]] return zip(*word_pairs) en, sp = create_dataset(path_to_file, None) print(en[-1]) print(sp[-1]) def max_length(tensor): return max(len(t) for t in tensor) def tokenize(lang): lang_tokenizer = tf.keras.preprocessing.text.Tokenizer( filters='') lang_tokenizer.fit_on_texts(lang) tensor = lang_tokenizer.texts_to_sequences(lang) tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor, padding='post') return tensor, lang_tokenizer def load_dataset(path, num_examples=None): # Create cleaned input-output pairs targ_lang, inp_lang = create_dataset(path, num_examples) input_tensor, inp_lang_tokenizer = tokenize(inp_lang) target_tensor, targ_lang_tokenizer = tokenize(targ_lang) return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer ``` ### Limit the size of the dataset to experiment faster (optional) Training on the complete dataset of more than 100,000 sentences takes a long time. To train faster, we can limit the dataset to 30,000 sentences (translation quality degrades with less data, of course): ``` # Try experimenting with different dataset sizes num_examples = 30000 input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples) # Calculate the max_length of the target tensors max_length_targ, max_length_inp = max_length(target_tensor), max_length(input_tensor) # Create training and validation sets using an 80-20 split input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2) # Show the lengths print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val)) def convert(lang, tensor): for t in tensor: if t!=0: print ("%d ----> %s" % (t, lang.index_word[t])) print ("Input Language; index to word mapping") convert(inp_lang, input_tensor_train[0]) print () print ("Target Language; index to word mapping") convert(targ_lang,
target_tensor_train[0]) ``` ### Create a tf.data dataset ``` BUFFER_SIZE = len(input_tensor_train) BATCH_SIZE = 64 steps_per_epoch = len(input_tensor_train)//BATCH_SIZE embedding_dim = 256 units = 1024 vocab_inp_size = len(inp_lang.word_index)+1 vocab_tar_size = len(targ_lang.word_index)+1 dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) example_input_batch, example_target_batch = next(iter(dataset)) example_input_batch.shape, example_target_batch.shape ``` ## Write the encoder and decoder model Implement an encoder-decoder model with attention, which you can read about in the TensorFlow [Neural Machine Translation (seq2seq) tutorial](https://github.com/tensorflow/nmt). This example uses a more recent set of APIs. This notebook implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism, which the decoder then uses to predict the next word in the sentence. The image and formulas below are an example of the attention mechanism from [Luong's paper](https://arxiv.org/abs/1508.04025v5). <img src="https://tensorflow.google.cn/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism"> The input is put through an encoder model, which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*. Here are the equations that are implemented: <img src="https://tensorflow.google.cn/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800"> <img src="https://tensorflow.google.cn/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800"> This tutorial uses [Bahdanau attention](https://arxiv.org/pdf/1409.0473.pdf) for the encoder. Let's decide on notation before writing the simplified form: * FC = fully connected (dense) layer * EO = encoder output * H = hidden state * X = input to the decoder And the pseudo-code: * `score = FC(tanh(FC(EO) + FC(H)))` * `attention weights = softmax(score, axis = 1)`. Softmax is applied to the last axis by default, but here we want to apply it to the *1st axis*, since the shape of score is *(batch_size, max_length, hidden_size)*. `max_length` is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis. * `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis 1. * `embedding output` = the decoder input X is passed through an embedding layer. * `merged vector = concat(embedding output, context vector)` * This merged vector is then given to the GRU The shapes of all the vectors at each step are clarified in the comments in the code: ``` class
Encoder(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz): super(Encoder, self).__init__() self.batch_sz = batch_sz self.enc_units = enc_units self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(self.enc_units, return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') def call(self, x, hidden): x = self.embedding(x) output, state = self.gru(x, initial_state = hidden) return output, state def initialize_hidden_state(self): return tf.zeros((self.batch_sz, self.enc_units)) encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE) # sample input sample_hidden = encoder.initialize_hidden_state() sample_output, sample_hidden = encoder(example_input_batch, sample_hidden) print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape)) print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape)) class BahdanauAttention(tf.keras.layers.Layer): def __init__(self, units): super(BahdanauAttention, self).__init__() self.W1 = tf.keras.layers.Dense(units) self.W2 = tf.keras.layers.Dense(units) self.V = tf.keras.layers.Dense(1) def call(self, query, values): # hidden shape == (batch_size, hidden_size) # hidden_with_time_axis shape == (batch_size, 1, hidden_size) # we do this to perform addition when computing the score score = self.V(tf.nn.tanh( self.W1(values) + self.W2(hidden_with_time_axis))) # score shape == (batch_size, max_length, 1) # we get 1 at the last axis because we apply self.V to the score # before applying self.V, the tensor shape is (batch_size, max_length, units) hidden_with_time_axis = tf.expand_dims(query, 1) # attention_weights shape == (batch_size, max_length, 1) attention_weights = tf.nn.softmax(score, axis=1) # context_vector shape after the sum == (batch_size, hidden_size) context_vector = attention_weights * values context_vector = tf.reduce_sum(context_vector, axis=1) return context_vector, attention_weights attention_layer = BahdanauAttention(10) attention_result, attention_weights = attention_layer(sample_hidden, sample_output) print("Attention result shape: (batch size, units)
{}".format(attention_result.shape)) print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape)) class Decoder(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz): super(Decoder, self).__init__() self.batch_sz = batch_sz self.dec_units = dec_units self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(self.dec_units, return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') self.fc = tf.keras.layers.Dense(vocab_size) # used for attention self.attention = BahdanauAttention(self.dec_units) def call(self, x, hidden, enc_output): # enc_output shape == (batch_size, max_length, hidden_size) context_vector, attention_weights = self.attention(hidden, enc_output) # x shape after passing through the embedding == (batch_size, 1, embedding_dim) x = self.embedding(x) # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size) x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1) # pass the concatenated vector to the GRU output, state = self.gru(x) # output shape == (batch_size * 1, hidden_size) output = tf.reshape(output, (-1, output.shape[2])) # output shape == (batch_size, vocab) x = self.fc(output) return x, state, attention_weights decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE) sample_decoder_output, _, _ = decoder(tf.random.uniform((64, 1)), sample_hidden, sample_output) print ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape)) ``` ## Define the optimizer and the loss function ``` optimizer = tf.keras.optimizers.Adam() loss_object = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True, reduction='none') def loss_function(real, pred): mask = tf.math.logical_not(tf.math.equal(real, 0)) loss_ = loss_object(real, pred) mask = tf.cast(mask, dtype=loss_.dtype) loss_ *= mask return tf.reduce_mean(loss_) ``` ## Checkpoints (object-based saving) ``` checkpoint_dir = './training_checkpoints' checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt") checkpoint = tf.train.Checkpoint(optimizer=optimizer, encoder=encoder, decoder=decoder) ``` ## Training 1.
Pass the *input* through the *encoder*, which returns the *encoder output* and the *encoder hidden state*. 2. Pass the encoder output, encoder hidden state, and decoder input (the *start token*) to the decoder. 3. The decoder returns the *predictions* and the *decoder hidden state*. 4. The decoder hidden state is then passed back into the model, and the predictions are used to calculate the loss. 5. Use *teacher forcing* to decide the next input to the decoder. 6. *Teacher forcing* is the technique of passing the *target word* as the *next input* to the decoder. 7. The final step is to calculate the gradients and apply them to the optimizer to backpropagate. ``` @tf.function def train_step(inp, targ, enc_hidden): loss = 0 with tf.GradientTape() as tape: enc_output, enc_hidden = encoder(inp, enc_hidden) dec_hidden = enc_hidden dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1) # Teacher forcing - feed the target word as the next input for t in range(1, targ.shape[1]): # pass the encoder output (enc_output) to the decoder predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output) loss += loss_function(targ[:, t], predictions) # use teacher forcing dec_input = tf.expand_dims(targ[:, t], 1) batch_loss = (loss / int(targ.shape[1])) variables = encoder.trainable_variables + decoder.trainable_variables gradients = tape.gradient(loss, variables) optimizer.apply_gradients(zip(gradients, variables)) return batch_loss EPOCHS = 10 for epoch in range(EPOCHS): start = time.time() enc_hidden = encoder.initialize_hidden_state() total_loss = 0 for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)): batch_loss = train_step(inp, targ, enc_hidden) total_loss += batch_loss if batch % 100 == 0: print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1, batch, batch_loss.numpy())) # save (checkpoint) the model every 2 epochs if (epoch + 1) % 2 == 0: checkpoint.save(file_prefix = checkpoint_prefix) print('Epoch {} Loss {:.4f}'.format(epoch + 1, total_loss / steps_per_epoch)) print('Time taken for 1 epoch {} sec\n'.format(time.time() - start)) ``` ## Translate * The evaluate function is similar to the training loop, except we don't use *teacher forcing* here. The input to the decoder at each time step is its previous prediction along with the hidden state and the encoder output. * Stop predicting when the model predicts the *end token*. * Store the *attention weights for every time step*. Note: the encoder output is calculated only once per input. ``` def evaluate(sentence): attention_plot = np.zeros((max_length_targ, max_length_inp)) sentence = preprocess_sentence(sentence) inputs = [inp_lang.word_index[i] for i in sentence.split(' ')] inputs =
tf.keras.preprocessing.sequence.pad_sequences([inputs], maxlen=max_length_inp, padding='post') inputs = tf.convert_to_tensor(inputs) result = '' hidden = [tf.zeros((1, units))] enc_out, enc_hidden = encoder(inputs, hidden) dec_hidden = enc_hidden dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0) for t in range(max_length_targ): predictions, dec_hidden, attention_weights = decoder(dec_input, dec_hidden, enc_out) # store the attention weights to plot later attention_weights = tf.reshape(attention_weights, (-1, )) attention_plot[t] = attention_weights.numpy() predicted_id = tf.argmax(predictions[0]).numpy() result += targ_lang.index_word[predicted_id] + ' ' if targ_lang.index_word[predicted_id] == '<end>': return result, sentence, attention_plot # the predicted ID is fed back into the model dec_input = tf.expand_dims([predicted_id], 0) return result, sentence, attention_plot # function for plotting the attention weights def plot_attention(attention, sentence, predicted_sentence): fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(1, 1, 1) ax.matshow(attention, cmap='viridis') fontdict = {'fontsize': 14} ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90) ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict) ax.xaxis.set_major_locator(ticker.MultipleLocator(1)) ax.yaxis.set_major_locator(ticker.MultipleLocator(1)) plt.show() def translate(sentence): result, sentence, attention_plot = evaluate(sentence) print('Input: %s' % (sentence)) print('Predicted translation: {}'.format(result)) attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))] plot_attention(attention_plot, sentence.split(' '), result.split(' ')) ``` ## Restore the latest checkpoint and test ``` # restore the latest checkpoint in checkpoint_dir checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir)) translate(u'hace mucho frio aqui.') translate(u'esta es mi vida.') translate(u'¿todavia estan en casa?') # wrong translation translate(u'trata de averiguarlo.') ```
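The shape bookkeeping in `BahdanauAttention` above is easy to verify outside TensorFlow. Below is a minimal NumPy sketch of the same score / softmax-over-axis-1 / context-vector computation; the batch size, lengths, and random weight matrices are made up purely for illustration:

```python
import numpy as np

def bahdanau_attention(values, query, W1, W2, v):
    """values: (batch, max_len, hidden); query: (batch, hidden)."""
    # (batch, 1, hidden) so the addition broadcasts over max_len
    query_t = query[:, np.newaxis, :]
    # score: (batch, max_len, 1)
    score = np.tanh(values @ W1 + query_t @ W2) @ v
    # softmax over the max_len axis (axis=1): one weight per input position
    e = np.exp(score - score.max(axis=1, keepdims=True))
    attention_weights = e / e.sum(axis=1, keepdims=True)
    # context vector: weighted sum over input positions -> (batch, hidden)
    context = (attention_weights * values).sum(axis=1)
    return context, attention_weights

batch, max_len, hidden, units = 4, 7, 16, 10
rng = np.random.default_rng(0)
values = rng.normal(size=(batch, max_len, hidden))
query = rng.normal(size=(batch, hidden))
W1 = rng.normal(size=(hidden, units))
W2 = rng.normal(size=(hidden, units))
v = rng.normal(size=(units, 1))

context, weights = bahdanau_attention(values, query, W1, W2, v)
print(context.shape)  # (4, 16)
print(weights.shape)  # (4, 7, 1)
```

This matches the shapes printed by the Keras layer: the attention result is *(batch_size, units)* of the encoder and the weights are *(batch_size, sequence_length, 1)*, with the weights along axis 1 summing to one.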
``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import tensorflow as tf import csv import random import numpy as np from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.utils import to_categorical from tensorflow.keras import regularizers embedding_dim = 100 max_length = 16 trunc_type='post' padding_type='post' oov_tok = "<OOV>" training_size=#Your dataset size here. Experiment using smaller values (i.e. 16000), but don't forget to train on at least 160000 to see the best effects test_portion=.1 corpus = [] # Note that I cleaned the Stanford dataset to remove LATIN1 encoding to make it easier for Python CSV reader # You can do that yourself with: # iconv -f LATIN1 -t UTF8 training.1600000.processed.noemoticon.csv -o training_cleaned.csv # I then hosted it on my site to make it easier to use in this notebook !wget --no-check-certificate \ https://storage.googleapis.com/laurencemoroney-blog.appspot.com/training_cleaned.csv \ -O /tmp/training_cleaned.csv num_sentences = 0 with open("/tmp/training_cleaned.csv") as csvfile: reader = csv.reader(csvfile, delimiter=',') for row in reader: # Your Code here. Create list items where the first item is the text, found in row[5], and the second is the label. Note that the label is a '0' or a '4' in the text. When it's the former, make # your label to be 0, otherwise 1. 
Keep a count of the number of sentences in num_sentences list_item=[] # YOUR CODE HERE num_sentences = num_sentences + 1 corpus.append(list_item) print(num_sentences) print(len(corpus)) print(corpus[1]) # Expected Output: # 1600000 # 1600000 # ["is upset that he can't update his Facebook by texting it... and might cry as a result School today also. Blah!", 0] sentences=[] labels=[] random.shuffle(corpus) for x in range(training_size): sentences.append(# YOUR CODE HERE) labels.append(# YOUR CODE HERE) tokenizer = Tokenizer() tokenizer.fit_on_texts(# YOUR CODE HERE) word_index = tokenizer.word_index vocab_size=len(# YOUR CODE HERE) sequences = tokenizer.texts_to_sequences(# YOUR CODE HERE) padded = pad_sequences(# YOUR CODE HERE) split = int(test_portion * training_size) test_sequences = padded[# YOUR CODE HERE] training_sequences = padded[# YOUR CODE HERE] test_labels = labels[# YOUR CODE HERE] training_labels = labels[# YOUR CODE HERE] print(vocab_size) print(word_index['i']) # Expected Output # 138858 # 1 # Note this is the 100 dimension version of GloVe from Stanford # I unzipped and hosted it on my site to make this notebook easier !wget --no-check-certificate \ https://storage.googleapis.com/laurencemoroney-blog.appspot.com/glove.6B.100d.txt \ -O /tmp/glove.6B.100d.txt embeddings_index = {}; with open('/tmp/glove.6B.100d.txt') as f: for line in f: values = line.split(); word = values[0]; coefs = np.asarray(values[1:], dtype='float32'); embeddings_index[word] = coefs; embeddings_matrix = np.zeros((vocab_size+1, embedding_dim)); for word, i in word_index.items(): embedding_vector = embeddings_index.get(word); if embedding_vector is not None: embeddings_matrix[i] = embedding_vector; print(len(embeddings_matrix)) # Expected Output # 138859 model = tf.keras.Sequential([ tf.keras.layers.Embedding(vocab_size+1, embedding_dim, input_length=max_length, weights=[embeddings_matrix], trainable=False), # YOUR CODE HERE - experiment with combining different types, such as 
convolutions and LSTMs ]) model.compile(# YOUR CODE HERE) model.summary() num_epochs = 50 history = model.fit(training_sequences, training_labels, epochs=num_epochs, validation_data=(test_sequences, test_labels), verbose=2) print("Training Complete") import matplotlib.image as mpimg import matplotlib.pyplot as plt #----------------------------------------------------------- # Retrieve a list of list results on training and test data # sets for each training epoch #----------------------------------------------------------- acc=history.history['accuracy'] val_acc=history.history['val_accuracy'] loss=history.history['loss'] val_loss=history.history['val_loss'] epochs=range(len(acc)) # Get number of epochs #------------------------------------------------ # Plot training and validation accuracy per epoch #------------------------------------------------ plt.plot(epochs, acc, 'r') plt.plot(epochs, val_acc, 'b') plt.title('Training and validation accuracy') plt.xlabel("Epochs") plt.ylabel("Accuracy") plt.legend(["Accuracy", "Validation Accuracy"]) plt.figure() #------------------------------------------------ # Plot training and validation loss per epoch #------------------------------------------------ plt.plot(epochs, loss, 'r') plt.plot(epochs, val_loss, 'b') plt.title('Training and validation loss') plt.xlabel("Epochs") plt.ylabel("Loss") plt.legend(["Loss", "Validation Loss"]) plt.figure() # Expected Output # A chart where the validation loss does not increase sharply! ```
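The embedding-matrix construction above can be sanity-checked in isolation. Here is a minimal sketch with a made-up three-word vocabulary and 4-dimensional vectors standing in for GloVe; a word missing from the pretrained index keeps a zero row, and row 0 stays reserved for padding:

```python
import numpy as np

# Toy stand-ins for word_index (from the Tokenizer) and embeddings_index (from GloVe)
word_index = {'the': 1, 'cat': 2, 'zzyzx': 3}
embedding_dim = 4
embeddings_index = {
    'the': np.array([0.1, 0.2, 0.3, 0.4], dtype='float32'),
    'cat': np.array([0.5, 0.6, 0.7, 0.8], dtype='float32'),
    # 'zzyzx' deliberately absent: out-of-vocabulary for the pretrained vectors
}

vocab_size = len(word_index)
embeddings_matrix = np.zeros((vocab_size + 1, embedding_dim))
for word, i in word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embeddings_matrix[i] = embedding_vector

print(embeddings_matrix.shape)  # (4, 4): vocab_size + 1 rows, row 0 for padding
print(embeddings_matrix[3])     # all zeros: no pretrained vector for 'zzyzx'
```

The same matrix is what gets passed as `weights=[embeddings_matrix]` to the frozen `Embedding` layer, which is why its first dimension must be `vocab_size+1`.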
``` import _init_paths import argparse import os import sys import logging import pprint import cv2 from config.config import config, update_config from utils.image import resize, transform import numpy as np # get config os.environ['PYTHONUNBUFFERED'] = '1' os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' os.environ['MXNET_ENABLE_GPU_P2P'] = '0' update_config('./road_train_all.yaml') sys.path.insert(0, os.path.join('../external/mxnet', config.MXNET_VERSION)) import mxnet as mx from core.tester import im_detect, Predictor from symbols import * from utils.load_model import load_param from utils.show_boxes import show_boxes from utils.tictoc import tic, toc from nms.nms import py_nms_wrapper, cpu_nms_wrapper, gpu_nms_wrapper # def parse_args(): # parser = argparse.ArgumentParser(description='Show Deformable ConvNets demo') # # general # parser.add_argument('--rfcn_only', help='whether use R-FCN only (w/o Deformable ConvNets)', default=False, action='store_true') # args = parser.parse_args() # return args # args = parse_args() def main(): # get symbol pprint.pprint(config) config.symbol = 'resnet_v1_101_rfcn' sym_instance = eval(config.symbol + '.' 
+ config.symbol)() sym = sym_instance.get_symbol(config, is_train=False) # set up class names num_classes = 4 classes = ['vehicle', 'pedestrian', 'cyclist', 'traffic lights'] # load demo data test_image_path = './data/RoadImages/test/' image_names = ['71777.jpg', '70522.jpg', '72056.jpg', '71531.jpg', '70925.jpg', '70372.jpg', '70211.jpg'] data = [] for im_name in image_names: assert os.path.exists(test_image_path + im_name), ('{} does not exist'.format(test_image_path + im_name)) im = cv2.imread(test_image_path + im_name, cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION) target_size = config.SCALES[0][1] max_size = config.SCALES[0][1] im, im_scale = resize(im, target_size, max_size, stride=config.network.IMAGE_STRIDE) im_tensor = transform(im, config.network.PIXEL_MEANS) im_info = np.array([[im_tensor.shape[2], im_tensor.shape[3], im_scale]], dtype=np.float32) data.append({'data': im_tensor, 'im_info': im_info}) # get predictor data_names = ['data', 'im_info'] label_names = [] data = [[mx.nd.array(data[i][name]) for name in data_names] for i in range(len(data))] max_data_shape = [[('data', (1, 3, max([v[0] for v in config.SCALES]), max([v[1] for v in config.SCALES])))]] provide_data = [[(k, v.shape) for k, v in zip(data_names, data[i])] for i in range(len(data))] provide_label = [None for i in range(len(data))] arg_params, aux_params = load_param('./output/rfcn/road_obj/road_train_all/all/' + 'rfcn_road', 19 , process=True) predictor = Predictor(sym, data_names, label_names, context=[mx.gpu(0)], max_data_shapes=max_data_shape, provide_data=provide_data, provide_label=provide_label, arg_params=arg_params, aux_params=aux_params) nms = gpu_nms_wrapper(config.TEST.NMS, 0) # warm up # for j in range(2): # data_batch = mx.io.DataBatch(data=[data[0]], label=[], pad=0, index=0, # provide_data=[[(k, v.shape) for k, v in zip(data_names, data[0])]], # provide_label=[None]) # scales = [data_batch.data[i][1].asnumpy()[0, 2] for i in range(len(data_batch.data))] # scores,
boxes, data_dict = im_detect(predictor, data_batch, data_names, scales, config) # test for idx, im_name in enumerate(image_names): #print('DEBUG: Image Name: {}'.format(im_name)) data_batch = mx.io.DataBatch(data=[data[idx]], label=[], pad=0, index=idx, provide_data=[[(k, v.shape) for k, v in zip(data_names, data[idx])]], provide_label=[None]) scales = [data_batch.data[i][1].asnumpy()[0, 2] for i in range(len(data_batch.data))] #print('DEBUG: scales: {}'.format(scales)) tic() scores, boxes, data_dict = im_detect(predictor, data_batch, data_names, scales, config) boxes = boxes[0].astype('f') #print('DEBUG: boxes: {}'.format(boxes)) scores = scores[0].astype('f') #print('DEBUG: scores: {}'.format(scores)) dets_nms = [] for j in range(1, scores.shape[1]): cls_scores = scores[:, j, np.newaxis] #print('DEBUG: cls_scores: {}'.format(cls_scores)) cls_boxes = boxes[:, 4:8] if config.CLASS_AGNOSTIC else boxes[:, j * 4:(j + 1) * 4] #print('DEBUG: cls_boxes: {}'.format(cls_boxes)) cls_dets = np.hstack((cls_boxes, cls_scores)) #print('DEBUG: cls_dets_1: {}'.format(cls_dets)) keep = nms(cls_dets) #print('DEBUG: keep: {}'.format(keep)) cls_dets = cls_dets[keep, :] #print('DEBUG: cls_dets_2: {}'.format(cls_dets)) cls_dets = cls_dets[cls_dets[:, -1] > 0.7, :] #print('DEBUG: cls_dets_3: {}'.format(cls_dets)) dets_nms.append(cls_dets) print('testing {} {:.4f}s'.format(im_name, toc())) #print('DEBUG: Shape of dets_nms: {}'.format(len(dets_nms))) #print('DEBUG: dets_nms: {}'.format(dets_nms)) # visualize im = cv2.imread(test_image_path + im_name) im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB) show_boxes(im, dets_nms, classes, 1) print('done') if __name__ == '__main__': main() ```
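The `nms(cls_dets)` call above (via `py_nms_wrapper`/`gpu_nms_wrapper`) performs standard greedy non-maximum suppression on `[x1, y1, x2, y2, score]` rows. As a reference for what it computes, here is a minimal pure-NumPy sketch of the same idea; the boxes and the 0.5 threshold are made up, and the real wrappers are Cython/CUDA implementations:

```python
import numpy as np

def py_nms(dets, thresh):
    """Greedy NMS on dets = [[x1, y1, x2, y2, score], ...]; returns kept indices."""
    x1, y1, x2, y2, scores = dets[:, 0], dets[:, 1], dets[:, 2], dets[:, 3], dets[:, 4]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of the top-scoring box with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        inter = w * h
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # drop boxes that overlap the kept box more than the threshold
        order = order[1:][iou <= thresh]
    return keep

dets = np.array([
    [10, 10, 50, 50, 0.9],
    [12, 12, 52, 52, 0.8],      # heavy overlap with the first box -> suppressed
    [100, 100, 140, 140, 0.7],  # disjoint -> kept
])
print(py_nms(dets, thresh=0.5))  # [0, 2]
```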
# Python review of concepts Mainly to point out useful aspects of Python you may have glossed over. Assumes you already know Python fairly well. ## Python as a language ### Why Python? - Huge community - especially in data science and ML - Easy to learn - Batteries included - Extensive 3rd party libraries - Widely used in both industry and academia - Most important “glue” language bridging multiple communities ``` import __hello__ ``` ### Versions - Only use Python 3 (current release version is 3.8, container is 3.7) - Do not use Python 2 ``` import sys sys.version ``` ### Multi-paradigm #### Procedural ``` x = [] for i in range(5): x.append(i*i) x ``` #### Functional ``` list(map(lambda x: x*x, range(5))) ``` #### Object-oriented ``` class Robot: def __init__(self, name, function): self.name = name self.function = function def greet(self): return f"I am {self.name}, a {self.function} robot!" fido = Robot('roomba', 'vacuum cleaner') fido.name fido.function fido.greet() ``` ### Dynamic typing #### Complexity of a + b ``` 1 + 2.3 type(1), type(2.3) 'hello' + ' world' [1,2,3] + [4,5,6] import numpy as np np.arange(3) + 10 ``` ### Several Python implementations! 
- CPython - PyPy - IronPython - Jython ### Global interpreter lock (GIL) - Only applies to CPython - Threads vs processes - Avoid threads in general - Performance not predictable ``` from concurrent.futures import ThreadPoolExecutor def f(n): x = np.random.uniform(0,1,n) y = np.random.uniform(0,1,n) count = 0 for i in range(n): if x[i]**2 + y[i]**2 < 1: count += 1 return count*4/n n = 100000 niter = 4 %%time [f(n) for i in range(niter)] %%time with ThreadPoolExecutor(4) as pool: xs = list(pool.map(f, [n]*niter)) xs ``` ## Coding in Python ``` import this ``` ### Coding conventions - PEP 8 - Avoid magic numbers - Avoid copy and paste - extract common functionality into functions [Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/) ### Data types - Integers - Arbitrary precision - Integer division operator - Base conversion - Check if integer ``` import math n = math.factorial(100) n f'{n:,}' h = math.sqrt(3**2 + 4**2) h h.is_integer() ``` - Floats - Checking for equality - Catastrophic cancellation - Complex ``` x = np.arange(9).reshape(3,3) x = x / x.sum(axis=0) λ = np.linalg.eigvals(x) λ[0] λ[0] == 1 math.isclose(λ[0], 1) def var(xs): """Returns variance of sample data.""" n = 0 s = 0 ss = 0 for x in xs: n += 1 s += x ss += x*x v = (ss - (s*s)/n)/(n-1) return v xs = np.random.normal(1e9, 1, int(1e6)) var(xs) np.var(xs) ``` - Boolean - What evaluates as False? ``` stuff = [[], [1], {},'', 'hello', 0, 1, 1==1, 1==2] for s in stuff: if s: print(f'{s} evaluates as True') else: print(f'{s} evaluates as False') ``` - String - Unicode by default - b, r, f strings ``` u'\u732b' ``` String formatting - Learn to use the f-string.
``` import string char = 'e' pos = string.ascii_lowercase.index(char) + 1 f"The letter {char} has position {pos} in the alphabet" n = int(1e9) f"{n:,}" x = math.pi f"{x:8.2f}" import datetime now = datetime.datetime.now() now f"{now:%Y-%m-%d %H:%M}" ``` ### Data structures - Immutable - string, tuple - Mutable - list, set, dictionary - Collections module - heapq ``` import collections [x for x in dir(collections) if not x.startswith('_')] ``` ### Functions - \*args, \*\*kwargs - Care with mutable default values - First class objects - Anonymous functions - Decorators ``` def f(*args, **kwargs): print(f"args = {args}") # in Python 3.8, you can just write f'{args = }' print(f"kwargs = {kwargs}") f(1,2,3,a=4,b=5,c=6) def g(a, xs=[]): xs.append(a) return xs g(1) g(2) h = lambda x, y, z: x**2 + y**2 + z**2 h(1,2,3) from functools import lru_cache def fib(n): print(n, end=', ') if n <= 1: return n else: return fib(n-2) + fib(n-1) fib(10) @lru_cache(maxsize=100) def fib_cache(n): print(n, end=', ') if n <= 1: return n else: return fib_cache(n-2) + fib_cache(n-1) fib_cache(10) ``` ### Classes - Key idea is encapsulation into objects - Everything in Python is an object - Attributes and methods - What is self? - Special methods - double underscore methods - Avoid complex inheritance schemes - prefer composition - Learn “design patterns” if interested in OOP ``` (3.0).is_integer() 'hello world'.title() class Student: def __init__(self, first, last): self.first = first self.last = last @property def name(self): return f'{self.first} {self.last}' s = Student('Santa', 'Claus') s.name ``` ### Enums Use enums for readability when you have a discrete set of CONSTANTS.
``` from enum import Enum class Day(Enum): MON = 1 TUE = 2 WED = 3 THU = 4 FRI = 5 SAT = 6 SUN = 7 for day in Day: print(day) ``` ### NamedTuple ``` from collections import namedtuple Student = namedtuple('Student', ['name', 'email', 'age', 'gpa', 'species']) abe = Student('Abraham Lincoln', 'abe.lincoln@gmail.com', 23, 3.4, 'Human') abe.species abe[1:4] ``` ### Data Classes Simplifies creation and use of classes for data records. Note: NamedTuple serves a similar function but is immutable. ``` from dataclasses import dataclass @dataclass class Student: name: str email: str age: int gpa: float species: str = 'Human' abe = Student('Abraham Lincoln', 'abe.lincoln@gmail.com', age=23, gpa=3.4) abe abe.email abe.species ``` **Note** The type annotations are informative only. Python does *not* enforce them. ``` Student(*'abcde') ``` ### Imports, modules and namespaces - A namespace is basically just a dictionary - LEGB - Avoid polluting the global namespace ``` import builtins [x for x in dir(builtins) if x[0].islower()][:8] x1 = 23 def f1(x2): print(locals()) # x1 is global (G), x2 is enclosing (E), x3 is local def g(x3): print(locals()) return x3 + x2 + x1 return g x = 23 def f2(x): print(locals()) def g(x): print(locals()) return x return g g1 = f1(3) g1(2) g2 = f2(3) g2(2) ``` ### Loops - Prefer vectorization unless using numba - Difference between continue and break - Avoid infinite loops - Comprehensions and generator expressions ``` import string {char: ord(char) for char in string.ascii_lowercase} ``` ### Iterations and generators - The iterator protocol - `__iter__` and `__next__` - iter() - next() - What happens in a for loop - Generators with `yield` and `yield from` ``` class Iterator: """A silly class that implements the Iterator protocol and Strategy pattern.
start = start of range to apply func to stop = end of range to apply func to """ def __init__(self, start, stop, func): self.start = start self.stop = stop self.func = func def __iter__(self): self.n = self.start return self def __next__(self): if self.n >= self.stop: raise StopIteration else: x = self.func(self.n) self.n += 1 return x sq = Iterator(0, 5, lambda x: x*x) list(sq) ``` ### Generators Like functions, but lazy. ``` def cycle1(xs, n): """Cycles through values in xs n times.""" for i in range(n): for x in xs: yield x list(cycle1([1,2,3], 4)) for x in cycle1(['ann', 'bob', 'stop', 'charles'], 1000): if x == 'stop': break else: print(x) def cycle2(xs, n): """Cycles through values in xs n times.""" for i in range(n): yield from xs list(cycle2([1,2,3], 4)) ``` Because they are lazy, generators can be used for infinite streams. ``` def fib(): a, b = 1, 1 while True: yield a a, b = b, a + b for n in fib(): if n > 100: break print(n, end=', ') ``` You can even slice infinite generators. More when we cover functional programming. ``` import itertools as it list(it.islice(fib(), 5, 10)) ```
# MAT281 ## Applications of Mathematics in Engineering ## What will we learn? * Data manipulation with ```pandas```. - Creating objects (Series, DataFrames, Index). - Exploratory analysis. - Performing operations and filters. - Applying functions and methods. ## Motivation In recent years, interest in data has grown steadily; buzzwords such as *data science*, *machine learning*, *big data*, *artificial intelligence*, *deep learning*, etc. are solid proof of that. As one example, the following image shows Google search interest for *__Data Science__* over the last five years. [Source](https://trends.google.com/trends/explore?date=today%205-y&q=data%20science) ![alt text](images/dataScienceTrend.png "Logo Title Text 1") Much has been said about this, with statements such as: * _"The world's most valuable resource is no longer oil, but data."_ * _"AI is the new electricity."_ * _"Data Scientist: The Sexiest Job of the 21st Century."_ <script type="text/javascript" src="https://ssl.gstatic.com/trends_nrtr/1544_RC05/embed_loader.js"></script> <script type="text/javascript"> trends.embed.renderExploreWidget("TIMESERIES", {"comparisonItem":[{"keyword":"data science","geo":"","time":"today 5-y"}],"category":0,"property":""}, {"exploreQuery":"date=today%205-y&q=data%20science","guestPath":"https://trends.google.com:443/trends/embed/"}); </script> Data on its own is not useful; its true value lies in the analysis and everything that entails, for example: * Predictions * Classifications * Optimization * Visualization * Learning That is why it is important to remember Uncle Ben: _"With great power comes great responsibility"_. ## Numpy From its own website: NumPy is the fundamental package for scientific computing with Python.
It contains among other things: * a powerful N-dimensional array object * sophisticated (broadcasting) functions * tools for integrating C/C++ and Fortran code * useful linear algebra, Fourier transform, and random number capabilities Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases. **Idea**: Realizar cálculos numéricos eficientemente. ## Pandas Desde el repositorio de GitHub: pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with "relational" or "labeled" data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. It is already well on its way toward this goal. Actualmente cuenta con más de 1200 contribuidores y casi 18000 commits! ``` import pandas as pd pd.__version__ ``` ## Series Arreglos unidimensionales con etiquetas. Se puede pensar como una generalización de los diccionarios de Python. ``` pd.Series? ``` Para crear una instancia de una serie existen muchas opciones, las más comunes son: * A partir de una lista. * A partir de un _numpy.array_. * A partir de un diccionario. * A partir de un archivo (por ejemplo un csv). ``` my_serie = pd.Series(range(3, 33, 3)) my_serie type(my_serie) # Presiona TAB y sorpréndete con la cantidad de métodos! # my_serie. ``` Las series son arreglos unidemensionales que constan de _data_ e _index_. ``` my_serie.values type(my_serie.values) my_serie.index type(my_serie.index) ``` A diferencia de numpy, pandas ofrece más flexibilidad para los valores e índices. 
``` my_serie_2 = pd.Series(range(3, 33, 3), index=list('abcdefghij')) my_serie_2 ``` Acceder a los valores de una serie es muy fácil! ``` my_serie_2['b'] my_serie_2.loc['b'] my_serie_2.iloc[1] ``` ```loc```?? ```iloc```?? ``` # pd.Series.loc? ``` A modo de resumen: * ```loc``` es un método que hace referencia a las etiquetas (*labels*) del objeto . * ```iloc``` es un método que hace referencia posicional del objeto. **Consejo**: Si quieres editar valores siempre utiliza ```loc``` y/o ```iloc```. ``` my_serie_2.loc['d'] = 1000 my_serie_2 ``` ### Trabajar con fechas Pandas incluso permite que los index sean fechas! Por ejemplo, a continuación se crea una serie con las tendencia de búsqueda de *data science* en Google. ``` import os ds_trend = pd.read_csv(os.path.join('data', 'dataScienceTrend.csv'), index_col=0, squeeze=True) ds_trend.head(10) ds_trend.tail(10) ds_trend.dtype ds_trend.index ``` **OJO!** Los valores del Index son _strings_ (_object_ es una generalización). **Solución:** _Parsear_ a elementos de fecha con la función ```pd.to_datetime()```. ``` # pd.to_datetime? ds_trend.index = pd.to_datetime(ds_trend.index, format='%Y-%m-%d') ds_trend.index ``` Para otros tipos de _parse_ puedes visitar la documentación [aquí](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior). La idea de los elementos de fecha es poder realizar operaciones que resulten naturales para el ser humano. Por ejemplo: ``` ds_trend.index.min() ds_trend.index.max() ds_trend.index.max() - ds_trend.index.min() ``` Volviendo a la Serie, podemos trabajar con todos sus elementos, por ejemplo, determinar rápidamente la máxima tendencia. 
``` max_trend = ds_trend.max() max_trend ``` Para determinar el _index_ correspondiente al valor máximo usualmente se utilizan dos formas: * Utilizar una máscara (*mask*) * Utilizar métodos ya implementados ``` # Mask ds_trend[ds_trend == max_trend] # Built-in method ds_trend.idxmax() ``` ## Dataframes Arreglo bidimensional y extensión natural de una serie. Podemos pensarlo como la generalización de un numpy.array. Utilizando el dataset de los jugadores de la NBA la flexibilidad de pandas se hace mucho más visible. No es necesario que todos los elementos sean del mismo tipo! ``` import os player_data = pd.read_csv(os.path.join('data', 'player_data.csv'), index_col='name') player_data.head() player_data.info(memory_usage=True) type(player_data) player_data.dtypes ``` Puedes pensar que un dataframe es una colección de series ``` player_data['birth_date'].head() type(player_data['birth_date']) ``` ### Exploración ``` player_data.describe() player_data.describe(include='all') player_data.max() ``` Para extraer elementos lo más recomendable es el método loc. ``` player_data.loc['Zaid Abdul-Aziz', 'college'] ``` Evita acceder con doble corchete ``` player_data['college']['Zaid Abdul-Aziz'] ``` Aunque en ocasiones funcione, no se asegura que sea siempre así. [Más info aquí.](https://pandas.pydata.org/pandas-docs/stable/indexing.html#why-does-assignment-fail-when-using-chained-indexing) ### Valores perdidos/nulos Pandas ofrece herramientas para trabajar con valors nulos, pero es necesario conocerlas y saber aplicarlas. Por ejemplo, el método ```isnull()``` entrega un booleano si algún valor es nulo. Por ejemplo: ¿Qué jugadores no tienen registrado su fecha de nacimiento? ``` player_data.index.shape player_data[player_data['birth_date'].isnull()] ``` Si deseamos encontrar todas las filas que contengan por lo menos un valor nulo. ``` player_data.isnull() # pd.DataFrame.any? rows_null_mask = player_data.isnull().any(axis=1) # axis=1 hace referencia a las filas. 
rows_null_mask.head() player_data[rows_null_mask].head() player_data[rows_null_mask].shape ``` Para determinar aquellos que no tienen valors nulos el prodecimiento es similar. ``` player_data[player_data.notnull().all(axis=1)].head() ``` ¿Te fijaste que para usar estas máscaras es necesario escribir por lo menos dos veces el nombre del objeto? Una buena práctica para generalizar las máscaras consiste en utilizar las funciones ``lambda`` ``` player_data[lambda df: df.notnull().all(axis=1)].head() ``` Una función lambda es una función pequeña y anónima. Pueden tomar cualquer número de argumentos pero solo tienen una expresión. Pandas incluso ofrece opciones para eliminar elementos nulos! ``` pd.DataFrame.dropna? # Cualquier registro con null print(player_data.dropna().shape) # Filas con elementos nulos print(player_data.dropna(axis=0).shape) # Columnas con elementos nulos print(player_data.dropna(axis=1).shape) ``` ## Ejemplo práctico ¿Para cada posición, cuál es la máxima cantidad de tiempo que ha estado un jugador? Un _approach_ para resolver la pregunta anterior tiene los siguientes pasos: 1. Determinar el tiempo de cada jugador en su posición. 2. Determinar todas las posiciones. 3. Iterar sobre cada posición y encontrar el mayor valor. ``` # 1. Determinar el tiempo de cada jugador en su posición. player_data['duration'] = player_data['year_end'] - player_data['year_start'] player_data.head() # 2. Determinar todas las posiciones. positions = player_data['position'].unique() positions # 3. Iterar sobre cada posición y encontrar el mayor valor. nba_position_duration = pd.Series() for position in positions: df_aux = player_data.loc[lambda x: x['position'] == position] max_duration = df_aux['duration'].max() nba_position_duration.loc[position] = max_duration nba_position_duration ``` ## Resumen * Pandas posee una infinidad de herramientas para trabajar con datos, incluyendo la carga, manipulación, operaciones y filtrado de datos. 
* La documentación oficial (y StackOverflow) son tus mejores amigos. * La importancia está en darle sentido a los datos, no solo a coleccionarlos. # Evaluación Laboratorio * Nombre: * Rol: #### Instruciones 1. Pon tu nombre y rol en la celda superior. 2. Debes enviar este **.ipynb** con el siguiente formato de nombre: **```04_data_manipulation_NOMBRE_APELLIDO.ipynb```** con tus respuestas a alonso.ogueda@gmail.com y sebastian.flores@usm.cl . 3. Se evaluara tanto el código como la respuesta en una escala de 0 a 4 con valores enteros. 4. La entrega es al final de esta clase. ## Dataset jugadores NBA (2pts) 1. ¿Cuál o cuáles son los jugadores más altos de la NBA? 2. Crear un DataFrame llamado ```nba_stats``` donde los índices sean las distintas posiciones y que posea las siguientes columns: - nm_players: Cantidad de jugadores distintos que utilizan esa posición. - mean_duration: Duración de años promedio. - tallest: Mayor altura en cm. - young_birth: Fecha de nacimiento del jugador/es más joven. ``` import numpy as np height_split = player_data['height'].str.split('-') for player, height_list in height_split.items(): if height_list == height_list: # Para manejar el caso en que la altura sea nan. height = int(height_list[0]) * 30.48 + int(height_list[1]) * 2.54 player_data.loc[player, "height_cm"] = height else: player_data.loc[player, "height_cm"] = np.nan max_height = player_data['height_cm'].max() tallest_player = player_data.loc[lambda x: x['height_cm'] == max_height].index.tolist() print(tallest_player) # Castear la fecha de str a objeto datetime player_data['birth_date_fix'] = pd.to_datetime(player_data['birth_date'], format="%B %d, %Y") # Crear dataframe con las columnas solicitadas nba_stats = pd.DataFrame(columns=["nm_players", "mean_duration", "tallest", "young_birth"]) for position in player_data['position'].unique(): if position == position: # Existen posiciones nan, por lo que hay que tratarlas de manera distinta. 
aux_df = player_data.loc[lambda x: x['position'] == position] # Dataframe filtrado else: aux_df = player_data.loc[lambda x: x['position'].isnull()] # Calcular nm_players = aux_df.index.nunique() # or len(aux_df.index.unique()) mean_duration = aux_df['duration'].mean() tallest = aux_df['height_cm'].max() young_birth = aux_df['birth_date_fix'].min() # Escribir en el dataframe nba_stats.loc[position, ["nm_players", "mean_duration", "tallest", "young_birth"]] = [nm_players, mean_duration, tallest, young_birth] nba_stats ``` ## Dataset del Gasto Neto Mensualizado por año de las Instituciones Públicas (2pts) Este dataset incluye las cifras (actualizadas a la moneda del año 2017), el gasto ejecutado por las distintas instituciones en los variados programas del Presupuesto, y desglosado hasta el máximo nivel del clasificador presupuestario. Los montos contemplan el Gasto Neto, es decir, integran los gastos que afectan el patrimonio público, excluyendo aquéllos que sólo se traducen en movimientos de activos y pasivos financieros que sirven de fuente de financiamiento de los primeros 1. Cargar el dataset ```gasto_fiscal.csv``` que se encuentra en la carpeta ```data``` en un DataFrame llamado **```gasto_fiscal```**. ¿Cuánta MB de memoria está utilizando? ¿Cuáles son las columnas que consumen más y menos memoria? ¿Cuál crees que es la razón? 2. Crear un DataFrame llamado ```gasto_fiscal_stats```, donde los _index_ sean cada Partida y las columnas correspondan a: - A la suma total de los montos desde el año 2011 al 2014. - Cantidad de registros con monto igual a cero. - Mes con mayor gasto - Porcentaje del mes con mayor gasto respecto al gasto total. ``` gasto_fiscal = _ # NO EVALUADO ``` * gasto_fiscal_mb = # FIX ME * more_memory_columns = [] * less_memory_columns = [] * reason = '' ``` gasto_fiscal_stats = _ # NO EVALUADO ```
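As an aside (not part of the graded answer): per-position statistics like the ones built with an explicit loop above can also be expressed with pandas' `groupby` and named aggregation. A minimal sketch on synthetic data whose column names mirror the ones constructed earlier (`position`, `duration`, `height_cm` are stand-ins, the values are made up):

```python
import pandas as pd

# Tiny synthetic stand-in for player_data (hypothetical values)
player_data = pd.DataFrame({
    "position": ["C", "F", "C", "G"],
    "duration": [5, 3, 8, 2],
    "height_cm": [210.0, 203.0, 216.0, 191.0],
})

# One aggregation per output column, computed per position
nba_stats = player_data.groupby("position").agg(
    nm_players=("position", "size"),     # players per position
    mean_duration=("duration", "mean"),  # average career length
    tallest=("height_cm", "max"),        # greatest height in cm
)
print(nba_stats)
```

Note that `groupby` silently drops NaN group keys by default, so the nan-position branch of the loop above would still need separate handling (recent pandas versions accept `dropna=False` for this).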
# Open and run analysis on multiple polygons

<img align="right" src="../Supplementary_data/dea_logo.jpg">

* [**Sign up to the DEA Sandbox**](https://docs.dea.ga.gov.au/setup/sandbox.html) to run this notebook interactively from a browser
* **Compatibility:** Notebook currently compatible with both the `NCI` and `DEA Sandbox` environments
* **Products used:** [ga_ls8c_ard_3](https://explorer.sandbox.dea.ga.gov.au/ga_ls8c_ard_3)

## Background

Many users need to run analyses on their own areas of interest. A common use case involves running the same analysis across multiple polygons in a vector file (e.g. ESRI Shapefile or GeoJSON). This notebook will demonstrate how to use a vector file and the Open Data Cube to extract satellite data from Digital Earth Australia corresponding to individual polygon geometries.

## Description

If we have a vector file containing multiple polygons, we can use the python package [geopandas](https://geopandas.org/) to open it as a `GeoDataFrame`. We can then iterate through each geometry and extract satellite data corresponding with the extent of each geometry. Further analysis can then be conducted on each resulting `xarray.Dataset`. We can retrieve data for each polygon, perform an analysis like calculating NDVI and plot the data.

1. First we open the vector file as a `geopandas.GeoDataFrame`
2. Iterate through each polygon in the `GeoDataFrame`, and extract satellite data from DEA
3. Calculate NDVI as an example analysis on one of the extracted satellite timeseries
4. Plot NDVI for the polygon extent

***

## Getting started

To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.

### Load packages

Please note the use of the `datacube.utils` package `geometry`: this is important for saving the coordinate reference system of the incoming shapefile in a format that the Digital Earth Australia query can understand.

```
%matplotlib inline

import datacube
import rasterio.crs
import geopandas as gpd
import matplotlib.pyplot as plt
from datacube.utils import geometry

import sys
sys.path.append('../Scripts')
from dea_datahandling import load_ard
from dea_bandindices import calculate_indices
from dea_plotting import rgb, map_shapefile
from dea_temporal import time_buffer
from dea_spatialtools import xr_rasterize
```

### Connect to the datacube

Connect to the datacube database to enable loading Digital Earth Australia data.

```
dc = datacube.Datacube(app='Analyse_multiple_polygons')
```

## Analysis parameters

* `time_of_interest` : Enter a time, in units YYYY-MM-DD, around which to load satellite data e.g. `'2019-01-01'`
* `time_buff` : A buffer of a given duration (e.g. days) around the time_of_interest parameter, e.g. `'30 days'`
* `vector_file` : A path to a vector file (ESRI Shapefile or GeoJSON)
* `attribute_col` : A column in the vector file used to label the output `xarray` datasets containing satellite images. Each row of this column should have a unique identifier
* `products` : A list of product names to load from the datacube e.g. `['ga_ls7e_ard_3', 'ga_ls8c_ard_3']`
* `measurements` : A list of band names to load from the satellite product e.g. `['nbart_red', 'nbart_green']`
* `resolution` : The spatial resolution of the loaded satellite data e.g. for Landsat, this is `(-30, 30)`
* `output_crs` : The coordinate reference system/map projection to load data into, e.g. `'EPSG:3577'` to load data in the Albers Equal Area projection
* `align` : How to align the x, y coordinates with respect to each pixel. Landsat Collection 3 should be centre aligned (`align = (15, 15)`) if data is loaded in its native UTM zone projection, e.g. `'EPSG:32756'`

```
time_of_interest = '2019-02-01'
time_buff = '30 days'
vector_file = '../Supplementary_data/Analyse_multiple_polygons/multiple_polys.shp'
attribute_col = 'id'
products = ['ga_ls8c_ard_3']
measurements = ['nbart_red', 'nbart_green', 'nbart_blue', 'nbart_nir']
resolution = (-30, 30)
output_crs = 'EPSG:3577'
align = (0, 0)
```

### Look at the structure of the vector file

Import the file and take a look at how the file is structured so we understand what we are iterating through. There are two polygons in the file:

```
gdf = gpd.read_file(vector_file)
gdf.head()
```

We can then plot the `geopandas.GeoDataFrame` using the function `map_shapefile` to make sure it covers the area of interest we are concerned with:

```
map_shapefile(gdf, attribute=attribute_col)
```

### Create a datacube query object

We then create a dictionary that will contain the parameters that will be used to load data from the DEA data cube:

> **Note:** We do not include the usual `x` and `y` spatial query parameters here, as these will be taken directly from each of our vector polygon objects.

```
query = {'time': time_buffer(time_of_interest, buffer=time_buff),
         'measurements': measurements,
         'resolution': resolution,
         'output_crs': output_crs,
         'align': align,
         }

query
```

## Loading satellite data

Here we will iterate through each row of the `geopandas.GeoDataFrame` and load satellite data. The results will be appended to a dictionary object which we can later index to analyse each dataset.

```
# Dictionary to save results
results = {}

# Loop through polygons in geodataframe and extract satellite data
for index, row in gdf.iterrows():

    print(f'Feature: {index + 1}/{len(gdf)}')

    # Extract the feature's geometry as a datacube geometry object
    geom = geometry.Geometry(geom=row.geometry, crs=gdf.crs)

    # Update the query to include our geopolygon
    query.update({'geopolygon': geom})

    # Load landsat
    ds = load_ard(dc=dc,
                  products=products,
                  # min_gooddata=0.99,  # only take uncloudy scenes
                  ls7_slc_off=False,
                  group_by='solar_day',
                  **query)

    # Generate a polygon mask to keep only data within the polygon
    mask = xr_rasterize(gdf.iloc[[index]], ds)

    # Mask dataset to set pixels outside the polygon to `NaN`
    ds = ds.where(mask)

    # Append results to a dictionary using the attribute
    # column as a key
    results.update({str(row[attribute_col]): ds})
```

---

## Further analysis

Our `results` dictionary will contain `xarray` objects labelled by the unique `attribute column` values we specified in the `Analysis parameters` section:

```
results
```

Enter one of those values below to index our dictionary and conduct further analysis on the satellite timeseries for that polygon.

```
key = '1'
```

### Plot an RGB image

We can now use the `dea_plotting.rgb` function to plot our loaded data as a three-band RGB plot:

```
rgb(results[key], col='time', size=4)
```

### Calculate NDVI and plot

We can also apply analyses to data loaded for each of our polygons. For example, we can calculate the Normalised Difference Vegetation Index (NDVI) to identify areas of growing vegetation:

```
# Calculate band index
ndvi = calculate_indices(results[key], index='NDVI', collection='ga_ls_3')

# Plot NDVI for each polygon for the time query
ndvi.NDVI.plot(col='time', cmap='YlGn', vmin=0, vmax=1, figsize=(18, 4))
plt.show()
```

***

## Additional information

**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.

**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)). If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks).

**Last modified:** June 2020

**Compatible datacube version:**

```
print(datacube.__version__)
```

## Tags

Browse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
# Lecture 01 - Introduction to Data Science

## Industry 4.0 / Society 5.0

<!-- Figura -->
<center> <img src='../figs/01/society-5-industry-4.png' width=900px> </img> </center>

[Source](https://www.sphinx-it.eu/from-the-agenda-of-the-world-economic-forum-2019-society-5-0/)

## Data Science in the 21st Century

- Data as raw material: people, companies, governments, science
- *Big data*, *business intelligence*, *data analytics*, ...

<!-- Figura -->
<center> <img src='../figs/01/11-pillars.png' width=600px> </img> </center>

[Source](https://knowledgecom.my/ir4.html)

- *Data science*: a reasonable name for the scientific side of data
- Academic context: intersects areas, processes and scientific thinking
- A holistic, non-unified model
- Data science is highly pervasive and multidisciplinary

## Layered model

<!-- Figura -->
<center> <img src='../figs/01/cycle-ds.png' width=400px> </img> </center>

## Inner layer: intersection of traditional areas

- **Mathematics/Statistics**:
  - mathematical models
  - data analysis and inference
  - machine learning
- **Computer Science/Software Engineering**
  - hardware/software
  - data design, storage and security
- **Domain Knowledge/Expertise**
  - the field where the knowledge is applied
  - *data reporting* and business intelligence
  - *marketing* and data communication

## Middle layer: processes in the data chain

- Governance, curation
- Storage, reuse
- Preservation, maintenance
- Destruction, sharing

## Outer layer: the scientific method

*Data-driven solutions*:

1. **Problem definition**: the "big question"
2. **Data acquisition**: collecting all available information about the problem
3. **Data processing**: data treatment (cleaning, formatting and organization)
4. **Data analysis**: mining, grouping, clustering, hypothesis testing and inference
5. **Data discovery**: correlations, distinctive behaviors, clear trends, knowledge generation
6. **Solution**: convertibility into products and value-added assets

### Example: the COVID-19 case

- A question:

> _Is the infection rate of the virus among people living near a shopping center located in a rural area lower than among people living near a shopping center located in an urban area?_

- How do we delimit the urban area?
- Shopping center:
  - A set of small shops?
  - Street markets?
  - A flow of 100 people/h?
- Databases: IBGE? DATASUS?

### DS projects for COVID-19

Worldwide:

- *Coronavirus Resource Center*, John Hopkins University [[CRC-JHU]](https://coronavirus.jhu.edu/map.html)

Brazil:

- Observatório Covid-19 BR [[COVID19BR]](https://covid19br.github.io/index.html)
- Observatório Covid-19 Fiocruz [[FIOCRUZ]](https://portal.fiocruz.br/observatorio-covid-19)
- CoronaVIS-UFRGS [[CoronaVIS-UFRGS]](https://covid19.ufrgs.dev/dashboard/#/dashboard)
- CovidBR-UFCG [[CovidBR-UFCG]](http://covid.lsi.ufcg.edu.br)
- [[LEAPIG-UFPB]](http://www.de.ufpb.br/~leapig/projetos/covid_19.html#PB)

## Data scientist vs. data analyst vs. data engineer

### Data scientist

> _"A **data scientist** is a practitioner who has sufficient knowledge of business needs and domain knowledge, along with the analytical, software and systems-engineering skills to manage, end to end, the processes involved in the data life cycle."_ [[NIST 1500-1 (2015)]](https://bigdatawg.nist.gov/_uploadfiles/NIST.SP.1500-1.pdf)

### Data science

> _"**Data science** is the extraction of useful knowledge directly from data through a process of discovery, or of hypothesis formulation and testing."_

#### Information is not synonymous with knowledge!

- Example: of all the information circulating through your WhatsApp each day, what fraction would you consider useful and actionable? The answer may well be an astonishing "none"... So having plenty of information available does not necessarily mean possessing knowledge.

### Data analyst

- _Analytics_ can be literally translated as "analysis"
- According to the NIST 1500-1 document, it is defined as the "process of synthesizing knowledge from information".

> _"A **data analyst** is the practitioner capable of synthesizing knowledge from information and converting it into exploitable assets."_

### Data engineer

> _"A **data engineer** is the practitioner who exploits independent resources to build scalable systems able to store, manipulate and analyze data efficiently, and to develop new architectures whenever the nature of the database demands it."_

Although these three specializations have distinctive characteristics, they are treated as parts of a larger body, which is Data Science.

- See the [EDISON PROJECT](https://edison-project.eu), University of Amsterdam, The Netherlands
- EDISON Data Science Framework [[EDSF]](https://edison-project.eu/sites/edison-project.eu/files/attached_files/node-5/edison2017poster02-dsp-profiles-v03.pdf)

#### Who does what?

Below we summarize the main tasks assigned to data scientists, analysts and engineers, based on articles from specialized channels:

- [[DataQuest]](https://www.dataquest.io/blog/data-analyst-data-scientist-data-engineer/)
- [[NCube]](https://ncube.com/blog/data-engineer-data-scientist-data-analyst-what-is-the-difference)
- [[Medium]](https://medium.com/@gdesantis7/decoding-the-data-scientist-51b353a01443)
- [[Data Science Academy]](http://datascienceacademy.com.br/blog/qual-a-diferenca-entre-cientista-de-dados-e-engenheiro-de-machine-learning/)
- [[Data Flair]](https://data-flair.training/blogs/data-scientist-vs-data-engineer-vs-data-analyst/)

##### Data scientist

- Performs data pre-processing, transformation and cleaning;
- Uses machine learning tools to discover patterns in data;
- Refines and optimizes machine learning algorithms;
- Formulates research questions based on domain-knowledge requirements;

##### Data analyst

- Analyzes data through descriptive statistics;
- Uses database query languages to retrieve and manipulate information;
- Produces reports using data visualization;
- Takes part in the business-understanding process;

##### Data engineer

- Develops, builds and maintains data architectures;
- Runs large-scale tests on data platforms;
- Handles raw and unstructured data;
- Develops _pipelines_ for data modeling, mining and production;
- Supports data scientists and analysts;

#### Which tools are used?

The tools used by each of these practitioners vary and evolve constantly. The list below mentions a few.

##### Data scientist

- R, Python, Hadoop, SQL tools (Oracle, PostgreSQL, MySQL etc.)
- Algebra, Statistics, Machine Learning
- Data visualization tools

##### Data analyst

- R, Python
- Excel, Pandas
- Data visualization tools (Tableau, Infogram, PowerBi etc.)
- Reporting and communication tools

##### Data engineer

- SQL and NoSQL tools (Oracle NoSQL, MongoDB, Cassandra etc.)
- ETL (Extract/Transform/Load) solutions (AWS Glue, xPlenty, Stitch etc.)
- Python, Scala, Java etc.
- Spark, Hadoop etc.

### The mathematics behind the data

#### *Bits*

_Binary_ language (sequences of the digits 0 and 1). In terms of *bits*, the phrase "Ciência de dados é legal!" ("Data science is cool!"), for example, is written as `100001111010011110101011011101100011110100111000011000001100100110010110000011001001100001110010011011111110011100000111010011000001101100110010111001111100001110110010000 1`.
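To see where such a bit string comes from, here is a small sketch (not from the original slides) that converts a text to its binary representation through its UTF-8 bytes:

```python
def to_bits(text):
    # Encode the text as UTF-8 bytes, then render each byte as 8 bits
    return "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

print(to_bits("abc"))  # 011000010110001001100011
```

Each character maps to one or more bytes, and each byte to eight bits; accented characters like "ê" take more than one byte in UTF-8, which is why the phrase above is longer than 8 bits per letter.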
#### 1D, 2D and 3D data

- Text, sound, image, audio, video...
- *Arrays*, data vectors and lists
- *Dataframes* and spreadsheets
- Matrices (pixels, color channels)
- 3D matrices (movies, animations, FPS)

## Computational tools for the course

- Python 3.x (where x is a version number) as the programming language.
  - Interpreted language
  - High level

### _iPython_ and _Jupyter Notebook_

- [[iPython]](http://ipython.org): started in 2001; a Python interpreter designed to improve interactivity with the language.
- Integrated as a _kernel_ into the [[Jupyter]](http://jupyter.org) project, developed in 2014, allowing text, code and graphical elements to be combined in interactive notebooks.
- _Jupyter notebooks_ are interfaces where we can run code in different languages
- _Jupyter_ is a blend of _Julia_, _Python_ and _R_, common languages for data science

### *Anaconda*

- [[Anaconda]](https://www.anaconda.com) was started in 2012 to provide a complete tool for working with Python.
- In 2020, the [[Individual Edition]](https://www.anaconda.com/products/individual) is the most popular in the world, with more than 20 million users.
- Read the installation tutorial!

### *Jupyter Lab*

- A tool that improved Jupyter's interactivity
- This [[article]](https://blog.jupyter.org/jupyterlab-is-ready-for-users-5a6f039b8906) discusses the features of Jupyter Lab

### *Binder*

- The [[Binder]](https://mybinder.org) project works as an online server based on the *Jupyter Hub* technology to serve interactive notebooks online.
- Running code "in the cloud" without any installation
- Temporary sessions

### *Google Colab*

- [[Google Colab]](http://colab.research.google.com) is a "mix" of _Jupyter notebook_ and _Binder_
- Lets users access Google's high-performance computing infrastructure (GPUs and TPUs)
- File synchronization with Google Drive.

### Ecosystem of modules

- *numpy* (*NUMeric PYthon*): used for numerical computing, operating fundamentally on vectors, matrices and linear algebra.
- *pandas* (*Python for Data Analysis*): the Python library for data analysis, which operates on *dataframes* efficiently.
- *sympy* (*SYMbolic PYthon*): a module for symbolic mathematics, playing the role of a true computer algebra system.
- *matplotlib*: aimed at plotting and data visualization; it was one of the first Python modules for this purpose.
- *scipy* (*SCIentific PYthon*): scipy can actually be seen as a broader module that integrates the previous ones. In particular, it is used for numerical integration, interpolation, optimization and statistics.
- *seaborn*: a data visualization module built on top of *matplotlib*, but with better visual capabilities.
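A minimal taste (not from the original slides) of two of these modules working together, numpy building the numbers and pandas labeling them:

```python
import numpy as np
import pandas as pd

arr = np.arange(6).reshape(3, 2)            # numpy: a 3x2 numeric array
df = pd.DataFrame(arr, columns=["a", "b"])  # pandas: a labeled dataframe
print(df["a"].mean())  # 2.0
```

The same data flows naturally from numpy's raw arrays into pandas' labeled structures, which is the pattern most of the other modules build on.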
# CPT example: planning dual-Doppler campaign ### Nikola Vasiljevic, August 24th 2019 In this example we will use [CPT](https://www.wind-energ-sci-discuss.net/wes-2019-13/) to plan a fictive measurement campaign for a site consisting of 12x80m turbines. <br>The site is located at the sea coast of Croatia in vicinity of the town of Šibenik. It is assumed that you have CPT installed if not use following link for details on the CPT installation: <br>https://github.com/niva83/campaign-planning-tool <br>The first thing we need to do is to import CPT and numpy libraries by executing following lines of code: ``` from campaign_planning_tool import CPT import numpy as np ``` If everything when well you have access to a number of methods (i.e., functions) and attributes (i.e., data) of CPT class. At any time you want to have an overview of the CPT class you can execute command: ``` help(CPT) ``` Don't get overwhelmed with a long list of methods and attributes, nevertheless lidar experts will notice some familiar terms. We will slowely go through the process of using important methods in the campaign planning process, while some of the data (attributes set as constants) we will modify for the purpose of demonstrating how CPT library is quite adaptable for your use-case. ``` #help(CPT) help(CPT.set_utm_zone) ``` Let us start by creating a CPT object: ``` layout = CPT() ``` Before we proceed adding the measurement points it important to know that the CPT methods perform calculation in UTM coordinate system (i.e., positions provided as triplets Easting, Northing and Height). Therefore it is required to set a proper UTM zone to the class, which in our case is 33T. The digits in the UTM zone represent so-called latitudinal zone, while a character represent longitudinal zone. Both, digits and character are required to be provided to the CPT class: ``` layout.set_utm_zone('33T') ``` Now that we have set a proper UTM zone we can add our measurement point to the CPT class. 
<br>Let's consider the case where you only have Easting, Northing and the turbine hub height:

```
points = np.array([
    [576697.34, 4845753, 80],
    [576968, 4845595, 80],
    [577215, 4845425, 80],
    [577439, 4845219, 80],
    [577752, 4845005, 80],
    [577979, 4844819, 80],
    [578400, 4844449, 80],
    [578658, 4844287, 80],
    [578838, 4844034, 80],
    [578974, 4843842, 80],
    [579121, 4844186, 80],
    [579246, 4843915, 80]
])
```

However, the CPT methods expect the height to be provided as the height above sea level. <br>Therefore, we need to add the terrain height to the turbine hub height. <br>The CPT class has a method to fetch this information from the SRTM database. <br>Simply supply the UTM zone and a numpy array of shape (n, 3), where n is the number of measurement points, to the following method:

```
layout.get_elevation('33T', points)
```

The output of this method is the terrain height, given in meters above sea level, for the turbine positions. Also, you will notice that the method first checks whether the provided UTM zone is correct. Now let's add this information to our point array and add the modified array to the class:

```
points[:, 2] = points[:, 2] + layout.get_elevation('33T', points)
layout.add_measurement_instances('initial', points)
```

With the last code line we used the CPT class method 'add_measurement_instances'.<br> This method requires a string representing the points identifier followed by an array of measurement points. <br>The points identifier takes only the following values:
* initial
* optimized
* reachable
* misc

<br><br>In reality you will only add measurement points using the points id 'initial' or 'misc'. <br> The CPT class will use the other two point ids for its internal purposes.
Nevertheless, you can access any of these values by supplying the corresponding string to the so-called measurement dictionary:

```
layout.measurements_dictionary['initial']
```

Since we want to store the CPT results, we need to set the output path (the path must exist) on the class:

```
layout.set_path('/Users/niva/Desktop/CPT-example', path_type = 'output')
```

Of course, in your case you should change `'/Users/niva/Desktop/CPT-example'` to a folder that exists on your computer. <br>For those operating Windows machines, you will need to add 'r' in front of your path, e.g.:<br>

```
layout.set_path(r'C:\TEMP', path_type = 'output')
```

This avoids [known issues with Windows paths](https://medium.com/@ageitgey/python-3-quick-tip-the-easy-way-to-deal-with-file-paths-on-windows-mac-and-linux-11a072b58d5f).<br> This is also a good moment to provide the path to the CORINE landcover data. You can download the landcover data from the [Copernicus web site](https://land.copernicus.eu/pan-european/corine-land-cover/clc-2012). <br>The data comes in different formats; however, CPT works with landcover data provided in raster format (GeoTiff), thus you should select the following option from the Copernicus web site:<br> *Corine Land Cover - 100 meter 2012 Raster 100m GeoTiff* You will download a zip file, which contains various files, but you need to point CPT to the tif file, which, depending on the landcover data version, could have a different name.
At the time of writing this Jupyter notebook, the file had the following name: <br>**CLC2018_CLC2012_V2018_20.tif**

```
layout.set_path('/Volumes/Secondary_Drive/work/projects/campaign-planning-tool/data/input/landcover/g100_clc12_V18_5.tif', path_type = 'landcover')
```

Before we start generating the GIS layers which will help us position the two lidars, let us change some attributes of the CPT class, specifically the expected average range of the lidars and the maximum permitted beam elevation angle:

```
layout.MAX_ELEVATION_ANGLE = 7 # in degrees
layout.AVERAGE_RANGE = 4000 # in meters
```

Now let's generate the layer for placing the first lidar. The method that we need to call is:

```
layout.generate_first_lidar_placement_layer(points_id)
```

where points_id is a string indicating which measurement points should be used in this process. <br> We will set *points_id* to *'initial'*, since we previously added the measurement points under this id:

```
layout.generate_first_lidar_placement_layer('initial')
```

If you want to see the results of this method, simply plot them using the method <br>

```
layout.plot_layer(layer_id)
```

where layer_id is a string which tells the method which GIS layer to plot. A full list of layer identifiers can be found by accessing the attribute:

```
layout.LAYER_TYPE
```

In our case we are interested in the layer 'first_lidar_placement'.<br>Let's provide the plotting method with some extra information, such as a title, and indicate that we want to save the plot:

```
layout.plot_layer('first_lidar_placement', title = 'First lidar placement', save_plot = True)
```

The plot shows areas where, if you place a lidar, you will be able to reach a certain number of measurement points, considering constraints such as:
* Range
* Unobstructed line-of-sight
* Elevation angle
* Restriction zones

Currently the CPT library doesn't provide an interactive way of placing lidars, nor an optimization routine to do this for you.
<br>Therefore, for the time being, you will have to get your hands a bit dirty.<br> You have several possibilities for finding a good lidar position. <br> You can export the above layer to KML by using the following built-in method:

```
layout.export_kml(filename, **kwargs)
```

You can use another built-in method to get a list of potential positions:<br>

```
layout.lidar_position_suggestion(layer_id, threshold)
```

We will demonstrate both methods:<br>

```
layout.export_kml('first_lidar', layer_ids = ['first_lidar_placement'])
first_lidar_positions = layout.lidar_position_suggestion('first_lidar_placement', 12)
```

In the first method we provide the KML file name followed by *layer_ids* supplied as a list.<br> In the second method we provide the *layer_id* string followed by a *threshold* provided as an integer. <br> The threshold represents the minimum number of measurement points a suggested lidar position must be able to reach, considering all the constraints. <br>In our case, we want only locations which will allow us to reach all 12 measurement points. <br> Now, you can either work in Google Earth or traverse the suggested positions:

```
layout.add_lidar_instance('koshava', first_lidar_positions[30], layer_id = 'first_lidar_placement')
layout.plot_layer('first_lidar_placement', lidar_ids = ['koshava'])
```

What happened in the previous two lines of code?
<br>First we call the method

```
layout.add_lidar_instance(lidar_id, position, kwargs)
```

which adds the lidar position to the lidar dictionary.<br> In our case we provided this method with the following parameters:
* lidar_id, set to *'koshava'*
* position, which was taken from the previously derived suggested positions (i.e., first_lidar_positions)
* layer_id = *'first_lidar_placement'*

The last parameter tells the method to extract information about the reachable points from a specific GIS layer (in the case of the first lidar placement, *'first_lidar_placement'*) and to update the lidar dictionary instance 'koshava' accordingly. <br> You can access the lidar dictionary instance by calling:

```
layout.lidar_dictionary['koshava']
```

and if you inspect specifically the following key:

```
layout.lidar_dictionary['koshava']['reachable_points']
```

you can see that the returned array contains 12 elements (the same length as points), all equal to 1, since all measurement points are reachable. If a point is unreachable by the lidar in the dictionary, this is indicated with 0. <br>Inspect the other elements of the lidar dictionary. We will use them at a later stage.

```
layout.lidar_dictionary['koshava']['reachable_points']
```

Since we have selected the first lidar position, let's add the second lidar. <br>Now we need to add one more constraint, namely the minimum intersecting angle between the beams. By calling

```
layout.MIN_INTERSECTING_ANGLE
```

you can see the preset value in degrees.<br> For the time being we will use the default value of 30 degrees and create the so-called <br>*'additional lidar placement layer'*. <br> To do this we will call the method

```
layout.generate_additional_lidar_placement_layer(lidar_id)
```

where *lidar_id* will be set to *'koshava'*, since that is the id of our first lidar.
```
layout.generate_additional_lidar_placement_layer('koshava')
layout.plot_layer('additional_lidar_placement', lidar_ids = ['koshava'], save_plot = True)
```

In the above plot of the new GIS layer you can see that some areas which were indicated as good areas for lidar installation in the previous plot have been removed. The reason for this is that if the second lidar were placed in those areas, the intersecting angle between the first and second lidar would be lower than 30 degrees. <br> Again we can export this newly made layer to a KML file, or use the internal method to suggest the best positions for the second lidar:

```
layout.export_kml('second_lidar', layer_ids = ['additional_lidar_placement'])
second_lidar_positions = layout.lidar_position_suggestion('additional_lidar_placement', 12)
layout.add_lidar_instance('bura', second_lidar_positions[26], layer_id = 'additional_lidar_placement')
layout.plot_layer('additional_lidar_placement', lidar_ids = ['koshava', 'bura'])
```

At this point we have the positions of our dual-Doppler system, and we are ready to optimize the trajectory, plot the campaign design and export the lidar configurations. We will use the following built-in methods for this:<br>

```
layout.optimize_trajectory(lidar_ids, **kwargs)
layout.plot_design(layer_id, lidar_ids, **kwargs)
layout.export_measurement_scenario(lidar_ids)
```

Additionally we will export a KML file containing the layers, lidar positions and trajectory:<br>

```
layout.export_kml(filename, **kwargs)
```

```
layout.optimize_trajectory(['koshava', 'bura'], sync = True, only_common_points = True)
```

In this method we set the kwargs *sync* and *only_common_points* to True. The first kwarg ensures synchronized trajectories, while the second ensures that the trajectory considers only measurement points reachable by both lidars. The optimize_trajectory method currently only generates step-stare trajectories. <br> You can access the lidar dictionary and the result of this method (i.e., access the key 'motion_config').
```
layout.lidar_dictionary['koshava']['motion_config']
layout.lidar_dictionary['bura']['motion_config']
layout.plot_design('additional_lidar_placement', lidar_ids = ['koshava', 'bura'], save_plot = True)
layout.export_measurement_scenario(['koshava', 'bura'])
layout.export_kml('campaign_design', layer_ids = ['first_lidar_placement', 'additional_lidar_placement'], lidar_ids = ['koshava', 'bura'])
```

The export_measurement_scenario method will export the following files:
1. A motion program to drive the scanner heads (PMC file)
2. A range gate file to configure the laser and FPGA (TXT file)
3. YAML and XML files containing a human- and machine-readable compilation of the information from (1) and (2)

Motion programs and range gate files are currently only applicable to [long-range WindScanners](https://www.mdpi.com/2072-4292/8/11/896). <br>Now you are fully equipped to make scanning lidar measurements!
# Unlocking the Black Box: How to Visualize Data Science Project Pipeline with Yellowbrick Library

No matter whether you are a novice data scientist or a well-seasoned professional who has worked in the field for a long time, you have most likely faced the challenge of interpreting results generated at some stage of the data science pipeline, be it data ingestion, wrangling, feature selection or model evaluation. This issue becomes even more prominent when the need arises to present interim findings to a group of stakeholders, clients, etc. How do you deal, in that case, with the long arrays of numbers, scientific notation and formulas which tell the story of your data set? That's when a visualization library like Yellowbrick becomes an essential tool in the arsenal of any data scientist; it helps to undertake that endeavour by providing interpretable and comprehensive visualization means for any stage of a project pipeline.

### Introduction

In this post we will explain how to integrate a visualization step into each stage of your project without the need to create customized and time-consuming charts, while getting the benefit of drawing the necessary insights into the data you are working with. Because, let's agree on that: unlike computers, the human eye perceives a graphical representation of information far better than bits and digits. The Yellowbrick machine learning visualization library serves just that purpose - to "create publication-ready figures and interactive data explorations while still allowing developers fine-grain control of figures. For users, Yellowbrick can help evaluate the performance, stability, and predictive value of machine learning models and assist in diagnosing problems throughout the machine learning workflow" ( http://www.scikit-yb.org/en/latest/about.html ).
For the purpose of this exercise we will be using a dataset from the UCI Machine Learning Repository on Absenteeism at Work ( https://archive.ics.uci.edu/ml/machine-learning-databases/00445/ ). This data set contains a mix of continuous, binary and hierarchical features, along with a continuous target representing the number of hours an employee has been absent from work. Such variety in the data makes for an interesting wrangling, feature selection and model evaluation task, the results of which we will make sure to visualize along the way. To begin, we will need to pip install and import the Yellowbrick Python library. To do that, simply run the following command from your command line:

$ pip install yellowbrick

Once that's done, let's import Yellowbrick along with the other essential packages, libraries and user-preference set-up into the Jupyter Notebook.

```
import numpy as np
import pandas as pd
%matplotlib inline
from cycler import cycler
import matplotlib.style
import matplotlib as mpl
mpl.style.use('seaborn-white')
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
from sklearn.cluster import KMeans
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC, NuSVC, SVC
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression, SGDClassifier
from sklearn.ensemble import BaggingClassifier, ExtraTreesClassifier, RandomForestClassifier, RandomTreesEmbedding, GradientBoostingClassifier
import warnings
warnings.filterwarnings("ignore")
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split as tts
from sklearn.metrics import roc_curve
from sklearn.metrics import f1_score
from sklearn.metrics import recall_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from yellowbrick.features import Rank1D
from yellowbrick.features import Rank2D
from yellowbrick.classifier import ClassBalance
from yellowbrick.model_selection import LearningCurve
from yellowbrick.model_selection import ValidationCurve
from yellowbrick.classifier import ClassPredictionError
from yellowbrick.classifier import ClassificationReport
from yellowbrick.features.importances import FeatureImportances
```

### Data Ingestion and Wrangling

Now we are ready to proceed with downloading a zipped archive containing the dataset directly from the UCI Machine Learning Repository and extracting the data file. To perform this step, we will be using the urllib.request module, which helps with opening URLs (mostly HTTP).

```
import urllib.request

print('Beginning file download...')
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00445/Absenteeism_at_work_AAA.zip'
# Specify a path to the folder you want the archive to be stored in
urllib.request.urlretrieve(url, 'C:\\Users\\Yara\\Downloads\\Absenteeism_at_work_AAA.zip')
```

Unzip the archive and extract the CSV data file which we will be using. The zipfile module does that flawlessly.

```
import zipfile

fantasy_zip = zipfile.ZipFile('C:\\Users\\Yara\\Downloads\\Absenteeism_at_work_AAA.zip')
fantasy_zip.extract('Absenteeism_at_work.csv', 'C:\\Users\\Yara\\Downloads')
fantasy_zip.close()
```

Load the data, pointing pandas at the folder the file was extracted to.

```
dataset = pd.read_csv('C:\\Users\\Yara\\Downloads\\Absenteeism_at_work.csv', delimiter=';')
```

Let's take a look at a couple of randomly selected rows from the loaded data set.
```
dataset.sample(10)
dataset.ID.count()
```

As we can see, the selected dataset contains 740 instances, each representing an employed individual. The features provided in the dataset are those considered to be related to the number of hours an employee was absent from work (the target). For the purpose of this exercise, we will subjectively group all instances into 3 categories, thus converting the continuous target into a categorical one. To identify appropriate bins for the target, let's look at the min, max and mean values.

```
# Getting basic statistical information for the target
print(dataset.loc[:, 'Absenteeism time in hours'].mean())
print(dataset.loc[:, 'Absenteeism time in hours'].min())
print(dataset.loc[:, 'Absenteeism time in hours'].max())
```

Since approximately 7 hours of absence is the average value across our dataset, it makes sense to group records in the following manner:

1) Low rate of absence (Low), if the 'Absenteeism time in hours' value is < 6;

2) Medium rate of absence (Medium), if the 'Absenteeism time in hours' value is between 6 and 30;

3) High rate of absence (High), if the 'Absenteeism time in hours' value is > 30.

After grouping, we will further explore the data and select relevant features from the dataset in order to predict an absentee category for the instances in the test portion of the data.

```
dataset['Absenteeism time in hours'] = np.where(dataset['Absenteeism time in hours'] < 6, 1, dataset['Absenteeism time in hours'])
dataset['Absenteeism time in hours'] = np.where(dataset['Absenteeism time in hours'].between(6, 30), 2, dataset['Absenteeism time in hours'])
dataset['Absenteeism time in hours'] = np.where(dataset['Absenteeism time in hours'] > 30, 3, dataset['Absenteeism time in hours'])

# Let's look at the data now!
dataset.head()
```

Once the target is taken care of, it's time to look at the features. Those of them storing unique identifiers and/or data which might 'leak' information to the model should be dropped from the data set.
For instance, the 'Reason for absence' feature stores information 'from the future', since it will not be available in a real-world business scenario when running the model on a new set of data; it is also highly correlated with the target.

```
dataset = dataset.drop(['ID', 'Reason for absence'], axis=1)
dataset.columns
```

We are now left with a set of features and a target to use in a machine learning model of our choice. So let's separate the features from the target, splitting our dataset into a matrix of features (X) and an array of target values (y).

```
features = ['Month of absence', 'Day of the week', 'Seasons', 'Transportation expense',
            'Distance from Residence to Work', 'Service time', 'Age', 'Work load Average/day ',
            'Hit target', 'Disciplinary failure', 'Education', 'Son', 'Social drinker',
            'Social smoker', 'Pet', 'Weight', 'Height', 'Body mass index']
target = ['Absenteeism time in hours']

X = dataset.drop(['Absenteeism time in hours'], axis=1)
y = dataset.loc[:, 'Absenteeism time in hours']

# Setting up some visual preferences prior to visualizing data
class color:
    PURPLE = '\033[95m'
    CYAN = '\033[96m'
    DARKCYAN = '\033[36m'
    BLUE = '\033[94m'
    GREEN = '\033[92m'
    YELLOW = '\033[93m'
    RED = '\033[91m'
    BOLD = '\033[1m'
    UNDERLINE = '\033[4m'
    END = '\033[0m'
```

### Exploratory Analysis and Feature Selection

Whenever one deals with a categorical target, it is important to remember to test the data set for a class imbalance issue. Machine learning models struggle to perform well on imbalanced data, where one class is overrepresented while another is underrepresented. While such data sets are representative of real life (e.g. no company will have the majority or even half of its employees missing work on a massive scale), they need to be adjusted for machine learning purposes, to improve an algorithm's ability to pick up the patterns present in that data.
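One such adjustment is weighting classes inversely to their frequency. This can be sketched with scikit-learn's `compute_class_weight` helper; the toy labels below are invented purely for illustration and are not part of this dataset:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical toy labels: class 1 heavily overrepresented
y_toy = np.array([1] * 8 + [2] * 3 + [3] * 1)

# 'balanced' weights each class inversely to its frequency:
# weight_c = n_samples / (n_classes * count_c)
weights = compute_class_weight(class_weight='balanced',
                               classes=np.array([1, 2, 3]),
                               y=y_toy)
print({c: round(float(w), 2) for c, w in zip([1, 2, 3], weights)})  # {1: 0.5, 2: 1.33, 3: 4.0}
```

Passing `class_weight='balanced'` to an estimator such as `RandomForestClassifier` applies exactly these weights internally, which is the approach taken later in this post.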
And to check for potential class imbalance in our data, we will use the ClassBalance visualizer from Yellowbrick.

```
# Calculating population breakdown by target category
Target = y.value_counts()
print(color.BOLD, 'Low:', color.END, Target[1])
print(color.BOLD, 'Medium:', color.END, Target[2])
print(color.BOLD, 'High:', color.END, Target[3])

# Creating class labels
classes = ["Low", "Medium", "High"]

# Instantiate the classification model and visualizer
mpl.rcParams['axes.prop_cycle'] = cycler('color', ['red', 'limegreen', 'yellow'])
forest = RandomForestClassifier()
fig, ax = plt.subplots(figsize=(10, 7))
visualizer = ClassBalance(forest, classes=classes, ax=ax)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.grid(axis='x')
visualizer.fit(X, y)    # Fit the training data to the visualizer
visualizer.score(X, y)  # Evaluate the model on the test data
g = visualizer.show()
```

There is an obvious class imbalance here; therefore, we can expect the model to have difficulties learning the pattern for the Medium and High categories, unless data resampling is performed or a class weight parameter is applied within the selected model, if the chosen algorithm allows it. With that being said, let's proceed with assessing feature importance and selecting the features which will be used further in a model of our choice. The Yellowbrick library provides a number of convenient visualizers to perform feature analysis, and we will use a couple of them for demonstration purposes, as well as to make sure that consistent results are returned when different methods are applied. The Rank1D visualizer utilizes the Shapiro-Wilk algorithm, which takes into account only a single feature at a time and assesses the normality of the distribution of instances with respect to that feature. Let's see how it works!
```
# Creating a 1D visualizer with the Shapiro feature ranking algorithm
fig, ax = plt.subplots(figsize=(10, 7))
visualizer = Rank1D(features=features, ax=ax, algorithm='shapiro')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
visualizer.fit(X, y)
visualizer.transform(X)
visualizer.show()
```

The Rank2D visualizer, in its turn, utilizes a ranking algorithm that takes into account pairs of features at a time. It provides the option to select a ranking algorithm of your choice. We are going to experiment with covariance and Pearson, and compare the results.

```
# Instantiate visualizer using the covariance ranking algorithm
fig, ax = plt.subplots(figsize=(10, 7))
visualizer = Rank2D(features=features, ax=ax, algorithm='covariance', colormap='summer')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
visualizer.fit(X, y)
visualizer.transform(X)
visualizer.show()

# Instantiate visualizer using the Pearson ranking algorithm
fig, ax = plt.subplots(figsize=(10, 7))
visualizer = Rank2D(features=features, algorithm='pearson', colormap='winter')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
visualizer.fit(X, y)
visualizer.transform(X)
visualizer.show()
```

A visual representation of feature correlation makes it much easier to spot pairs of features which have high or low correlation coefficients. For instance, lighter colours on both plots indicate strong correlation between pairs of features such as 'Body mass index' and 'Weight', or 'Seasons' and 'Month of absence'. Another way of estimating feature importance relative to the model is to rank features by the feature_importances_ attribute once the data is fitted to the model.
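Before turning to the Yellowbrick wrapper, the raw `feature_importances_` attribute can be inspected directly. The sketch below uses a tiny synthetic dataset (invented here for illustration) in which only the first feature drives the label, so that feature should receive nearly all of the importance:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic data: only column 0 determines the label
X_toy = rng.normal(size=(200, 3))
y_toy = (X_toy[:, 0] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_toy, y_toy)

# Importances sum to 1; the informative feature dominates
print(np.round(model.feature_importances_, 2))
```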
The Yellowbrick FeatureImportances visualizer utilizes this attribute to rank and plot the features' relative importances. Let's look at how this approach works with the Ridge, Lasso and ElasticNet models.

```
# Visualizing Ridge, Lasso and ElasticNet feature selection models side by side for comparison

# Ridge
mpl.rcParams['axes.prop_cycle'] = cycler('color', ['red'])
fig = plt.gcf()
fig.set_size_inches(10, 10)
ax = plt.subplot(311)
labels = features
viz = FeatureImportances(Ridge(alpha=0.1), ax=ax, labels=labels, relative=False)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.grid(False)
# Fit and display
viz.fit(X, y)
viz.show()

# ElasticNet
mpl.rcParams['axes.prop_cycle'] = cycler('color', ['salmon'])
fig = plt.gcf()
fig.set_size_inches(10, 10)
ax = plt.subplot(312)
labels = features
viz = FeatureImportances(ElasticNet(alpha=0.01), ax=ax, labels=labels, relative=False)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.grid(False)
# Fit and display
viz.fit(X, y)
viz.show()

# Lasso
mpl.rcParams['axes.prop_cycle'] = cycler('color', ['purple'])
fig = plt.gcf()
fig.set_size_inches(10, 10)
ax = plt.subplot(313)
labels = features
viz = FeatureImportances(Lasso(alpha=0.01), ax=ax, labels=labels, relative=False)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.grid(False)
# Fit and display
viz.fit(X, y)
viz.show()
```

Having analyzed the output of all the visualizations (Shapiro algorithm, Pearson correlation ranking, covariance ranking, Lasso, Ridge and ElasticNet), we can now select a set of features which have meaningful coefficient values (positive or negative).
These are the features to be kept in the model:
- Disciplinary failure
- Day of the week
- Seasons
- Distance from Residence to Work
- Number of children (Son)
- Social drinker
- Social smoker
- Height
- Weight
- BMI
- Pet
- Month of absence

Graphic visualization of the feature coefficients, calculated in a number of different ways, significantly simplifies the feature selection process, as it provides an easy way to visually compare multiple values and keep only those which are statistically significant to the model. Now let's drop the features which didn't make it and proceed with creating models.

```
# Dropping features from X based on visual feature importance
X = X.drop(['Transportation expense', 'Age', 'Service time', 'Hit target',
            'Education', 'Work load Average/day '], axis=1)
```

Some of the features which will be used in the modeling stage might be hierarchical and require encoding. Let's look at the top couple of rows to see if we have any of those.

```
X.head()
```

It looks like 'Month of absence', 'Day of the week' and 'Seasons' are not binary. Therefore, we'll use the pandas get_dummies function to encode them.

```
# Encoding some categorical features
X = pd.get_dummies(data=X, columns=['Month of absence', 'Day of the week', 'Seasons'])
X.head()
print(X.columns)
```

### Model Evaluation and Selection

Our matrix of features X is now ready to be fitted to a model, but first we need to split the data into train and test portions for later model validation.

```
# Perform 80/20 training/test split
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.20, random_state=42)
```

For the purpose of model evaluation and selection we will be using Yellowbrick's ClassificationReport visualizer, which displays the precision, recall, F1 and support scores for the model.
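As a quick reminder of what those scores measure, here is a minimal sketch using scikit-learn's metric functions on toy labels (invented for illustration); per-class F1 is the harmonic mean of precision and recall:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Toy 3-class labels, deliberately imbalanced
y_true = [1, 1, 1, 1, 2, 2, 3, 3]
y_pred = [1, 1, 1, 2, 2, 2, 3, 1]

# average=None returns one score per class, in the order given by `labels`
prec = precision_score(y_true, y_pred, average=None, labels=[1, 2, 3])
rec = recall_score(y_true, y_pred, average=None, labels=[1, 2, 3])
f1 = f1_score(y_true, y_pred, average=None, labels=[1, 2, 3])

for c, p, r, f in zip([1, 2, 3], prec, rec, f1):
    print(f"class {c}: precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

The ClassificationReport visualizer computes the same per-class numbers, but renders them as a color-coded heatmap instead of plain text.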
In order to support easier interpretation and problem detection, the report integrates numerical scores with a color-coded heatmap. All heatmaps are normalized, i.e. in the range from 0 to 1, to facilitate easy comparison of classification models across different classification reports.

```
# Creating a function to visualize estimators
def visual_model_selection(X, y, estimator):
    visualizer = ClassificationReport(estimator, classes=['Low', 'Medium', 'High'], cmap='PRGn')
    visualizer.fit(X, y)
    visualizer.score(X, y)
    visualizer.show()

visual_model_selection(X, y, BaggingClassifier())
visual_model_selection(X, y, LogisticRegression(class_weight='balanced'))
visual_model_selection(X, y, KNeighborsClassifier())
visual_model_selection(X, y, RandomForestClassifier(class_weight='balanced'))
visual_model_selection(X, y, ExtraTreesClassifier(class_weight='balanced'))
```

For the purposes of this exercise we will consider the F1 score when estimating the models' performance and making a selection. The reports above, visualized through Yellowbrick's ClassificationReport visualizer, make it clear that the ensemble classifiers performed best. We need to pay special attention to the F1 score for the underrepresented classes, "High" and "Medium", as they contain significantly fewer instances than the "Low" class. A high F1 score for all three classes therefore indicates a very strong performance of the following models: Bagging Classifier, Random Forest Classifier and Extra Trees Classifier. We will also use the ClassPredictionError visualizer for these models to confirm their strong performance.
```
# Visualizing class prediction error for the Bagging Classifier model
classes = ['Low', 'Medium', 'High']
mpl.rcParams['axes.prop_cycle'] = cycler('color', ['turquoise', 'cyan', 'teal', 'coral', 'blue', 'lime', 'lavender', 'lightblue', 'darkgreen', 'tan', 'salmon', 'gold', 'darkred', 'darkblue'])
fig = plt.gcf()
fig.set_size_inches(10, 10)
ax = plt.subplot(311)
visualizer = ClassPredictionError(BaggingClassifier(), classes=classes, ax=ax)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.grid(False)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
g = visualizer.show()

# Visualizing class prediction error for the Random Forest Classifier model
classes = ['Low', 'Medium', 'High']
mpl.rcParams['axes.prop_cycle'] = cycler('color', ['coral', 'tan', 'darkred'])
fig = plt.gcf()
fig.set_size_inches(10, 10)
ax = plt.subplot(312)
visualizer = ClassPredictionError(RandomForestClassifier(class_weight='balanced'), classes=classes, ax=ax)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.grid(False)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
g = visualizer.show()

# Visualizing class prediction error for the Extra Trees Classifier model
classes = ['Low', 'Medium', 'High']
mpl.rcParams['axes.prop_cycle'] = cycler('color', ['limegreen', 'yellow', 'orange'])
fig = plt.gcf()
fig.set_size_inches(10, 10)
ax = plt.subplot(313)
visualizer = ClassPredictionError(ExtraTreesClassifier(class_weight='balanced'), classes=classes, ax=ax)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.grid(False)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
g = visualizer.show()
```

### Model Optimization

We can now conclude that the ExtraTreesClassifier seems to perform best, as it had no instances from the "High" class reported under the "Low" class.
However, decision trees become more overfit the deeper they are, because at each level of the tree the partitions deal with a smaller subset of the data. One way to avoid overfitting is to adjust the depth of the tree. Yellowbrick's ValidationCurve visualizer explores the relationship of the "max_depth" parameter to the model score under cross-validation (here 3-fold, scored by accuracy). So let's proceed with hyperparameter tuning for our selected ExtraTreesClassifier model using the ValidationCurve visualizer!

```
# Performing hyperparameter tuning with a validation curve
mpl.rcParams['axes.prop_cycle'] = cycler('color', ['purple', 'darkblue'])
fig = plt.gcf()
fig.set_size_inches(10, 10)
ax = plt.subplot(411)
viz = ValidationCurve(ExtraTreesClassifier(class_weight='balanced'), ax=ax,
                      param_name="max_depth", param_range=np.arange(1, 11),
                      cv=3, scoring="accuracy")
# Fit and show the visualizer
viz.fit(X, y)
viz.show()
```

We can observe in the above chart that even though the training score keeps rising continuously, the cross-validation score drops at max_depth=7. Therefore, we will choose that parameter value for our selected model to optimize its performance.

```
visual_model_selection(X, y, ExtraTreesClassifier(class_weight='balanced', max_depth=7))
```

### Conclusions

As demonstrated in this article, visualization techniques prove to be a useful tool in the machine learning toolkit, and Yellowbrick provides a wide selection of visualizers to meet the needs of every step and stage of the data science project pipeline. Ranging from feature analysis and selection to model selection and optimization, Yellowbrick visualizers make it easy to decide which features to keep in the model, which model performs best, and how to tune a model's hyperparameters to achieve optimal performance for future use.
Moreover, visualizing algorithmic output also makes it easy to present insights to audiences and stakeholders, and contributes to the interpretability of machine learning results.
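Returning to the validation-curve step above: the decision rule applied there (keep the depth at which the cross-validation score peaks, just before it starts to drop) can be sketched without the visualizer. The score arrays below are illustrative stand-ins for the two curves it draws, not real results:

```python
import numpy as np

# Illustrative train/CV accuracy curves over max_depth = 1..10
# (made-up stand-ins for the curves ValidationCurve plots).
depths = np.arange(1, 11)
train_score = np.linspace(0.70, 0.99, 10)            # keeps rising with depth
cv_score = np.array([0.68, 0.72, 0.75, 0.78, 0.80,
                     0.81, 0.82, 0.79, 0.77, 0.76])  # peaks, then overfits

# Choose the depth that maximizes the cross-validation score.
best_depth = depths[np.argmax(cv_score)]
print(best_depth)  # → 7 for these illustrative curves
```

With real curves, `cv_score` would come from cross-validated scoring rather than being hard-coded; the selection logic is the same.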
### Utility Functions Definitions of functions that do not belong to a specific class in the logic of the problem solution ``` import numpy as np import matplotlib.pyplot as plt import seaborn as sns import math import copy %matplotlib inline def pseudo_transpose(listt): ''' Utility function that computes the transpose of a 3x3 matrix stored as a flat vector ''' result = np.empty(9) for i in range(9): result[i] = listt[3 * (i % 3) + i // 3] # element (i % 3, i // 3) of the original return result def moving_average(a, n=5): ret = np.cumsum(a, dtype=float) ret[n:] = ret[n:] - ret[:-n] return ret[n - 1:] / n def plot_agents_avg_reward(title, data, styles, labels): sns.set_style('white') sns.set_context('talk') plt.subplot(1, 1, 1) plt.title(title) for i in range(len(data)): plt.plot(range(len(data[i])), data[i], styles[i], label = labels[i]) plt.ylabel('Average Reward') plt.xlabel('Time Step') plt.legend(loc=4, bbox_to_anchor=(1.4, 0)) sns.despine() plt.show() ``` ### Game Definition of the class that represents a game. The important methods for these experiments are the ones related to joint actions. The game class makes sure that agent1 and agent2 receive each other's feedback so that they can update their shared knowledge and obtain a solution based on it. ``` class Game(object): ''' Definition of the game values and game actions (play) based on the involved agents ''' def __init__(self, game_values, iterations, agent1, agent2): self.game_values = game_values self.iterations = iterations self.agent1 = agent1 self.agent2 = agent2 self.history_values_agent1 = [] self.history_values_agent2 = [] def run(self): for i in range(self.iterations): self.agent1.compute_action() self.agent2.compute_action() self.play(self.agent1.last_action, self.agent2.last_action) def play(self, action_agent1, action_agent2): ''' Defines a step in the game.
Based on the input actions, the output of the game is computed for a joint scenario ''' value = self.get_game_value(action_agent1, action_agent2) self.agent1.add_action_value(action_agent1, value[0]) self.agent2.add_action_value(action_agent2, value[1]) self.agent1.add_extended_action_value(self.agent1.last_action * 3 + self.agent2.last_action, value[0]) self.agent2.add_extended_action_value(self.agent2.last_action * 3 + self.agent1.last_action, value[1]) return value def run_joint(self, experiments = 1): history_values_agent1 = np.empty((experiments, self.iterations)) history_values_agent2 = np.empty((experiments, self.iterations)) for e in range(experiments): self.agent1.reset() self.agent2.reset() for i in range(self.iterations): tmp_agent1_values = copy.copy(self.agent1.extended_values) tmp_agent1_actions_count = copy.copy(self.agent1.extended_actions_count) self.agent1.compute_action_joint(self.agent2.extended_values, self.agent2.extended_actions_count, i) self.agent2.compute_action_joint(tmp_agent1_values, tmp_agent1_actions_count, i) value = self.play(self.agent1.last_action, self.agent2.last_action) history_values_agent1[e][i] = value[0] history_values_agent2[e][i] = value[1] self.history_values_agent1 = np.mean(history_values_agent1, axis=0) self.history_values_agent2 = np.mean(history_values_agent2, axis=0) def get_game_value(self, position_1, position_2): ''' Obtains a tuple with the values for the players after each of them chooses an action/position to play. Player 1 is the row player and Player 2 the column player ''' return (np.random.normal(self.game_values[position_1][position_2][0], self.game_values[position_1][position_2][1]), np.random.normal(self.game_values[position_1][position_2][0], self.game_values[position_1][position_2][1])) ``` ### Agent Definition of the agent class, which contains the history of the values and the methods to select a position for the game and play it.
``` class Agent(object): def __init__(self): self.reset() def reset(self): self.actions_count = np.ones(3) # count for the action taken self.values = np.zeros(3) # values obtained so far -> average def compute_action(self, iteration = None): ''' Gets the position/action for the following game based on the policy of the agent. For this base class the policy follows a random choice ''' action = np.random.choice(3) self.last_action = action self.actions_count[action] += 1 def add_action_value(self, action, value): self.values[action] = ((self.values[action] * (self.actions_count[action] - 1) + value) / (self.actions_count[action])) class BoltzmannActionLearner(Agent): def __init__(self, t, num_iterations=None): # num_iterations added so the annealing schedule below has a horizon self.t = t self.num_iterations = num_iterations super(BoltzmannActionLearner, self).__init__() def compute_action(self, iteration = None): ''' Gets the position/action for the following game based on the policy of the agent. For this class the decision is taken based on the Boltzmann definition ''' t = self.t if iteration is None else 3 * ((self.num_iterations - iteration) / self.num_iterations) numerator = np.exp(self.values / t) denominator = np.sum(numerator) pdf = numerator / denominator # probability distribution function action = np.random.choice(len(self.values), p=pdf) self.last_action = action self.actions_count[action] += 1 # increment the action counter class BoltzmannJointActionLearner(Agent): def __init__(self, t, num_iterations): self.t = t self.num_iterations = num_iterations super(BoltzmannJointActionLearner, self).__init__() def __str__(self): return 'Boltzmann JAL' def reset(self): self.actions_count = np.ones(3) # count for the action taken self.values = np.zeros(3) # values obtained so far -> average self.extended_actions_count = np.ones(9) self.extended_values = np.zeros(9) def compute_action(self, iteration = None): ''' Gets the position/action for the following game based on the policy of the agent. This method simulates the two agents as a single agent.
Not intended for practical use; kept as a utility for possible further testing ''' numerator = np.exp(self.values / self.t) denominator = np.sum(numerator) pdf = numerator / denominator # probability distribution function action = np.random.choice(len(self.values), p=pdf) self.last_action = action self.actions_count[action] += 1 def compute_action_joint(self, external_agent_values, external_agent_actions_count, iteration = None): ''' Gets the position/action for the following game based on the policy of the agent and the values of the external agent, averaging the obtained values. ''' t = self.t if iteration is None else 0.05 * ((self.num_iterations - iteration) / self.num_iterations) avg_values = ((self.extended_values + pseudo_transpose(external_agent_values)) / (self.extended_actions_count + pseudo_transpose(external_agent_actions_count))) numerator = np.exp(avg_values / t) denominator = np.sum(numerator) pdf = numerator / denominator # probability distribution function action = np.random.choice(len(self.extended_values), p=pdf) self.last_action = math.floor(action / 3) self.actions_count[self.last_action] += 1 def add_extended_action_value(self, action, value): self.extended_actions_count[action] += 1 self.extended_values[action] = ((self.extended_values[action] * (self.extended_actions_count[action] - 1) + value) / (self.extended_actions_count[action])) class OptimisticBoltzmannJointActionLearner(Agent): def __init__(self, t, num_iterations): self.t = t self.num_iterations = num_iterations super(OptimisticBoltzmannJointActionLearner, self).__init__() def __str__(self): return 'Optimistic Boltzmann JAL' def reset(self): self.actions_count = np.zeros(3) # count for the action taken self.values = np.zeros(3) # values obtained so far -> average self.extended_actions_count = np.ones(9) self.extended_values = np.zeros(9) self.max_values = np.zeros(9) def compute_action_joint(self, ext_agent_max_values, ext_agent_actions_count, iteration = None): ''' Gets the position/action
for the following game based on the policy of the agent and the values of the external agent, averaging the obtained values. ''' t = self.t if iteration is None else 2 * ((self.num_iterations - iteration) / self.num_iterations) avg_values = (self.max_values + ext_agent_max_values) / 2 numerator = np.exp(avg_values / t) denominator = np.sum(numerator) pdf = numerator / denominator # probability distribution function action = np.random.choice(len(self.extended_values), p=pdf) self.last_action = math.floor(action / 3) self.actions_count[self.last_action] += 1 def add_action_value(self, action, value): ''' Adds a corresponding action value, taking into consideration the possibility of a new max value for the chosen action ''' # Re-assign max value if required if value >= self.max_values[action]: self.max_values[action] = value self.values[action] = ((self.values[action] * (self.actions_count[action] - 1) + value) / (self.actions_count[action])) def add_extended_action_value(self, action, value): # Re-assign max value if required if value >= self.max_values[action]: self.max_values[action] = value self.extended_actions_count[action] += 1 self.extended_values[action] = ((self.extended_values[action] * (self.extended_actions_count[action] - 1) + value) / (self.extended_actions_count[action])) ``` ### Exercise A ``` sigma = 0.2 sigma0 = 0.2 sigma1 = 0.2 iterations = 5000 experiments = 500 # Used to average over experiments and smooth the final charts game_values = [[(11, sigma0), (-30, sigma), (0, sigma)], [(-30, sigma), (7, sigma1), (6, sigma)], [(0, sigma), (0, sigma), (5, sigma)]] agent1 = BoltzmannJointActionLearner(None, iterations) agent2 = BoltzmannJointActionLearner(None, iterations) game = Game(game_values, iterations, agent1, agent2) game.run_joint(experiments) values_boltzman = moving_average((game.history_values_agent1 + game.history_values_agent2) / 2, 10) agent1 = OptimisticBoltzmannJointActionLearner(None, iterations) agent2 =
OptimisticBoltzmannJointActionLearner(None, iterations) game = Game(game_values, iterations, agent1, agent2) game.run_joint(experiments) values_optimistic_boltzman = moving_average((game.history_values_agent1 + game.history_values_agent2) / 2, 10) plot_agents_avg_reward('Results - \u03C3 = \u03C3 0 = \u03C3 1 = {}'.format(sigma), [values_boltzman, values_optimistic_boltzman], ['b-', 'r-'], ['Boltzmann', 'Optimistic Boltzmann']) ``` ### Exercise B ``` sigma = 0.1 sigma0 = 4.0 sigma1 = 0.1 game_values = [[(11, sigma0), (-30, sigma), (0, sigma)], [(-30, sigma), (7, sigma1), (6, sigma)], [(0, sigma), (0, sigma), (5, sigma)]] agent1 = BoltzmannJointActionLearner(None, iterations) agent2 = BoltzmannJointActionLearner(None, iterations) game = Game(game_values, iterations, agent1, agent2) game.run_joint(experiments) values_boltzman = moving_average((game.history_values_agent1 + game.history_values_agent2) / 2, 10) agent1 = OptimisticBoltzmannJointActionLearner(None, iterations) agent2 = OptimisticBoltzmannJointActionLearner(None, iterations) game = Game(game_values, iterations, agent1, agent2) game.run_joint(experiments) values_optimistic_boltzman = moving_average((game.history_values_agent1 + game.history_values_agent2) / 2, 10) plot_agents_avg_reward('Results - \u03C3 = {}, \u03C3 0 = {}, \u03C3 1 = {}'.
format(sigma, sigma0, sigma1), [values_boltzman, values_optimistic_boltzman], ['b-', 'r-'], ['Boltzmann', 'Optimistic Boltzmann']) ``` ### Exercise C ``` sigma = 0.1 sigma0 = 0.1 sigma1 = 4.0 game_values = [[(11, sigma0), (-30, sigma), (0, sigma)], [(-30, sigma), (7, sigma1), (6, sigma)], [(0, sigma), (0, sigma), (5, sigma)]] agent1 = BoltzmannJointActionLearner(None, iterations) agent2 = BoltzmannJointActionLearner(None, iterations) game = Game(game_values, iterations, agent1, agent2) game.run_joint(experiments) values_boltzman = moving_average((game.history_values_agent1 + game.history_values_agent2) / 2, 10) agent1 = OptimisticBoltzmannJointActionLearner(None, iterations) agent2 = OptimisticBoltzmannJointActionLearner(None, iterations) game = Game(game_values, iterations, agent1, agent2) game.run_joint(experiments) values_optimistic_boltzman = moving_average((game.history_values_agent1 + game.history_values_agent2) / 2, 10) plot_agents_avg_reward('Results - \u03C3 = {}, \u03C3 0 = {}, \u03C3 1 = {}'. format(sigma, sigma0, sigma1), [values_boltzman, values_optimistic_boltzman], ['b-', 'r-'], ['Boltzmann', 'Optimistic Boltzmann']) ```
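All of the learners above share the same Boltzmann (softmax) selection rule, with the temperature `t` annealed toward zero over the run so that early play explores and late play exploits. A minimal sketch of just that rule, with made-up action values:

```python
import numpy as np

def boltzmann_probs(values, t):
    """Softmax over action-value estimates at temperature t."""
    z = np.exp(values / t)
    return z / z.sum()

values = np.array([5.0, 6.0, 7.0])  # made-up action-value estimates

# High temperature: near-uniform exploration.
p_hot = boltzmann_probs(values, t=100.0)
# Low temperature: probability mass concentrates on the best action.
p_cold = boltzmann_probs(values, t=0.1)

print(p_hot.round(3), p_cold.round(3))
```

Annealing `t` from a large value down to near zero, as the `compute_action_joint` methods do via their `(num_iterations - iteration) / num_iterations` schedule, moves the policy smoothly from the first regime to the second.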
# Ray RLlib - Extra Application Example - FrozenLake-v0 © 2019-2021, Anyscale. All Rights Reserved ![Anyscale Academy](../../../images/AnyscaleAcademyLogo.png) This example uses [RLlib](https://ray.readthedocs.io/en/latest/rllib.html) to train a policy with the `FrozenLake-v0` environment ([gym.openai.com/envs/FrozenLake-v0/](https://gym.openai.com/envs/FrozenLake-v0/)). For more background about this problem, see: * ["Introduction to Reinforcement Learning: the Frozen Lake Example"](https://reinforcementlearning4.fun/2019/06/09/introduction-reinforcement-learning-frozen-lake-example/), [Rodolfo Mendes](https://twitter.com/rodmsmendes) * ["Gym Tutorial: The Frozen Lake"](https://reinforcementlearning4.fun/2019/06/16/gym-tutorial-frozen-lake/), [Rodolfo Mendes](https://twitter.com/rodmsmendes) ``` import pandas as pd import json import os import shutil import sys import ray import ray.rllib.agents.ppo as ppo info = ray.init(ignore_reinit_error=True) print("Dashboard URL: http://{}".format(info["webui_url"])) ``` Set up the checkpoint location: ``` checkpoint_root = "tmp/ppo/frozen-lake" shutil.rmtree(checkpoint_root, ignore_errors=True, onerror=None) ``` Next we'll train an RLlib policy with the `FrozenLake-v0` environment. By default, training runs for `10` iterations. Increase the `N_ITER` setting if you want to see the resulting rewards improve. Also note that *checkpoints* get saved after each iteration into the `tmp/ppo/frozen-lake` directory. > **Note:** If you prefer to use a different directory root than `tmp`, change it in the next cell **and** in the `rllib rollout` command below.
``` SELECT_ENV = "FrozenLake-v0" N_ITER = 10 config = ppo.DEFAULT_CONFIG.copy() config["log_level"] = "WARN" agent = ppo.PPOTrainer(config, env=SELECT_ENV) results = [] episode_data = [] episode_json = [] for n in range(N_ITER): result = agent.train() results.append(result) episode = {'n': n, 'episode_reward_min': result['episode_reward_min'], 'episode_reward_mean': result['episode_reward_mean'], 'episode_reward_max': result['episode_reward_max'], 'episode_len_mean': result['episode_len_mean'] } episode_data.append(episode) episode_json.append(json.dumps(episode)) file_name = agent.save(checkpoint_root) print(f'{n+1:3d}: Min/Mean/Max reward: {result["episode_reward_min"]:8.4f}/{result["episode_reward_mean"]:8.4f}/{result["episode_reward_max"]:8.4f}, len mean: {result["episode_len_mean"]:8.4f}. Checkpoint saved to {file_name}') import pprint policy = agent.get_policy() model = policy.model pprint.pprint(model.variables()) pprint.pprint(model.value_function()) print(model.base_model.summary()) ray.shutdown() ``` ## Rollout Next we'll use the [`rollout` script](https://ray.readthedocs.io/en/latest/rllib-training.html#evaluating-trained-policies) to evaluate the trained policy. The following rollout visualizes the "character" agent operating within its simulation: trying to find a walkable path to a goal tile. The [FrozenLake-v0 environment](https://gym.openai.com/envs/FrozenLake-v0/) documentation provides a detailed explanation of the text encoding. The *observation space* is defined as a 4x4 grid: * `S` -- starting point, safe * `F` -- frozen surface, safe * `H` -- hole, fall to your doom * `G` -- goal, where the frisbee is located * orange rectangle shows where the character/agent is currently located Note that for each action, the grid gets printed first followed by the action. 
The *action space* is defined by four possible movements across the grid on the frozen lake: * Up * Down * Left * Right The *rewards* at the end of each episode are structured as: * 1 if you reach the goal * 0 otherwise An episode ends when you reach the goal or fall in a hole. ``` !rllib rollout \ tmp/ppo/frozen-lake/checkpoint_10/checkpoint-10 \ --config "{\"env\": \"FrozenLake-v0\"}" \ --run PPO \ --steps 2000 ``` The rollout uses the second saved checkpoint, evaluated through `2000` steps. Modify the path to view other checkpoints. ## Exercise ("Homework") In addition to _Taxi_ and _Frozen Lake_, there are other so-called ["toy text"](https://gym.openai.com/envs/#toy_text) problems you can try.
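The map and reward structure described above can be illustrated with a tiny hand-rolled walk over the standard 4x4 map (a sketch only, not the gym implementation; the paths here are made up):

```python
# Standard 4x4 FrozenLake map: S = start, F = frozen, H = hole, G = goal.
MAP = ["SFFF",
       "FHFH",
       "FFFH",
       "HFFG"]

def episode_reward(path):
    """Walk a list of (row, col) cells from the start; return (reward, steps).

    Reward is 1 only if the goal tile is reached; stepping on a hole ends
    the episode with reward 0, matching the description above.
    """
    for steps, (r, c) in enumerate(path, start=1):
        tile = MAP[r][c]
        if tile == "H":
            return 0, steps   # fell through the ice
        if tile == "G":
            return 1, steps   # reached the frisbee
    return 0, len(path)       # episode never terminated

# A walkable path down the left edge and across to the goal:
good = [(1, 0), (2, 0), (2, 1), (2, 2), (3, 2), (3, 3)]
# A path that steps straight into the first hole:
bad = [(1, 0), (1, 1)]

print(episode_reward(good))  # → (1, 6)
print(episode_reward(bad))   # → (0, 2)
```

The trained policy's job is exactly to find paths of the first kind despite the sparse 0/1 reward signal.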
# Artificial Intelligence in Finance ## Data-Driven Finance (a) ## Financial Econometrics and Regression ``` import numpy as np def f(x): return 2 + 1 / 2 * x x = np.arange(-4, 5) x y = f(x) y x y beta = np.cov(x, y, ddof=0)[0, 1] / x.var() beta alpha = y.mean() - beta * x.mean() alpha y_ = alpha + beta * x np.allclose(y_, y) ``` ## Data Availability In addition to a (paid) subscription to the Eikon Data API (https://developers.refinitiv.com/eikon-apis/eikon-data-apis), the following code requires the `eikon` Python package: pip install eikon ``` import eikon as ek import configparser c = configparser.ConfigParser() c.read('../aiif.cfg') ek.set_app_key(c['eikon']['app_id']) ek.__version__ symbols = ['AAPL.O', 'MSFT.O', 'NFLX.O', 'AMZN.O'] data = ek.get_timeseries(symbols, fields='CLOSE', start_date='2019-07-01', end_date='2020-07-01') data.info() data.tail() data = ek.get_timeseries('AMZN.O', fields='*', start_date='2020-09-24', end_date='2020-09-25', interval='minute') data.info() data.head() data_grid, err = ek.get_data(['AAPL.O', 'IBM', 'GOOG.O', 'AMZN.O'], ['TR.TotalReturnYTD', 'TR.WACCBeta', 'YRHIGH', 'YRLOW', 'TR.Ebitda', 'TR.GrossProfit']) data_grid ``` In addition to a (free paper trading) account with Oanda (http://oanda.com), the following code requires the `tpqoa` package: pip install --upgrade git+https://github.com/yhilpisch/tpqoa.git ``` import tpqoa oa = tpqoa.tpqoa('../aiif.cfg') oa.stream_data('BTC_USD', stop=5) data = ek.get_timeseries('AAPL.O', fields='*', start_date='2020-09-25 15:00:00', end_date='2020-09-25 15:15:00', interval='tick') data.info() data.head(8) news = ek.get_news_headlines('R:TSLA.O PRODUCTION', date_from='2020-06-01', date_to='2020-08-01', count=7 ) news storyId = news['storyId'][1] from IPython.display import HTML HTML(ek.get_news_story(storyId)) import nlp import requests sources = [ 'https://nr.apple.com/dE0b1T5G3u', # iPad Pro 'https://nr.apple.com/dE4c7T6g1K', # MacBook Air 'https://nr.apple.com/dE4q4r8A2A', # Mac Mini ]
html = [requests.get(url).text for url in sources] data = [nlp.clean_up_text(t) for t in html] data[0][0:1001] from twitter import Twitter, OAuth t = Twitter(auth=OAuth(c['twitter']['access_token'], c['twitter']['access_secret_token'], c['twitter']['api_key'], c['twitter']['api_secret_key']), retry=True) l = t.statuses.home_timeline(count=15) for e in l: print(e['text']) l = t.statuses.user_timeline(screen_name='dyjh', count=5) for e in l: print(e['text']) d = t.search.tweets(q='#Python', count=7) for e in d['statuses']: print(e['text']) l = t.statuses.user_timeline(screen_name='elonmusk', count=50) tl = [e['text'] for e in l] tl[:5] wc = nlp.generate_word_cloud(' '.join(tl), 35) ``` ## Normative Theories Revisited ### Mean-Variance Portfolio Theory ``` import numpy as np import pandas as pd from pylab import plt, mpl from scipy.optimize import minimize plt.style.use('seaborn') mpl.rcParams['savefig.dpi'] = 300 mpl.rcParams['font.family'] = 'serif' np.set_printoptions(precision=5, suppress=True, formatter={'float': lambda x: f'{x:6.3f}'}) url = 'http://hilpisch.com/aiif_eikon_eod_data.csv' raw = pd.read_csv(url, index_col=0, parse_dates=True).dropna() raw.info() symbols = ['AAPL.O', 'MSFT.O', 'INTC.O', 'AMZN.O', 'GLD'] rets = np.log(raw[symbols] / raw[symbols].shift(1)).dropna() (raw[symbols[:]] / raw[symbols[:]].iloc[0]).plot(figsize=(10, 6)); weights = len(rets.columns) * [1 / len(rets.columns)] weights def port_return(rets, weights): return np.dot(rets.mean(), weights) * 252 # annualized port_return(rets, weights) def port_volatility(rets, weights): return np.dot(weights, np.dot(rets.cov() * 252 , weights)) ** 0.5 # annualized port_volatility(rets, weights) def port_sharpe(rets, weights): return port_return(rets, weights) / port_volatility(rets, weights) port_sharpe(rets, weights) w = np.random.random((1000, len(symbols))) w = (w.T / w.sum(axis=1)).T w[:5] w[:5].sum(axis=1) pvr = [(port_volatility(rets[symbols], weights), port_return(rets[symbols], weights)) for 
weights in w] pvr = np.array(pvr) psr = pvr[:, 1] / pvr[:, 0] plt.figure(figsize=(10, 6)) fig = plt.scatter(pvr[:, 0], pvr[:, 1], c=psr, cmap='coolwarm') cb = plt.colorbar(fig) cb.set_label('Sharpe ratio') plt.xlabel('expected volatility') plt.ylabel('expected return') plt.title(' | '.join(symbols)); bnds = len(symbols) * [(0, 1),] bnds cons = {'type': 'eq', 'fun': lambda weights: weights.sum() - 1} opt_weights = {} for year in range(2010, 2019): rets_ = rets[symbols].loc[f'{year}-01-01':f'{year}-12-31'] ow = minimize(lambda weights: -port_sharpe(rets_, weights), len(symbols) * [1 / len(symbols)], bounds=bnds, constraints=cons)['x'] opt_weights[year] = ow opt_weights res = pd.DataFrame() for year in range(2010, 2019): rets_ = rets[symbols].loc[f'{year}-01-01':f'{year}-12-31'] epv = port_volatility(rets_, opt_weights[year]) epr = port_return(rets_, opt_weights[year]) esr = epr / epv rets_ = rets[symbols].loc[f'{year + 1}-01-01':f'{year + 1}-12-31'] rpv = port_volatility(rets_, opt_weights[year]) rpr = port_return(rets_, opt_weights[year]) rsr = rpr / rpv res = res.append(pd.DataFrame({'epv': epv, 'epr': epr, 'esr': esr, 'rpv': rpv, 'rpr': rpr, 'rsr': rsr}, index=[year + 1])) res res.mean() res[['epv', 'rpv']].corr() res[['epv', 'rpv']].plot(kind='bar', figsize=(10, 6), title='Expected vs. Realized Portfolio Volatility'); res[['epr', 'rpr']].corr() res[['epr', 'rpr']].plot(kind='bar', figsize=(10, 6), title='Expected vs. Realized Portfolio Return'); res[['esr', 'rsr']].corr() res[['esr', 'rsr']].plot(kind='bar', figsize=(10, 6), title='Expected vs. 
Realized Sharpe Ratio'); ``` ### Capital Asset Pricing Model ``` r = 0.005 market = '.SPX' rets = np.log(raw / raw.shift(1)).dropna() res = pd.DataFrame() for sym in rets.columns[:4]: print('\n' + sym) print(54 * '=') for year in range(2010, 2019): rets_ = rets.loc[f'{year}-01-01':f'{year}-12-31'] muM = rets_[market].mean() * 252 cov = rets_.cov().loc[sym, market] var = rets_[market].var() beta = cov / var rets_ = rets.loc[f'{year + 1}-01-01':f'{year + 1}-12-31'] muM = rets_[market].mean() * 252 mu_capm = r + beta * (muM - r) mu_real = rets_[sym].mean() * 252 res = res.append(pd.DataFrame({'symbol': sym, 'mu_capm': mu_capm, 'mu_real': mu_real}, index=[year + 1]), sort=True) print('{} | beta: {:.3f} | mu_capm: {:6.3f} | mu_real: {:6.3f}' .format(year + 1, beta, mu_capm, mu_real)) sym = 'AMZN.O' res[res['symbol'] == sym].corr() res[res['symbol'] == sym].plot(kind='bar', figsize=(10, 6), title=sym); grouped = res.groupby('symbol').mean() grouped grouped.plot(kind='bar', figsize=(10, 6), title='Average Values'); ``` ### Arbitrage-Pricing Theory ``` factors = ['.SPX', '.VIX', 'EUR=', 'XAU='] res = pd.DataFrame() np.set_printoptions(formatter={'float': lambda x: f'{x:5.2f}'}) for sym in rets.columns[:4]: print('\n' + sym) print(71 * '=') for year in range(2010, 2019): rets_ = rets.loc[f'{year}-01-01':f'{year}-12-31'] reg = np.linalg.lstsq(rets_[factors], rets_[sym], rcond=-1)[0] rets_ = rets.loc[f'{year + 1}-01-01':f'{year + 1}-12-31'] mu_apt = np.dot(rets_[factors].mean() * 252, reg) mu_real = rets_[sym].mean() * 252 res = res.append(pd.DataFrame({'symbol': sym, 'mu_apt': mu_apt, 'mu_real': mu_real}, index=[year + 1])) print('{} | fl: {} | mu_apt: {:6.3f} | mu_real: {:6.3f}' .format(year + 1, reg.round(2), mu_apt, mu_real)) sym = 'AMZN.O' res[res['symbol'] == sym].corr() res[res['symbol'] == sym].plot(kind='bar', figsize=(10, 6), title=sym); grouped = res.groupby('symbol').mean() grouped grouped.plot(kind='bar', figsize=(10, 6), title='Average Values'); factors = 
pd.read_csv('http://hilpisch.com/aiif_eikon_eod_factors.csv', index_col=0, parse_dates=True) factors.info() (factors / factors.iloc[0]).plot(figsize=(10, 6)); start = '2017-01-01' end = '2020-01-01' retsd = rets.loc[start:end].copy() retsd.dropna(inplace=True) retsf = np.log(factors / factors.shift(1)) retsf = retsf.loc[start:end] retsf.dropna(inplace=True) retsf = retsf.loc[retsd.index].dropna() retsf.corr() res = pd.DataFrame() np.set_printoptions(formatter={'float': lambda x: f'{x:5.2f}'}) split = int(len(retsf) * 0.5) for sym in rets.columns[:4]: print('\n' + sym) print(74 * '=') retsf_, retsd_ = retsf.iloc[:split], retsd.iloc[:split] reg = np.linalg.lstsq(retsf_, retsd_[sym], rcond=-1)[0] retsf_, retsd_ = retsf.iloc[split:], retsd.iloc[split:] mu_apt = np.dot(retsf_.mean() * 252, reg) mu_real = retsd_[sym].mean() * 252 res = res.append(pd.DataFrame({'mu_apt': mu_apt, 'mu_real': mu_real}, index=[sym,]), sort=True) print('fl: {} | apt: {:.3f} | real: {:.3f}' .format(reg.round(1), mu_apt, mu_real)) res.plot(kind='bar', figsize=(10, 6)); sym rets_sym = np.dot(retsf_, reg) rets_sym = pd.DataFrame(rets_sym, columns=[sym + '_apt'], index=retsf_.index) rets_sym[sym + '_real'] = retsd_[sym] rets_sym.mean() * 252 rets_sym.std() * 252 ** 0.5 rets_sym.corr() rets_sym.cumsum().apply(np.exp).plot(figsize=(10, 6)); rets_sym['same'] = (np.sign(rets_sym[sym + '_apt']) == np.sign(rets_sym[sym + '_real'])) rets_sym['same'].value_counts() rets_sym['same'].value_counts()[True] / len(rets_sym) ``` <img src='http://hilpisch.com/taim_logo.png' width="350px" align="right"> <br><br><br><a href="http://tpq.io" target="_blank">http://tpq.io</a> | <a href="http://twitter.com/dyjh" target="_blank">@dyjh</a> | <a href="mailto:ai@tpq.io">ai@tpq.io</a>
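Stepping back to the mean-variance section above: the annualization conventions inside `port_return`, `port_volatility`, and `port_sharpe` can be checked in isolation on synthetic data. A minimal sketch (the return matrix below is random, not market data):

```python
import numpy as np

rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, size=(252, 4))  # synthetic daily log returns
weights = np.full(4, 1 / 4)                     # equal-weight portfolio

# Same conventions as the functions in the text:
port_return = rets.mean(axis=0) @ weights * 252    # annualized mean return
cov = np.cov(rets, rowvar=False) * 252             # annualized covariance
port_vol = (weights @ cov @ weights) ** 0.5        # sqrt(w' C w)
sharpe = port_return / port_vol

print(port_return, port_vol, sharpe)
```

Scaling the mean by 252 and the covariance by 252 (hence volatility by sqrt(252)) is the standard trading-day annualization used throughout the chapter.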
``` # Configuration to reload modules and libraries %reload_ext autoreload %autoreload 2 ``` # MAT281 ## Applications of Mathematics in Engineering You can run this Jupyter notebook interactively: [![Binder](../shared/images/jupyter_binder.png)](https://mybinder.org/v2/gh/sebastiandres/mat281_m01_introduccion/master?filepath=00_template/00_template.ipynb) [![Colab](../shared/images/jupyter_colab.png)](https://colab.research.google.com/github/sebastiandres/mat281_m01_introduccion/blob/master//00_template/00_template.ipynb) # Module 1: Data Science Toolkit ## What will we cover? * [Operating System](#so) * [Command Line Interface](#cli) * [Virtual Environment](#venv) * [Python](#python) * [Project Jupyter](#jupyter) * [Git](#git) ## Goals of this class * Install and use computational tools. * Use Jupyter Notebook/Lab. * Review Python <a id='so'></a> ## Operating System * I personally recommend **Linux**, in particular distributions such as Ubuntu, Mint or Fedora, which are easy to install. * On **Windows**, implementations are sometimes not fully integrated, and on occasion are not available at all. - One alternative is [**Windows Subsystem for Linux**](https://docs.microsoft.com/en-us/windows/wsl/about), but unfortunately 100% compatibility is not guaranteed. * If you have a **macOS** machine, there should be no problem. <a id='cli'></a> ## Command Line Interface (CLI) * A method that lets users interact with a computer program through lines of text. * Typically done through a terminal/*shell* (see image). * In everyday office work it streamlines your workflow. * It lets you navigate and manipulate directories and files, install/update tools, applications, software, etc.
<img src="https://upload.wikimedia.org/wikipedia/commons/2/29/Linux_command-line._Bash._GNOME_Terminal._screenshot.png" alt="" align="center"/> *Screenshot of a sample bash session in GNOME Terminal 3, Fedora 15. [Wikipedia](https://en.wikipedia.org/wiki/Command-line_interface)* <a id='venv'></a> ## Virtual Environment __Recurring problems:__ - Incompatible library (*package*) dependencies. - Difficulty sharing and reproducing results, e.g. not knowing the versions of the installed libraries. - Keeping a virtual machine for each development effort is tedious and costly. - Constant fear that installing something new will stop your script from working. #### Solution Isolate the development environment in order to improve compatibility and reproducibility of results. #### How? Using virtual environments. #### For this course (recommended) ![Conda](https://conda.io/docs/_images/conda_logo.svg) *Package, dependency and environment management for any language—Python, R, Ruby, Lua, Scala, Java, JavaScript, C/ C++, FORTRAN.* [(Link)](https://conda.io/docs/) #### Why Conda? * Open source * Manages both libraries __and__ virtual environments. * Compatible with Linux, Windows and macOS. * Language-agnostic (it was initially developed for Python). * Easy to install and use. ### Installation All the documentation can be found at this [link](https://conda.io/docs/user-guide/install/index.html); in short: * There are Anaconda and Miniconda, and both include conda. - Anaconda is a distribution that includes an arsenal of scientific libraries. - Miniconda performs a minimal installation of conda with no extras (recommended). * Download and install **Miniconda3** (Python 3) for your operating system. * __Important__: In the final phase of the installation you will be asked whether you want to add conda to your PATH; depending on your operating system: - On Windows, select "NO" and launch from the *Anaconda Prompt*.
- On Linux and macOS, accept and launch from your terminal. * Test: ```conda --version``` ``` !conda --version ``` #### Other alternatives: * ```pip``` + ```virtualenv```: the former is Python's favorite package manager and the latter a virtual environment manager; the downside is that they are Python-only. - Note! Within conda you can also install via ```pip```. * __Docker__ is a tool very much in fashion for large projects because it is, in simple terms, halfway between virtual environments and virtual machines. <a id='python'></a> ## Python The main scientific libraries to install, which we will use during the course, are: * [Numpy](http://www.numpy.org/): Scientific computing. * [Pandas](https://pandas.pydata.org/): Data analysis. * [Matplotlib](https://matplotlib.org/): Visualization. * [Altair](https://altair-viz.github.io/): Declarative visualization. * [Scikit-Learn](http://scikit-learn.org/stable/): Machine learning <a id='jupyter'></a> ## Project Jupyter *[Project Jupyter](https://jupyter.org/index.html) exists to develop open-source software, open-standards, and services for interactive computing across dozens of programming languages.* <img src="https://2.bp.blogspot.com/-Q23VBETHLS0/WN_lgpxinkI/AAAAAAAAA-k/f3DJQfBre0QD5rwMWmGIGhBGjU40MTAxQCLcB/s1600/jupyter.png" alt="" align="center"/> ### Jupyter Notebook A web application that lets you create and share documents containing code, equations, visualizations and text. Its uses include: * Data cleaning * Data transformation * Numerical simulations * Statistical modeling * Data visualization * Machine learning * Much more. ![Jupyter Notebook Example](https://jupyter.org/assets/jupyterpreview.png) #### Fun Fact This presentation was made with Jupyter Notebook + [RISE](https://github.com/damianavila/RISE). ### Jupyter Lab * It is the next generation of the *Project Jupyter* user interface.
* Like Jupyter Notebook, it can edit .ipynb files (notebooks), and it includes tools such as a terminal, a text editor, a file browser, etc. * Jupyter Lab will eventually replace Jupyter Notebook (although the stable version was released only a few months ago). * It has a series of extensions that you can install (and even develop). <a id='git'></a> ## Git *[__Git__](https://git-scm.com/) is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.* ### Installing Git Nothing out of the ordinary; just follow the instructions on the official page at this [link](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). ### Configuring Git Not a crucial step, but if in the future you want your own repository or to contribute to other projects, you must have an identity. The first thing is to configure git; the [official documentation](https://git-scm.com/book/es/v1/Empezando-Configurando-Git-por-primera-vez) is even available in Spanish. Configuring your ```email``` and ```username``` should be enough to start. The next thing is to have an account on the platform you will use. Some of them are: * GitHub * GitLab * Bitbucket ### GitHub *[GitHub](https://github.com/) is a development platform inspired by the way you work. From open source to business, you can host and review code, manage projects, and build software alongside 30 million developers.* That is, it is a platform for hosting projects that use Git as their version control system. ### Course material The course material is hosted in the following repository: https://github.com/sebastiandres/mat281_2018S2 ## Hands-on 1. Install [__git__](https://git-scm.com/). 2. In the terminal (the git terminal, if applicable), navigate to wherever you want to store the course material. I personally like to create a folder called __git__ inside __Documents__.
The steps would be:
* Open a terminal
* ```cd Documents```
* ```mkdir git```
3. Clone the course repository by running the following command in the desired location (in my case *~/Documents/git*): ```git clone git@github.com:sebastiandres/mat281_m01_introduccion.git```
4. ```cd mat281_m01_introduccion``` (__tip__: you can use ```TAB``` to autocomplete).
5. In the terminal (Anaconda Prompt if you are on Windows), create a conda virtual environment (named *mat281_modulo1*) from a requirements file by running: ```conda env create -f environment.yml```
6. Activate the environment:
    - Linux/macOS: ```source activate mat281_modulo1```
    - Windows: ```activate mat281_modulo1```
7. Run:
    - ```jupyter lab```, or
    - ```jupyter notebook```
8. If a new tab does not open automatically in your browser, copy the token link shown in the terminal into your browser.
9. Open the notebook ```02_data_science_toolkit.ipynb``` located in the 02_data_science_toolkit directory.

![Jupyter lab terminal example](http://i.imgur.com/w3tb35S.png)

## A Python refresher

* Numbers and operations
* Lists, tuples, sets, and dictionaries
* Control flow
* Importing libraries

### Hello World!

```
print('Hello World!')

name = 'John Titor'
print("Hola {}!".format(name))
```

## Numbers and operations

```
1 + 2

100 - 99

3 * 4

42 / 4  # In Python 2, division is integer by default

43 // 4

14 % 4
```

## Lists, tuples, sets, and dictionaries

```
# Lists
my_list = [1, 2, 3, 4, 5]
print(my_list[0])

# Tuples
my_tuple = (1, 2, 3, 4, 5)
print(my_tuple[4])

# Lists can be modified
my_list.append(100)
print(my_list)

# Tuples are immutable objects!
my_tuple.append(50)

# Accessing values
print(my_list[0])
print(my_list[-1])
print(my_list[0:2])
print(my_list[3:5])
print(my_list[::2])
print(my_list[:-2])

# Sets
my_set = {1, 1, 1, 2, 2, 3}
print(my_set)

# Dictionaries
my_dict = {
    'llave': 'corazón',
    'sonrisa': 'corazones'
}
print(my_dict['sonrisa'])
```

## Control Flow

```
# Conditionals: if, elif, else
x = 7
cota_inferior = 5
cota_superior = 10

if x < cota_inferior:
    print('{} is less than {}'.format(x, cota_inferior))
elif x < cota_superior:
    print('{} is greater than or equal to {} and less than {}'.format(x, cota_inferior, cota_superior))
else:
    print('{} is greater than or equal to {}'.format(x, cota_superior))

# While loop
i = 0
while i < 10:
    print(i)
    i += 1

# For loop
for i in range(1, 10, 2):
    if i < 5:
        print('{} is less than 5'.format(i))
    else:
        print('{} is greater than or equal to 5'.format(i))
```

### Range is not a list!

```
my_range = range(1, 10, 2)
type(my_range)

range?

# Press TAB after the dot on this line
my_range.

my_range.start
my_range.stop
my_range.step
```

## Libraries

```
import testFunction as tf

# In Jupyter we can even view the source code
tf.testFunction??

tf.testFunction(10, 2.5)

# The math library is not loaded!
math.floor(2.5)
```

### Pro-Tip

Jupyter is based on ```IPython```, so you can make use of the magic commands.

```
# A bit of documentation
%magic
%lsmagic
%pwd
%%lsmagic
```

There are line magics (```%```) and whole-cell magics (```%%```)

```
%timeit comprehension_list = [i*2 for i in range(1000)]

%%timeit
no_comprehension_list = []
for i in range(1000):
    no_comprehension_list.append(i * 2)
```

## Lab evaluation

#### Preparation

* Create a file named ```labFunctions.py``` in the same folder as this notebook (tip: you can do this from Jupyter Notebook/Lab!).
* Three exercises are presented below; for each one you must define a function with the requested name and arguments.
* The evaluation consists of loading these functions into this notebook as a library and testing them against the provided data.

#### What will be evaluated:
1. The code.
2. That the solution passes the ```assert```.
3. That the jupyter notebook runs.

### Ejercicio #1

* Define a function called ```tribonacci``` with one argument ```n``` such that:
    - For n = 1, 2, or 3 it returns the value 1.
    - For n >= 4 it returns the sum of the last three values of the sequence.

Remember, to define a function:

```python
def tribonacci(n):
    # Your code goes here
    value = ...
    return value
```

```
from labFunctions import tribonacci
n = 20
assert tribonacci(n) == 46499, 'Nope! Check your function carefully'
print('Correct! The {0}-th term of the series is {1}.'.format(n, tribonacci(n)))
```

__WATCH OUT__: Fooling the ```assert``` is easy. You could write the following function:

```
def tribonacci(n):
    if n == 20:
        return 46499
    else:
        return None

n = 20
assert tribonacci(n) == 46499, 'Nope! Check your function carefully'
print('Correct! The {0}-th term of the series is {1}.'.format(n, tribonacci(n)))
```

__Advice:__ Don't do it...

### Ejercicio #2

First, run the following cell to create the dictionary ```nba_players``` containing information about NBA players over recent years, structured as follows:

- The *keys* are the player names.
- The *values* are lists in the following format:
    1. Start year
    2. End year
    3. Position
    4. Height (feet - inches)
    5. Weight (pounds)
    6. Birth date
    7. College

```
import os
import pickle

with open(os.path.join('data', 'nba_player_data.pkl'), 'rb') as input_file:
    nba_player_data = pickle.load(input_file)

# Reminder! Dictionaries are iterable
for key, value in nba_player_data.items():
    print('Player {} has the following statistics'.format(key))
    print('\t {}'.format(value))
    print('\n')
```

### Ejercicio #2

Define the function ```tallest_player(nba_player_data)``` whose argument is the dictionary defined above and which returns a ```list``` object with the tallest player(s) in the NBA.

#### Tip 1

Lists have the ```append``` method.

```
list.append?

def onlyEvenNumbers(x):
    """Return the even numbers of the list x"""
    aux_list = []
    if isinstance(x, range) or isinstance(x, list):
        for i in x:
            if i % 2 == 0:
                aux_list.append(i)
        return aux_list
    else:
        print('The argument must be a list or range object')
        return

print(onlyEvenNumbers(range(10)))
print('-------------------------')
print(onlyEvenNumbers({'key1': 1}))
```

#### Tip 2

```str``` (_string_) objects can be split with ```split```.

```
str.split?

pies, pulgadas = '10-1'.split('-')
print(pies)
print(type(pies))
print(int(pies))
print(type(int(pies)))

from labFunctions import tallest_player
assert tallest_player(nba_player_data) == ['Manute Bol', 'Gheorghe Muresan'], 'Nope! Check your function carefully'
print('Correct! The tallest NBA players are:\n')
print(*tallest_player(nba_player_data), sep='\n')
```

### Ejercicio #3

Define the function ```more_time_position_player(nba_player_data, position)``` whose first argument is the dictionary with the loaded data and whose second argument is a position, such that it returns a list of the player(s) who spent the most years in the NBA at the selected position.

The possible positions are: ['F-C', 'C-F', 'C', 'G', 'F', 'G-F', 'F-G']

```
from labFunctions import more_time_position_player
position = 'F-G'
assert more_time_position_player(nba_player_data, position) == ['Grant Hill', 'Paul Pierce'], 'Nope! Check your function carefully'
print('Correct! In position {}, the player(s) with the most years are:\n'.format(position))
print(*more_time_position_player(nba_player_data, position), sep='\n')
```

### Name: Juanito Perez
#### Rol: 201110002-6
# The Hill-Tononi Neuron and Synapse Models

## Hans Ekkehard Plesser, NMBU/FZ Jülich/U Oslo, 2016-12-01

## Background

This notebook describes the neuron and synapse model proposed by Hill and Tononi in *J Neurophysiol* 93:1671-1698, 2005 ([doi:10.1152/jn.00915.2004](http://dx.doi.org/doi:10.1152/jn.00915.2004)) and their implementation in NEST. The notebook also contains some tests.

This description is based on the original publication and publications cited therein, an analysis of the source code of the original Synthesis implementation kindly provided by Sean Hill, and plausibility arguments. In what follows, I will refer to the original paper as [HT05].

This notebook was run successfully with NEST Branch HT_NMDA at Commit bec1c52 (15 Dec 2016).

## The Neuron Model

### Integration

The original Synthesis implementation of the model uses Runge-Kutta integration with a fixed 0.25 ms step size, and integrates channel dynamics first, followed by integration of membrane potential and threshold. NEST, in contrast, integrates the complete 16-dimensional state using a single adaptive-stepsize Runge-Kutta-Fehlberg-4(5) solver from the GNU Scientific Library (`gsl_odeiv_step_rkf45`).

### Membrane potential

Membrane potential evolution is governed by [HT05, p 1677]

\begin{equation}
\frac{\text{d}V}{\text{d}t} = \frac{-g_{\text{NaL}}(V-E_{\text{Na}}) -g_{\text{KL}}(V-E_{\text{K}})+I_{\text{syn}}+I_{\text{int}}}{\tau_{\text{m}}} -\frac{g_{\text{spike}}(V-E_{\text{K}})}{\tau_{\text{spike}}}
\end{equation}

- The equation does not contain membrane capacitance. As a side effect, all conductances are dimensionless.
- Na and K leak conductances $g_{\text{NaL}}$ and $g_{\text{KL}}$ are constant, although $g_{\text{KL}}$ may be adjusted on slow time scales to mimic neuromodulatory effects.
- Reversal potentials $E_{\text{Na}}$ and $E_{\text{K}}$ are assumed constant.
- Synaptic currents $I_{\text{syn}}$ and intrinsic currents $I_{\text{int}}$ are discussed below.
In contrast to the paper, they are shown with positive sign here (just a change in notation).
- The last term is a re-polarizing current only active during the refractory period, see below. Note that it has a different (faster) time constant than the other currents. It might have been more natural to use the same time constant for all currents and instead adjust $g_{\text{spike}}$. We follow the original approach here.

### Threshold, Spike generation and refractory effects

The threshold evolves according to [HT05, p 1677]

\begin{equation}
\frac{\text{d}\theta}{\text{d}t} = -\frac{\theta-\theta_{\text{eq}}}{\tau_{\theta}}
\end{equation}

The neuron emits a single spike if

- it is not refractory
- membrane potential crosses the threshold, $V\geq\theta$

Upon spike emission,

- $V \leftarrow E_{\text{Na}}$
- $\theta \leftarrow E_{\text{Na}}$
- the neuron becomes refractory for time $t_{\text{spike}}$ (`t_ref` in NEST)

The repolarizing current is active during, and only during, the refractory period:

\begin{equation}
g_{\text{spike}} = \begin{cases} 1 & \text{neuron is refractory}\\ 0 & \text{else} \end{cases}
\end{equation}

During the refractory period, the neuron cannot fire new spikes, but all state variables evolve freely; nothing is clamped.

The model of spiking and refractoriness is based on the Synthesis model `PulseIntegrateAndFire`.

### Intrinsic currents

Note that not all intrinsic currents are active in all populations of the network model presented in [HT05, p 1678f].

Intrinsic currents are based on the Hodgkin-Huxley description, i.e.,

\begin{align}
I_X &= g_{\text{peak}, X} m_X(V, t)^{N_X} h_X(V, t)(V-E_X) \\
\frac{\text{d}m_X}{\text{d}t} &= \frac{m_X^{\infty}-m_X}{\tau_{m,X}(V)}\\
\frac{\text{d}h_X}{\text{d}t} &= \frac{h_X^{\infty}-h_X}{\tau_{h,X}(V)}
\end{align}

where $I_X$ is the current through channel $X$, and $m_X$ and $h_X$ are the activation and inactivation variables for channel $X$.
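The generic gating dynamics above can be sketched numerically. This is a minimal illustration, not the NEST implementation; the gate parameters below (half-activation voltage, slope, constant time constant) are illustrative placeholders:

```python
import numpy as np

def integrate_gate(x_inf, tau_x, V, x0, dt=0.01, t_max=50.0):
    """Forward-Euler integration of dx/dt = (x_inf(V) - x)/tau_x(V) at clamped V."""
    x = x0
    for _ in range(int(t_max / dt)):
        x += dt * (x_inf(V) - x) / tau_x(V)
    return x

# Illustrative sigmoid gate: half-activation at -59 mV, slope 6.2 mV,
# constant 5 ms time constant (placeholder values, not model defaults).
x_inf = lambda V: 1.0 / (1.0 + np.exp(-(V + 59.0) / 6.2))
tau_x = lambda V: 5.0

# After clamping for many time constants, the gate reaches its steady state.
x_end = integrate_gate(x_inf, tau_x, V=-40.0, x0=0.0)
print(round(x_end, 4))
```

Under voltage clamp the gating ODE is linear, so the exact solution is an exponential relaxation toward $x^{\infty}(V)$ with time constant $\tau_x(V)$; the Euler sketch converges to the same fixed point.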
#### Pacemaker current $I_h$ Synthesis: `IhChannel` \begin{align} N_h & = 1 \\ m_h^{\infty}(V) &= \frac{1}{1+\exp\left(\frac{V+75\text{mV}}{5.5\text{mV}}\right)} \\ \tau_{m,h}(V) &= \frac{1}{\exp(-14.59-0.086V) + \exp(-1.87 + 0.0701V)} \\ h_h(V, t) &\equiv 1 \end{align} Note that subscript $h$ in some cases above marks the $I_h$ channel. #### Low-threshold calcium current $I_T$ Synthesis: `ItChannel` ##### Equations given in paper \begin{align} N_T & \quad \text{not given} \\ m_T^{\infty}(V) &= 1/\{1 + \exp[ -(V + 59.0)/6.2]\} \\ \tau_{m,T}(V) &= \{0.22/\exp[ -(V + 132.0)/ 16.7]\} + \exp[(V + 16.8)/18.2] + 0.13\\ h_T^{\infty}(V) &= 1/\{1 + \exp[(V + 83.0)/4.0]\} \\ \tau_{h,T}(V) &= \langle 8.2 + \{56.6 + 0.27 \exp[(V + 115.2)/5.0]\}\rangle / \{1.0 + \exp[(V + 86.0)/3.2]\} \end{align} Note the following: - The channel model is based on Destexhe et al, *J Neurophysiol* 76:2049 (1996). - In the equation for $\tau_{m,T}$, the second exponential term must be added to the first (in the denominator) to make dimensional sense; 0.13 and 0.22 have unit ms. - In the equation for $\tau_{h,T}$, the $\langle \rangle$ brackets should be dropped, so that $8.2$ is not divided by the $1+\exp$ term. Otherwise, it could have been combined with the $56.6$. - This analysis is confirmed by code analysis and comparison with Destexhe et al, *J Neurophysiol* 76:2049 (1996), Eq 5. - From Destexhe et al we also find $N_T=2$. ##### Corrected equations This leads to the following equations, which are implemented in Synthesis and NEST. 
\begin{align} N_T &= 2 \\ m_T^{\infty}(V) &= \frac{1}{1+\exp\left(-\frac{V+59\text{mV}}{6.2\text{mV}}\right)}\\ \tau_{m,T}(V) &= 0.13\text{ms} + \frac{0.22\text{ms}}{\exp\left(-\frac{V + 132\text{mV}}{16.7\text{mV}}\right) + \exp\left(\frac{V + 16.8\text{mV}}{18.2\text{mV}}\right)} \\ h_T^{\infty}(V) &= \frac{1}{1+\exp\left(\frac{V+83\text{mV}}{4\text{mV}}\right)}\\ \tau_{h,T}(V) &= 8.2\text{ms} + \frac{56.6\text{ms} + 0.27\text{ms} \exp\left(\frac{V + 115.2\text{mV}}{5\text{mV}}\right)}{1 + \exp\left(\frac{V + 86\text{mV}}{3.2\text{mV}}\right)} \end{align} #### Persistent Sodium Current $I_{NaP}$ Synthesis: `INaPChannel` This model has only activation ($m$) and uses the steady-state value, so the only relevant equation is that for $m$. In the paper, it is given as \begin{equation} m_{NaP}^{\infty}(V) = 1/[1+\exp(-V+55.7)/7.7] \end{equation} Dimensional analysis indicates that the division by $7.7$ should be in the argument of the exponential, and the minus sign needs to be moved so that the current activates as the neuron depolarizes leading to the corrected equation \begin{equation} m_{NaP}^{\infty}(V) = \frac{1}{1+\exp\left(-\frac{V+55.7\text{mV}}{7.7\text{mV}}\right)} \end{equation} This equation is implemented in NEST and Synthesis and is the one found in Compte et al (2003), cited by [HT05, p 1679]. ##### Corrected exponent According to Compte et al (2003), $N_{NaP}=3$, i.e., \begin{equation} I_{NaP} = g_{\text{peak,NaP}}(m_{NaP}^{\infty}(V))^3(V-E_{NaP}) \end{equation} This equation is also given in a comment in Synthesis, but is missing from the implementation. **Note: NEST implements the equation according to Compte et al (2003) with $N_{NaP}=3$, while Synthesis uses $N_{NaP}=1$.** #### Depolarization-activated Potassium Current $I_{DK}$ Synthesis: `IKNaChannel` This model also only has a single activation variable $m$, following more complicated dynamics expressed by $D$. 
##### Equations in paper

\begin{align}
dD/dt &= D_{\text{influx}} - D(1-D_{\text{eq}})/\tau_D \\
D_{\text{influx}} &= 1/\{1+ \exp[-(V-D_{\theta})/\sigma_D]\} \\
m_{DK}^{\infty} &= 1/1 + (d_{1/2}D)^{3.5}
\end{align}

There are several problems with these equations. In the steady state (with $D_{\text{influx}}=0$) the first equation becomes

\begin{equation}
0 = - D(1-D_{\text{eq}})/\tau_D
\end{equation}

with solution

\begin{equation}
D = 0
\end{equation}

This contradicts both the statement [HT05, p. 1679] that $D\to D_{\text{eq}}$ in this case, and the requirement that $D>0$ to avoid a singularity in the equation for $m_{DK}^{\infty}$. The most plausible correction is

\begin{equation}
dD/dt = D_{\text{influx}} - (D-D_{\text{eq}})/\tau_D
\end{equation}

The third equation also appears incorrect; logic as well as Wang et al, *J Neurophysiol* 89:3279–3293, 2003, Eq 9, cited in [HT05, p 1679], indicate that the correct equation is

\begin{equation}
m_{DK}^{\infty} = 1/(1 + (d_{1/2} / D)^{3.5})
\end{equation}

##### Corrected equations

The equations for this channel implemented in NEST are thus

\begin{align}
I_{DK} &= - g_{\text{peak},DK} m_{DK}^{\infty}(V,t) (V - E_{DK})\\
m_{DK}^{\infty} &= \frac{1}{1 + \left(\frac{d_{1/2}}{D}\right)^{3.5}}\\
\frac{dD}{dt} &= D_{\text{influx}}(V) - \frac{D-D_{\text{eq}}}{\tau_D} = \frac{D_{\infty}(V)-D}{\tau_D} \\
D_{\infty}(V) &= \tau_D D_{\text{influx}}(V) + {D_{\text{eq}}}\\
D_{\text{influx}} &= \frac{D_{\text{influx,peak}}}{1+ \exp\left(-\frac{V-D_{\theta}}{\sigma_D}\right)}
\end{align}

with

|$D_{\text{influx,peak}}$|$D_{\text{eq}}$|$\tau_D$|$D_{\theta}$|$\sigma_D$|$d_{1/2}$|
| --: | --: | --: | --: | --: | --: |
|$0.025\text{ms}^{-1}$ |$0.001$|$1250\text{ms}$|$-10\text{mV}$|$5\text{mV}$|$0.25$|

Note the following:

- $D_{eq}$ is the equilibrium value only for $D_{\text{influx}}(V)=0$, i.e., in the limit $V\to -\infty$ and $t\to\infty$.
- The actual steady-state value is $D_{\infty}$.
- $d_{1/2}$, $D$, $D_{\infty}$, and $D_{\text{eq}}$ have identical, but arbitrary units, so we can assume them dimensionless ($D$ is a "factor" that in an abstract way represents concentrations). - $D_{\text{influx}}$ and $D_{\text{influx,peak}}$ are rates of change of $D_{\infty}$ and thus have units of inverse time. - $m_{DK}^{\infty}$ is a steep sigmoid which is almost 0 or 1 except for a narrow window around $d_{1/2}$. - To the left of this window, $I_{DK}\approx 0$. - To the right of this window, $I_{DK}\sim -(V-E_{DK})$. **Note: The differential equation for $dD/dt$ differs from the one implemented in Synthesis.** ### Synaptic channels These are described in [HT05, p 1678]. Synaptic channels are conductance based with double-exponential time course (beta functions) and normalized for peak conductance. NMDA channels are additionally voltage gated, as described below. Let $\{t_{(j, X)}\}$ be the set of all spike arrival times, where $X$ indicates the synapse model and $j$ enumerates spikes. Then the total synaptic input is given by \begin{equation} I_{\text{syn}}(t) = - \sum_{\{t_{(j, X)}\}} \bar{g}_X(t-t_{(j, X)}) (V-E_X) \end{equation} #### Standard Channels Synthesis: `SynChannel` The conductance change due to a single input spike at time $t=0$ through a channel of type $X$ is given by (see below for exceptions) \begin{align} \bar{g}_X(t) &= g_X(t)\\ g_X(t) &= g_{\text{peak}, X}\frac{\exp(-t/\tau_1) - \exp(-t/\tau_2)}{ \exp(-t_{\text{peak}}/\tau_1) - \exp(-t_{\text{peak}}/\tau_2)} \Theta(t)\\ t_{\text{peak}} &= \frac{\tau_2 \tau_1}{\tau_2 - \tau_1} \ln\frac{ \tau_2}{\tau_1} \end{align} where $t_{\text{peak}}$ is the time of the conductance maximum and $\tau_1$ and $\tau_2$ are synaptic rise- and decay-time, respectively; $\Theta(t)$ is the Heaviside step function. The equation is integrated using exact integration in Synthesis; in NEST, it is included in the ODE-system integrated using the Runge-Kutta-Fehlberg 4(5) solver from GSL. 
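The normalized beta-function conductance can be sketched directly from the formulas above; the rise/decay times and peak conductance below are illustrative values, not the model defaults:

```python
import numpy as np

def t_peak(tau1, tau2):
    """Time at which the double exponential peaks."""
    return tau1 * tau2 / (tau2 - tau1) * np.log(tau2 / tau1)

def g_syn(t, g_peak, tau1, tau2):
    """Beta-function conductance, normalized so its maximum equals g_peak."""
    tp = t_peak(tau1, tau2)
    norm = np.exp(-tp / tau1) - np.exp(-tp / tau2)
    g = g_peak * (np.exp(-t / tau1) - np.exp(-t / tau2)) / norm
    return np.where(t >= 0, g, 0.0)  # Heaviside: no conductance before the spike

tau1, tau2, g_peak = 0.5, 2.4, 0.1   # illustrative rise/decay times [ms]
t = np.linspace(0.0, 20.0, 20001)
g = g_syn(t, g_peak, tau1, tau2)
print(round(g.max(), 4), round(t[g.argmax()], 3))
```

Note that the normalization makes the ordering of $\tau_1$ and $\tau_2$ irrelevant: numerator and denominator flip sign together, so $g$ is non-negative either way.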
The "indirection" from $g$ to $\bar{g}$ is required for consistent notation with the NMDA channels below. These channels are used for AMPA, GABA_A, and GABA_B inputs.

#### NMDA Channels

Synthesis: `SynNMDAChannel`

For the NMDA channel we have

\begin{equation}
\bar{g}_{\text{NMDA}}(t) = m(V, t) g_{\text{NMDA}}(t)
\end{equation}

with $g_{\text{NMDA}}(t)$ from above. The voltage-dependent gating $m(V, t)$ is defined as follows (based on the textual description, Vargas-Caballero and Robinson *J Neurophysiol* 89:2778–2783, 2003, [doi:10.1152/jn.01038.2002](http://dx.doi.org/10.1152/jn.01038.2002), and code inspection):

\begin{align}
m(V, t) &= a(V) m_{\text{fast}}^*(V, t) + ( 1 - a(V) ) m_{\text{slow}}^*(V, t)\\
a(V) &= 0.51 - 0.0028 V \\
m^{\infty}(V) &= \frac{1}{ 1 + \exp\left( -S_{\text{act}} ( V - V_{\text{act}} ) \right) } \\
m_X^*(V, t) &= \min(m^{\infty}(V), m_X(V, t))\\
\frac{\text{d}m_X}{\text{d}t} &= \frac{m^{\infty}(V) - m_X }{ \tau_{\text{Mg}, X}}
\end{align}

where $X$ is "slow" or "fast". $a(V)$ expresses the voltage-dependent weighting between slow and fast unblocking, $m^{\infty}(V)$ the steady-state value of the proportion of unblocked NMDA channels, the minimum condition in $m_X^*(V,t)$ the instantaneous blocking, and the differential equation for $m_X(V,t)$ the unblocking dynamics.

Synthesis uses tabulated values for $m^{\infty}$. NEST uses the best fit of $V_{\text{act}}$ and $S_{\text{act}}$ to the tabulated data for conductance table `fNMDA`.

**Note**: NEST also supports instantaneous NMDA dynamics using a boolean switch. In that case $m(V, t)=m^{\infty}(V)$.

### No synaptic "minis"

Synaptic "minis" due to spontaneous release of neurotransmitter quanta [HT05, p 1679] are not included in the NEST implementation of the Hill-Tononi model: the total mini input rate per cell was just 2 Hz, and minis cause PSP changes of only $0.5 \pm 0.25$ mV, so they should have minimal effect.
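The two-time-constant NMDA unblocking scheme described above can be sketched for a voltage step. The gating parameters below ($V_{\text{act}}$, $S_{\text{act}}$, and the fast/slow Mg time constants) are illustrative placeholders, not the values fitted in NEST:

```python
import numpy as np

V_ACT, S_ACT = -58.0, 0.25              # placeholder half-activation [mV], slope [1/mV]
TAU_MG_FAST, TAU_MG_SLOW = 0.68, 22.7   # placeholder unblock time constants [ms]

def m_inf(V):
    """Steady-state fraction of unblocked NMDA channels."""
    return 1.0 / (1.0 + np.exp(-S_ACT * (V - V_ACT)))

def nmda_gate(V_trace, dt=0.01):
    """Evolve the fast/slow unblock variables along a voltage trace; return m(V, t)."""
    m_fast = m_slow = m_inf(V_trace[0])
    m_out = []
    for V in V_trace:
        # Unblocking relaxes toward m_inf; blocking is instantaneous (the min below).
        m_fast += dt * (m_inf(V) - m_fast) / TAU_MG_FAST
        m_slow += dt * (m_inf(V) - m_slow) / TAU_MG_SLOW
        a = 0.51 - 0.0028 * V           # voltage-dependent fast/slow weighting
        m_out.append(a * min(m_inf(V), m_fast) + (1 - a) * min(m_inf(V), m_slow))
    return np.array(m_out)

# Step from rest (-70 mV) to a depolarized level (-20 mV): m relaxes toward m_inf(-20).
V_trace = np.concatenate([np.full(1000, -70.0), np.full(20000, -20.0)])
m = nmda_gate(V_trace)
print(round(m[0], 3), round(m[-1], 3))
```

On depolarization, unblocking follows the mixed fast/slow relaxation; on hyperpolarization, the `min` makes the block take effect within a single step, matching the instantaneous-block reading of the scheme.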
## The Synapse Depression Model

The synapse depression model is implemented in NEST as `ht_synapse`, and in Synthesis in `SynChannel` and `VesiclePool`. $P\in[0, 1]$ describes the state of the presynaptic vesicle pool. Spikes are transmitted with an effective weight

\begin{equation}
w_{\text{eff}} = P w
\end{equation}

where $w$ is the nominal weight of the synapse.

### Evolution of $P$ in paper and Synthesis implementation

According to [HT05, p 1678], the pool $P$ evolves according to

\begin{equation}
\frac{\text{d}P}{\text{d}t} = -\:\text{spike}\:\delta_P P+\frac{P_{\text{peak}}-P}{\tau_P}
\end{equation}

where

- $\text{spike}=1$ while the neuron is in spiking state, 0 otherwise
- $P_{\text{peak}}=1$
- $\delta_P = 0.5$ by default
- $\tau_P = 500\text{ms}$ by default

Since neurons are in spiking state for one integration time step $\Delta t$, this suggests that the effect of a spike on the vesicle pool is approximately

\begin{equation}
P \leftarrow ( 1 - \Delta t \delta_P ) P
\end{equation}

For default parameters $\Delta t=0.25\text{ms}$ and $\delta_P=0.5$, this means that a single spike reduces the pool by 1/8 of its current size.

### Evolution of $P$ in the NEST implementation

In NEST, we modify the equations above to obtain a definite jump in pool size on transmission of a spike, without any dependence on the integration time step (fixing $P_{\text{peak}}=1$ explicitly):

\begin{align}
\frac{\text{d}P}{\text{d}t} &= \frac{1-P}{\tau_P} \\
P &\leftarrow ( 1 - \delta_P^*) P
\end{align}

$P$ is only updated when a spike passes the synapse, in the following way (where $\Delta$ is the time since the last spike through the same synapse):

1. Recuperation: $P\leftarrow 1 - ( 1 - P ) \exp( -\Delta / \tau_P )$
2. Spike transmission with $w_{\text{eff}} = P w$
3. Depletion: $P \leftarrow ( 1 - \delta_P^*) P$

To achieve approximately the same depletion as in Synthesis, use $\delta_P^*=\Delta t\,\delta_P$.
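The per-spike update rule (recuperation, transmission, depletion) can be sketched as a stand-alone function; $\delta_P^* = 0.125$ below corresponds to the Synthesis defaults $\Delta t\,\delta_P = 0.25\,\text{ms} \times 0.5$:

```python
import numpy as np

def transmit(P, dt_since_last, w, tau_P=500.0, delta_P_star=0.125):
    """One spike through a depressing synapse; returns (w_eff, updated P)."""
    # 1. Recuperation toward P = 1 over the time since the last spike
    P = 1.0 - (1.0 - P) * np.exp(-dt_since_last / tau_P)
    # 2. Spike transmission with effective weight P * w
    w_eff = P * w
    # 3. Depletion: a definite jump, independent of the integration step
    P *= 1.0 - delta_P_star
    return w_eff, P

# A regular 100 Hz train drives the pool to a depressed steady state.
P, w = 1.0, 1.0
for _ in range(200):
    w_eff, P = transmit(P, dt_since_last=10.0, w=w)
print(round(w_eff, 3))
```

Because recuperation and depletion are both applied per spike, the steady-state effective weight depends only on the inter-spike interval, $\tau_P$, and $\delta_P^*$, not on the simulator's step size.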
## Tests of the Models ``` import sys import math import numpy as np import pandas as pd import scipy.optimize as so import scipy.integrate as si import matplotlib.pyplot as plt import nest %matplotlib inline plt.rcParams['figure.figsize'] = (12, 3) ``` ### Neuron Model #### Passive properties Test relaxation of neuron and threshold to equilibrium values in absence of intrinsic currents and input. We then have \begin{align} \tau_m \dot{V}&= \left[-g_{NaL}(V-E_{Na})-g_{KL}(V-E_K)\right] = -(g_{NaL}+g_{KL})V+(g_{NaL}E_{Na}+g_{KL}E_K)\\ \Leftrightarrow\quad \tau_{\text{eff}}\dot{V} &= -V+V_{\infty}\\ V_{\infty} &= \frac{g_{NaL}E_{Na}+g_{KL}E_K}{g_{NaL}+g_{KL}}\\ \tau_{\text{eff}}&=\frac{\tau_m}{g_{NaL}+g_{KL}} \end{align} with solution \begin{equation} V(t) = V_0 e^{-\frac{t}{\tau_{\text{eff}}}} + V_{\infty}\left(1-e^{-\frac{t}{\tau_{\text{eff}}}} \right) \end{equation} and for the threshold \begin{equation} \theta(t) = \theta_0 e^{-\frac{t}{\tau_{\theta}}} + \theta_{eq}\left(1-e^{-\frac{t}{\tau_{\theta}}} \right) \end{equation} ``` def Vpass(t, V0, gNaL, ENa, gKL, EK, taum, I=0): tau_eff = taum/(gNaL + gKL) Vinf = (gNaL*ENa + gKL*EK + I)/(gNaL + gKL) return V0*np.exp(-t/tau_eff) + Vinf*(1-np.exp(-t/tau_eff)) def theta(t, th0, theq, tauth): return th0*np.exp(-t/tauth) + theq*(1-np.exp(-t/tauth)) nest.ResetKernel() nest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0., 'g_peak_T': 0., 'g_peak_h': 0., 'tau_theta': 10.}) hp = nest.GetDefaults('ht_neuron') V_th_0 = [(-100., -65.), (-70., -51.), (-55., -10.)] T_sim = 20. 
nrns = nest.Create('ht_neuron', n=len(V_th_0),
                   params=[{'V_m': V, 'theta': th} for V, th in V_th_0])
nest.Simulate(T_sim)
V_th_sim = nest.GetStatus(nrns, ['V_m', 'theta'])

for (V0, th0), (Vsim, thsim) in zip(V_th_0, V_th_sim):
    Vex = Vpass(T_sim, V0, hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'], hp['tau_m'])
    thex = theta(T_sim, th0, hp['theta_eq'], hp['tau_theta'])
    print('Vex = {:.3f}, Vsim = {:.3f}, Vex-Vsim = {:.3e}'.format(Vex, Vsim, Vex-Vsim))
    print('thex = {:.3f}, thsim = {:.3f}, thex-thsim = {:.3e}'.format(thex, thsim, thex-thsim))
```

Agreement is excellent.

#### Spiking without intrinsic currents or synaptic input

The equations above hold for input current $I(t)$, but with

\begin{equation}
V_{\infty}(I) = \frac{g_{NaL}E_{Na}+g_{KL}E_K}{g_{NaL}+g_{KL}} + \frac{I}{g_{NaL}+g_{KL}}
\end{equation}

In NEST, we need to inject the input current into the `ht_neuron` with a `dc_generator`, so the current switches on only at a later time; we need to take this into account. For simplicity, we assume that $V$ is initialized to $V_{\infty}(I=0)$ and that current onset is at $t_I$.
We then have for $t\geq t_I$

\begin{equation}
V(t) = V_{\infty}(0) e^{-\frac{t-t_I}{\tau_{\text{eff}}}} + V_{\infty}(I)\left(1-e^{-\frac{t-t_I}{\tau_{\text{eff}}}} \right)
\end{equation}

If we also initialize $\theta=\theta_{\text{eq}}$, the threshold is constant and we have the first spike at

\begin{align}
V(t) &= \theta_{\text{eq}}\\
\Leftrightarrow\quad t &= t_I -\tau_{\text{eff}} \ln \frac{\theta_{\text{eq}}-V_{\infty}(I)}{V_{\infty}(0)-V_{\infty}(I)}
\end{align}

```
def t_first_spike(gNaL, ENa, gKL, EK, taum, theq, tI, I):
    tau_eff = taum/(gNaL + gKL)
    Vinf0 = (gNaL*ENa + gKL*EK)/(gNaL + gKL)
    VinfI = (gNaL*ENa + gKL*EK + I)/(gNaL + gKL)
    return tI - tau_eff * np.log((theq-VinfI) / (Vinf0-VinfI))

nest.ResetKernel()
nest.SetKernelStatus({'resolution': 0.001})
nest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,
                               'g_peak_T': 0., 'g_peak_h': 0.})
hp = nest.GetDefaults('ht_neuron')

I = [25., 50., 100.]
tI = 1.
delay = 1.
T_sim = 40.

nrns = nest.Create('ht_neuron', n=len(I))
dcgens = nest.Create('dc_generator', n=len(I),
                     params=[{'amplitude': dc, 'start': tI} for dc in I])
sdets = nest.Create('spike_detector', n=len(I))
nest.Connect(dcgens, nrns, 'one_to_one', {'delay': delay})
nest.Connect(nrns, sdets, 'one_to_one')

nest.Simulate(T_sim)

t_first_sim = [ev['events']['times'][0] for ev in nest.GetStatus(sdets)]

for dc, tf_sim in zip(I, t_first_sim):
    tf_ex = t_first_spike(hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'],
                          hp['tau_m'], hp['theta_eq'], tI+delay, dc)
    print('tex = {:.4f}, tsim = {:.4f}, tex-tsim = {:.4f}'.format(tf_ex, tf_sim, tf_ex-tf_sim))
```

Agreement is as good as possible: all spikes occur in NEST at the end of the time step containing the expected spike time.

#### Inter-spike interval

After each spike, $V_m = \theta = E_{Na}$, i.e., all memory is erased. We can thus treat ISIs independently. $\theta$ relaxes according to the equation above.
For $V_m$, we have during $t_{\text{spike}}$ after a spike

\begin{align}
\tau_m\dot{V} &= {-g_{\text{NaL}}(V-E_{\text{Na}}) -g_{\text{KL}}(V-E_{\text{K}})+I} -\frac{\tau_m}{\tau_{\text{spike}}}({V-E_{\text{K}}})\\
&= -(g_{NaL}+g_{KL}+\frac{\tau_m}{\tau_{\text{spike}}})V+(g_{NaL}E_{Na}+g_{KL}E_K+\frac{\tau_m}{\tau_{\text{spike}}}E_K)
\end{align}

thus recovering the same form for the solution, but with

\begin{align}
\tau^*_{\text{eff}} &= \frac{\tau_m}{g_{NaL}+g_{KL}+\frac{\tau_m}{\tau_{\text{spike}}}}\\
V^*_{\infty} &= \frac{g_{NaL}E_{Na}+g_{KL}E_K+I+\frac{\tau_m}{\tau_{\text{spike}}}E_K}{g_{NaL}+g_{KL}+\frac{\tau_m}{\tau_{\text{spike}}}}
\end{align}

Assuming that the ISI is longer than the refractory period $t_{\text{spike}}$, and we had a spike at time $t_s$, then we have at $t_s+t_{\text{spike}}$

\begin{align}
V^* &= V(t_s+t_{\text{spike}}) = E_{Na} e^{-\frac{t_{\text{spike}}}{\tau^*_{\text{eff}}}} + V^*_{\infty}(I)\left(1-e^{-\frac{t_{\text{spike}}}{\tau^*_{\text{eff}}}} \right)\\
\theta^* &= \theta(t_s+t_{\text{spike}}) = E_{Na} e^{-\frac{t_{\text{spike}}}{\tau_{\theta}}} + \theta_{eq}\left(1-e^{-\frac{t_{\text{spike}}}{\tau_{\theta}}} \right)\\
t^* &= t_s+t_{\text{spike}}
\end{align}

For $t>t^*$, the normal equations apply again, i.e.,

\begin{align}
V(t) &= V^* e^{-\frac{t-t^*}{\tau_{\text{eff}}}} + V_{\infty}(I)\left(1-e^{-\frac{t-t^*}{\tau_{\text{eff}}}} \right)\\
\theta(t) &= \theta^* e^{-\frac{t-t^*}{\tau_{\theta}}} + \theta_{eq}\left(1-e^{-\frac{t-t^*}{\tau_{\theta}}}\right)
\end{align}

The time of the next spike is then given by

\begin{equation}
V(\hat{t}) = \theta(\hat{t})
\end{equation}

which can only be solved numerically. The ISI is then obtained as $\hat{t}-t_s$.
``` def Vspike(tspk, gNaL, ENa, gKL, EK, taum, tauspk, I=0): tau_eff = taum/(gNaL + gKL + taum/tauspk) Vinf = (gNaL*ENa + gKL*EK + I + taum/tauspk*EK)/(gNaL + gKL + taum/tauspk) return ENa*np.exp(-tspk/tau_eff) + Vinf*(1-np.exp(-tspk/tau_eff)) def thetaspike(tspk, ENa, theq, tauth): return ENa*np.exp(-tspk/tauth) + theq*(1-np.exp(-tspk/tauth)) def Vpost(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I=0): Vsp = Vspike(tspk, gNaL, ENa, gKL, EK, taum, tauspk, I) return Vpass(t-tspk, Vsp, gNaL, ENa, gKL, EK, taum, I) def thetapost(t, tspk, ENa, theq, tauth): thsp = thetaspike(tspk, ENa, theq, tauth) return theta(t-tspk, thsp, theq, tauth) def threshold(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I, theq, tauth): return Vpost(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I) - thetapost(t, tspk, ENa, theq, tauth) nest.ResetKernel() nest.SetKernelStatus({'resolution': 0.001}) nest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0., 'g_peak_T': 0., 'g_peak_h': 0.}) hp = nest.GetDefaults('ht_neuron') I = [25., 50., 100.] tI = 1. delay = 1. T_sim = 1000. 
nrns = nest.Create('ht_neuron', n=len(I)) dcgens = nest.Create('dc_generator', n=len(I), params=[{'amplitude': dc, 'start': tI} for dc in I]) sdets = nest.Create('spike_detector', n=len(I)) nest.Connect(dcgens, nrns, 'one_to_one', {'delay': delay}) nest.Connect(nrns, sdets, 'one_to_one') nest.Simulate(T_sim) isi_sim = [] for ev in nest.GetStatus(sdets): t_spk = ev['events']['times'] isi = np.diff(t_spk) isi_sim.append((np.min(isi), np.mean(isi), np.max(isi))) for dc, (isi_min, isi_mean, isi_max) in zip(I, isi_sim): isi_ex = so.bisect(threshold, hp['t_ref'], 50, args=(hp['t_ref'], hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'], hp['tau_m'], hp['tau_spike'], dc, hp['theta_eq'], hp['tau_theta'])) print('isi_ex = {:.4f}, isi_sim (min, mean, max) = ({:.4f}, {:.4f}, {:.4f})'.format( isi_ex, isi_min, isi_mean, isi_max)) ``` - ISIs are as predicted: measured ISI is predicted rounded up to next time step - ISIs are perfectly regular as expected #### Intrinsic Currents ##### Preparations ``` nest.ResetKernel() class Channel: """ Base class for channel models in Python. """ def tau_m(self, V): raise NotImplementedError() def tau_h(self, V): raise NotImplementedError() def m_inf(self, V): raise NotImplementedError() def h_inf(self, V): raise NotImplementedError() def D_inf(self, V): raise NotImplementedError() def dh(self, h, t, V): return (self.h_inf(V)-h)/self.tau_h(V) def dm(self, m, t, V): return (self.m_inf(V)-m)/self.tau_m(V) def voltage_clamp(channel, DT_V_seq, nest_dt=0.1): "Run voltage clamp with voltage V through intervals DT." # NEST part nest_g_0 = {'g_peak_h': 0., 'g_peak_T': 0., 'g_peak_NaP': 0., 'g_peak_KNa': 0.} nest_g_0[channel.nest_g] = 1. 
nest.ResetKernel() nest.SetKernelStatus({'resolution': nest_dt}) nrn = nest.Create('ht_neuron', params=nest_g_0) mm = nest.Create('multimeter', params={'record_from': ['V_m', 'theta', channel.nest_I], 'interval': nest_dt}) nest.Connect(mm, nrn) # ensure we start from equilibrated state nest.SetStatus(nrn, {'V_m': DT_V_seq[0][1], 'equilibrate': True, 'voltage_clamp': True}) for DT, V in DT_V_seq: nest.SetStatus(nrn, {'V_m': V, 'voltage_clamp': True}) nest.Simulate(DT) t_end = nest.GetKernelStatus()['time'] # simulate a little more so we get all data up to t_end to multimeter nest.Simulate(2 * nest.GetKernelStatus()['min_delay']) tmp = pd.DataFrame(nest.GetStatus(mm)[0]['events']) nest_res = tmp[tmp.times <= t_end] # Control part t_old = 0. try: m_old = channel.m_inf(DT_V_seq[0][1]) except NotImplementedError: m_old = None try: h_old = channel.h_inf(DT_V_seq[0][1]) except NotImplementedError: h_old = None try: D_old = channel.D_inf(DT_V_seq[0][1]) except NotImplementedError: D_old = None t_all, I_all = [], [] if D_old is not None: D_all = [] for DT, V in DT_V_seq: t_loc = np.arange(0., DT+0.1*nest_dt, nest_dt) I_loc = channel.compute_I(t_loc, V, m_old, h_old, D_old) t_all.extend(t_old + t_loc[1:]) I_all.extend(I_loc[1:]) if D_old is not None: D_all.extend(channel.D[1:]) m_old = channel.m[-1] if m_old is not None else None h_old = channel.h[-1] if h_old is not None else None D_old = channel.D[-1] if D_old is not None else None t_old = t_all[-1] if D_old is None: ctrl_res = pd.DataFrame({'times': t_all, channel.nest_I: I_all}) else: ctrl_res = pd.DataFrame({'times': t_all, channel.nest_I: I_all, 'D': D_all}) return nest_res, ctrl_res ``` ##### I_h channel The $I_h$ current is governed by \begin{align} I_h &= g_{\text{peak}, h} m_h(V, t) (V-E_h) \\ \frac{\text{d}m_h}{\text{d}t} &= \frac{m_h^{\infty}-m_h}{\tau_{m,h}(V)}\\ m_h^{\infty}(V) &= \frac{1}{1+\exp\left(\frac{V+75\text{mV}}{5.5\text{mV}}\right)} \\ \tau_{m,h}(V) &= \frac{1}{\exp(-14.59-0.086V) + \exp(-1.87 + 
0.0701V)} \end{align} We first inspect $m_h^{\infty}(V)$ and $\tau_{m,h}(V)$ to prepare for testing ``` nest.ResetKernel() class Ih(Channel): nest_g = 'g_peak_h' nest_I = 'I_h' def __init__(self, ht_params): self.hp = ht_params def tau_m(self, V): return 1/(np.exp(-14.59-0.086*V) + np.exp(-1.87 + 0.0701*V)) def m_inf(self, V): return 1/(1+np.exp((V+75)/5.5)) def compute_I(self, t, V, m0, h0, D0): self.m = si.odeint(self.dm, m0, t, args=(V,)) return - self.hp['g_peak_h'] * self.m * (V - self.hp['E_rev_h']) ih = Ih(nest.GetDefaults('ht_neuron')) V = np.linspace(-110, 30, 100) plt.plot(V, ih.tau_m(V)); ax = plt.gca(); ax.set_xlabel('Voltage V [mV]'); ax.set_ylabel('Time constant tau_m [ms]', color='b'); ax2 = ax.twinx() ax2.plot(V, ih.m_inf(V), 'g'); ax2.set_ylabel('Steady-state m_h^inf', color='g'); ``` - The time constant is extremely long, up to 1s, for relevant voltages where $I_h$ is perceptible. We thus need long test runs. - Curves are in good agreement with Fig 5 of Huguenard and McCormick, *J Neurophysiol* 68:1373, 1992, cited in [HT05]. I_h data there was from guinea pig slices at 35.5 C and needed no temperature adjustment. We now run a voltage clamp experiment starting from the equilibrium value. ``` ih = Ih(nest.GetDefaults('ht_neuron')) nr, cr = voltage_clamp(ih, [(500, -65.), (500, -80.), (500, -100.), (500, -90.), (500, -55.)]) plt.subplot(1, 2, 1) plt.plot(nr.times, nr.I_h, label='NEST'); plt.plot(cr.times, cr.I_h, label='Control'); plt.legend(loc='upper left'); plt.xlabel('Time [ms]'); plt.ylabel('I_h [mV]'); plt.title('I_h current') plt.subplot(1, 2, 2) plt.plot(nr.times, (nr.I_h-cr.I_h)/np.abs(cr.I_h)); plt.title('Relative I_h error') plt.xlabel('Time [ms]'); plt.ylabel('Rel. error (NEST-Control)/|Control|'); ``` - Agreement is very good - Note that currents have units of $mV$ due to choice of dimensionless conductances. 
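As a sanity check on the odeint-based control used throughout these tests: for a constant clamp voltage the gating equation is linear, so it has a closed-form solution that the numerical integration can be compared against directly (a standalone sketch using the $m_h^{\infty}$ and $\tau_{m,h}$ formulas quoted above; the clamp voltages are arbitrary choices):

```python
import numpy as np
from scipy.integrate import odeint

def m_inf(V):
    # steady-state activation of I_h
    return 1.0 / (1.0 + np.exp((V + 75.0) / 5.5))

def tau_m(V):
    # activation time constant of I_h [ms]
    return 1.0 / (np.exp(-14.59 - 0.086 * V) + np.exp(-1.87 + 0.0701 * V))

V = -80.0          # clamp voltage [mV]
m0 = m_inf(-65.0)  # start equilibrated at -65 mV
t = np.linspace(0.0, 500.0, 501)

# numerical solution of dm/dt = (m_inf(V) - m) / tau_m(V)
m_num = odeint(lambda m, _t: (m_inf(V) - m) / tau_m(V), m0, t).ravel()

# closed-form solution for constant V: exponential relaxation towards m_inf
m_exact = m_inf(V) + (m0 - m_inf(V)) * np.exp(-t / tau_m(V))

print(np.max(np.abs(m_num - m_exact)))
```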
##### I_T Channel The corrected equations used for the $I_T$ channel in NEST are \begin{align} I_T &= g_{\text{peak}, T} m_T^2(V, t) h_T(V,t) (V-E_T) \\ m_T^{\infty}(V) &= \frac{1}{1+\exp\left(-\frac{V+59\text{mV}}{6.2\text{mV}}\right)}\\ \tau_{m,T}(V) &= 0.13\text{ms} + \frac{0.22\text{ms}}{\exp\left(-\frac{V + 132\text{mV}}{16.7\text{mV}}\right) + \exp\left(\frac{V + 16.8\text{mV}}{18.2\text{mV}}\right)} \\ h_T^{\infty}(V) &= \frac{1}{1+\exp\left(\frac{V+83\text{mV}}{4\text{mV}}\right)}\\ \tau_{h,T}(V) &= 8.2\text{ms} + \frac{56.6\text{ms} + 0.27\text{ms} \exp\left(\frac{V + 115.2\text{mV}}{5\text{mV}}\right)}{1 + \exp\left(\frac{V + 86\text{mV}}{3.2\text{mV}}\right)} \end{align} ``` nest.ResetKernel() class IT(Channel): nest_g = 'g_peak_T' nest_I = 'I_T' def __init__(self, ht_params): self.hp = ht_params def tau_m(self, V): return 0.13 + 0.22/(np.exp(-(V+132)/16.7) + np.exp((V+16.8)/18.2)) def tau_h(self, V): return 8.2 + (56.6 + 0.27 * np.exp((V+115.2)/5.0)) /(1 + np.exp((V+86.0)/3.2)) def m_inf(self, V): return 1/(1+np.exp(-(V+59.0)/6.2)) def h_inf(self, V): return 1/(1+np.exp((V+83.0)/4.0)) def compute_I(self, t, V, m0, h0, D0): self.m = si.odeint(self.dm, m0, t, args=(V,)) self.h = si.odeint(self.dh, h0, t, args=(V,)) return - self.hp['g_peak_T'] * self.m**2 * self.h * (V - self.hp['E_rev_T']) iT = IT(nest.GetDefaults('ht_neuron')) V = np.linspace(-110, 30, 100) plt.plot(V, 10 * iT.tau_m(V), 'b-', label='10 * tau_m'); plt.plot(V, iT.tau_h(V), 'b--', label='tau_h'); ax1 = plt.gca(); ax1.set_xlabel('Voltage V [mV]'); ax1.set_ylabel('Time constants [ms]', color='b'); ax2 = ax1.twinx() ax2.plot(V, iT.m_inf(V), 'g-', label='m_inf'); ax2.plot(V, iT.h_inf(V), 'g--', label='h_inf'); ax2.set_ylabel('Steady-state', color='g'); ln1, lb1 = ax1.get_legend_handles_labels() ln2, lb2 = ax2.get_legend_handles_labels() plt.legend(ln1+ln2, lb1+lb2, loc='upper right'); ``` - Time constants here are much shorter than for I_h - Time constants are about five times shorter than in 
Fig 1 of Huguenard and McCormick, *J Neurophysiol* 68:1373, 1992, cited in [HT05], but that may be due to the fact that the original data was collected at 23-25C and parameters have been adjusted to 36C. - Steady-state activation and inactivation look much like in Huguenard and McCormick. - Note: Most detailed paper on data is Huguenard and Prince, *J Neurosci* 12:3804-3817, 1992. The parameters given for h_inf here are for VB cells, not nRT cells in that paper (Fig 5B), parameters for m_inf are similar to but not exactly those of Fig 4B for either VB or nRT. ``` iT = IT(nest.GetDefaults('ht_neuron')) nr, cr = voltage_clamp(iT, [(200, -65.), (200, -80.), (200, -100.), (200, -90.), (200, -70.), (200, -55.)], nest_dt=0.1) plt.subplot(1, 2, 1) plt.plot(nr.times, nr.I_T, label='NEST'); plt.plot(cr.times, cr.I_T, label='Control'); plt.legend(loc='upper left'); plt.xlabel('Time [ms]'); plt.ylabel('I_T [mV]'); plt.title('I_T current') plt.subplot(1, 2, 2) plt.plot(nr.times, (nr.I_T-cr.I_T)/np.abs(cr.I_T)); plt.title('Relative I_T error') plt.xlabel('Time [ms]'); plt.ylabel('Rel. error (NEST-Control)/|Control|'); ``` - Here, too, the results are in good agreement and the error appears acceptable. ##### I_NaP channel This channel adapts instantaneously to changes in membrane potential: \begin{align} I_{NaP} &= - g_{\text{peak}, NaP} (m_{NaP}^{\infty}(V, t))^3 (V-E_{NaP}) \\ m_{NaP}^{\infty}(V) &= \frac{1}{1+\exp\left(-\frac{V+55.7\text{mV}}{7.7\text{mV}}\right)} \end{align} ``` nest.ResetKernel() class INaP(Channel): nest_g = 'g_peak_NaP' nest_I = 'I_NaP' def __init__(self, ht_params): self.hp = ht_params def m_inf(self, V): return 1/(1+np.exp(-(V+55.7)/7.7)) def compute_I(self, t, V, m0, h0, D0): return self.I_V_curve(V * np.ones_like(t)) def I_V_curve(self, V): self.m = self.m_inf(V) return - self.hp['g_peak_NaP'] * self.m**3 * (V - self.hp['E_rev_NaP']) iNaP = INaP(nest.GetDefaults('ht_neuron')) V = np.arange(-110., 30., 1.)
nr, cr = voltage_clamp(iNaP, [(1, v) for v in V], nest_dt=0.1) plt.subplot(1, 2, 1) plt.plot(nr.times, nr.I_NaP, label='NEST'); plt.plot(cr.times, cr.I_NaP, label='Control'); plt.legend(loc='upper left'); plt.xlabel('Time [ms]'); plt.ylabel('I_NaP [mV]'); plt.title('I_NaP current') plt.subplot(1, 2, 2) plt.plot(nr.times, (nr.I_NaP-cr.I_NaP)); plt.title('I_NaP error') plt.xlabel('Time [ms]'); plt.ylabel('Error (NEST-Control)'); ``` - Perfect agreement - The step structure arises because $V$ changes only once per millisecond, i.e., every tenth integration step. ##### I_KNa channel (aka I_DK) Equations for this channel are \begin{align} I_{DK} &= - g_{\text{peak},DK} m_{DK}^{\infty}(V,t) (V - E_{DK})\\ m_{DK}^{\infty} &= \frac{1}{1 + \left(\frac{d_{1/2}}{D}\right)^{3.5}}\\ \frac{dD}{dt} &= D_{\text{influx}}(V) - \frac{D-D_{\text{eq}}}{\tau_D} = \frac{D_{\infty}(V)-D}{\tau_D} \\ D_{\infty}(V) &= \tau_D D_{\text{influx}}(V) + {D_{\text{eq}}}\\ D_{\text{influx}} &= \frac{D_{\text{influx,peak}}}{1+ \exp\left(-\frac{V-D_{\theta}}{\sigma_D}\right)} \end{align} with |$D_{\text{influx,peak}}$|$D_{\text{eq}}$|$\tau_D$|$D_{\theta}$|$\sigma_D$|$d_{1/2}$| | --: | --: | --: | --: | --: | --: | |$0.025\text{ms}^{-1}$ |$0.001$|$1250\text{ms}$|$-10\text{mV}$|$5\text{mV}$|$0.25$| Note the following: - $D_{eq}$ is the equilibrium value only for $D_{\text{influx}}(V)=0$, i.e., in the limit $V\to -\infty$ and $t\to\infty$. - The actual steady-state value is $D_{\infty}$. - $m_{DK}^{\infty}$ is a steep sigmoid which is almost 0 or 1 except for a narrow window around $d_{1/2}$. - To the left of this window, $I_{DK}\approx 0$. - To the right of this window, $I_{DK}\sim -(V-E_{DK})$. ``` nest.ResetKernel() class IDK(Channel): nest_g = 'g_peak_KNa' nest_I = 'I_KNa' def __init__(self, ht_params): self.hp = ht_params def m_inf(self, D): return 1/(1+(0.25/D)**3.5) def D_inf(self, V): return 1250. * self.D_influx(V) + 0.001 def D_influx(self, V): return 0.025 / ( 1 + np.exp(-(V+10)/5.) ) def dD(self, D, t, V): return (self.D_inf(V) - D)/1250.
def compute_I(self, t, V, m0, h0, D0): self.D = si.odeint(self.dD, D0, t, args=(V,)) self.m = self.m_inf(self.D) return - self.hp['g_peak_KNa'] * self.m * (V - self.hp['E_rev_KNa']) ``` ###### Properties of I_DK ``` iDK = IDK(nest.GetDefaults('ht_neuron')) D=np.linspace(0.01, 1.5,num=200); V=np.linspace(-110, 30, num=200); ax1 = plt.subplot2grid((1, 9), (0, 0), colspan=4); ax2 = ax1.twinx() ax3 = plt.subplot2grid((1, 9), (0, 6), colspan=3); ax1.plot(V, -iDK.m_inf(iDK.D_inf(V))*(V - iDK.hp['E_rev_KNa']), 'g'); ax1.set_ylabel('Current I_inf(V)', color='g'); ax2.plot(V, iDK.m_inf(iDK.D_inf(V)), 'b'); ax2.set_ylabel('Activation m_inf(D_inf(V))', color='b'); ax1.set_xlabel('Membrane potential V [mV]'); ax2.set_title('Steady-state activation and current'); ax3.plot(D, iDK.m_inf(D), 'b'); ax3.set_xlabel('D'); ax3.set_ylabel('Activation m_inf(D)', color='b'); ax3.set_title('Activation as function of D'); ``` - Note that current in steady state is - $\approx 0$ for $V < -40$mV - $\sim -(V-E_{DK})$ for $V> -30$mV ###### Voltage clamp ``` nr, cr = voltage_clamp(iDK, [(500, -65.), (500, -35.), (500, -25.), (500, 0.), (5000, -70.)], nest_dt=1.) ax1 = plt.subplot2grid((1, 9), (0, 0), colspan=4); ax2 = plt.subplot2grid((1, 9), (0, 6), colspan=3); ax1.plot(nr.times, nr.I_KNa, label='NEST'); ax1.plot(cr.times, cr.I_KNa, label='Control'); ax1.legend(loc='lower right'); ax1.set_xlabel('Time [ms]'); ax1.set_ylabel('I_DK [mV]'); ax1.set_title('I_DK current'); ax2.plot(nr.times, (nr.I_KNa-cr.I_KNa)/np.abs(cr.I_KNa)); ax2.set_title('Relative I_DK error') ax2.set_xlabel('Time [ms]'); ax2.set_ylabel('Rel. error (NEST-Control)/|Control|'); ``` - Looks very good. - Note that the current becomes appreciable only when $V>-35$ mV - Once that threshold is crossed, the current adjusts instantaneously to changes in $V$, since it is in the linear regime.
- When returning from $V=0$ to $V=-70$ mV, the current remains large for a long time since $D$ has to drop below 1 before $m_{\infty}$ changes appreciably #### Synaptic channels For synaptic channels, NEST allows recording of conductances, so we test conductances directly. Due to the voltage-dependence of the NMDA channels, we still do this in voltage clamp. ``` nest.ResetKernel() class SynChannel: """ Base class for synapse channel models in Python. """ def t_peak(self): return self.tau_1 * self.tau_2 / (self.tau_2 - self.tau_1) * np.log(self.tau_2/self.tau_1) def beta(self, t): val = ( ( np.exp(-t/self.tau_1) - np.exp(-t/self.tau_2) ) / ( np.exp(-self.t_peak()/self.tau_1) - np.exp(-self.t_peak()/self.tau_2) ) ) val[t < 0] = 0 return val def syn_voltage_clamp(channel, DT_V_seq, nest_dt=0.1): "Run voltage clamp with voltage V through intervals DT with single spike at time 1" spike_time = 1.0 delay = 1.0 nest.ResetKernel() nest.SetKernelStatus({'resolution': nest_dt}) try: nrn = nest.Create('ht_neuron', params={'theta': 1e6, 'theta_eq': 1e6, 'instant_unblock_NMDA': channel.instantaneous}) except: nrn = nest.Create('ht_neuron', params={'theta': 1e6, 'theta_eq': 1e6}) mm = nest.Create('multimeter', params={'record_from': ['g_'+channel.receptor], 'interval': nest_dt}) sg = nest.Create('spike_generator', params={'spike_times': [spike_time]}) nest.Connect(mm, nrn) nest.Connect(sg, nrn, syn_spec={'weight': 1.0, 'delay': delay, 'receptor_type': channel.rec_code}) # ensure we start from equilibrated state nest.SetStatus(nrn, {'V_m': DT_V_seq[0][1], 'equilibrate': True, 'voltage_clamp': True}) for DT, V in DT_V_seq: nest.SetStatus(nrn, {'V_m': V, 'voltage_clamp': True}) nest.Simulate(DT) t_end = nest.GetKernelStatus()['time'] # simulate a little more so we get all data up to t_end to multimeter nest.Simulate(2 * nest.GetKernelStatus()['min_delay']) tmp = pd.DataFrame(nest.GetStatus(mm)[0]['events']) nest_res = tmp[tmp.times <= t_end] # Control part t_old = 0. 
t_all, g_all = [], [] m_fast_old = (channel.m_inf(DT_V_seq[0][1]) if channel.receptor == 'NMDA' and not channel.instantaneous else None) m_slow_old = (channel.m_inf(DT_V_seq[0][1]) if channel.receptor == 'NMDA' and not channel.instantaneous else None) for DT, V in DT_V_seq: t_loc = np.arange(0., DT+0.1*nest_dt, nest_dt) g_loc = channel.g(t_old+t_loc-(spike_time+delay), V, m_fast_old, m_slow_old) t_all.extend(t_old + t_loc[1:]) g_all.extend(g_loc[1:]) m_fast_old = channel.m_fast[-1] if m_fast_old is not None else None m_slow_old = channel.m_slow[-1] if m_slow_old is not None else None t_old = t_all[-1] ctrl_res = pd.DataFrame({'times': t_all, 'g_'+channel.receptor: g_all}) return nest_res, ctrl_res ``` ##### AMPA, GABA_A, GABA_B channels ``` nest.ResetKernel() class PlainChannel(SynChannel): def __init__(self, hp, receptor): self.hp = hp self.receptor = receptor self.rec_code = hp['receptor_types'][receptor] self.tau_1 = hp['tau_rise_'+receptor] self.tau_2 = hp['tau_decay_'+receptor] self.g_peak = hp['g_peak_'+receptor] self.E_rev = hp['E_rev_'+receptor] def g(self, t, V, mf0, ms0): return self.g_peak * self.beta(t) def I(self, t, V): return - self.g(t, V, None, None) * (V-self.E_rev) ampa = PlainChannel(nest.GetDefaults('ht_neuron'), 'AMPA') am_n, am_c = syn_voltage_clamp(ampa, [(25, -70.)], nest_dt=0.1) plt.subplot(1, 2, 1); plt.plot(am_n.times, am_n.g_AMPA, label='NEST'); plt.plot(am_c.times, am_c.g_AMPA, label='Control'); plt.xlabel('Time [ms]'); plt.ylabel('g_AMPA'); plt.title('AMPA Channel'); plt.subplot(1, 2, 2); plt.plot(am_n.times, (am_n.g_AMPA-am_c.g_AMPA)/am_c.g_AMPA); plt.xlabel('Time [ms]'); plt.ylabel('Rel error'); plt.title('AMPA rel error'); ``` - Looks quite good, but the error is maybe a bit larger than one would hope. - But the synaptic rise time (0.5 ms) spans only a few NEST integration steps (0.1 ms), which may explain the error.
- Reducing the time step reduces the error: ``` ampa = PlainChannel(nest.GetDefaults('ht_neuron'), 'AMPA') am_n, am_c = syn_voltage_clamp(ampa, [(25, -70.)], nest_dt=0.001) plt.subplot(1, 2, 1); plt.plot(am_n.times, am_n.g_AMPA, label='NEST'); plt.plot(am_c.times, am_c.g_AMPA, label='Control'); plt.xlabel('Time [ms]'); plt.ylabel('g_AMPA'); plt.title('AMPA Channel'); plt.subplot(1, 2, 2); plt.plot(am_n.times, (am_n.g_AMPA-am_c.g_AMPA)/am_c.g_AMPA); plt.xlabel('Time [ms]'); plt.ylabel('Rel error'); plt.title('AMPA rel error'); gaba_a = PlainChannel(nest.GetDefaults('ht_neuron'), 'GABA_A') ga_n, ga_c = syn_voltage_clamp(gaba_a, [(50, -70.)]) plt.subplot(1, 2, 1); plt.plot(ga_n.times, ga_n.g_GABA_A, label='NEST'); plt.plot(ga_c.times, ga_c.g_GABA_A, label='Control'); plt.xlabel('Time [ms]'); plt.ylabel('g_GABA_A'); plt.title('GABA_A Channel'); plt.subplot(1, 2, 2); plt.plot(ga_n.times, (ga_n.g_GABA_A-ga_c.g_GABA_A)/ga_c.g_GABA_A); plt.xlabel('Time [ms]'); plt.ylabel('Rel error'); plt.title('GABA_A rel error'); gaba_b = PlainChannel(nest.GetDefaults('ht_neuron'), 'GABA_B') gb_n, gb_c = syn_voltage_clamp(gaba_b, [(750, -70.)]) plt.subplot(1, 2, 1); plt.plot(gb_n.times, gb_n.g_GABA_B, label='NEST'); plt.plot(gb_c.times, gb_c.g_GABA_B, label='Control'); plt.xlabel('Time [ms]'); plt.ylabel('g_GABA_B'); plt.title('GABA_B Channel'); plt.subplot(1, 2, 2); plt.plot(gb_n.times, (gb_n.g_GABA_B-gb_c.g_GABA_B)/gb_c.g_GABA_B); plt.xlabel('Time [ms]'); plt.ylabel('Rel error'); plt.title('GABA_B rel error'); ``` - Looks good for all - For GABA_B the error is negligible even for dt = 0.1, since the time constants are large. 
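All of these channels share the normalized difference-of-exponentials conductance implemented in `SynChannel.beta`; a standalone check that the normalization indeed puts the peak value at 1 at `t_peak` (the values tau_1 = 0.5 ms and tau_2 = 2.4 ms are assumed, AMPA-like time constants):

```python
import numpy as np

tau_1, tau_2 = 0.5, 2.4  # assumed AMPA-like rise/decay times [ms]

# peak time of the difference of exponentials (zero of the derivative)
t_peak = tau_1 * tau_2 / (tau_2 - tau_1) * np.log(tau_2 / tau_1)

def beta(t):
    # normalized so that beta(t_peak) == 1
    return ((np.exp(-t / tau_1) - np.exp(-t / tau_2))
            / (np.exp(-t_peak / tau_1) - np.exp(-t_peak / tau_2)))

t = np.linspace(0.0, 25.0, 25001)
g = beta(t)
print(t[np.argmax(g)], g.max())  # peak sits at t_peak with value ~1
```

Note that for tau_1 < tau_2 both numerator and denominator are negative, so the ratio is positive; the normalization handles the sign automatically.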
##### NMDA Channel The equations for this channel are \begin{align} \bar{g}_{\text{NMDA}}(t) &= m(V, t)\, g_{\text{NMDA}}(t)\\ m(V, t) &= a(V) m_{\text{fast}}^*(V, t) + ( 1 - a(V) ) m_{\text{slow}}^*(V, t)\\ a(V) &= 0.51 - 0.0028 V \\ m^{\infty}(V) &= \frac{1}{ 1 + \exp\left( -S_{\text{act}} ( V - V_{\text{act}} ) \right) } \\ m_X^*(V, t) &= \min(m^{\infty}(V), m_X(V, t))\\ \frac{\text{d}m_X}{\text{d}t} &= \frac{m^{\infty}(V) - m_X }{ \tau_{\text{Mg}, X}} \end{align} where $g_{\text{NMDA}}(t)$ is the beta function as for the other channels. In the case of instantaneous unblocking, $m=m^{\infty}$. ###### NMDA with instantaneous unblocking ``` class NMDAInstantChannel(SynChannel): def __init__(self, hp, receptor): self.hp = hp self.receptor = receptor self.rec_code = hp['receptor_types'][receptor] self.tau_1 = hp['tau_rise_'+receptor] self.tau_2 = hp['tau_decay_'+receptor] self.g_peak = hp['g_peak_'+receptor] self.E_rev = hp['E_rev_'+receptor] self.S_act = hp['S_act_NMDA'] self.V_act = hp['V_act_NMDA'] self.instantaneous = True def m_inf(self, V): return 1. / ( 1.
+ np.exp(-self.S_act*(V-self.V_act))) def g(self, t, V, mf0, ms0): return self.g_peak * self.m_inf(V) * self.beta(t) def I(self, t, V): return - self.g(t, V, None, None) * (V-self.E_rev) nmdai = NMDAInstantChannel(nest.GetDefaults('ht_neuron'), 'NMDA') ni_n, ni_c = syn_voltage_clamp(nmdai, [(50, -60.), (50, -50.), (50, -20.), (50, 0.), (50, -60.)]) plt.subplot(1, 2, 1); plt.plot(ni_n.times, ni_n.g_NMDA, label='NEST'); plt.plot(ni_c.times, ni_c.g_NMDA, label='Control'); plt.xlabel('Time [ms]'); plt.ylabel('g_NMDA'); plt.title('NMDA Channel (instant unblock)'); plt.subplot(1, 2, 2); plt.plot(ni_n.times, (ni_n.g_NMDA-ni_c.g_NMDA)/ni_c.g_NMDA); plt.xlabel('Time [ms]'); plt.ylabel('Rel error'); plt.title('NMDA (inst) rel error'); ``` - Looks good - Jumps are due to blocking/unblocking of Mg channels with changes in $V$ ###### NMDA with unblocking over time ``` class NMDAChannel(SynChannel): def __init__(self, hp, receptor): self.hp = hp self.receptor = receptor self.rec_code = hp['receptor_types'][receptor] self.tau_1 = hp['tau_rise_'+receptor] self.tau_2 = hp['tau_decay_'+receptor] self.g_peak = hp['g_peak_'+receptor] self.E_rev = hp['E_rev_'+receptor] self.S_act = hp['S_act_NMDA'] self.V_act = hp['V_act_NMDA'] self.tau_fast = hp['tau_Mg_fast_NMDA'] self.tau_slow = hp['tau_Mg_slow_NMDA'] self.instantaneous = False def m_inf(self, V): return 1. / ( 1.
+ np.exp(-self.S_act*(V-self.V_act)) ) def dm(self, m, t, V, tau): return ( self.m_inf(V) - m ) / tau def g(self, t, V, mf0, ms0): self.m_fast = si.odeint(self.dm, mf0, t, args=(V, self.tau_fast)) self.m_slow = si.odeint(self.dm, ms0, t, args=(V, self.tau_slow)) a = 0.51 - 0.0028 * V m_inf = self.m_inf(V) mfs = self.m_fast.copy() mfs[mfs > m_inf] = m_inf mss = self.m_slow.copy() mss[mss > m_inf] = m_inf m = np.squeeze(a * mfs + ( 1 - a ) * mss) return self.g_peak * m * self.beta(t) def I(self, t, V): raise NotImplementedError() nmda = NMDAChannel(nest.GetDefaults('ht_neuron'), 'NMDA') nm_n, nm_c = syn_voltage_clamp(nmda, [(50, -70.), (50, -50.), (50, -20.), (50, 0.), (50, -60.)]) plt.subplot(1, 2, 1); plt.plot(nm_n.times, nm_n.g_NMDA, label='NEST'); plt.plot(nm_c.times, nm_c.g_NMDA, label='Control'); plt.xlabel('Time [ms]'); plt.ylabel('g_NMDA'); plt.title('NMDA Channel'); plt.subplot(1, 2, 2); plt.plot(nm_n.times, (nm_n.g_NMDA-nm_c.g_NMDA)/nm_c.g_NMDA); plt.xlabel('Time [ms]'); plt.ylabel('Rel error'); plt.title('NMDA rel error'); ``` - Looks fine, too. ### Synapse Model We test the synapse model by placing it between two parrot neurons, sending spikes at differing intervals, and comparing the recorded weights to expected values. ``` nest.ResetKernel() sp = nest.GetDefaults('ht_synapse') P0 = sp['P'] dP = sp['delta_P'] tP = sp['tau_P'] spike_times = [10., 12., 20., 20.5, 100., 200., 1000.]
expected = [(0., P0, P0)] for idx, t in enumerate(spike_times): tlast, Psend, Ppost = expected[idx] Psend = 1 - (1-Ppost)*math.exp(-(t-tlast)/tP) expected.append((t, Psend, (1-dP)*Psend)) expected_weights = list(zip(*expected[1:]))[1] sg = nest.Create('spike_generator', params={'spike_times': spike_times}) n = nest.Create('parrot_neuron', 2) wr = nest.Create('weight_recorder') nest.SetDefaults('ht_synapse', {'weight_recorder': wr[0], 'weight': 1.0}) nest.Connect(sg, n[:1]) nest.Connect(n[:1], n[1:], syn_spec='ht_synapse') nest.Simulate(1200) rec_weights = nest.GetStatus(wr)[0]['events']['weights'] print('Recorded weights:', rec_weights) print('Expected weights:', expected_weights) print('Difference :', np.array(rec_weights) - np.array(expected_weights)) ``` Perfect agreement, synapse model looks fine. ## Integration test: Neuron driven through all synapses We drive a Hill-Tononi neuron through pulse packets arriving at 1 second intervals, impinging through all synapse types. Compare this to Fig 5 of [HT05]. 
``` nest.ResetKernel() nrn = nest.Create('ht_neuron') ppg = nest.Create('pulsepacket_generator', n=4, params={'pulse_times': [700., 1700., 2700., 3700.], 'activity': 700, 'sdev': 50.}) pr = nest.Create('parrot_neuron', n=4) mm = nest.Create('multimeter', params={'interval': 0.1, 'record_from': ['V_m', 'theta', 'g_AMPA', 'g_NMDA', 'g_GABA_A', 'g_GABA_B', 'I_NaP', 'I_KNa', 'I_T', 'I_h']}) weights = {'AMPA': 25., 'NMDA': 20., 'GABA_A': 10., 'GABA_B': 1.} receptors = nest.GetDefaults('ht_neuron')['receptor_types'] nest.Connect(ppg, pr, 'one_to_one') for p, (rec_name, rec_wgt) in zip(pr, weights.items()): nest.Connect([p], nrn, syn_spec={'model': 'ht_synapse', 'receptor_type': receptors[rec_name], 'weight': rec_wgt}) nest.Connect(mm, nrn) nest.Simulate(5000) data = nest.GetStatus(mm)[0]['events'] t = data['times'] def texify_name(name): return r'${}_{{\mathrm{{{}}}}}$'.format(*name.split('_')) fig = plt.figure(figsize=(12,10)) Vax = fig.add_subplot(311) Vax.plot(t, data['V_m'], 'k', lw=1, label=r'$V_m$') Vax.plot(t, data['theta'], 'r', alpha=0.5, lw=1, label=r'$\Theta$') Vax.set_ylabel('Potential [mV]') Vax.legend(fontsize='small') Vax.set_title('ht_neuron driven by pulse packets through all synapse types') Iax = fig.add_subplot(312) for iname, color in (('I_h', 'blue'), ('I_KNa', 'green'), ('I_NaP', 'red'), ('I_T', 'cyan')): Iax.plot(t, data[iname], color=color, lw=1, label=texify_name(iname)) #Iax.set_ylim(-60, 60) Iax.legend(fontsize='small') Iax.set_ylabel('Current [mV]') Gax = fig.add_subplot(313) for gname, sgn, color in (('g_AMPA', 1, 'green'), ('g_GABA_A', -1, 'red'), ('g_GABA_B', -1, 'cyan'), ('g_NMDA', 1, 'magenta')): Gax.plot(t, sgn*data[gname], lw=1, label=texify_name(gname), color=color) #Gax.set_ylim(-150, 150) Gax.legend(fontsize='small') Gax.set_ylabel('Conductance') Gax.set_xlabel('Time [ms]'); ```
``` import pandas as pd import matplotlib.pyplot as plt import numpy as np from rdkit import Chem from rdkit.Chem import AllChem from rdkit.Chem import Descriptors from sklearn.model_selection import train_test_split from sklearn.neural_network import MLPClassifier from sklearn.preprocessing import StandardScaler from sklearn.neural_network import MLPRegressor from sklearn.svm import SVR from sklearn.model_selection import GridSearchCV from sklearn.linear_model import LassoCV from rdkit.ML.Descriptors.MoleculeDescriptors import MolecularDescriptorCalculator as Calculator #Data Cleaning data = pd.read_excel("inputdata.xlsx") data['EC_value'], data['EC_error'] = zip(*data['ELE_COD'].map(lambda x: x.split('±'))) data.head() #Setting up for molecular descriptors n = data.shape[0] list_of_descriptors = ['NumHeteroatoms','MolWt','NOCount','NumHDonors','RingCount','NumAromaticRings','NumSaturatedRings','NumAliphaticRings'] calc = Calculator(list_of_descriptors) D = len(list_of_descriptors) d = len(list_of_descriptors)*2 + 3 print(n,d) #setting up the x and y matrices X = [] X = np.zeros((n,d)) X[:,-3] = data['T'] X[:,-2] = data['P'] X[:,-1] = data['MOLFRC_A'] for i in range(n): A = Chem.MolFromSmiles(data['A'][i]) B = Chem.MolFromSmiles(data['B'][i]) X[i][:D] = calc.CalcDescriptors(A) X[i][D:2*D] = calc.CalcDescriptors(B) new_data = pd.DataFrame(X,columns=['NumHeteroatoms_A','MolWt_A','NOCount_A','NumHDonors_A','RingCount_A','NumAromaticRings_A','NumSaturatedRings_A','NumAliphaticRings_A','NumHeteroatoms_B','MolWt_B','NOCount_B','NumHDonors_B','RingCount_B','NumAromaticRings_B','NumSaturatedRings_B','NumAliphaticRings_B','T','P','MOLFRC_A']) y = data['EC_value'] X = StandardScaler().fit_transform(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1) #MLPClassifier alphas = np.array([5,2,5,1.5,1,0.1,0.01,0.001,0.0001,0]) mlp_class = MLPClassifier(hidden_layer_sizes=(100,), activation='relu', solver='adam', alpha=0.0001, max_iter=5000, 
random_state=None,learning_rate_init=0.01) gs = GridSearchCV(mlp_class, param_grid=dict(alpha=alphas)) gs.fit(X_train,y_train) plt.figure(figsize=(4,4)) plt.scatter(y_test.values.astype(float), gs.predict(X_test).astype(float)) plt.plot([0,12],[0,12],lw=4,c = 'r') plt.show() #MLPRegressor alphas = np.array([5,2,5,1.5,1,0.1,0.01,0.001,0.0001,0]) mlp_regr = MLPRegressor(hidden_layer_sizes=(100,), activation='relu', solver='adam', alpha=0.0001, max_iter=5000, random_state=None,learning_rate_init=0.01) gs = GridSearchCV(mlp_regr, param_grid=dict(alpha=alphas)) #mlp_regr.fit(X_train,y_train) gs.fit(X_train,y_train) plt.figure(figsize=(4,4)) plt.scatter(y_test.values.astype(float), gs.predict(X_test)) plt.plot([0,12],[0,12],lw=4,c = 'r') plt.show() #Lasso from sklearn.linear_model import Lasso alphas = np.array([5,2,5,1.5,1,0.1,0.01,0.001,0.0001,0]) lasso = Lasso(alpha=0.001, fit_intercept=True, normalize=False, precompute=False, copy_X=True, max_iter=5000, tol=0.001, positive=False, random_state=None, selection='cyclic') gs = GridSearchCV(lasso, param_grid=dict(alpha=alphas)) gs.fit(X_train,y_train) plt.figure(figsize=(4,4)) plt.scatter(y_test.values.astype(float),gs.predict(X_test)) plt.plot([0,12],[0,12],lw=4,c = 'r') plt.show() #SVR svr = SVR(kernel='rbf', degree=3, gamma='auto', coef0=0.0, tol=0.001, C=1.0, epsilon=0.01, shrinking=True, cache_size=200, verbose=False, max_iter=-1) svr = GridSearchCV(svr, cv=5, param_grid={"C": [1e0, 1e1, 1e2, 1e3],"gamma": np.logspace(-2, 2, 5)}) svr.fit(X_train,y_train) plt.figure(figsize=(4,4)) plt.scatter(y_test.values.astype(float), svr.predict(X_test)) plt.plot([0,12],[0,12],lw=4,c = 'r') plt.show() ```
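The `ELE_COD` split in the data-cleaning cell is worth a note: it turns "value±error" strings into two columns, which then still need a numeric cast before modelling. A minimal standalone sketch of the pattern (the sample strings are made up):

```python
import pandas as pd

# hypothetical conductivity readings in "value±error" form, as in ELE_COD
s = pd.Series(["1.23±0.05", "4.56±0.10"])
value, error = zip(*s.map(lambda x: x.split("±")))

# cast to float: the split leaves strings, which most estimators mishandle
clean = pd.DataFrame({"EC_value": pd.to_numeric(value),
                      "EC_error": pd.to_numeric(error)})
print(clean)
```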
``` import numpy as np import matplotlib.pyplot as plt import scipy from sklearn.model_selection import ParameterGrid from sklearn.manifold import Isomap import time from tqdm import tqdm import librosa from librosa import cqt from librosa.core import amplitude_to_db from librosa.display import specshow import os import glob data_dir = '/Users/sripathisridhar/Desktop/SOL' file_paths= sorted(glob.glob(os.path.join(data_dir, '**', '*.wav'))) file_names= [] for file_path in file_paths: file_names.append(os.path.basename(file_path)) hop_size= 512 q= 24 import h5py with h5py.File("SOL.h5", "r") as f: features_dict = {key:f[key][()] for key in f.keys()} grid = { 'Q': [24], 'k': [3], 'comp': ['log'], 'instr': ['all'], 'dyn': ['all'] } settings = list(ParameterGrid(grid)) for setting in settings: if setting["instr"] == 'all': setting['instr'] = '' if setting['dyn'] == 'all': setting['dyn'] = '' batch_str = [] CQT_OCTAVES = 7 features_keys = list(features_dict.keys()) for setting in settings: q = setting['Q'] # Batch process and store in a folder batch_str = [setting['instr'], setting['dyn']] batch_features = [] for feature_key in features_keys: # Get features that match setting if all(x in feature_key for x in batch_str): batch_features.append(features_dict[feature_key]) batch_features = np.stack(batch_features, axis=1) # Isomap parameters hop_size = 512 compression = 'log' features = amplitude_to_db(batch_features) n_neighbors = setting['k'] n_dimensions = 3 n_octaves = 3 # Prune feature matrix bin_low = np.where((np.std(features, axis=1) / np.std(features)) > 0.1)[0][0] + q bin_high = bin_low + n_octaves*q X = features[bin_low:bin_high, :] # Z-score Standardization- improves contrast in correlation matrix mus = np.mean(X, axis=1) sigmas = np.std(X, axis=1) X_std = (X - mus[:, np.newaxis]) / (1e-6 + sigmas[:, np.newaxis]) # 1e-6 to avoid runtime division by zero # Pearson correlation matrix rho_std = np.dot(X_std, X_std.T) / X_std.shape[1] # Isomap embedding isomap = Isomap(n_components= n_dimensions, n_neighbors= n_neighbors)
coords = isomap.fit_transform(rho_std) # Get note value freqs= librosa.cqt_frequencies(q*CQT_OCTAVES, fmin=librosa.note_to_hz('C1'), bins_per_octave=q) #librosa CQT default fmin is C1 chroma_list= librosa.core.hz_to_note(freqs[bin_low:bin_high]) notes = [] reps = q//12 for chroma in chroma_list: for i in range(reps): notes.append(chroma) curr_fig= plt.figure(figsize=(5.5, 2.75)) ax= curr_fig.add_subplot(121) ax.axis('off') import colorcet as cc subsampled_color_ids = np.floor(np.linspace(0, 256, q, endpoint=False)).astype('int') color_list= [cc.cyclic_mygbm_30_95_c78[i] for i in subsampled_color_ids] # Plot embedding with color for i in range(coords.shape[0]): plt.scatter(coords[i, 0], coords[i, 1], color= color_list[i%q], s=30.0) plt.plot(coords[:, 0], coords[:, 1], color='black', linewidth=0.2) # Plot Pearson correlation matrix rho_frequencies = freqs[bin_low:bin_high] freq_ticklabels = ['A2', 'A3', 'A4'] freq_ticks = librosa.core.note_to_hz(freq_ticklabels) tick_bins = [] tick_labels= [] for i,freq_tick in enumerate(freq_ticks): tick_bin = np.argmin(np.abs(rho_frequencies-freq_tick)) tick_bins.append(tick_bin) tick_labels.append(freq_ticklabels[i]) plt.figure(figsize=(2.5,2.5)) plt.imshow(np.abs(rho_std), cmap='magma_r') plt.xticks(tick_bins) plt.gca().set_xticklabels(freq_ticklabels) # plt.xlabel('Log-frequency (octaves)') plt.yticks(tick_bins) plt.gca().set_yticklabels(freq_ticklabels) # plt.ylabel('Log-frequency (octaves)') plt.gca().invert_yaxis() plt.clim(0, 1) ``` ### Circle projection ``` import circle_fit import importlib importlib.reload(circle_fit) from circle_fit import circle_fit A = np.transpose(coords[:,:-1]) x, r, circle_residual = circle_fit(A, verbose=True) import matplotlib matplotlib.rc('font', family='serif') fig, axes = plt.subplots() plt.scatter(A[0,:],A[1,:]) plt.plot(x[0],x[1],'rx') circle = plt.Circle(x, radius=r, fill=False, linestyle='-.') axes.set_aspect(1)
axes.add_artist(circle) # axes.set_ylim([-5,6]) # axes.set_xlim([-2,8]) plt.title('Circle fit: TinySOL all instr', pad=10.0) plt.show() print(np.sqrt(circle_residual)/72) r def d_squared(a, b): # Takes two n-D tuples and returns euclidean distance between them # Cast to array for computation # Cast first to tuple in case a or b are Sympy Point objects p_a = np.array(tuple(a), dtype='float') p_b = np.array(tuple(b), dtype='float') return np.sum(np.square(p_a - p_b)) import sympy from sympy.geometry import Circle, Point, Line center = Point(x, evaluate=False) c = Circle(center, r, evaluate=False) l = Line(Point(coords[0,:-1]), center, evaluate=False) points = [tuple(p) for p in l.points] xy_prime = [] # TODO: Optimize to a more pythonic manner for x,y in coords[:,:2]: intersections = c.intersection(Line(Point(x,y), center, evaluate=False)) if d_squared((x,y),intersections[0]) < d_squared((x,y), intersections[1]): xy_prime.append([float(p) for p in intersections[0]]) else: xy_prime.append([float(p) for p in intersections[1]]) fig, axes = plt.subplots() plt.scatter(np.array(xy_prime)[:,0],np.array(xy_prime)[:,1], s=10, label='projected points') plt.scatter(A[0,:],A[1,:], s=0.5, label='isomap embedding points (2D)') plt.plot(center[0],center[1],'rx') circle = plt.Circle([float(p) for p in center], radius=r, fill=False, linestyle='--', label='estimated circle fit') axes.set_aspect(1) axes.add_artist(circle) plt.title('Projected points on circle', pad=10.0) plt.legend(bbox_to_anchor=(1,1)) plt.show() ``` ### Line projection ``` z = np.arange(len(coords[:,2])) z_fit = scipy.stats.linregress(z, coords[:,2]) print(z_fit.stderr) plt.figure() plt.title('Line fit: TinySOL all instr') plt.scatter(np.arange(len(coords[:,2])), coords[:,2]) plt.plot(z_fit.intercept + z_fit.slope*z, 'b') # New line coordinates z_prime = [i * z_fit.slope + z_fit.intercept for i,_ in enumerate(coords[:,2])] coords_prime = np.append(np.array(xy_prime), np.expand_dims(np.array(z_prime), axis=1), axis=1) 
coords_length = coords_prime.shape[0] ``` ### Distance matrices ``` # Projected helix self-distance matrix D_proj = np.zeros((coords_length, coords_length)) for i in range(coords_length): for j in range(i,coords_length): D_proj[i][j] = d_squared(coords_prime[i,:], coords_prime[j,:]) # Isomap embedding self-distance matrix D_isomap = np.zeros((coords_length, coords_length)) # Projected points same no. as isomap for i in range(coords_length): for j in range(i, coords_length): D_isomap[i][j] = d_squared(coords[i,:], coords[j,:]) # Geodesic self-distance matrix D_geodesic = isomap.dist_matrix_ # Convert to upper triangular sparse matrix for i in range(coords_length): for j in range(i): D_geodesic[i,j] = 0 ## Centering matrix def centered(A, Q=24, J=3): # Returns centered distance matrix ''' Inputs ----- A - squared distance matrix Q - quality factor, 24 by default J - number of octaves, 3 by default Returns ----- tau - MDS style diagonalized matrix of A ''' coords_length = A.shape[0] H = np.zeros((coords_length, coords_length)) const = 1/(Q*J) for i in range(coords_length): for j in range(coords_length): if j==i: H[i,j] = 1 - const else: H[i,j] = -const return -0.5 * np.matmul(np.matmul(H, A), H) def frobenius_distance(A, B): # Given two nxn matrices, return their 'Frobenius distance' return np.sqrt(np.sum(np.square(A - B))) loss_isomap = frobenius_distance(centered(D_geodesic), centered(D_isomap))/coords_length loss_total = frobenius_distance(centered(D_geodesic), centered(D_proj))/coords_length loss_proj = frobenius_distance(centered(D_isomap), centered(D_proj))/coords_length print(f"Isomap loss= {loss_isomap}") print(f"Projection loss= {loss_proj}") print(f"Total loss= {loss_total}") (loss_total) - (loss_isomap + loss_proj) < 0 ```
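The sympy-based intersection loop used for the circle projection above (flagged with a TODO) can be replaced by a vectorized NumPy version: projecting a point onto the circle along the ray through the center is just rescaling the offset vector to length `r`. A sketch under that assumption:

```python
import numpy as np

def project_to_circle(points, center, r):
    # move each 2-D point radially so it lands on the circle |p - center| = r
    v = points - center
    return center + r * v / np.linalg.norm(v, axis=1, keepdims=True)

# toy check with the unit circle
pts = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
proj = project_to_circle(pts, center=np.zeros(2), r=1.0)
print(np.linalg.norm(proj, axis=1))  # all radii equal 1
```

Picking the nearer of the two line-circle intersections, as the sympy loop does, is equivalent to this radial rescaling for points outside the center.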
``` import pandas as pd import seaborn as sns import numpy as np import matplotlib.pyplot as plt from datetime import datetime, timedelta from sklearn.cluster import KMeans from sklearn.ensemble import RandomForestClassifier import shap ``` # Loading clean data ``` import os clean_files = ['cleaned_data/all_dataset/all_dataset.parquet'] dfs = [] for file in clean_files: dfs.append(pd.read_parquet(file)) df = pd.concat(dfs, axis = 0) df.index = pd.to_datetime(df.index) df['state_of_charge_percent'] = df['state_of_charge_percent'].clip(0,140) all_ids = df.battery_id.unique() ``` # Histogram clustering The idea is to cluster fields (hereafter "power_out") according to their distribution, approximated by its histogram. ``` n_bins = 20 # number of bins to split the field in max_power = df['power_out'].quantile(0.99) min_power = 0 df['binned_power'] = pd.cut(df['power_out'], np.linspace(min_power,max_power,n_bins)) # count number of times we see a value in the bucket df_quants = df.groupby(['battery_id','binned_power'])['power_out'].count().unstack('binned_power') # divide by the number of observations per battery to end up with a frequency df_quants = df_quants.div(df_quants.sum(axis = 1), axis = 0) # apply KMeans model with K clusters and check inertia distortions = [] K = range(1,20) for k in K: kmeanModel = KMeans(n_clusters=k) kmeanModel.fit(df_quants.values) distortions.append(kmeanModel.inertia_) plt.plot(K, distortions, marker = '*') ``` According to the elbow method, a reasonable number of clusters to choose is 3 or 4. Now predict clusters with the KMeans method. Then, fit a classifier that will aim to predict the class from the input data. It will help us understand the sensitivity of class determination to each parameter. 
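Before moving on, the elbow read-off above is somewhat subjective; a complementary check (an assumed addition, not part of the original analysis) is the silhouette score, which peaks for a well-separated clustering:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Toy stand-in for df_quants.values: three well-separated blobs
X = np.vstack([rng.normal(loc, 0.1, size=(30, 2)) for loc in (0.0, 3.0, 6.0)])

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)  # expect 3 for three blobs
```

Unlike inertia, the silhouette score does not decrease monotonically with K, so its maximum gives a direct answer rather than an elbow to eyeball.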
``` n_clusters = 4 kmeanModel = KMeans(n_clusters=n_clusters) # predict class y=kmeanModel.fit(df_quants.values).labels_ # fit a Classifier clf=RandomForestClassifier() clf.fit(df_quants.values,y) # plot shap values explainer= shap.TreeExplainer(clf) shap_values = explainer(df_quants) for k in range(n_clusters): print(f'plot {k}') shap.plots.beeswarm(shap_values[:,:,k]) ``` We can interpret those results as follows. For each graph, SHAP values represent feature importance. For cluster 1 (first graph) we can see that the important features favoring membership in cluster 1 are a high observation frequency in the first bin and a low frequency of observations in the other bins. Let's confirm this result by plotting the distribution of each class. ``` id_to_class = pd.DataFrame(data = y, index = df_quants.index, columns = ['class']).squeeze().to_dict() df_quants = df_quants.stack() df_quants = df_quants.rename('n_count') df_quants = df_quants.reset_index() df_quants['class'] = df_quants['battery_id'].map(id_to_class) sns.boxplot(data = df_quants, x = 'binned_power', y = 'n_count', hue = 'class') ``` # Clustering histogram by power and time For the sake of simplicity we could stick with clustering over the whole year, BUT we then lose temporal information. Can we keep it? Yes: instead of clustering only on power, one can cluster on power and time. 
``` max_power = df['power_out'].quantile(0.99) min_power = 0 df['binned_power'] = pd.cut(df['power_out'], np.linspace(min_power,max_power,10)) df['month'] = df.index.month df['hour'] = df.index.hour df_quants_power = df.groupby(['battery_id','binned_power'])['power_out'].count().unstack('binned_power') df_counts = df_quants_power.sum(axis = 1) df_quants_hour = df.groupby(['battery_id','hour','binned_power'])['power_out'].count() X_pca = df_quants_hour.unstack('binned_power').unstack('hour') ``` Now one can perform a PCA to reduce the dimensionality of the problem. ``` feature_names = X_pca.columns.map('{0[0]}_H{0[1]}'.format) X_pca_n = X_pca.div(X_pca.sum(axis = 1), axis = 0) X_pca_n.columns = feature_names X_pca_n = X_pca_n.apply(lambda x: x.fillna(x.mean())) ``` Let's take a look at the data. ``` sns.heatmap(np.flip(X_pca_n.loc[3].values.reshape(-1,24), 0)) ``` The PCA ``` from sklearn.decomposition import PCA pca = PCA() pca.fit(X_pca_n) plt.plot(np.cumsum(pca.explained_variance_ratio_)[:30]) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); plt.grid(); n_components = 6 two_comp_PCA = PCA(n_components=n_components) X_pca_space = two_comp_PCA.fit_transform(X_pca_n) plt.bar(x = [f'PC{k}' for k in range(1,n_components+1)], height = two_comp_PCA.explained_variance_) plt.figure() plt.plot(np.cumsum(two_comp_PCA.explained_variance_ratio_)) loadings = two_comp_PCA.components_.T * np.sqrt(two_comp_PCA.explained_variance_) df_loadings = pd.DataFrame(data = loadings, index = feature_names) ax = sns.heatmap(df_loadings, annot=False, cmap='Spectral') plt.show() ``` # Plot points in the PCA space We chose to keep the first 6 dimensions, but we will project the space only onto the first 3 principal components. ``` # plot data plt.scatter(X_pca_space[:, 0], X_pca_space[:, 1], alpha=0.2) plt.axis('equal'); plt.figure() plt.scatter(X_pca_space[:, 1], X_pca_space[:, 2], alpha=0.2) plt.axis('equal'); n_components_to_keep = 6 X_clustering = X_pca_space[:,:n_components_to_keep] from sklearn.cluster import KMeans distortions = [] K = range(1,15) for k in K: kmeanModel = KMeans(n_clusters=k) kmeanModel.fit(X_clustering) distortions.append(kmeanModel.inertia_) plt.plot(K, distortions, marker = "*") n_clusters = 4 kmeanModel = KMeans(n_clusters=n_clusters) y=kmeanModel.fit(X_clustering).labels_ df_X_clustering = pd.DataFrame(data= X_clustering, index = X_pca.index, columns = [f'PC{k}' for k in range(1,n_components_to_keep+1)]) df_y_clustering = pd.DataFrame(data= y, index = X_pca.index, columns = ['class']) ``` # Plot data and class ``` sns.lmplot(data = pd.concat([df_X_clustering, df_y_clustering], axis = 1), x = 'PC1', y = 'PC2', hue = 'class', fit_reg = False) sns.lmplot(data = pd.concat([df_X_clustering, df_y_clustering], axis = 1), x = 'PC2', y = 'PC3', hue = 'class', fit_reg = False) df_analyse = X_pca_n.copy() df_analyse = df_analyse.join(df_y_clustering) std = df_analyse.groupby('class').std() mean = df_analyse.groupby('class').mean() ``` # Show clustered profiles Did it cluster correctly? Let's check whether the clusters make sense. ``` for k in range(n_clusters): plt.figure() sns.heatmap(np.flip(mean.loc[k].values.reshape(-1,24), 0)) plt.title(f'mean cluster {k}') ```
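Choosing `n_components_to_keep` by eye from the cumulative-variance curve can also be automated with a threshold. Here is a self-contained sketch on toy data (the 95% cutoff is an assumption for illustration, not a rule used in the analysis above):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Toy data whose variance is concentrated in 2 latent directions
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

pca = PCA().fit(X)
cumvar = np.cumsum(pca.explained_variance_ratio_)
# Smallest number of components explaining at least 95% of the variance
n_keep = int(np.searchsorted(cumvar, 0.95) + 1)
```

With near-noiseless rank-2 data, two components suffice; on the real battery histograms the same one-liner would replace the manual read-off of the curve.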
Author: Érick Barbosa de Souza Home: https://abre.ai/ebsouza-pagina Instagram: @erickbsouza --- **Object-Oriented Programming** After learning programming logic with **structured programming**, you need to learn new concepts to solve common problems in computing. **Object-oriented programming** brings valuable tools to the developer. There is no point in debating which style of programming is better here: the two paradigms are complementary. **Structured programming** can be carried out using sequences of instructions, conditionals, and loops. These are the key elements for writing what we call **routines** or **functions**. In object-oriented programming a new element is introduced: the **object**. Objects are structures that store **attributes** and **methods**. An attribute is to a variable what a method is to a function. For example, treating an arbitrary point as an object, its x, y coordinates would be its attributes and the ability to move would be its method. * Object: Point * Attributes: Coordinates (x,y) * Method: Move across the Cartesian plane In Python, to program with OOP you need to understand the concepts below. **1. Class** Classes are the **"detailed description"** of objects. The class tells us which attributes and methods the **object** will have. From a class, we create objects. Just below we create an empty class called **Point**. ``` class Point: pass ``` **2. Object** Objects are **instances** of a class. Every program interacts with the objects of a class; hence, only the objects hold information and need memory allocated to store it. **Example 1** ``` class Point: # x and y are attributes of the class def __init__(self, x, y): self.x = x self.y = y #Creating the object from the class # x = 5 # y = 2 point = Point(5, 2) ``` In this example, point is an object of the Point class. ``` #x coordinate point.x #y coordinate point.y ``` **3. 
Methods** Methods are functions defined inside a class. They define the possible behaviors of an object. **Example 2** ``` class Point: # x and y are attributes of the class def __init__(self, x, y): self.x = x self.y = y # Methods that modify the attributes def move_x(self, delta_x): self.x += delta_x def move_y(self, delta_y): self.y += delta_y # Method that displays the attribute values def show_coordinates(self): print(f'({self.x}, {self.y})') ``` Three methods were defined: * **move_x**: This method represents the displacement of the point along the x axis. Its argument delta_x represents the magnitude of the displacement. * **move_y**: The same concept as move_x, but applied to the y axis. * **show_coordinates**: Displays the coordinates of the point.* *In another notebook we will see a more elegant way of doing this. ``` #Creating the object from the class # x = 10 # y = 10 point = Point(10, 10) #The point moves along the x axis, 10 units in the positive direction. point.move_x(10) #Coordinates (x, y) of the point point.show_coordinates() ``` **Example 3** Each instance created from a class has its own attributes. In this example, two objects are created, point_1 and point_2. Each of them has attributes with different values, and applying the methods changes their respective values. ``` point_1 = Point(100, 10) point_2 = Point(10, 100) print("Point 1") point_1.show_coordinates() print("Point 2") point_2.show_coordinates() #Move 10 units along y in the negative direction point_1.move_y(-10) #Move 10 units along x in the negative direction point_2.move_x(-10) print("Point 1") point_1.show_coordinates() print("Point 2") point_2.show_coordinates() ``` --- I hope you enjoyed this material. I will update this content whenever possible. If you have suggestions or questions about the subject, contact me on Instagram: @erickbsouza.
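The note above promises a more elegant way to display the coordinates. One common option (a sketch of the idea, not necessarily the follow-up notebook's exact solution) is to implement the special method `__repr__`, so that printing the object shows its state directly:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def move_x(self, delta_x):
        self.x += delta_x

    def move_y(self, delta_y):
        self.y += delta_y

    def __repr__(self):
        # Called by print() and by the interactive prompt,
        # replacing the explicit show_coordinates() method
        return f"Point({self.x}, {self.y})"

p = Point(10, 10)
p.move_x(10)
print(p)  # -> Point(20, 10)
```

With `__repr__` defined, simply evaluating `p` in a notebook cell displays the coordinates, with no extra method call.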
<center> <img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" /> </center> # Watson Speech to Text Translator Estimated time needed: **25** minutes ## Objectives After completing this lab you will be able to: - Create a Speech to Text translator ### Introduction <p>In this notebook, you will learn to convert an audio file of an English speaker to text using a Speech to Text API. Then you will translate the English version to a Spanish version using a Language Translator API. <b>Note:</b> You must obtain the API keys and endpoints to complete the lab.</p> <div class="alert alert-block alert-info" style="margin-top: 20px"> <h2>Table of Contents</h2> <ul> <li><a href="#ref0">Speech To Text</a></li> <li><a href="#ref1">Language Translator</a></li> <li><a href="#ref2">Exercise</a></li> </ul> </div> ``` %pip install python-dotenv from dotenv import load_dotenv load_dotenv() #you will need the following library !pip install ibm_watson wget ``` <h2 id="ref0">Speech to Text</h2> <p>First we import <code>SpeechToTextV1</code> from <code>ibm_watson</code>. For more information on the API, please click on this <a href="https://cloud.ibm.com/apidocs/speech-to-text?code=python">link</a></p> ``` from ibm_watson import SpeechToTextV1 import json from ibm_cloud_sdk_core.authenticators import IAMAuthenticator ``` <p>The service endpoint is based on the location of the service instance; we store the information in the variable URL. 
To find out which URL to use, view the service credentials and paste the URL here.</p> ``` import os url_s2t = os.getenv("Watson_Spech_URL") ``` <p>You require an API key, and you can obtain the key on the <a href="https://cloud.ibm.com/resources">Dashboard</a>.</p> ``` iam_apikey_s2t = os.getenv("Watson_Spech_Key") ``` <p>You create a <a href="http://watson-developer-cloud.github.io/python-sdk/v0.25.0/apis/watson_developer_cloud.speech_to_text_v1.html">Speech To Text Adapter object</a>; the parameters are the endpoint and API key.</p> ``` authenticator = IAMAuthenticator(iam_apikey_s2t) s2t = SpeechToTextV1(authenticator=authenticator) s2t.set_service_url(url_s2t) s2t ``` <p>Let's download the audio file that we will use to convert into text.</p> ``` !wget -O PolynomialRegressionandPipelines.mp3 https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%205/data/PolynomialRegressionandPipelines.mp3 ``` <p>We have the path of the audio file we would like to convert to text.</p> ``` filename='PolynomialRegressionandPipelines.mp3' ``` <p>We create the file object <code>wav</code> from the audio file using <code>open</code>; we set the <code>mode</code> to "rb", which is similar to read mode but ensures the file is read in binary mode. We use the method <code>recognize</code> to return the recognized text. 
The parameter audio is the file object <code>wav</code>, and the parameter <code>content_type</code> is the format of the audio file.</p> ``` with open(filename, mode="rb") as wav: response = s2t.recognize(audio=wav, content_type='audio/mp3') ``` <p>The attribute result contains a dictionary that includes the transcription:</p> ``` response.result from pandas import json_normalize json_normalize(response.result['results'],"alternatives") response ``` <p>We can obtain the recognized text and assign it to the variable <code>recognized_text</code>:</p> ``` recognized_text=response.result['results'][0]["alternatives"][0]["transcript"] type(recognized_text) ``` <h2 id="ref1">Language Translator</h2> <p>First we import <code>LanguageTranslatorV3</code> from ibm_watson. For more information on the API click <a href="https://cloud.ibm.com/apidocs/speech-to-text?code=python"> here</a></p> ``` from ibm_watson import LanguageTranslatorV3 ``` <p>The service endpoint is based on the location of the service instance; we store the information in the variable URL. To find out which URL to use, view the service credentials.</p> ``` url_lt= os.getenv("Watson_Lang_URL") ``` <p>You require an API key, and you can obtain the key on the <a href="https://cloud.ibm.com/resources">Dashboard</a>.</p> ``` apikey_lt= os.getenv("Watson_Lang_Key") ``` <p>API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. This lab uses the current version of Language Translator, 2018-05-01.</p> ``` version_lt='2018-05-01' ``` <p>We create a Language Translator object <code>language_translator</code>:</p> ``` authenticator = IAMAuthenticator(apikey_lt) language_translator = LanguageTranslatorV3(version=version_lt,authenticator=authenticator) language_translator.set_service_url(url_lt) language_translator ``` <p>We can list the languages that the service can identify. The method returns the language code. 
For example, English (en) or Spanish (es), and the name of each language.</p> ``` from pandas import json_normalize json_normalize(language_translator.list_identifiable_languages().get_result(), "languages") ``` <p>We can use the method <code>translate</code> to translate the text. The parameter <code>text</code> is the text to translate, and <code>model_id</code> is the translation model to use. In this case, we set it to 'en-es', or English to Spanish. We get a detailed response object, translation_response.</p> ``` translation_response = language_translator.translate(\ text=recognized_text, model_id='en-es') translation_response ``` <p>The result is a dictionary.</p> ``` translation=translation_response.get_result() translation ``` <p>We can obtain the actual translation as a string as follows:</p> ``` spanish_translation =translation['translations'][0]['translation'] spanish_translation ``` <p>We can translate back to English</p> ``` translation_new = language_translator.translate(text=spanish_translation ,model_id='es-en').get_result() ``` <p>We can obtain the actual translation as a string as follows:</p> ``` translation_eng=translation_new['translations'][0]['translation'] translation_eng ``` <br> <h2>Quiz</h2> Translate to French. 
``` # Write your code below and press Shift+Enter to execute French_translation=language_translator.translate(text=translation_eng , model_id='en-fr').get_result() French_translation['translations'][0]['translation'] Spanish_translation=language_translator.translate(text=translation_eng , model_id='en-es').get_result() Spanish_translation['translations'][0]['translation'] ``` <details><summary>Click here for the solution</summary> ```python French_translation=language_translator.translate( text=translation_eng , model_id='en-fr').get_result() French_translation['translations'][0]['translation'] ``` </details> <h3>Language Translator</h3> <b>References</b> [https://cloud.ibm.com/apidocs/speech-to-text?code=python](https://cloud.ibm.com/apidocs/speech-to-text?code=python&utm_email=Email&utm_source=Nurture&utm_content=000026UJ&utm_term=10006555&utm_campaign=PLACEHOLDER&utm_id=SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork-19487395) [https://cloud.ibm.com/apidocs/language-translator?code=python](https://cloud.ibm.com/apidocs/language-translator?code=python&utm_email=Email&utm_source=Nurture&utm_content=000026UJ&utm_term=10006555&utm_campaign=PLACEHOLDER&utm_id=SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork-19487395) <hr> ## Authors: [Joseph Santarcangelo](https://www.linkedin.com/in/joseph-s-50398b136/?utm_email=Email&utm_source=Nurture&utm_content=000026UJ&utm_term=10006555&utm_campaign=PLACEHOLDER&utm_id=SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork-19487395) Joseph Santarcangelo has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD. 
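A closing note on the response handling used throughout this lab: the nested indexing `result['translations'][0]['translation']` is repeated several times and is easy to get wrong. A small helper (hypothetical, not part of the Watson SDK) makes the extraction reusable and testable offline:

```python
def extract_translation(result):
    """Pull the translated string out of a Language Translator result dict.

    Expects the shape returned by get_result():
    {"translations": [{"translation": "..."}], "word_count": ..., ...}
    """
    return result["translations"][0]["translation"]

# Offline example mimicking the structure shown above
sample = {"translations": [{"translation": "hola"}], "word_count": 1}
extract_translation(sample)  # -> 'hola'
```

Because the helper takes a plain dict, it can be unit-tested without credentials or network access.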
## Other Contributor(s) <a href="https://www.linkedin.com/in/fanjiang0619/">Fan Jiang</a> ## Change Log | Date (YYYY-MM-DD) | Version | Changed By | Change Description | | ----------------- | ------- | ---------- | ---------------------------------- | | 2021-04-07 | 2.2 | Malika | Updated the libraries | | 2021-01-05 | 2.1 | Malika | Added a library | | 2020-08-26 | 2.0 | Lavanya | Moved lab to course repo in GitLab | <hr/> ## <h3 align="center"> © IBM Corporation 2020. All rights reserved. </h3>
``` name = '2017-06-02-matplotlib-contourf-subplots' title = 'Filled contour plots and colormap normalization' tags = 'matplotlib' author = 'Maria Zamyatina' from nb_tools import connect_notebook_to_post from IPython.core.display import HTML, Image html = connect_notebook_to_post(name, title, tags, author) ``` Today we are going to learn some tricks about plotting two-dimensional data with matplotlib's contourf function. ``` import numpy as np import matplotlib.mlab as mlab import matplotlib.pyplot as plt %matplotlib inline ``` Let us start by creating two sample 2D arrays, Z1 and Z2. (Note: `mlab.bivariate_normal` was removed in later matplotlib releases, so this notebook assumes an older matplotlib version.) ``` # Array 1 delta1 = 0.025 x1 = np.arange(-3.0, 3.0, delta1) y1 = np.arange(-2.0, 2.0, delta1) X1, Y1 = np.meshgrid(x1, y1) Z1_1 = mlab.bivariate_normal(X1, Y1, 1.0, 1.0, 0.0, 0.0) Z2_1 = mlab.bivariate_normal(X1, Y1, 1.5, 0.5, 1, 1) Z1 = 10.0 * (Z2_1 - Z1_1) # Array 2 delta2 = 0.05 x2 = np.arange(-6.0, 6.0, delta2) y2 = np.arange(-4.0, 4.0, delta2) X2, Y2 = np.meshgrid(x2, y2) Z1_2 = mlab.bivariate_normal(X2, Y2, 1.0, 1.0, 0.0, 0.0) Z2_2 = mlab.bivariate_normal(X2, Y2, 1.5, 0.5, 1, 1) Z2 = 30.0 * (Z2_2 - Z1_2) print(Z1.shape, Z2.shape) ``` And now straight to plotting! Step 0. Plot Z1, Z2 and the difference between them on three subplots using contourf(). ``` fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(12, 4)) ax[0].contourf(X1, Y1, Z1) ax[1].contourf(X2, Y2, Z2) ax[2].contourf(X2, Y2, Z1 - Z2) ax[0].set_title('Z1') ax[1].set_title('Z2') ax[2].set_title('diff') plt.ioff() ``` Step 1. Add a colorbar to each of the subplots in order to be able to interpret the data. Alternatively we could have chosen to use contour() to add contour lines on top of the filled contours, but not today. 
``` fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(14, 4)) p1 = ax[0].contourf(X1, Y1, Z1) p2 = ax[1].contourf(X2, Y2, Z2) p3 = ax[2].contourf(X2, Y2, Z1 - Z2) fig.colorbar(p1, ax=ax[0]) fig.colorbar(p2, ax=ax[1]) fig.colorbar(p3, ax=ax[2]) ax[0].set_title('Z1') ax[1].set_title('Z2') ax[2].set_title('diff') plt.ioff() ``` > Why did we call fig.colorbar(...), not ax.colorbar(...)? The reason is that creation of a colorbar requires addition of a new axis to the figure. Think about the following for a moment: > * By writing ax[0].contourf(..., Z1) you say 'display Z1 using contourf() method of axis [0] on axis [0]'. In other words, you use an axis method to reserve axis [0] for displaying Z1, and unless you want to overlay Z1 with some other array, you can't use axis [0] for anything else. > * A colorbar is exactly 'something else', something extra, that needs to be shown on an additional axis, and in order to create such an axis we use a figure method, fig.colorbar(). > Why fig.colorbar(p1, ...), not fig.colorbar(...)? The reason is that we need to pass an object to fig.colorbar() for which we want to show the colorbar. To have a colorbar for one subplot we need to do two things: > 1. Create the required object (known as a mappable in matplotlib terminology) by assigning the output from contourf() to a variable, e.g. p1. > 2. Pass the object to fig.colorbar(). Step 2. If Z1 and Z2 describe the same variable, it is logical to have the same colorbar for the first two subplots. > **Tip**. If you want to have **one colorbar for two or more contour plots**, then you need to not only control the colorbar, but also control the levels in these contour plots. That is, to compare the same levels between the plots, the plots should have the same contour levels. One way of doing this is to calculate the levels ahead of time. 
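Wrapped as a small function, that levels computation might look like the sketch below (the notebook computes the equivalent `Z_range` inline in the next cell):

```python
import numpy as np

def shared_levels(*arrays, step=1.0):
    """Equally spaced contour levels covering the full range of every array."""
    lo = min(float(a.min()) for a in arrays)
    hi = max(float(a.max()) for a in arrays)
    # Round outward so the extreme data values fall strictly inside the levels
    return np.arange(np.floor(lo), np.ceil(hi) + step, step)

levels = shared_levels(np.array([-1.2, 0.5]), np.array([2.3, 3.9]))
# levels runs from -2.0 to 4.0 inclusive, in steps of 1.0
```

Passing the same `levels` array to every contourf call is what makes a single colorbar valid for all of the subplots.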
Let us create an array of equally spaced values (or levels) that encompasses the minima and maxima of both datasets and pass this array to the levels keyword of contourf(). ``` print(Z1.min(), Z1.max(), Z2.min(), Z2.max()) Z_range = np.arange( round(min(Z1.min(), Z2.min()))-1, round(max(Z1.max(), Z2.max()))+2, 1) Z_range fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(14, 4)) p1 = ax[0].contourf(X1, Y1, Z1, levels=Z_range) p2 = ax[1].contourf(X2, Y2, Z2, levels=Z_range) p3 = ax[2].contourf(X2, Y2, Z1 - Z2) fig.colorbar(p1, ax=ax[0]) fig.colorbar(p2, ax=ax[1]) fig.colorbar(p3, ax=ax[2]) ax[0].set_title('Z1') ax[1].set_title('Z2') ax[2].set_title('diff') plt.ioff() ``` Note that it has become much easier to see that the gradients on the first subplot are much smaller than on the second one. At the same time, though, having a colorbar for each of the two subplots has become redundant. Step 3. Create a common colorbar for the first two subplots. ``` fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(14, 4)) p1 = ax[0].contourf(X1, Y1, Z1, levels=Z_range) p2 = ax[1].contourf(X2, Y2, Z2, levels=Z_range) p3 = ax[2].contourf(X2, Y2, Z1 - Z2) fig.colorbar(p3, ax=ax[2]) cax = fig.add_axes([0.18, 0., 0.4, 0.03]) fig.colorbar(p1, cax=cax, orientation='horizontal') ax[0].set_title('Z1') ax[1].set_title('Z2') ax[2].set_title('diff') plt.ioff() ``` Step 4. Use a diverging colormap for plotting the difference between the arrays. ``` fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(14, 4)) p1 = ax[0].contourf(X1, Y1, Z1, levels=Z_range) p2 = ax[1].contourf(X2, Y2, Z2, levels=Z_range) p3 = ax[2].contourf(X2, Y2, Z1 - Z2, cmap='RdBu_r') fig.colorbar(p3, ax=ax[2]) cax = fig.add_axes([0.18, 0., 0.4, 0.03]) fig.colorbar(p1, cax=cax, orientation='horizontal') ax[0].set_title('Z1') ax[1].set_title('Z2') ax[2].set_title('diff') plt.ioff() ``` > **Tip**. 
If the **range of your data is non-symmetrical around zero**, but you want to **set the middle point of a colormap to zero**, you could try to **normalize** your **colormap**. Step 5. Introduce MidpointNormalize class that would scale data values to colors and add the capability to specify the middle point of a colormap. Use norm keyword of contourf(). ``` import matplotlib.colors as colors class MidpointNormalize(colors.Normalize): def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False): self.midpoint = midpoint colors.Normalize.__init__(self, vmin, vmax, clip) def __call__(self, value, clip=None): # I'm ignoring masked values and all kinds of edge cases to make a # simple example... x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1] return np.ma.masked_array(np.interp(value, x, y)) fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(15, 4)) p1 = ax[0].contourf(X1, Y1, Z1, levels=Z_range) p2 = ax[1].contourf(X2, Y2, Z2, levels=Z_range) p3 = ax[2].contourf(X2, Y2, Z1 - Z2, norm=MidpointNormalize(midpoint=0.), cmap='RdBu_r') fig.colorbar(p3, ax=ax[2]) cax = fig.add_axes([0.18, 0., 0.4, 0.03]) fig.colorbar(p1, cax=cax, orientation='horizontal') ax[0].set_title('Z1') ax[1].set_title('Z2') ax[2].set_title('diff') plt.ioff() ``` ## References: * Anatomy of matplotlib | SciPy 2015 Tutorial | Benjamin Root and Joe Kington (https://www.youtube.com/watch?v=MKucn8NtVeI) * https://stackoverflow.com/questions/26065811/same-color-bar-range-for-different-plots-matplotlib?answertab=active#tab-top * https://matplotlib.org/users/colormapnorms.html * https://stackoverflow.com/questions/7404116/defining-the-midpoint-of-a-colormap-in-matplotlib/7746125#7746125 ``` HTML(html) ```
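As a follow-up to the MidpointNormalize class above: recent matplotlib versions (3.2+) ship this behaviour built in as `matplotlib.colors.TwoSlopeNorm`, which maps vmin, vcenter, and vmax to 0, 0.5, and 1 with a separate linear scale on each side of the centre. A quick numeric check:

```python
import numpy as np
import matplotlib.colors as colors

# Asymmetric data range, centred on zero
norm = colors.TwoSlopeNorm(vmin=-10.0, vcenter=0.0, vmax=30.0)
values = norm(np.array([-10.0, 0.0, 30.0]))
# Endpoints map to 0 and 1, the centre to 0.5, exactly as the
# hand-written MidpointNormalize does with midpoint=0.
```

Passing `norm=colors.TwoSlopeNorm(vcenter=0.)` to contourf reproduces the final plot without the custom class.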
# Symbulate Documentation # Random Processes <a id='contents'></a> 1. [**RandomProcess and TimeIndex**](#time) 1. [**Defining a RandomProcess explicitly as a function of time**](#Xt) 1. [**Process values at particular time points**](#value) 1. [**Mean function**](#mean) 1. [**Defining a RandomProcess incrementally**](#rw) < [Conditioning](conditioning.html) | [Contents](index.html) | [Markov processes](mc.html) > Be sure to import Symbulate using the following commands. ``` from symbulate import * %matplotlib inline ``` <a id='process'></a> ### Random processes A **random process** (a.k.a. **stochastic process**) is an indexed collection of random variables defined on some probability space. The index often represents "time", which can be either discrete or continuous. - A **discrete time stochastic process** is a collection of countably many random variables, e.g. $X_n$ for $n=0, 1, 2, \ldots$. For each outcome in the probability space, the outcome of a discrete time stochastic process is a *sequence* (in $n$). (Remember Python starts indexing at 0. The zero-based index is often natural in stochastic process contexts in which there is a time 0, i.e. $X_0$ is the initial value of the process.) - A **continuous time stochastic process** is a collection of uncountably many random variables, e.g. $X_t$ for $t\ge0$. For each outcome in the probability space, the outcome of a continuous time stochastic process is a *function* (a.k.a. *sample path*) (of $t$). <a id='time'></a> ### RandomProcess and TimeIndex Much like `RV`, a **RandomProcess** can be defined on a ProbabilitySpace. For a `RandomProcess`, however, the **TimeIndex** must also be specified. TimeIndex takes a single parameter, the **sampling frequency** `fs`. While many values of `fs` are allowed, the two most common inputs for `fs` are * `TimeIndex(fs=1)`, for a discrete time process $X_n, n = 0, 1, 2, \ldots$. * `TimeIndex(fs=inf)`, for a continuous time process $X(t), t\ge0$. 
<a id='Xt'></a> ### Defining a RandomProcess explicitly as a function of time A random variable is a function $X$ which maps an outcome $\omega$ in a probability space $\Omega$ to a real value $X(\omega)$. Similarly, a random process is a function $X$ which maps an outcome $\omega$ and a time $t$ in the time index set to the process value at that time $X(\omega, t)$. In some situations, the function defining the random process can be specified explicitly. *Example.* Let $X(t) = A + B t, t\ge0$ where $A$ and $B$ are independent with $A\sim$ Bernoulli(0.9) and $B\sim$ Bernoulli(0.7). In this case, there are only 4 possible sample paths. * $X(t) = 0$, when $A=0, B=0$, which occurs with probability $0.03$ * $X(t) = 1$, when $A=1, B=0$, which occurs with probability $0.27$ * $X(t) = t$, when $A=0, B=1$, which occurs with probability $0.07$ * $X(t) = 1+t$, when $A=1, B=1$, which occurs with probability $0.63$ The following code defines a RandomProcess `X` by first defining an appropriate function `f`. Note that an outcome in the probability space consists of an $A, B$ pair, represented as $\omega_0$ and $\omega_1$ in the function. A RandomProcess is then defined by specifying: the probability space, the time index set, and the $X(\omega, t)$ function. ``` def f(omega, t): return omega[0] + omega[1] * t X = RandomProcess(Bernoulli(0.9) * Bernoulli(0.7), TimeIndex(fs=inf), f) ``` Like RV, RandomProcess only defines the random process. Values of the process can be simulated using the usual [simulation tools](sim.html). Since a stochastic process is a collection of random variables, many of the commands in the previous sections ([Random variables](rv.html), [Multiple random variables](joint.html), [Conditioning](conditioning.html)) are useful when simulating stochastic processes. For a given outcome in the probability space, a random process outputs a **sample path** which describes how the value of the process evolves over time for that particular outcome. 
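Because A and B each take only the values 0 and 1, the four path probabilities listed above can also be verified by direct enumeration — a plain-Python cross-check that does not need Symbulate:

```python
from itertools import product

p_a, p_b, t = 0.9, 0.7, 1.5
dist = {}
for a, b in product((0, 1), repeat=2):
    # Independent Bernoulli probabilities for this (A, B) pair
    prob = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
    value = a + b * t                     # X(t) = A + B t
    dist[value] = dist.get(value, 0.0) + prob

# dist maps each possible X(1.5) value to its probability:
# {0.0: 0.03, 1.0: 0.27, 1.5: 0.07, 2.5: 0.63}
```

At t = 1.5 the four paths give four distinct values, so the path probabilities and the distribution of X(1.5) coincide.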
Calling `.plot()` for a RandomProcess will return a plot of sample paths. The parameter `alpha` controls the weight of the line drawn in the plot. The parameters `tmin` and `tmax` control the range of time values in the display. ``` X.sim(1).plot(alpha = 1) ``` Simulate and plot many sample paths, specifying the range of $t$ values to plot. Note that the darkness of a path represents its relative likelihood. ``` X.sim(100).plot(tmin=0, tmax=2) ``` <a id='value'></a> ### Process values at particular time points The value $X(t)$ (or $X_n$) of a stochastic process at any particular point in time $t$ (or $n$) is a random variable. These random variables can be accessed using brackets `[]`. Note that the value inside the brackets represents *time* $t$ or $n$. Many of the commands in the previous sections ([Random variables](rv.html), [Multiple random variables](joint.html), [Conditioning](conditioning.html)) are useful when simulating stochastic processes. *Example.* Let $X(t) = A + B t, t\ge0$ where $A$ and $B$ are independent with $A\sim$ Bernoulli(0.9) and $B\sim$ Bernoulli(0.7). Find the distribution of $X(1.5)$, the process value at time $t=1.5$. ``` def f(omega, t): return omega[0] + omega[1] * t X = RandomProcess(Bernoulli(0.9) * Bernoulli(0.7), TimeIndex(fs=inf), f) X[1.5].sim(10000).plot() ``` Find the joint distribution of process values at times 1 and 1.5. ``` (X[1] & X[1.5]).sim(1000).plot("tile") ``` Find the conditional distribution of $X(1.5)$ given $X(1) = 1$. ``` (X[1.5] | (X[1] == 1)).sim(10000).plot() ``` <a id='mean'></a> ### Mean function The mean function of a stochastic process $X(t)$ is a deterministic function which maps $t$ to $E(X(t))$. The mean function can be estimated and plotted by simulating many sample paths of the process and using `.mean()`. ``` paths = X.sim(1000) plot(paths) plot(paths.mean(), 'r') ``` The **variance** function maps $t$ to $Var(X(t))$; similarly for the **standard deviation** function. 
These functions can be used to give error bands about the mean function. ``` # This illustrates the functionality, but is not an appropriate example for +/- 2SD plot(paths) paths.mean().plot('--') (paths.mean() + 2 * paths.sd()).plot('--') (paths.mean() - 2 * paths.sd()).plot('--') ``` <a id='rw'></a> ### Defining a RandomProcess incrementally There are few situations like the linear process in the example above in which the random process can be expressed explicitly as a function of the probability space outcome and the time value. More commonly, random processes are often defined incrementally, by specifying the next value of the process given the previous value. *Example.* At each point in time $n=0, 1, 2, \ldots$ a certain type of "event" either occurs or not. Suppose the probability that the event occurs at any particular time is $p=0.5$, and occurrences are independent from time to time. Let $Z_n=1$ if an event occurs at time $n$, and $Z_n=0$ otherwise. Then $Z_0, Z_1, Z_2,\ldots$ is a **Bernoulli process**. In a Bernoulli process, let $X_n$ count the number of events that have occurred up to and including time $n$, starting with 0 events at time 0. Since $Z_{n+1}=1$ if an event occurs at time $n+1$ and $Z_{n+1} = 0$ otherwise, $X_{n+1} = X_n + Z_{n+1}$. The following code defines the random process $X$. The probability space corresponds to the independent Bernoulli random variables; note that `inf` allows for infinitely many values. Also notice how the process is defined incrementally through $X_{n+1} = X_n + Z_{n+1}$. ``` P = Bernoulli(0.5)**inf Z = RV(P) X = RandomProcess(P, TimeIndex(fs=1)) X[0] = 0 for n in range(100): X[n+1] = X[n] + Z[n+1] ``` The above code defines a random process incrementally. Once a RandomProcess is defined, it can be manipulated the same way, regardless of how it is defined. 
```
X.sim(1).plot(alpha = 1)

X.sim(100).plot(tmin = 0, tmax = 5)

X[5].sim(10000).plot()

(X[5] & X[10]).sim(10000).plot("tile")

(X[10] | (X[5] == 3)).sim(10000).plot()

(X[5] | (X[10] == 4)).sim(10000).plot()
```

< [Conditioning](conditioning.html) | [Contents](index.html) | [Markov processes](mc.html) >
# Mosaic

```
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import rc
import matplotlib.font_manager

rc('font', **{'family': 'serif', 'serif': ['Computer Modern Roman'], 'size': 13})
rc('text', usetex=True)

import pandas as pd
import numpy as np

from statistics import load


def plot(ax, frame, cell_type, legend=False):
    if 'Spatially' in cell_type:
        opps = load.spatial(frame, cell_type.lower())
    else:
        opps = load.spectral(frame, cell_type.lower())

    retina2 = opps[opps['layer'] == 'retina_relu2']
    ventral0 = opps[opps['layer'] == 'ventral_relu0']
    ventral1 = opps[opps['layer'] == 'ventral_relu1']

    ax.plot(retina2['n_bn'], retina2['mean_rel_amount'], label='Retina 2', linestyle=':')
    ax.fill_between(
        retina2['n_bn'],
        retina2['mean_rel_amount'] + retina2['std_rel_amount'],
        retina2['mean_rel_amount'] - retina2['std_rel_amount'],
        alpha=0.1
    )

    ax.plot(ventral0['n_bn'], ventral0['mean_rel_amount'], label='Ventral 1', linestyle='--')
    ax.fill_between(
        ventral0['n_bn'],
        ventral0['mean_rel_amount'] + ventral0['std_rel_amount'],
        ventral0['mean_rel_amount'] - ventral0['std_rel_amount'],
        alpha=0.1
    )

    ax.plot(ventral1['n_bn'], ventral1['mean_rel_amount'], label='Ventral 2', linestyle='-.')
    ax.fill_between(
        ventral1['n_bn'],
        ventral1['mean_rel_amount'] + ventral1['std_rel_amount'],
        ventral1['mean_rel_amount'] - ventral1['std_rel_amount'],
        alpha=0.1
    )

    if legend:
        ax.legend(frameon=False)
    if 'Spatially' in cell_type:
        ax.set_title(cell_type.replace('Spatially ', ''), pad=25)
    if cell_type == 'Spatially Opponent':
        ax.set_ylabel('Spatially', labelpad=25, fontsize='large')
    if cell_type == 'Spectrally Opponent':
        ax.set_ylabel('Spectrally', labelpad=25, fontsize='large')

    ax.set_xlim(1, 32)
    ax.set_ylim(0, 1)

    plt.draw()
    labels = ax.get_yticklabels()
    if len(labels) > 0:
        labels[-1] = ""
    ax.set_yticklabels(labels)


cell_types = ['Opponent', 'Non-opponent', 'Unresponsive']

fig, axs = plt.subplots(2, 3, sharex='col', sharey='row', gridspec_kw={'hspace': 0, 'wspace': 0})
fig.set_size_inches(9, 5)
fig.add_subplot(111, frameon=False)
plt.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
plt.grid(False)
plt.xlabel('Bottleneck Size')
plt.ylabel('Percentage')

frames = ['logs/spatial-mos.pd', 'logs/devalois-mos.pd']
for i, opp_type in enumerate(['Spatially', 'Spectrally']):
    frame = pd.read_pickle(frames[i])
    for c, cell_type in enumerate(cell_types):
        plot(axs[i, c], frame, f'{opp_type} {cell_type}',
             opp_type == 'Spatially' and cell_type == 'Unresponsive')

plt.savefig('figures/opponency_mos.pdf', bbox_inches='tight')
```
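The shared-label trick used above — an invisible full-figure axes whose only job is to hold the common x and y labels for a grid of subplots — is worth isolating. A minimal sketch (labels are placeholders):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt

fig, axs = plt.subplots(2, 3, sharex='col', sharey='row',
                        gridspec_kw={'hspace': 0, 'wspace': 0})

# An invisible axes spanning the whole figure: it draws no frame and no
# tick labels, but its xlabel/ylabel serve as labels shared by all panels.
big_ax = fig.add_subplot(111, frameon=False)
big_ax.tick_params(labelcolor='none', top=False, bottom=False,
                   left=False, right=False)
big_ax.grid(False)
big_ax.set_xlabel('Shared x label')
big_ax.set_ylabel('Shared y label')

fig.savefig('shared_labels.png', bbox_inches='tight')
```

Because the big axes is frameless and its ticks are hidden, it does not visually interfere with the six real panels.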
# Autoencoders

## Imports

```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf

from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, losses
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.models import Model
```

## Load Dataset

```
(x_train, _), (x_test, _) = fashion_mnist.load_data()

x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

print(x_train)
print(x_test)

x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]

print(x_train.shape)
```

### Adding noise to images

```
noise_factor = 0.2
x_train_noisy = x_train + noise_factor * tf.random.normal(shape=x_train.shape)
x_test_noisy = x_test + noise_factor * tf.random.normal(shape=x_test.shape)

x_train_noisy = tf.clip_by_value(x_train_noisy, clip_value_min=0., clip_value_max=1.)
x_test_noisy = tf.clip_by_value(x_test_noisy, clip_value_min=0., clip_value_max=1.)

n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
    ax = plt.subplot(1, n, i+1)
    plt.title("Original + Noise")
    plt.imshow(tf.squeeze(x_test_noisy[i]))
    plt.gray()
plt.show()
```

## Model

```
class Denoise(Model):
    def __init__(self):
        super(Denoise, self).__init__()
        self.encoder = tf.keras.Sequential([
            layers.Input(shape=(28, 28, 1)),
            layers.Conv2D(16, (3,3), activation='relu', padding='same', strides=2),
            layers.Conv2D(8, (3,3), activation='relu', padding='same', strides=2)
        ])

        self.decoder = tf.keras.Sequential([
            layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
            layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
            layers.Conv2D(1, kernel_size=(3,3), activation='sigmoid', padding='same')
        ])

    def call(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded

autoencoder = Denoise()

autoencoder.encoder.summary()
```

## Optimizer

```
autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
```

## Train

```
autoencoder.fit(x_train_noisy, x_train,
                epochs=10,
                shuffle=True,
                validation_data=(x_test_noisy, x_test))

autoencoder.decoder.summary()
```

## Testing

```
encoded_imgs = autoencoder.encoder(x_test).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()

n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    ax = plt.subplot(2, n, i+1)
    plt.title('Original + Noise')
    plt.imshow(tf.squeeze(x_test_noisy[i]))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    bx = plt.subplot(2, n, i+n+1)
    plt.title("Reconstructed")
    plt.imshow(tf.squeeze(decoded_imgs[i]))
    plt.gray()
    bx.get_xaxis().set_visible(False)
    bx.get_yaxis().set_visible(False)
plt.show()
```

credits: [Intro to Autoencoders](https://www.tensorflow.org/tutorials/generative/autoencoder#:~:text=An%20autoencoder%20is%20a%20special,representation%20back%20to%20an%20image.)
# Homework and bake-off: Sentiment analysis

```
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2021"
```

## Contents

1. [Overview](#Overview)
1. [Methodological note](#Methodological-note)
1. [Set-up](#Set-up)
1. [Train set](#Train-set)
1. [Dev sets](#Dev-sets)
1. [A softmax baseline](#A-softmax-baseline)
1. [RNNClassifier wrapper](#RNNClassifier-wrapper)
1. [Error analysis](#Error-analysis)
1. [Homework questions](#Homework-questions)
    1. [Token-level differences [1 point]](#Token-level-differences-[1-point])
    1. [Training on some of the bakeoff data [1 point]](#Training-on-some-of-the-bakeoff-data-[1-point])
    1. [A more powerful vector-averaging baseline [2 points]](#A-more-powerful-vector-averaging-baseline-[2-points])
    1. [BERT encoding [2 points]](#BERT-encoding-[2-points])
    1. [Your original system [3 points]](#Your-original-system-[3-points])
1. [Bakeoff [1 point]](#Bakeoff-[1-point])

## Overview

This homework and associated bakeoff are devoted to supervised sentiment analysis using the ternary (positive/negative/neutral) version of the Stanford Sentiment Treebank (SST-3) as well as a new dev/test dataset drawn from restaurant reviews. Our goal in introducing the new dataset is to push you to create a system that performs well in both the movie and restaurant domains.

The homework questions ask you to implement some baseline systems, and the bakeoff challenge is to define a system that does well at both the SST-3 test set and the new restaurant test set. Both are ternary tasks, and our central bakeoff score is the mean of the macro-F1 scores for the two datasets. This assigns equal weight to all classes and datasets regardless of size.

The SST-3 test set will be used for the bakeoff evaluation. This dataset is already publicly distributed, so we are counting on people not to cheat by developing their models on the test set.
You must do all your development without using the test set at all, and then evaluate exactly once on the test set and turn in the results, with no further system tuning or additional runs. __Much of the scientific integrity of our field depends on people adhering to this honor code__.

One of our goals for this homework and bakeoff is to encourage you to engage in __the basic development cycle for supervised models__, in which you

1. Design a new system. We recommend starting with something simple.
1. Use `sst.experiment` to evaluate your system, using random train/test splits initially.
1. If you have time, compare your system with others using `sst.compare_models` or `utils.mcnemar`. (For discussion, see [this notebook section](sst_02_hand_built_features.ipynb#Statistical-comparison-of-classifier-models).)
1. Return to step 1, or stop the cycle and conduct a more rigorous evaluation with hyperparameter tuning and assessment on the `dev` set.

[Error analysis](#Error-analysis) is one of the most important methods for steadily improving a system, as it facilitates a kind of human-powered hill-climbing on your ultimate objective. Often, it takes a careful human analyst just a few examples to spot a major pattern that can lead to a beneficial change to the feature representations.

## Methodological note

You don't have to use the experimental framework defined below (based on `sst`). The only constraint we need to place on your system is that it must have a `predict_one` method that can map directly from an example text to a prediction, and it must be able to make predictions without having any information beyond the text. (For example, it can't depend on knowing which task the text comes from.) See [the bakeoff section below](#Bakeoff-[1-point]) for examples of functions that conform to this specification.

## Set-up

See [the first notebook in this unit](sst_01_overview.ipynb#Set-up) for set-up instructions.
```
from collections import Counter
import numpy as np
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
import torch.nn as nn
from torch_rnn_classifier import TorchRNNClassifier
from torch_tree_nn import TorchTreeNN
import sst
import utils

SST_HOME = os.path.join('data', 'sentiment')
```

## Train set

Our primary train set is the SST-3 train set:

```
sst_train = sst.train_reader(SST_HOME)

sst_train.shape[0]
```

This is the train set we will use for all the regular homework questions. You are welcome to bring in new datasets for your original system. You are also free to add `include_subtrees=True`. This is very likely to lead to better systems, but it substantially increases the overall size of the dataset (from 8,544 examples to 159,274), which will in turn substantially increase the time it takes to run experiments. See [this notebook](sst_01_overview.ipynb) for additional details of this dataset.

## Dev sets

We have two development sets. SST3-dev consists of sentences from movie reviews, just like SST-3 train:

```
sst_dev = sst.dev_reader(SST_HOME)
```

Our new bakeoff dev set consists of sentences from restaurant reviews:

```
bakeoff_dev = sst.bakeoff_dev_reader(SST_HOME)

bakeoff_dev.sample(3, random_state=1).to_dict(orient='records')
```

Here is the label distribution:

```
bakeoff_dev.label.value_counts()
```

The label distribution for the corresponding test set is similar to this.
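Since the bakeoff score is the mean of the macro-F1 values over the two datasets, a small helper makes that number easy to track during development. This is a sketch using scikit-learn; the function name and toy data are illustrative, not part of the course codebase:

```python
from sklearn.metrics import f1_score

def mean_macro_f1(gold_pred_pairs):
    """Average the macro-F1 scores over several (y_gold, y_pred) pairs,
    weighting each dataset equally regardless of its size."""
    scores = [f1_score(y_gold, y_pred, average='macro')
              for y_gold, y_pred in gold_pred_pairs]
    return sum(scores) / len(scores)

# Toy illustration with two tiny "datasets":
pairs = [
    (['positive', 'negative'], ['positive', 'negative']),  # perfect: macro-F1 = 1.0
    (['neutral', 'neutral'], ['neutral', 'positive']),     # one mistake
]
print(round(mean_macro_f1(pairs), 3))  # → 0.667
```

During development you would feed it the gold labels and predictions for SST3-dev and the bakeoff dev set in place of the toy pairs.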
## A softmax baseline

This example is here mainly as a reminder of how to use our experimental framework with linear models:

```
def unigrams_phi(text):
    return Counter(text.split())
```

Thin wrapper around `LogisticRegression` for the sake of `sst.experiment`:

```
def fit_softmax_classifier(X, y):
    mod = LogisticRegression(
        fit_intercept=True,
        solver='liblinear',
        multi_class='ovr')
    mod.fit(X, y)
    return mod
```

The experimental run with some notes:

```
softmax_experiment = sst.experiment(
    sst.train_reader(SST_HOME),   # Train on any data you like except SST-3 test!
    unigrams_phi,                 # Free to write your own!
    fit_softmax_classifier,       # Free to write your own!
    assess_dataframes=[sst_dev, bakeoff_dev])  # Free to change this during development!
```

`softmax_experiment` contains a lot of information that you can use for error analysis; see [this section below](#Error-analysis) for starter code.

## RNNClassifier wrapper

This section illustrates how to use `sst.experiment` with `TorchRNNClassifier`.

To featurize examples for an RNN, we can just get the words in order, letting the model take care of mapping them into an embedding space.

```
def rnn_phi(text):
    return text.split()
```

The model wrapper gets the vocabulary using `sst.get_vocab`. If you want to use pretrained word representations in here, then you can have `fit_rnn_classifier` build that space too; see [this notebook section for details](sst_03_neural_networks.ipynb#Pretrained-embeddings). See also [torch_model_base.py](torch_model_base.py) for details on the many optimization parameters that `TorchRNNClassifier` accepts.

```
def fit_rnn_classifier(X, y):
    sst_glove_vocab = utils.get_vocab(X, mincount=2)
    mod = TorchRNNClassifier(
        sst_glove_vocab,
        early_stopping=True)
    mod.fit(X, y)
    return mod

rnn_experiment = sst.experiment(
    sst.train_reader(SST_HOME),
    rnn_phi,
    fit_rnn_classifier,
    vectorize=False,  # For deep learning, use `vectorize=False`.
    assess_dataframes=[sst_dev, bakeoff_dev])
```

## Error analysis

This section begins to build an error-analysis framework using the dicts returned by `sst.experiment`. These have the following structure:

```
'model': trained model
'phi': the feature function used
'train_dataset':
    'X': feature matrix
    'y': list of labels
    'vectorizer': DictVectorizer,
    'raw_examples': list of raw inputs, before featurizing
'assess_datasets': list of datasets, each with the same structure as the value of 'train_dataset'
'predictions': list of lists of predictions on the assessment datasets
'metric': `score_func.__name__`, where `score_func` is an `sst.experiment` argument
'score': the `score_func` score on each of the assessment datasets
```

The following function just finds mistakes, and returns a `pd.DataFrame` for easy subsequent processing:

```
def find_errors(experiment):
    """Find mistaken predictions.

    Parameters
    ----------
    experiment : dict
        As returned by `sst.experiment`.

    Returns
    -------
    pd.DataFrame

    """
    dfs = []
    for i, dataset in enumerate(experiment['assess_datasets']):
        df = pd.DataFrame({
            'raw_examples': dataset['raw_examples'],
            'predicted': experiment['predictions'][i],
            'gold': dataset['y']})
        df['correct'] = df['predicted'] == df['gold']
        df['dataset'] = i
        dfs.append(df)
    return pd.concat(dfs)

softmax_analysis = find_errors(softmax_experiment)

rnn_analysis = find_errors(rnn_experiment)
```

Here we merge the softmax and RNN experiments into a single DataFrame:

```
analysis = softmax_analysis.merge(
    rnn_analysis, left_on='raw_examples', right_on='raw_examples')

analysis = analysis.drop('gold_y', axis=1).rename(columns={'gold_x': 'gold'})
```

The following code collects a specific subset of examples; small modifications to its structure will give you different interesting subsets:

```
# Examples where the softmax model is correct, the RNN is not,
# and the gold label is 'positive':
error_group = analysis[
    (analysis['predicted_x'] == analysis['gold'])
    &
    (analysis['predicted_y'] != analysis['gold'])
    &
    (analysis['gold'] == 'positive')
]

error_group.shape[0]

for ex in error_group['raw_examples'].sample(5, random_state=1):
    print("="*70)
    print(ex)
```

## Homework questions

Please embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.)

### Token-level differences [1 point]

We can begin to get a sense for how our two dev sets differ by considering the most frequent tokens from each. This question asks you to begin such analysis.

Your task: write a function `get_token_counts` that, given a `pd.DataFrame` in the format of our datasets, tokenizes the example sentences based on whitespace and creates a count distribution over all of the tokens. The function should return a `pd.Series` sorted by frequency; if you create a count dictionary `d`, then `pd.Series(d).sort_values(ascending=False)` will give you what you need.

```
def get_token_counts(df):
    pass
    ##### YOUR CODE HERE


def test_get_token_counts(func):
    df = pd.DataFrame([
        {'sentence': 'a a b'},
        {'sentence': 'a b a'},
        {'sentence': 'a a a b.'}])
    result = func(df)
    for token, expected in (('a', 7), ('b', 2), ('b.', 1)):
        actual = result.loc[token]
        assert actual == expected, \
            "For token {}, expected {}; got {}".format(
                token, expected, actual)

if 'IS_GRADESCOPE_ENV' not in os.environ:
    test_get_token_counts(get_token_counts)
```

As you develop your original system, you might review these results. The two dev sets have different vocabularies and different low-level encoding details that are sure to impact model performance, especially when one considers that the train set is like `sst_dev` in all these respects. For additional discussion, see [this notebook section](sst_01_overview.ipynb#Tokenization).

### Training on some of the bakeoff data [1 point]

We have so far presented the bakeoff dev set as purely for evaluation.
Since the train set consists entirely of SST-3 data, this makes the bakeoff split especially challenging. We might be able to reduce the challenge by adding some of the bakeoff dev set to the train set, keeping some of it for evaluation. The current question asks you to begin exploring the effects of such training.

Your task: write a function `run_mixed_training_experiment`. The function should:

1. Take as inputs (a) a model training wrapper like `fit_softmax_classifier` and (b) an integer `bakeoff_train_size` specifying the number of examples from `bakeoff_dev` that should be included in the train set.
1. Split `bakeoff_dev` so that the first `bakeoff_train_size` examples are in the train set and the rest are used for evaluation.
1. Use `sst.experiment` with the user-supplied model training wrapper, `unigram_phi` as defined above, and a train set that consists of SST-3 train and the train portion of `bakeoff_dev` as defined in step 2. The value of `assess_dataframes` should be a list consisting of the SST-3 dev set and the evaluation portion of `bakeoff_dev` as defined in step 2.
1. Return the return value of `sst.experiment`.

The function `test_run_mixed_training_experiment` will help you iterate to the required design.

```
def run_mixed_training_experiment(wrapper_func, bakeoff_train_size):
    pass
    ##### YOUR CODE HERE


def test_run_mixed_training_experiment(func):
    bakeoff_train_size = 1000
    experiment = func(fit_softmax_classifier, bakeoff_train_size)
    assess_size = len(experiment['assess_datasets'])
    assert len(experiment['assess_datasets']) == 2, \
        ("The evaluation should be done on two datasets: "
         "SST3 and part of the bakeoff dev set. "
         "You have {} datasets.".format(assess_size))
    bakeoff_test_size = bakeoff_dev.shape[0] - bakeoff_train_size
    expected_eval_examples = bakeoff_test_size + sst_dev.shape[0]
    eval_examples = sum(len(d['raw_examples'])
                        for d in experiment['assess_datasets'])
    assert expected_eval_examples == eval_examples, \
        "Expected {} evaluation examples; got {}".format(
            expected_eval_examples, eval_examples)

if 'IS_GRADESCOPE_ENV' not in os.environ:
    test_run_mixed_training_experiment(run_mixed_training_experiment)
```

### A more powerful vector-averaging baseline [2 points]

In [Distributed representations as features](sst_03_neural_networks.ipynb#Distributed-representations-as-features), we looked at a baseline for the ternary SST-3 problem in which each example is modeled as the mean of its GloVe representations. A `LogisticRegression` model was used for prediction. A neural network might do better with these representations, since there might be complex relationships between the input feature dimensions that a linear classifier can't learn. To address this question, we want to get set up to run the experiment with a shallow neural classifier.

Your task: write and submit a model wrapper function around `TorchShallowNeuralClassifier`. This function should implement hyperparameter search according to this specification:

* Set `early_stopping=True` for all experiments.
* Using 3-fold cross-validation, exhaustively explore this set of hyperparameter combinations:
  * The hidden dimensionality at 50, 100, and 200.
  * The hidden activation function as `nn.Tanh()` and `nn.ReLU()`.
* For all other parameters to `TorchShallowNeuralClassifier`, use the defaults.

See [this notebook section](sst_02_hand_built_features.ipynb#Hyperparameter-search) for examples.

You are not required to run a full evaluation with this function using `sst.experiment`, but we assume you will want to. We're not evaluating the quality of your model.
(We've specified the protocols completely, but there will still be variation in the results.) However, the primary goal of this question is to get you thinking more about this strong baseline feature representation scheme for SST-3, so we're sort of hoping you feel compelled to try out variations on your own.

```
from torch_shallow_neural_classifier import TorchShallowNeuralClassifier

def fit_shallow_neural_classifier_with_hyperparameter_search(X, y):
    pass
    ##### YOUR CODE HERE
```

### BERT encoding [2 points]

We might hypothesize that encoding our examples with BERT will yield improvements over the GloVe averaging method explored in the previous question, since BERT implements a much more complex and data-driven function for this kind of combination. This question asks you to begin exploring this general hypothesis.

Your task: write a function `hf_cls_phi` that uses Hugging Face functionality to encode individual examples with BERT and returns the final output representation above the [CLS] token.

You are not required to evaluate this feature function, but it is easy to do so with `sst.experiment` and `vectorize=False` (since your feature function directly encodes every example as a vector). Your code should also be a natural basis for even more powerful approaches – for example, it might be even better to pool all the output states rather than using just the first output state. Another option is [fine-tuning](finetuning.ipynb).

```
from transformers import BertModel, BertTokenizer
import vsm

# Instantiate a Bert model and tokenizer based on `bert_weights_name`:

bert_weights_name = 'bert-base-uncased'

##### YOUR CODE HERE


def hf_cls_phi(text):
    # Get the ids. `vsm.hf_encode` will help; be sure to
    # set `add_special_tokens=True`.

    ##### YOUR CODE HERE

    # Get the BERT representations. `vsm.hf_represent` will help:

    ##### YOUR CODE HERE

    # Index into `reps` to get the representation above [CLS].
    # The shape of `reps` should be (1, n, 768), where n is the
    # number of tokens. You need the 0th element of the 2nd dim:

    ##### YOUR CODE HERE

    # These conversions should ensure that you can work with the
    # representations flexibly. Feel free to change the variable
    # name:
    return cls_rep.cpu().numpy()


def test_hf_cls_phi(func):
    rep = func("Just testing!")
    expected_shape = (768,)
    result_shape = rep.shape
    assert rep.shape == (768,), \
        "Expected shape {}; got {}".format(
            expected_shape, result_shape)
    # String conversion to avoid precision errors:
    expected_first_val = str(0.1709)
    result_first_val = "{0:.04f}".format(rep[0])
    assert expected_first_val == result_first_val, \
        ("Unexpected representation values. Expected the "
         "first value to be {}; got {}".format(
             expected_first_val, result_first_val))

if 'IS_GRADESCOPE_ENV' not in os.environ:
    test_hf_cls_phi(hf_cls_phi)
```

Note: encoding all of SST-3 train (no subtrees) takes about 11 minutes on my 2015 iMac, CPU only (32GB).

### Your original system [3 points]

Your task is to develop an original model for the SST-3 problem and our new bakeoff dataset. There are many options. If you spend more than a few hours on this homework problem, you should consider letting it grow into your final project!

Here are some relatively manageable ideas that you might try:

1. We didn't systematically evaluate the `bidirectional` option to the `TorchRNNClassifier`. Similarly, that model could be tweaked to allow multiple LSTM layers (at present there is only one), and you could try adding layers to the classifier portion of the model as well.

1. We've already glimpsed the power of rich initial word representations, and later in the course we'll see that smart initialization usually leads to a performance gain in NLP, so you could perhaps achieve a winning entry with a simple model that starts in a great place.

1. Our [practical introduction to contextual word representations](finetuning.ipynb) covers pretrained representations and interfaces that are likely to boost the performance of any system.

We want to emphasize that this needs to be an __original__ system. It doesn't suffice to download code from the Web, retrain, and submit. You can build on others' code, but you have to do something new and meaningful with it. See the course website for additional guidance on how original systems will be evaluated.

In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development (your best average of macro-F1 scores), just to help us understand how systems performed overall.

```
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
#   1) Textual description of your system.
#   2) The code for your original system.
#   3) The score achieved by your system in place of MY_NUMBER.
#        With no other changes to that line.
#        You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS

# NOTE: MODULES, CODE AND DATASETS REQUIRED FOR YOUR ORIGINAL SYSTEM
# SHOULD BE ADDED BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.

# START COMMENT: Enter your system description in this cell.
# My peak score was: MY_NUMBER

if 'IS_GRADESCOPE_ENV' not in os.environ:
    pass

# STOP COMMENT: Please do not remove this comment.
```

## Bakeoff [1 point]

As we said above, the bakeoff evaluation data is the official SST test-set release and a new test set derived from the same sources and labeling methods as for `bakeoff_dev`.
For this bakeoff, you'll evaluate your original system from the above homework problem on these test sets. Our metric will be the mean of the macro-F1 values, which weights both datasets equally despite their differing sizes.

The central requirement for your system is that you have defined a `predict_one` method for it that maps a text (str) directly to a label prediction – one of 'positive', 'negative', 'neutral'.

If you used `sst.experiment` with `vectorize=True`, then the following function (for `softmax_experiment`) will be easy to adapt – you probably just need to change the variable `softmax_experiment` to the variable for your experiment output.

```
def predict_one_softmax(text):
    # Singleton list of feature dicts:
    feats = [softmax_experiment['phi'](text)]
    # Vectorize to get a feature matrix:
    X = softmax_experiment['train_dataset']['vectorizer'].transform(feats)
    # Standard sklearn `predict` step:
    preds = softmax_experiment['model'].predict(X)
    # Be sure to return the only member of the predictions,
    # rather than the singleton list:
    return preds[0]
```

If you used an RNN like the one we demoed above, then featurization is a bit more straightforward:

```
def predict_one_rnn(text):
    # Singleton list of tokenized texts:
    feats = [rnn_experiment['phi'](text)]
    # Standard `predict` step on a list of lists of str:
    preds = rnn_experiment['model'].predict(feats)
    # Be sure to return the only member of the predictions,
    # rather than the singleton list:
    return preds[0]
```

The following function is used to create the bakeoff submission file. Its arguments are your `predict_one` function and an output filename (str).
```
def create_bakeoff_submission(
        predict_one_func,
        output_filename='cs224u-sentiment-bakeoff-entry.csv'):
    bakeoff_test = sst.bakeoff_test_reader(SST_HOME)
    sst_test = sst.test_reader(SST_HOME)
    bakeoff_test['dataset'] = 'bakeoff'
    sst_test['dataset'] = 'sst3'
    df = pd.concat((bakeoff_test, sst_test))
    df['prediction'] = df['sentence'].apply(predict_one_func)
    df.to_csv(output_filename, index=None)
```

Thus, for example, the following will create a bake-off entry based on `predict_one_softmax`:

```
create_bakeoff_submission(predict_one_softmax)
```

This creates a file `cs224u-sentiment-bakeoff-entry.csv` in the current directory. That file should be uploaded as-is. Please do not change its name.

Only one upload per team is permitted, and you should do no tuning of your system based on what you see in our bakeoff prediction file – you should not study that file in any way, beyond perhaps checking that it contains what you expected it to contain. The upload function will do some additional checking to ensure that your file is well-formed.

People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points. Late entries will be accepted, but they cannot earn the extra 0.5 points.
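Before creating the submission file, it can be worth sanity-checking that a `predict_one` function returns only well-formed labels. A minimal sketch — `demo_predict` is a hypothetical stand-in for your own system, not part of the course code:

```python
VALID_LABELS = {'positive', 'negative', 'neutral'}

def check_predict_one(predict_one_func, texts):
    """Return (text, prediction) pairs whose prediction is not a valid label."""
    return [(text, predict_one_func(text))
            for text in texts
            if predict_one_func(text) not in VALID_LABELS]

# Stand-in predictor for illustration only:
def demo_predict(text):
    return 'positive' if 'great' in text.lower() else 'neutral'

problems = check_predict_one(demo_predict, ['A great film.', 'It was fine.'])
assert problems == []  # an empty list means every prediction was well-formed
```

Running this on a handful of texts from both domains before the single allowed submission costs nothing and catches label-formatting mistakes early.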
# Semantic Vector Space

Construct a basic semantic vector set for disambiguating coordinate relations.

```
import collections
from datetime import datetime
from tools.langtools import PositionsTF
from tools.significance import apply_fishers, contingency_table
from tools.locations import data_locations
from cxbuilders import wordConstructions
from sklearn.metrics.pairwise import pairwise_distances
from scipy.stats import chi2_contingency
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
from tf.app import use
from tf.fabric import Fabric

# load custom BHSA data + heads
TF = Fabric(locations=data_locations.values())

load_features = ['g_cons_utf8', 'trailer_utf8', 'label',
                 'lex', 'role', 'rela', 'typ', 'function',
                 'language', 'pdp', 'gloss', 'vs', 'vt', 'nhead',
                 'head', 'mother', 'nu', 'prs', 'sem_set', 'ls',
                 'st', 'kind', 'top_assoc', 'number', 'obj_prep',
                 'embed', 'freq_lex', 'sp']

api = TF.load(' '.join(load_features))
F, E, T, L = api.F, api.E, api.T, api.L  # shortform TF methods
A = use('bhsa', api=api, silent=True)
A.displaySetup(condenseType='phrase', withNodes=True, extraFeatures='lex')
```

## Get Context Counts Around Window (bag of words)

For every lexeme found in a timephrase, count the other lexemes that occur within a window of 5 words around each occurrence of that lexeme in the Hebrew Bible. This allows us to construct an approximate semantic profile that can be compared between terms. A "bag of words" model means that we do not consider the position of a context word relative to the target word (i.e. "ngrams").

```
words = wordConstructions(A)
words.findall(2)


def get_window(word, model='bagofwords'):
    '''
    Build a contextual window, return context words.
    '''
    window = 5
    context = 'sentence'
    confeat = 'lex'
    P = PositionsTF(word, context, A).get
    fore = list(range(-window, 0))
    back = list(range(1, window+1))
    conwords = []
    for pos in (fore + back):
        cword = P(pos, confeat)
        if cword:
            if model == 'bagofwords':
                conwords.append(f'{cword}')
            elif model == 'ngram':
                conwords.append(f'{pos}.{cword}')
    return conwords


wordcons = collections.defaultdict(lambda: collections.Counter())
timelexs = set()

for ph in F.otype.s('timephrase'):
    for w in L.d(ph, 'word'):
        cx = words.findall(w)[0]
        if cx.name == 'cont':
            timelexs.add(L.u(w, 'lex')[0])

timewords = set(
    w for lex in timelexs
        for w in L.d(lex, 'word')
)

print(f'{len(timewords)} timewords ready for analysis...')

for w in timewords:
    context = get_window(w)
    wordcons[F.lex.v(w)].update(context)

wordcons = pd.DataFrame(wordcons).fillna(0)

print(f'{wordcons.shape[1]} words analyzed...')
print(f'\t{wordcons.shape[0]} word contexts analyzed...')

wordcons.head()

wordcons.shape[0] * wordcons.shape[1]

wordcons['CNH/'].sort_values(ascending=False).head(10)
```

## Measure Target Word / Context Associations

```
# contingency table
ct = contingency_table(wordcons)
```

### Apply ΔP

We need an efficient (i.e. simple) normalization method for such a large dataset. ΔP is one such measure, and it incorporates contingency information [(Gries 2008)](https://www.researchgate.net/publication/233650934_Dispersions_and_adjusted_frequencies_in_corpora_further_explorations).

```
a = wordcons
b = ct['b']
c = ct['c']
d = ct['d']

deltap = (a/(a+b)) - (c/(c+d)).fillna(0)
```

## Calculate Cosine Distance

```
distances_raw = pairwise_distances(np.nan_to_num(deltap.T.values), metric='cosine')
dist = pd.DataFrame(distances_raw, columns=wordcons.columns, index=wordcons.columns)
```

## Testing Efficacy

We want to use semantic vectors to disambiguate coordinate relations when there is more than one candidate to connect a target to.
### Hypothesis:

Candidates for coordinate pairs can be distinguished by selecting the candidate with the shortest distance in semantic space from the target word.

```
def show_dist(target, compares):
    """Return candidates in order of distance."""
    return sorted(
        (dist[target][comp], comp)
        for comp in compares
    )
```

### K>B: with XLH or JWM?

```
A.pretty(777703)
show_dist('K>B/', ('XLH[', 'JWM/'))
```

Success. The test shows that XLH is more semantically similar.

### <RPL: <NN or JWM?

```
A.pretty(817713)
show_dist('<RPL/', ('JWM/', '<NN/'))
```

Success. <NN/ is correctly selected as more semantically similar.

### >PLH/: LJLH or >JCWN?

```
A.pretty(862564)
show_dist('>PLH/', ('LJLH/', '>JCWN/'))
```

Success. LJLH is most similar semantically.

### MRWD: <NJH or JWM?

```
A.pretty(872677)
show_dist('MRWD/', ('<NJ=/', 'JWM/'))
```

Success.

### >M: >B or MWT?

```
A.pretty(874237)
show_dist('>M/', ('>B/', 'MWT/'))
```

Success.

# Export Vector Resource

```
import pickle

dist_dict = dist.to_dict()

with open('semvector.pickle', 'wb') as outfile:
    pickle.dump(dist_dict, outfile)
```
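The exported pickle can be reloaded as a plain nested dict of distances and queried the same way `show_dist` does. A minimal round-trip sketch, with a made-up distance table standing in for the real `dist_dict`:

```python
import pickle

# stand-in for the exported distance table (made-up values)
dist_dict = {'K>B/': {'XLH[': 0.31, 'JWM/': 0.74}}

with open('semvector.pickle', 'wb') as outfile:
    pickle.dump(dist_dict, outfile)

with open('semvector.pickle', 'rb') as infile:
    loaded = pickle.load(infile)

# pick the candidate with the smallest cosine distance to the target
best = min(loaded['K>B/'], key=loaded['K>B/'].get)
print(best)  # XLH[ (smallest distance wins)
```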
# Implementing VGG16 in TensorFlow

## Import the required libraries

```
import inspect
import os

import numpy as np
import tensorflow as tf
```

## Define the convolution layer

```
'''Convolution op wrapper, use RELU activation after convolution
Args:
    layer_name: e.g. conv1, pool1...
    x: input tensor, [batch_size, height, width, channels]
    out_channels: number of output channels (or convolutional kernels)
    kernel_size: the size of convolutional kernel, VGG paper used: [3,3]
    stride: A list of ints. 1-D of length 4. VGG paper used: [1, 1, 1, 1]
    is_pretrain: if load pretrained parameters, freeze all conv layers.
        Depending on different situations, you can just set part of conv layers to be freezed.
        The parameters of freezed layers will not change when training.
Returns:
    4D tensor
'''
def conv_layer(layer_name, x, out_channels, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=True):
    in_channels = x.get_shape()[-1]
    with tf.variable_scope(layer_name):
        w = tf.get_variable(name='weights',
                            trainable=is_pretrain,
                            shape=[kernel_size[0], kernel_size[1], in_channels, out_channels],
                            initializer=tf.contrib.layers.xavier_initializer())  # default is uniform distribution initialization
        b = tf.get_variable(name='biases',
                            trainable=is_pretrain,
                            shape=[out_channels],
                            initializer=tf.constant_initializer(0.0))
        x = tf.nn.conv2d(x, w, stride, padding='SAME', name='conv')
        x = tf.nn.bias_add(x, b, name='bias_add')
        x = tf.nn.relu(x, name='relu')
    return x
```

## Define the pooling layer

```
'''Pooling op
Args:
    x: input tensor
    kernel: pooling kernel, VGG paper used [1,2,2,1], the size of kernel is 2X2
    stride: stride size, VGG paper used [1,2,2,1]
    padding:
    is_max_pool: boolean
        if True: use max pooling
        else: use avg pooling
'''
def pool(layer_name, x, kernel=[1,2,2,1], stride=[1,2,2,1], is_max_pool=True):
    if is_max_pool:
        x = tf.nn.max_pool(x, kernel, strides=stride, padding='SAME', name=layer_name)
    else:
        x = tf.nn.avg_pool(x, kernel, strides=stride, padding='SAME', name=layer_name)
    return x
```

## Define the fully connected layer

```
'''Wrapper for fully connected layers with RELU activation as default
Args:
    layer_name: e.g. 'FC1', 'FC2'
    x: input feature map
    out_nodes: number of neurons for current FC layer
'''
def fc_layer(layer_name, x, out_nodes, keep_prob=0.8):
    shape = x.get_shape()
    # handle inputs that were not flattened beforehand
    if len(shape) == 4:
        size = shape[1].value * shape[2].value * shape[3].value
    else:
        size = shape[-1].value

    with tf.variable_scope(layer_name):
        w = tf.get_variable('weights',
                            shape=[size, out_nodes],
                            initializer=tf.contrib.layers.xavier_initializer())
        b = tf.get_variable('biases',
                            shape=[out_nodes],
                            initializer=tf.constant_initializer(0.0))
        flat_x = tf.reshape(x, [-1, size])  # flatten into 1D
        x = tf.nn.bias_add(tf.matmul(flat_x, w), b)
        x = tf.nn.relu(x)
        x = tf.nn.dropout(x, keep_prob)
    return x
```

## Define the VGG16 network

```
def vgg16_net(x, n_classes, is_pretrain=True):
    with tf.name_scope('VGG16'):
        x = conv_layer('conv1_1', x, 64, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        x = conv_layer('conv1_2', x, 64, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        with tf.name_scope('pool1'):
            x = pool('pool1', x, kernel=[1,2,2,1], stride=[1,2,2,1], is_max_pool=True)

        x = conv_layer('conv2_1', x, 128, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        x = conv_layer('conv2_2', x, 128, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        with tf.name_scope('pool2'):
            x = pool('pool2', x, kernel=[1,2,2,1], stride=[1,2,2,1], is_max_pool=True)

        x = conv_layer('conv3_1', x, 256, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        x = conv_layer('conv3_2', x, 256, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        x = conv_layer('conv3_3', x, 256, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        with tf.name_scope('pool3'):
            x = pool('pool3', x, kernel=[1,2,2,1], stride=[1,2,2,1], is_max_pool=True)

        x = conv_layer('conv4_1', x, 512, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        x = conv_layer('conv4_2', x, 512, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        x = conv_layer('conv4_3',
                       x, 512, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        with tf.name_scope('pool4'):
            x = pool('pool4', x, kernel=[1,2,2,1], stride=[1,2,2,1], is_max_pool=True)

        x = conv_layer('conv5_1', x, 512, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        x = conv_layer('conv5_2', x, 512, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        x = conv_layer('conv5_3', x, 512, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
        with tf.name_scope('pool5'):
            x = pool('pool5', x, kernel=[1,2,2,1], stride=[1,2,2,1], is_max_pool=True)

        x = fc_layer('fc6', x, out_nodes=4096)
        assert x.get_shape().as_list()[1:] == [4096]
        x = fc_layer('fc7', x, out_nodes=4096)
        fc8 = fc_layer('fc8', x, out_nodes=n_classes)
        # softmax = tf.nn.softmax(fc8)
        return fc8  # return the class logits, not the fc7 output
```

# Define the loss function

Cross-entropy is used to compute the loss.

```
'''Compute loss
Args:
    logits: logits tensor, [batch_size, n_classes]
    labels: one-hot labels
'''
def loss(logits, labels):
    with tf.name_scope('loss') as scope:
        cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels, name='cross-entropy')
        loss = tf.reduce_mean(cross_entropy, name='loss')
        tf.summary.scalar(scope+'/loss', loss)
    return loss
```

# Define the accuracy metric

```
'''
Evaluate the quality of the logits at predicting the label.
Args:
    logits: Logits tensor, float - [batch_size, NUM_CLASSES].
    labels: Labels tensor,
'''
def accuracy(logits, labels):
    with tf.name_scope('accuracy') as scope:
        correct = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
        correct = tf.cast(correct, tf.float32)
        accuracy = tf.reduce_mean(correct) * 100.0
        tf.summary.scalar(scope+'/accuracy', accuracy)
    return accuracy
```

# Define the optimization function

```
def optimize(loss, learning_rate, global_step):
    '''optimization, use Gradient Descent as default
    '''
    with tf.name_scope('optimizer'):
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
        #optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
        train_op = optimizer.minimize(loss, global_step=global_step)
    return train_op
```

# Define the model-loading function

```
def load_with_skip(data_path, session, skip_layer):
    data_dict = np.load(data_path, encoding='latin1').item()
    for key in data_dict:
        if key not in skip_layer:
            with tf.variable_scope(key, reuse=True):
                for subkey, data in zip(('weights', 'biases'), data_dict[key]):
                    session.run(tf.get_variable(subkey).assign(data))
```

# Define the training-image reader

```
def read_cifar10(data_dir, is_train, batch_size, shuffle):
    """Read CIFAR10
    Args:
        data_dir: the directory of CIFAR10
        is_train: boolean
        batch_size:
        shuffle:
    Returns:
        label: 1D tensor, tf.int32
        image: 4D tensor, [batch_size, height, width, 3], tf.float32
    """
    img_width = 32
    img_height = 32
    img_depth = 3
    label_bytes = 1
    image_bytes = img_width*img_height*img_depth

    with tf.name_scope('input'):
        if is_train:
            filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % ii)
                         for ii in np.arange(1, 6)]
        else:
            filenames = [os.path.join(data_dir, 'test_batch.bin')]

        filename_queue = tf.train.string_input_producer(filenames)
        reader = tf.FixedLengthRecordReader(label_bytes + image_bytes)
        key, value = reader.read(filename_queue)
        record_bytes = tf.decode_raw(value, tf.uint8)

        label = tf.slice(record_bytes, [0], [label_bytes])
        label = tf.cast(label, tf.int32)

        image_raw = tf.slice(record_bytes, [label_bytes], [image_bytes])
        image_raw = tf.reshape(image_raw, [img_depth, img_height,
                                           img_width])
        image = tf.transpose(image_raw, (1,2,0))  # convert from D/H/W to H/W/D
        image = tf.cast(image, tf.float32)

#        # data augmentation
#        image = tf.random_crop(image, [24, 24, 3])  # randomly crop the image size to 24 x 24
#        image = tf.image.random_flip_left_right(image)
#        image = tf.image.random_brightness(image, max_delta=63)
#        image = tf.image.random_contrast(image, lower=0.2, upper=1.8)

        image = tf.image.per_image_standardization(image)  # subtract off the mean and divide by the variance

        if shuffle:
            images, label_batch = tf.train.shuffle_batch([image, label],
                                                         batch_size=batch_size,
                                                         num_threads=64,
                                                         capacity=20000,
                                                         min_after_dequeue=3000)
        else:
            images, label_batch = tf.train.batch([image, label],
                                                 batch_size=batch_size,
                                                 num_threads=64,
                                                 capacity=2000)
        ## ONE-HOT
        n_classes = 10
        label_batch = tf.one_hot(label_batch, depth=n_classes)
        label_batch = tf.cast(label_batch, dtype=tf.int32)
        label_batch = tf.reshape(label_batch, [batch_size, n_classes])

    return images, label_batch
```

# Define the training function

```
IMG_W = 32
IMG_H = 32
N_CLASSES = 10
BATCH_SIZE = 32
learning_rate = 0.01
MAX_STEP = 10  # it took me about one hour to complete the training.
IS_PRETRAIN = False

image_size = 32  # input image size
# note: use the BATCH_SIZE constant defined above; `batch_size` is not defined yet
images = tf.Variable(tf.random_normal([BATCH_SIZE, image_size, image_size, 3],
                                      dtype=tf.float32,
                                      stddev=1e-1))
vgg16_net(images, N_CLASSES, IS_PRETRAIN)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

def train():
    pre_trained_weights = './/vgg16_pretrain//vgg16.npy'
    data_dir = './/data//cifar-10-batches-bin//'
    train_log_dir = './/logs//train//'
    val_log_dir = './/logs//val//'

    with tf.name_scope('input'):
        tra_image_batch, tra_label_batch = read_cifar10(data_dir=data_dir,
                                                        is_train=True,
                                                        batch_size=BATCH_SIZE,
                                                        shuffle=True)
        val_image_batch, val_label_batch = read_cifar10(data_dir=data_dir,
                                                        is_train=False,
                                                        batch_size=BATCH_SIZE,
                                                        shuffle=False)

    x = tf.placeholder(tf.float32, shape=[BATCH_SIZE, IMG_W, IMG_H, 3])
    y_ = tf.placeholder(tf.int16, shape=[BATCH_SIZE, N_CLASSES])

    logits = vgg16_net(x, N_CLASSES, IS_PRETRAIN)
    loss_1 = loss(logits, y_)
    accuracy_1 = accuracy(logits, y_)  # distinct name so the accuracy() function is not shadowed

    my_global_step = tf.Variable(0, name='global_step', trainable=False)
    train_op = optimize(loss_1, learning_rate, my_global_step)

    saver = tf.train.Saver(tf.global_variables())
    summary_op = tf.summary.merge_all()

    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        print(x.shape)
        print(y_.shape)
        if IS_PRETRAIN:
            load_with_skip(pre_trained_weights, sess, ['fc6','fc7','fc8'])

        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        tra_summary_writer = tf.summary.FileWriter(train_log_dir, sess.graph)
        val_summary_writer = tf.summary.FileWriter(val_log_dir, sess.graph)

        try:
            for step in np.arange(MAX_STEP):
                if coord.should_stop():
                    break

                tra_images, tra_labels = sess.run([tra_image_batch, tra_label_batch])
                _, tra_loss, tra_acc = sess.run([train_op, loss_1, accuracy_1],
                                                feed_dict={x: tra_images, y_: tra_labels})
                if step % 50 == 0 or (step + 1) == MAX_STEP:
                    print('Step: %d, loss: %.4f, accuracy: %.4f%%' % (step, tra_loss, tra_acc))
                    summary_str = sess.run(summary_op, feed_dict={x: tra_images, y_: tra_labels})
                    tra_summary_writer.add_summary(summary_str, step)

                if step % 200 == 0 or (step + 1) == MAX_STEP:
                    val_images, val_labels = sess.run([val_image_batch, val_label_batch])
                    val_loss, val_acc = sess.run([loss_1, accuracy_1],
                                                 feed_dict={x: val_images, y_: val_labels})
                    print('** Step %d, val loss = %.2f, val accuracy = %.2f%% **' % (step, val_loss, val_acc))
                    summary_str = sess.run(summary_op, feed_dict={x: val_images, y_: val_labels})
                    val_summary_writer.add_summary(summary_str, step)

                if step % 2000 == 0 or (step + 1) == MAX_STEP:
                    checkpoint_path = os.path.join(train_log_dir, 'model.ckpt')
                    saver.save(sess, checkpoint_path, global_step=step)

        except tf.errors.OutOfRangeError:
            print('Done training -- epoch limit reached')
        finally:
            coord.request_stop()
        coord.join(threads)

train()
```

## Using VGG16

```
import math
import time
from datetime import datetime

def time_tensorflow_run(session, target, feed, info_string):
    num_steps_burn_in = 10        # number of warm-up iterations
    total_duration = 0.0          # total elapsed time
    total_duration_squared = 0.0  # sum of squared durations, used to compute the variance
    for i in range(num_batches + num_steps_burn_in):
        start_time = time.time()
        _ = session.run(target, feed_dict=feed)
        duration = time.time() - start_time
        if i >= num_steps_burn_in:  # only count iterations after the warm-up
            if not i % 10:
                print('%s: step %d, duration = %.3f' %
                      (datetime.now(), i - num_steps_burn_in, duration))
            total_duration += duration
            total_duration_squared += duration * duration

    mn = total_duration / num_batches                    # mean time per batch
    vr = total_duration_squared / num_batches - mn * mn  # variance
    sd = math.sqrt(vr)                                   # standard deviation
    print('%s: %s across %d steps, %.3f +/- %.3f sec/batch' %
          (datetime.now(), info_string, num_batches, mn, sd))

def run_benchmark():
    with tf.Graph().as_default():
        '''Use an image size of 224 and tf.random_normal to build random
        224x224 images drawn from a normal distribution with stddev 0.1'''
        image_size = 224  # input image size
        images = tf.Variable(tf.random_normal([batch_size, image_size, image_size, 3],
                                              dtype=tf.float32,
                                              stddev=1e-1))
        # build the keep_prob placeholder
        keep_prob = tf.placeholder(tf.float32)
        # vgg16_net returns a single logits tensor, so derive the names used
        # below from it (1000 classes, as in the original VGG setup)
        fc8 = vgg16_net(images, 1000)
        prediction = tf.nn.softmax(fc8)
        p = tf.trainable_variables()
        init = tf.global_variables_initializer()
        sess = tf.Session()
        sess.run(init)
        # set keep_prob to 1.0 and use time_tensorflow_run to benchmark the forward pass
        time_tensorflow_run(sess, prediction, {keep_prob: 1.0}, "Forward")

        # simulate the training process
        objective = tf.nn.l2_loss(fc8)     # define a loss
        grad = tf.gradients(objective, p)  # gradients of all model parameters w.r.t. the loss
        # benchmark the backward pass
        time_tensorflow_run(sess, grad, {keep_prob: 0.5}, "Forward-backward")

batch_size = 32
num_batches = 100
run_benchmark()
```

## Other Parameters

```
# Construct model
pred = conv_net(x, weights, biases, keep_prob)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.global_variables_initializer()
saver = tf.train.Saver()
```

https://blog.csdn.net/roguesir/article/details/77051250

https://blog.csdn.net/zhangwei15hh/article/details/78417789

https://blog.csdn.net/v1_vivian/article/details/77898652
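The `loss` and `accuracy` functions above reduce to a few lines of array math. A minimal NumPy sketch of the same two computations (made-up logits and one-hot labels; this mirrors `tf.nn.softmax_cross_entropy_with_logits` followed by `reduce_mean`, it is not the TF code itself):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Mean cross-entropy between one-hot labels and softmax(logits)."""
    z = logits - logits.max(axis=1, keepdims=True)  # shift for numerical stability
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(labels * log_softmax).sum(axis=1).mean()

def accuracy_pct(logits, labels):
    """Percentage of rows where argmax(logits) matches argmax(labels)."""
    return (logits.argmax(axis=1) == labels.argmax(axis=1)).mean() * 100.0

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 0.1, 3.0]])
labels = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

print(softmax_cross_entropy(logits, labels))
print(accuracy_pct(logits, labels))  # 50.0: first row correct, second wrong
```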
``` # conda/pip install pycircstat import sys import os import math import random import pickle import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from scipy import stats,io import pycircstat from scipy.ndimage import gaussian_filter1d ``` # Functions ``` sys.path.append('/Users/jasperhvdm/Dropbox (Attention Group)/Attention Group Team Folder/Jasper/lab_meeting_Feb/temp_dec') from decoding_functions import * from least_squares_fit_cos import * # simple function to plot a time course def tsplot(x,y,color='k',smooth=True,chance=0): """Plot line with chance hline""" #figure fig = plt.figure(figsize=(6,6)) ax = plt.subplot(1,1,1) #plot line ax.plot(x, y,color='k',linewidth=1.5) ax.plot(x, gaussian_filter1d(y,5),color='red',linewidth=2) ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.hlines(chance, min(x), max(x), color='gray', linestyle='dashed') return ax ``` # Load in the data of example subject ``` #searchlight neighbours electrode_SL = io.loadmat('/Users/jasperhvdm/Documents/DPhil/PROJECTS/EXP8_UpdProtec/data/Channel_selection/neighbours.mat') # load data path = '/Users/jasperhvdm/Dropbox (Attention Group)/Attention Group Team Folder/Jasper/lab_meeting_Feb/' with open (path + 'data_S13_bias_decoding_UpPro', 'rb') as fp: [stimulus, prev_stim, wm_load, cue, X, time, stimulus_nr] = pickle.load(fp) ``` # Run classification on MEG time series ``` # run decoding # data as trials x features x time # subselect trials X = X[(stimulus_nr == 1) | (stimulus_nr == 2), :, :] # stimuli nr_bins = 10 bins = np.arange(-math.pi-0.000001, math.pi, 2*math.pi/nr_bins) y = np.digitize(pycircstat.cdiff(stimulus, 0), bins) # check between -400 ms and 900 ms time_lim = [-.4, .9] X_all = X[:,:,(time >= time_lim[0]) & (time <= time_lim[1]) ] #select time points ( >-.2 and <1.0) time_ = time[(time >= time_lim[0]) & (time <= time_lim[1]) ] #adjust the time vector evidence = temporal_decoding(X_all, y, time_, n_bins = 
nr_bins, size_window = 30, n_folds = 10, classifier = 'LDA', use_pca = True, pca_components = .9, temporal_dynamics = True, demean='trial_window') # output_channel_evidence = np.zeros((306,34)) # subselect trials X = X[(stimulus_nr == 1) | (stimulus_nr == 2), :, :] # stimuli nr_bins = 10 bins = np.arange(-math.pi-0.000001, math.pi, 2*math.pi/nr_bins) y = np.digitize(pycircstat.cdiff(stimulus, 0), bins) # check between -400 ms and 900 ms time_lim = [.15, .32] X_all = X[:,:,(time >= time_lim[0]) & (time <= time_lim[1]) ] #select time points ( >-.2 and <1.0) time_ = time[(time >= time_lim[0]) & (time <= time_lim[1]) ] #adjust the time vector for ch in range(100,306): print(ch) X_chan = X_all[:,electrode_SL['neighb'][:,ch]==True,:] evidence = temporal_decoding(X_chan, y, time_, n_bins = nr_bins, size_window = 30, n_folds = 10, classifier = 'LDA', use_pca = True, pca_components = .95, temporal_dynamics = True, demean='window') # topoplot averaged over multiple time points or get topo for every tp. print('compute_evidence') evidence = cos_convolve(evidence) output_channel_evidence[ch,:] = evidence['cos_convolved'] output_channel_evidence.shape # Nr of trials and nr of subjects # with open('/Users/jasperhvdm/Documents/DPhil/Projects/EXP8_UpdProtec/scripts/MEG_topo_struct.pkl', 'rb') as f: # [epochs] = pickle.load(f) # evoked = epochs.average() evoked.data[:,0] = output_channel_evidence[:,32:33].mean(1) times = evoked.times evoked.plot_topomap(times[0:1], ch_type='grad', time_unit='s') plt.plot(time_,output_channel_evidence[0,:].T) plt.plot(time_,output_channel_evidence[1,:].T) ``` # Output data ``` tsplot(time_,evidence['accuracy'],chance=.1) ``` # Cos convolved evidence ``` # this function will compute the (avg.) 
cos-convolved evidence per trial # and adds this to the dictionary evidence = cos_convolve(evidence) tsplot(time_,evidence['cos_convolved'],chance=0) plt.hist(evidence['single_trial_cosine_fit'][:,(time_>=.2) & (time_<=.5)].mean(1),50) plt.title('mean cos conv evidence') ``` # Fit cosine tuning curve to data ``` def decoding_evidence_shift(evidence, y, distractor, nbin = 100, min_lim = 25, max_lim = 50): """ shifts in tuning curve based on target and distractor distance. """ if isinstance(evidence, dict): tuning = evidence['single_trial_ev_centered'] else: tuning = evidence tb = np.zeros((nbin, tuning.shape[1], tuning.shape[2])) #get the sizes y_diff = pycircstat.cdiff(distractor, y) bins = np.arange(-math.pi, math.pi, 2*math.pi/nbin) y_diff_binned = np.digitize(y_diff, bins) for i in range(1, nbin+1): tb[i-1, :, :] = evidence['single_trial_ev_centered'][y_diff_binned == i, :, :].mean(0) tuning_binned = np.zeros((2, tuning.shape[1], tuning.shape[2])) x_bins = np.arange(-math.pi, math.pi, 2*math.pi/nbin) + 1/nbin*math.pi tuning_binned[0, :, :] = tb[(x_bins >= -max_lim/90*np.pi) & (x_bins <= -min_lim/90*np.pi) ,: ,:].mean(0) tuning_binned[1, :, :] = tb[(x_bins <= max_lim/90*np.pi) & (x_bins >= min_lim/90*np.pi), :, :].mean(0) output_biasfit = least_squares_fit_cos(tuning_binned, 1) return output_biasfit, tuning_binned # min_lim = 25 max_lim = 50 # not selected orientations y_diff = pycircstat.cdiff(prev_stim, stimulus) plt.scatter(stimulus[y_diff < -max_lim/90*np.pi],prev_stim[y_diff < -max_lim/90*np.pi],color='k') plt.scatter(stimulus[y_diff > max_lim/90*np.pi],prev_stim[y_diff > max_lim/90*np.pi],color='k') plt.scatter(stimulus[(y_diff > -min_lim/90*np.pi) & (y_diff < min_lim/90*np.pi)],prev_stim[(y_diff > -min_lim/90*np.pi) & (y_diff < min_lim/90*np.pi)],color='k') #selected orientations plt.scatter(stimulus[(y_diff <= max_lim/90*np.pi) & (y_diff >= min_lim/90*np.pi)],prev_stim[(y_diff <= max_lim/90*np.pi) & (y_diff >= min_lim/90*np.pi)],color='g') 
plt.scatter(stimulus[(y_diff >= -max_lim/90*np.pi) & (y_diff <= -min_lim/90*np.pi)],prev_stim[(y_diff >= -max_lim/90*np.pi) & (y_diff <= -min_lim/90*np.pi)],color='r') plt.xlabel('stimulus orientation') plt.ylabel('prev_stimulus orientation') output_bias,distractor_tuning = decoding_evidence_shift(evidence, stimulus, prev_stim, min_lim = 25, max_lim = 50) tsplot(time_,output_bias['phase']) ```
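The target–distractor selection above hinges on circular differences and `np.digitize` binning. A minimal NumPy sketch of the same wrapping-and-binning logic, with toy angles and a hand-rolled stand-in for `pycircstat.cdiff` (the `cdiff` implementation here is an assumption, chosen to wrap into (-π, π]):

```python
import numpy as np

def cdiff(a, b):
    """Signed circular difference wrapped into (-pi, pi]."""
    return np.angle(np.exp(1j * (np.asarray(a) - np.asarray(b))))

stimulus   = np.array([0.1,  3.0, -3.0])
distractor = np.array([0.2, -3.0,  3.0])

y_diff = cdiff(distractor, stimulus)  # wraps around +/- pi
nbin = 4
bins = np.arange(-np.pi, np.pi, 2 * np.pi / nbin)
y_binned = np.digitize(y_diff, bins)  # bin indices 1..nbin
print(y_binned)
```

Note how the second and third trials, whose raw differences are ±6 rad, wrap to ±0.28 rad and land in bins adjacent to zero rather than at the extremes.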
``` # default_exp convert ``` # The Converter > The internals for the lib2nbdev functionality ``` #hide from nbdev.showdoc import * #hide from fastcore.test import * #export import json from fastcore.basics import Path from fastcore.xtras import is_listy from fastcore.foundation import Config from fastcore.script import call_parse from fastprogress.fastprogress import progress_bar from nbdev.export import nbglob, export_names, _re_class_func_def, _re_obj_def from nbdev.sync import _split from lib2nbdev.generators import generate_settings, generate_ci, generate_doc_foundations, generate_setup ``` ## Foundational Helper Functions ``` #export def code_cell(code:str=None) -> str: """ Returns a Jupyter cell with potential `code` """ cell = { "cell_type": "code", "execution_count": None, "metadata": {}, "outputs": [], "source": [] } if is_listy(code): for i, c in enumerate(code): if i < len(code)-1: cell["source"].append(c+'\n') else: cell["source"].append(c) elif code: cell["source"].append(code) return cell ``` A very simplistic and foundational function, it simply returns a string representation of a Jupyter cell without any metadata and potentially some code. 
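A quick illustration of the source-joining rule `code_cell` applies to listy input, restated here as a tiny standalone helper (not the library function itself): every line except the last receives a trailing newline, matching how Jupyter stores cell sources.

```python
def join_source(lines):
    """Restatement of code_cell's list handling: newline on all but the last line."""
    return [l + '\n' for l in lines[:-1]] + [lines[-1]]

print(join_source(['import json', 'print(1)']))  # ['import json\n', 'print(1)']
```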
```
#hide
_default_cell = "{'cell_type': 'code', 'execution_count': None, 'metadata': {}, 'outputs': [], 'source': []}"
test_eq(_default_cell, str(code_cell()))

#export
def write_module_cell() -> str:
    """
    Writes a template `Markdown` cell for the title and description of a notebook
    """
    return {
        "cell_type": "markdown",
        "metadata": {},
        "source": [
            "# Default Title (change me)\n",
            "> Default description (change me)"
        ]
    }

#export
def init_nb(module_name:str) -> str:
    """
    Initializes a complete blank notebook based on `module_name`

    Also writes the first #default_exp cell and checks for a nested module (moduleA.moduleB)
    """
    if module_name[0] == '.': module_name = module_name.split('.')[1]
    if '.ipynb' in module_name: module_name = module_name.split('.ipynb')[0]
    return {"cells":[code_cell(f"# default_exp {module_name}"), write_module_cell()],
            "metadata":{
                "jupytext":{"split_at_heading":True},
                "kernelspec":{"display_name":"Python 3",
                              "language": "python",
                              "name": "python3"}
            },
            "nbformat":4,
            "nbformat_minor":4}

#hide
_initial_nb = '''{
   "cells": [
      {
         "cell_type": "code",
         "execution_count": null,
         "metadata": {},
         "outputs": [],
         "source": [
            "# default_exp testname"
         ]
      },
      {
         "cell_type": "markdown",
         "metadata": {},
         "source": [
            "# Default Title (change me)\\n",
            "> Default description (change me)"
         ]
      }
   ],
   "metadata": {
      "jupytext": {
         "split_at_heading": true
      },
      "kernelspec": {
         "display_name": "Python 3",
         "language": "python",
         "name": "python3"
      }
   },
   "nbformat": 4,
   "nbformat_minor": 4
}'''
test_eq(_initial_nb, json.dumps(init_nb("testname"), indent=3))

#export
def write_cell(code:str, is_public:bool=False) -> str:
    """
    Takes source `code`, adds an initial #export tag, and writes a Jupyter cell
    """
    if is_public is None:
        export = ''
    else:
        export = '#export' if is_public else '#exporti'
    source = [f"{export}"] + code.split("\n")
    return code_cell(source)
```

This function will write a cell given some `code` (which is a str).
`is_public` is there to determine if `#export` or `#exporti` should be used (a public or private function, class, or object). ``` #export def write_nb(cfg_path:str, cfg_name:str, splits:list, num:int, parent:str=None, private_list:list=[]) -> str: """ Writes a fully converted Jupyter Notebook based on `splits` and saves it in `Config`'s `nbs_path`. The notebook number is based on `num` `parent` denotes if the current notebook module is based on a parent module such as `moduleA.moduleB` `private_list` is a by-cell list of `True`/`False` for each block of code of whether it is private or public """ # Get filename fname = splits[0][0] if fname[0] == '.': fname = fname[1:] if parent is not None: fname = f'{parent}.{fname}' # Initialize and write notebook nb = init_nb(fname) for i, (_, code) in enumerate(splits): c = write_cell(code, private_list[i]) nb["cells"].append(c) # Figure out the notebook number if num < 10: fname = f'0{num}_{fname}' else: fname = f'{num}_{fname}' # Save notebook in `nbs_path` with open(f'{Config(cfg_path, cfg_name).path("nbs_path")/fname}', 'w+') as source_nb: source_nb.write(json.dumps(nb)) #exporti def _not_private(n): "Checks if a func is private or not, alternative to nbdev's" for t in n.split('.'): if (t.startswith('_') and not t.startswith('__')): return False return '\\' not in t and '^' not in t and t != 'else' ``` ## Converting Libraries ``` #export @call_parse def convert_lib(): """ Converts existing library to an nbdev one by autogenerating notebooks. 
Optional prerequisites: - Make a nbdev settings.ini file beforehand - Optionally you can add `# Cell` and `# Internal Cell` tags in the source files where you would like specific cells to be Run this command in the base of your repo **Can only be run once** """ print('Checking for a settings.ini...') cfg_path, cfg_name = '.', 'settings.ini' generate_settings() print('Gathering files...') files = nbglob(extension='.py', config_key='lib_path', recursive=True) if len(files) == 0: raise ValueError("No files were found, please ensure that `lib_path` is configured properly in `settings.ini`") print(f'{len(files)} modules found in the library') num_nbs = len(files) nb_path = Config(cfg_path, cfg_name).path('nbs_path') nb_path.mkdir(exist_ok=True) print(f'Writing notebooks to {nb_path}...') if nb_path.name == Config(cfg_path, cfg_name).lib_name: nb_path = Path('') slash = '' else: nb_path = Path(nb_path.name) slash = '/' for num, file in enumerate(progress_bar(files)): if (file.parent.name != Config(cfg_path, cfg_name).lib_name) and slash is not None: parent = file.parent.name else: parent = None fname = file.name.split('.py')[0] + '.ipynb' if fname[0] == '.': fname = fname[1:] # Initial string in the .py init_str = f"# AUTOGENERATED! DO NOT EDIT! 
File to edit: {nb_path}{slash}{fname} (unless otherwise specified).\n\n# Cell\n" # Override existing code to include nbdev magic and one code cell with open(file, encoding='utf8') as f: code = f.read() if "AUTOGENERATED" not in code: code = init_str + code # Check to ensure we haven't tried exporting once yet if "# Cell" and "# Internal Cell" not in code and '__all__' not in code: split_code = code.split('\n') private_list = [True] _do_pass, _private, _public = False, '# Internal Cell\n', '# Cell\n' for row, line in enumerate(split_code): if _do_pass: _do_pass = False; continue # Deal with decorators if '@' in line: code = split_code[row+1] if code[:4] == 'def ': code = code[4:] if 'patch' in line or 'typedispatch' in line or not line[0].isspace(): is_private = _not_private(code.split('(')[0]) private_list.append(is_private) split_code[row] = f'{_public}{line}' if is_private else f'{_private}{line}' _do_pass = True # Deal with objects elif _re_obj_def.match(line) and not _do_pass: is_private = _not_private(line.split('(')[0]) private_list.append(is_private) split_code[row] = f'{_public}{line}' if is_private else f'{_private}{line}' # Deal with classes or functions elif _re_class_func_def.match(line) and not _do_pass: is_private = _not_private(line.split(' ')[1].split('(')[0]) private_list.append(is_private) split_code[row] = f'{_public}{line}' if is_private else f'{_private}{line}' code = '\n'.join(split_code) # Write to file with open(file, 'w', encoding='utf8') as f: f.write(code) # Build notebooks splits = _split(code) write_nb(cfg_path, cfg_name, splits, num, parent, private_list) # Generate the `__all__` in the top of each .py if '__all__' not in code: c = code.split("(unless otherwise specified).") code = c[0] + "(unless otherwise specified).\n" + f'\n__all__ = {export_names(code)}\n\n# Cell' + c[1] with open(file, 'w', encoding='utf8') as f: f.write(code) else: print(f"{file.name} was already converted.") generate_doc_foundations() print(f"{Config(cfg_path, 
cfg_name).lib_name} successfully converted!") _setup = int(input("Would you like to setup this project to be pip installable and configure a setup.py? (0/1)")) if _setup: generate_setup() print('Project is configured for pypi, please see `setup.py` for any advanced configurations') _workflow = int(input("Would you like to setup the automated Github workflow that nbdev provides? (0/1)")) if _workflow: generate_ci() print("Github actions generated! Please make sure to include .github/actions/main.yml in your next commit!") ``` An example of adding in `# Cell` or `# Internal Cell` to the source code can be seen below: ```python # Filename is noop.py # Internal Cell def _myPrivateFunc(o): return o # Cell def myPublicFunc(o): return o ``` ``` #hide p = Path('../test_convert/test_convert') p.mkdir(exist_ok=True, parents=True) file = p/'test.py' file.touch() file.write_text('def testing_code(a,b): return a+b') text = """[DEFAULT] host = github lib_name = test_convert user = muellerzr description = A test keywords = test author = muellerzr author_email = m@gmail.com copyright = zach branch = master version = 0.0.1 min_python = 3.6 audience = Developers language = English custom_sidebar = False license = apache2 status = 2 nbs_path = nbs doc_path = docs recursive = False doc_baseurl = /test_convert/ git_url = https://github.com/muellerzr/test_convert/tree/master/ lib_path = test_convert title = test_convert doc_host = https://muellerzr.github.io""".split('\n') settings = Path('../test_convert/settings.ini') settings.touch() settings.write_text('\n'.join(text)) %cd ../test_convert !printf "1\n0\n" | convert_lib # Test that the lib was made fine test_eq(Path('nbs/00_test.ipynb').read_text(), '{"cells": [{"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# default_exp test"]}, {"cell_type": "markdown", "metadata": {}, "source": ["# Default Title (change me)\\n", "> Default description (change me)"]}, {"cell_type": "code", 
"execution_count": null, "metadata": {}, "outputs": [], "source": ["#export\\n", ""]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["#export\\n", "def testing_code(a,b): return a+b"]}], "metadata": {"jupytext": {"split_at_heading": true}, "kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}}, "nbformat": 4, "nbformat_minor": 4}') %cd - ``` # Export - ``` #hide from nbdev.export import notebook2script notebook2script() ```
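The public-vs-private decision `convert_lib` makes per definition can be sketched as a tiny predicate: a restatement of `_not_private` above plus the cell-tag choice (the `cell_tag` helper is illustrative, not part of the library):

```python
def not_private(name):
    """True unless any dotted component starts with a single underscore;
    dunder names like __all__ still count as public."""
    for t in name.split('.'):
        if t.startswith('_') and not t.startswith('__'):
            return False
    return '\\' not in t and '^' not in t and t != 'else'

def cell_tag(name):
    # public definitions get `# Cell` / #export, private ones `# Internal Cell` / #exporti
    return '# Cell' if not_private(name) else '# Internal Cell'

print(cell_tag('myPublicFunc'))
print(cell_tag('_myPrivateFunc'))
print(cell_tag('SomeClass._helper'))
```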
# Working with Landsat Thematic Mapper Imagery ![](http://esri.github.io/arcgis-python-api/notebooks/nbimages/02_change_detection_app_01.gif) # Questions - How does land change manifest itself in time-series of multispectral imagery? - Can you identify when a significant disturbance occurred? # Let's explore Landsat Thematic Mapper Data over a period of 30 years! - About 20 years ago, my house was farmland - Can we use the 30 years of TM data to identify when development occurred? # Let's get started! # Imports ``` from arcgis.features import SpatialDataFrame from arcgis.raster.functions import * from arcgis.raster import ImageryLayer from arcgis.geometry import Geometry from arcgis.geometry import Point from arcgis.gis import GIS ``` # Log into Portal ``` gis = GIS('http://fedciv.esri.com/portal', 'GBrunner') ``` # Two Services: - Landsat Thematic Mapper Multispectral Service - 8 Bands (7 Spectral and QA Band) - Landsat Thematic Mapper NDVI Service - 2 Bands (NDVI and QA values) ``` ndvi_svc ="https://fedciv.esri.com/imageserver/rest/services/LandsatTM_NDVI/ImageServer" ms_svc = "https://fedciv.esri.com/imageserver/rest/services/LandsatTM_MS/ImageServer" ndvi_lyr = ImageryLayer(ndvi_svc, gis=gis) ms_lyr = ImageryLayer(ms_svc, gis=gis) ``` # How does land change manifest itself in multispectral imagery? ## To investigate, let's go to my house! ## ... 
in O'Fallon, MO # Here is my home ``` geometry = Point({"x" :-10093991.604, "y" : 4689459.491, "spatialReference" : {"wkid" : 102100}}) map1 = gis.map("122 Arabian Path, O'Fallon, MO, USA") map1 ``` # I changed the basemap and the zoom ``` map1.zoom = 14 map1.basemap = 'satellite' ``` # I defined a symbol and drew it on the map ``` symbol = {"color":[128,0,0,128], "size":18, "angle":0, "xoffset":0, "yoffset":0, "type":"esriSMS", "style": "esriSMSCircle", "outline": {"color":[128,0,0,255], "width":1, "type":"esriSLS", "style":"esriSLSSolid"} } map1.draw(geometry, symbol=symbol) ``` # How many Landsat TM rasters are over O'Fallon, MO? ## I can query the data frame using a spatial filter ``` from arcgis.geometry import filters geometry = Point({"x" :-10093991.604, "y" : 4689459.491, "spatialReference" : {"wkid" : 102100}}) sp_filter = filters.intersects(geometry=geometry) im_sdf = ms_lyr.query(geometry_filter=sp_filter, return_all_records=True).df ``` # There are 563 Landsat TM Images over O'Fallon ``` len(im_sdf) ``` # What does that data frame look like? ``` im_sdf.head(3) ``` # How do I interpret the "AcquisitionDate"? 
- I need to convert from Unix time into a datetime ``` import pandas as pd im_sdf['Date'] = pd.to_datetime(im_sdf['AcquisitionDate'], unit='ms') im_sdf['Date'].head() ``` # I'm interested in cloud free images ``` im_sdf[im_sdf['CloudCover']<.10].Date.head() ``` # I found some: - From 1982: 'LT40240331982321XXX04' - From 2010: 'LT50240332010102EDC00' ``` selected_oldest = ms_lyr.filter_by(where="Name = 'LT40240331982321XXX04'") selected_newest = ms_lyr.filter_by(where="Name = 'LT50240332010102EDC00'") ``` # Let's look at O'Fallon in 1982 and 2010 ``` map_old = gis.map(location="122 Arabian Path, O'Fallon, MO")#, zoomlevel=14) map_new = gis.map(location="122 Arabian Path, O'Fallon, MO")#, zoomlevel=14) from ipywidgets import * map_old.layout=Layout(flex='1 1', padding='10px',height='500px', min_width='40px') map_new.layout=Layout(flex='1 1', padding='10px',height='500px', min_width='40px') box = HBox([map_old, map_new]) box map_old.add_layer(stretch(extract_band(selected_oldest,[2,1,0]), stretch_type='StdDev', num_stddev=2.5, dra=True)) map_new.add_layer(stretch(extract_band(selected_newest,[2,1,0]), stretch_type='StdDev', num_stddev=2.5, dra=True)) map_old map_new map_new.zoom = 14 map_old.zoom = 14 map_old.draw(geometry, symbol=symbol) map_new.draw(geometry, symbol=symbol) ``` # Using NDVI to pinpoint when my home was built # What is NDVI? - Normalized Difference Vegetation Index - For Landsat TM - `NDVI = (NIR_4 - VIS_3)/(NIR_4 + VIS_3)` - Result will be between -1 and 1 # What does NDVI typically look like? ``` agol_gis = GIS("http://ps-dbs.maps.arcgis.com/home", "gregbrunner_dbs") landsat_lyr = ImageryLayer("https://landsat2.arcgis.com/arcgis/rest/services/Landsat8_Views/ImageServer", gis=agol_gis) l8_map = agol_gis.map('122 Arabian Path, St.
Peters, MO 63376') l8_map ``` # Landsat Raster Functions ``` landsat_lyr.properties.rasterFunctionInfos l8_map.add_layer(apply(landsat_lyr, 'NDVI Colorized')) l8_map.zoom = 14 ``` # How do I get pixel values over time at a given location? ``` pixel_location = Point({"x" :-10093991.604, "y" : 4689459.491, "spatialReference" : {"wkid" : 102100}}) time = [] dtime = [] pixels = [] for idx,row in enumerate(im_sdf.iterrows()): oid = str(row[1]['OBJECTID']) image_at_t = ndvi_lyr.filter_by(where="OBJECTID = "+oid)#, geometry=the_geom) pixel = image_at_t.identify(geometry=pixel_location)#, time_extent=t) try: pix = [float(x) for x in pixel['value'].split(',')] pixels.append(pix) time.append(float(row[1]['AcquisitionDate'])) dtime.append(row[1]['Date']) except: print("NoData") #import pickle #pixels_pickle = open("ndvi_vals.p","wb") #pickle.dump(pixels, pixels_pickle) #pixels_pickle.close() #dtime_pickle = open("dtime_vals.p","wb") #pickle.dump(dtime, dtime_pickle) #dtime_pickle.close() #time_pickle = open("time_vals.p","wb") #pickle.dump(time, time_pickle) #time_pickle.close() import pickle pixels = pickle.load(open("ndvi_vals.p","rb")) dtime = pickle.load(open("dtime_vals.p","rb")) time = pickle.load(open("time_vals.p","rb")) ``` # What are the NDVI pixel values? ``` pixels[:5] ``` ### The first column is the NDVI value, the second is the QA Band value # What is the Landsat QA Band? <!--<img src="https://landsat.usgs.gov/sites/default/files/images/C1-BQA-Example.jpg" width="400" class="center"/>--> <img src="https://hyspeedblog.files.wordpress.com/2014/08/landsat8_lake_tahoe.jpg" width="400" class="center"/> ### https://landsat.usgs.gov/collectionqualityband # I can separate the NDVI values and QA values into separate lists ``` qa_band = [] pixel_values = [] for pix in pixels: qa_band.append(pix[1]) pixel_values.append(pix[0]) qa_band[:5] ``` # What does the NDVI curve look like? I'll do this first without any filtering, and it's hard to tell whether there is a trend.
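As a quick aside, the NDVI formula quoted above can be illustrated directly in NumPy. The reflectance values below are synthetic, not real Landsat pixels:

```python
import numpy as np

# synthetic reflectance values for band 4 (NIR) and band 3 (visible red)
nir = np.array([0.50, 0.40, 0.30])
red = np.array([0.10, 0.20, 0.30])

# NDVI = (NIR_4 - VIS_3) / (NIR_4 + VIS_3), always in [-1, 1]
ndvi = (nir - red) / (nir + red)
```

Dense vegetation pushes NDVI toward +1, while bare soil, pavement, and rooftops sit near zero — which is why new construction shows up as a drop in the NDVI time series.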
# Plotting parameters ``` %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns plt.style.use('seaborn-darkgrid') fig=plt.figure(figsize=(14, 6), dpi= 80, facecolor='w', edgecolor='k') plt.plot(dtime, pixel_values, '*',label='linear') # index 0 is max plt.xlabel('Time') plt.ylabel('NDVI') ``` ## Let's Filter the QA Bands We can use the Landsat QA band to filter out snow, ice, and clouds. Here we will apply that filter. ``` LANDSAT_5_CLEAR_PIX_VALS = [672, 676, 680, 684] QA_BAND_IND = 1 clear_indices = [x for x in range(len(qa_band)) if qa_band[x] in LANDSAT_5_CLEAR_PIX_VALS] ``` # Let's only use times and pixels that are "clear" ``` clear_pix = [pixel_values[val] for val in clear_indices] clear_time = [time[x] for x in clear_indices] clear_dtime = [dtime[x] for x in clear_indices] ``` # I'll sort the data too ``` import numpy as np sorted_t = np.sort(clear_dtime) sorted_t_idx = np.argsort(clear_dtime) sorted_clear_time = np.sort(clear_time) sorted_clear_pix = [clear_pix[int(sorted_idx)] for sorted_idx in sorted_t_idx] ``` # What does the plot look like now? ``` %matplotlib inline import matplotlib.pyplot as plt fig=plt.figure(figsize=(14, 6), dpi= 80, facecolor='w', edgecolor='k') plt.plot(sorted_t, sorted_clear_pix,label='linear') # index 0 is max plt.xlabel('Time') plt.ylabel('NDVI') ``` # It looks like something changed between 1996 and 1998 - Higher max NDVI values from 1980 through 1996. - The range of NDVI values becomes more narrow from 1997 onward.
# Let's look at some images from 1996 and 1997 - Filter on cloud cover and date range to identify cloud free images from 1996 and 1997 # 1996 ``` date_df = im_sdf[im_sdf['CloudCover']<.10] date_df[(date_df['Date'] > '1996-01-01') & (date_df['Date'] < '1997-01-01')] date_df.head(3) map_1996 = gis.map("122 Arabian Path, O'Fallon, MO") selected_1996 = ms_lyr.filter_by(where="Name = 'LT50240331996272XXX02'") ``` # 1997 ``` date_df = im_sdf[im_sdf['CloudCover']<.10] date_df[(date_df['Date'] > '1997-01-01') & (date_df['Date'] < '1998-01-01')].Name map_1997 = gis.map("122 Arabian Path, O'Fallon, MO") selected_1997 = ms_lyr.filter_by(where="Name = 'LT50240331997274XXX02'") ``` # 1996 and 1997 ``` from ipywidgets import * map_1996.layout=Layout(flex='1 1', padding='10px',height='500px', min_width='40px') map_1997.layout=Layout(flex='1 1', padding='10px',height='500px', min_width='40px') box = HBox([map_1996, map_1997]) box ``` # Adding the years ``` map_1996.add_layer(stretch(extract_band(selected_1996,[2,1,0]), stretch_type='StdDev', num_stddev=2.5, dra=True)) map_1997.add_layer(stretch(extract_band(selected_1997,[2,1,0]), stretch_type='StdDev', num_stddev=2.5, dra=True)) map_1996.zoom = 14 map_1996.draw(geometry, symbol=symbol) map_1997.zoom = 14 map_1997.draw(geometry, symbol=symbol) ``` # Some other questions we could ask? - How is this occurring across the country? - Does this apply to other scenarios? - How do you implement this as an algorithm that can be run nationally? globally?
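One simple way to start turning the visual inspection above into an algorithm is to score each candidate year by the shift in mean NDVI before and after it. This is only a sketch — production change-detection methods (e.g. CCDC, LandTrendr) also model seasonality and noise:

```python
import numpy as np

def mean_shift_score(values, years, candidate):
    """Absolute difference in mean NDVI before vs. after a candidate break year."""
    values, years = np.asarray(values, dtype=float), np.asarray(years)
    before = values[years < candidate]
    after = values[years >= candidate]
    return abs(before.mean() - after.mean())

# synthetic series: high NDVI until 1997, low afterwards (mimics development)
years = np.arange(1984, 2010)
ndvi = np.where(years < 1997, 0.7, 0.3)

# pick the candidate year with the largest mean shift
best_year = max(range(1985, 2009), key=lambda y: mean_shift_score(ndvi, years, y))
```

On the synthetic series the score peaks at the true break year, 1997.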
# Tutorial 3.1. Structural response under windload ### Description: For the given geometry, compute the shear force and the bending moment of the structure along the height for the given wind load. Compare the base shear and the bending moment at the base with other buildings of similar height Project: Structural Wind Engineering WS19-20 Chair of Structural Analysis @ TUM - R. Wüchner, M. Péntek Author: kodakkal.anoop@tum.de, mate.pentek@tum.de Created on: 16.11.2018 Last update: 27.09.2019 ### Exercise: Structural response for building with given geometry located at Jeddah Airport For the given location of Jeddah airport compute the shear force and bending moment for the given geometry and compare the base shear and bending moment at the base for different geometry. ``` # import import matplotlib.pyplot as plt import numpy as np ``` #### Gust wind speed computed from the previous example for the location of the Jeddah Airport considering a return period of 50 years is 40.12 m/s. The mean wind speed is computed as $$ u_{mean} = u_{gust}/1.4$$ ``` gust_windspeed = 0.0 # 1.4 is the approximate factor to convert from gust to mean wind speed mean_windspeed = 0.0 ``` The building is located at an urban area with height of adjacent building upto 15m: __Terrain category IV__ Let us calculate the shear force and bending moment values for 600 m tall building having a uniform cross section of given geometry and building width = 60.0 m the building is divided into slices of height 10m ``` height_slice = 10.0 height_start = height_slice height_end = 600.0 height = np.arange(height_start, height_end + height_slice, height_slice) # lever arm at the center of each slice # so shift for lever arm height = height - height_slice / 2.0 ``` According to EN 1991-1-4 the wind profile for terrain category IV is $$ u_{gust}(z) = 1.05 \times v_b \times (z/10)^{0.2}$$ ``` a_gust_4 = 0.0 alpha_gust_4 = 0.0 ugust_4 = 0.0 air_density = 1.2 # airdensity in kg/m3 ``` ###### Drag coefficient for 
the given geometry ``` drag_coefficient = 0.0 # to be extended ``` ###### Shear force over the height ``` shear_force = 0.0 # to be extended ``` ###### Let us plot ``` plt.figure(num=1, figsize=(8, 6)) # to be extended plt.show() ``` ###### Bending moment over the height ``` bending_moment = 0.0 # to be extended ``` ###### Let us plot ``` plt.figure(num=2, figsize=(8, 6)) # to be extended plt.show() ``` ###### Base shear and bending moment at the bottom ``` base_shear = 0.0 # to be extended bending_moment_at_bottom = 0.0 # to be extended ``` #### Discuss among groups the base shear and bending moment at the base of the building.
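For checking your own work, one possible way to fill in the stubs above is sketched below. Two loud assumptions: the drag coefficient of 1.3 is a placeholder (take the real value from the exercise sheet), and `v_b` in the EN 1991-1-4 profile quoted earlier is taken as the mean wind speed:

```python
import numpy as np

gust_windspeed = 40.12                 # m/s, from the previous example
mean_windspeed = gust_windspeed / 1.4  # approximate gust-to-mean conversion

height_slice = 10.0
height = np.arange(height_slice, 600.0 + height_slice, height_slice)
height = height - height_slice / 2.0   # lever arm at the center of each slice

# EN 1991-1-4 gust profile for terrain category IV (v_b taken as mean speed)
ugust_4 = 1.05 * mean_windspeed * (height / 10.0) ** 0.2

air_density = 1.2        # kg/m^3
width = 60.0             # m, building width
drag_coefficient = 1.3   # assumed placeholder value

# wind force on each 10 m slice, then shear by summing the forces above each level
slice_force = 0.5 * air_density * ugust_4 ** 2 * drag_coefficient * width * height_slice
shear_force = np.cumsum(slice_force[::-1])[::-1]

base_shear = shear_force[0]
bending_moment_at_bottom = np.sum(slice_force * height)  # moment about z = 0
```

Shear decreases monotonically with height (each level carries only the wind load above it), and the base values are what you would compare across the different geometries.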
# Road Following - Data Collection (using Gamepad) If you've run through the collision avoidance sample, you should be familiar with the following three steps 1. Data collection 2. Training 3. Deployment In this notebook, we'll do the same exact thing! Except, instead of classification, you'll learn a different fundamental technique, **regression**, that we'll use to enable JetBot to follow a road (or really, any path or target point). 1. Place the JetBot in different positions on a path (offset from center, different angles, etc) > Remember from collision avoidance, data variation is key! 2. Display the live camera feed from the robot 3. Using a gamepad controller, place a 'green dot', which corresponds to the target direction we want the robot to travel, on the image. 4. Store the X, Y values of this green dot along with the image from the robot's camera Then, in the training notebook, we'll train a neural network to predict the X, Y values of our label. In the live demo, we'll use the predicted X, Y values to compute an approximate steering value (it's not 'exactly' an angle, as that would require image calibration, but it's roughly proportional to the angle so our controller will work fine). So how do you decide exactly where to place the target for this example? Here is a guide we think may help 1. Look at the live video feed from the camera 2. Imagine the path that the robot should follow (try to approximate the distance it needs to avoid running off road etc.) 3. Place the target as far along this path as it can go so that the robot could head straight to the target without 'running off' the road. > For example, if we're on a very straight road, we could place it at the horizon. If we're on a sharp turn, it may need to be placed closer to the robot so it doesn't run out of boundaries. Assuming our deep learning model works as intended, these labeling guidelines should ensure the following: 1.
The robot can safely travel directly towards the target (without going out of bounds etc.) 2. The target will continuously progress along our imagined path What we get is a 'carrot on a stick' that moves along our desired trajectory. Deep learning decides where to place the carrot, and JetBot just follows it :) ### Labeling example video Execute the block of code to see an example of how we labeled the images. This model worked after only 123 images :) ``` from IPython.display import HTML HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/FW4En6LejhI" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>') ``` ### Import Libraries So let's get started by importing all the required libraries for "data collection" purposes. We will mainly use OpenCV to visualize and save images with labels. Libraries such as uuid, datetime are used for image naming. ``` # IPython Libraries for display and widgets import traitlets import ipywidgets.widgets as widgets from IPython.display import display # Camera and Motor Interface for JetBot from jnmouse import Robot, Camera, bgr8_to_jpeg # Basic Python packages for image annotation from uuid import uuid1 import os import json import glob import datetime import numpy as np import cv2 import time ``` ### Display Live Camera Feed First, let's initialize and display our camera like we did in the teleoperation notebook. We use Camera Class from jnmouse to enable CSI MIPI camera. Our neural network takes a 224x224 pixel image as input. We'll set our camera to that size to minimize the filesize of our dataset (we've tested that it works for this task). In some scenarios it may be better to collect data in a larger image size and downscale to the desired size later.
``` camera = Camera() widget_width = camera.width widget_height = camera.height image_widget = widgets.Image(format='jpeg', width=widget_width, height=widget_height) target_widget = widgets.Image(format='jpeg', width=widget_width, height=widget_height) x_slider = widgets.FloatSlider(min=-1.0, max=1.0, step=0.001, description='x') y_slider = widgets.FloatSlider(min=-1.0, max=1.0, step=0.001, description='y') def display_xy(camera_image): image = np.copy(camera_image) x = x_slider.value y = y_slider.value x = int(x * widget_width / 2 + widget_width / 2) y = int(y * widget_height / 2 + widget_height / 2) image = cv2.circle(image, (x, y), 8, (0, 255, 0), 3) image = cv2.circle(image, (widget_width // 2, widget_height), 8, (0, 0, 255), 3) image = cv2.line(image, (x, y), (widget_width // 2, widget_height), (255, 0, 0), 3) jpeg_image = bgr8_to_jpeg(image) return jpeg_image time.sleep(1) traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg) traitlets.dlink((camera, 'value'), (target_widget, 'value'), transform=display_xy) display(widgets.HBox([image_widget, target_widget]), x_slider, y_slider) ``` ### Create Gamepad Controller This step is similar to the "Teleoperation" task. In this task, we will use a gamepad controller to label images. The first thing we want to do is create an instance of the Controller widget, which we'll use to label images with "x" and "y" values as mentioned in the introduction. The Controller widget takes an index parameter, which specifies the number of the controller. This is useful in case you have multiple controllers attached, or some gamepads appear as multiple controllers. To determine the index of the controller you're using, visit http://html5gamepad.com. Press buttons on the gamepad you're using. Remember the index of the gamepad that is responding to the button presses. Next, we'll create and display our controller using that index.
``` controller = widgets.Controller(index=0) display(controller) ``` ### Connect Gamepad Controller to Label Images Now, even though we've connected our gamepad, we haven't yet attached the controller to label images! We'll connect that to the left and right vertical axes using the dlink function. The dlink function, unlike the link function, allows us to attach a transform between the source and target. ``` widgets.jsdlink((controller.axes[2], 'value'), (x_slider, 'value')) widgets.jsdlink((controller.axes[3], 'value'), (y_slider, 'value')) ``` ### Collect data The following block of code will display the live image feed, as well as the number of images we've saved. We store the target X, Y values by 1. Place the green dot on the target 2. Press 'down' on the DPAD to save This will store a file in the ``dataset_xy`` folder with files named ``xy_<x value>_<y value>_<uuid>.jpg`` where `<x value>` and `<y value>` are the coordinates **in pixel (not in percentage)** (count from the top left corner). 
When we train, we load the images and parse the x, y values from the filename ``` DATASET_DIR = 'dataset_xy' # we have this "try/except" statement because these next functions can throw an error if the directories exist already try: os.makedirs(DATASET_DIR) except FileExistsError: print('Directories not created because they already exist') for b in controller.buttons: b.unobserve_all() count_widget = widgets.IntText(description='count', value=len(glob.glob(os.path.join(DATASET_DIR, '*.jpg')))) def xy_uuid(x, y): return 'xy_%03d_%03d_%s' % (x * widget_width / 2 + widget_width / 2, y * widget_height / 2 + widget_height / 2, uuid1()) def save_snapshot(change): if change['new']: uuid = xy_uuid(x_slider.value, y_slider.value) image_path = os.path.join(DATASET_DIR, uuid + '.jpg') with open(image_path, 'wb') as f: f.write(image_widget.value) count_widget.value = len(glob.glob(os.path.join(DATASET_DIR, '*.jpg'))) controller.buttons[13].observe(save_snapshot, names='value') display(widgets.VBox([ target_widget, count_widget ])) ``` Again, let's close the camera connection properly so that we can use the camera in other notebooks. ``` camera.stop() ``` ### Next Once you've collected enough data, we'll need to copy that data to our GPU desktop or cloud machine for training. First, we can call the following terminal command to compress our dataset folder into a single zip file. > If you're training on the JetBot itself, you can skip this step! The ! prefix indicates that we want to run the cell as a shell (or terminal) command. The -r flag in the zip command below indicates recursive so that we include all nested files, the -q flag indicates quiet so that the zip command doesn't print any output ``` def timestr(): return str(datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')) !zip -r -q road_following_{DATASET_DIR}_{timestr()}.zip {DATASET_DIR} ``` You should see a file named road_following_<Date&Time>.zip in the Jupyter Lab file browser.
You should download the zip file using the Jupyter Lab file browser by right clicking and selecting Download.
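For reference, here is one way the training notebook could parse the x, y label back out of the `xy_<x value>_<y value>_<uuid>.jpg` filenames; `parse_xy` is a hypothetical helper sketched here, not part of the JetBot API:

```python
import os
import re

def parse_xy(filename, width=224, height=224):
    """Recover the slider-range (-1 to 1) label from an xy_<x>_<y>_<uuid>.jpg name."""
    m = re.match(r'xy_(\d+)_(\d+)_', os.path.basename(filename))
    x_pix, y_pix = int(m.group(1)), int(m.group(2))
    # invert the pixel mapping used by xy_uuid() above
    x = (x_pix - width / 2) / (width / 2)
    y = (y_pix - height / 2) / (height / 2)
    return x, y

x, y = parse_xy('dataset_xy/xy_112_168_3f9c.jpg')
```

A centered dot (pixel 112 of 224) maps back to 0.0, and pixel 168 maps back to 0.5, matching the slider ranges used when collecting.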
## Rock, Paper & Scissors with TensorFlow Hub - TFLite <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/mohan-mj/tflite-rock_paper_scissors/blob/main/tflite_rock_paper_scissors.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> </table> ### Import libraries ``` import os import matplotlib.pylab as plt import numpy as np import tensorflow as tf import tensorflow_hub as hub ``` ### Select the Hub/TF2 module to use ``` module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true} handle_base, pixels, FV_SIZE = module_selection MODULE_HANDLE = f"https://tfhub.dev/google/tf2-preview/{handle_base}/feature_vector/4" IMAGE_SIZE = (pixels, pixels) ``` ## Data preprocessing Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the rock, paper and scissors dataset. If you have your own data and are interested in importing it for use with TensorFlow, see [loading image data](Train_using_new_images.ipynb) ``` import tensorflow_datasets as tfds # tfds.disable_progress_bar() ``` The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model. Since `"rock_paper_scissors"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
``` # splits = tfds.Split.all.subsplit(weighted=(80, 10, 10)) splits=['train[:80%]', 'train[80%:90%]', 'train[90%:]'] splits, info = tfds.load('rock_paper_scissors', with_info=True, as_supervised=True, split = splits) (train_examples, validation_examples, test_examples) = splits num_examples = info.splits['train'].num_examples num_classes = info.features['label'].num_classes ``` ### Format the Data Use the `tf.image` module to format the images for the task. Resize the images to a fixed input size, and rescale the input channels. ``` def format_image(image, label): image = tf.image.resize(image, IMAGE_SIZE) / 255.0 return image, label ``` Now shuffle and batch the data ``` BATCH_SIZE = 32 #@param {type:"integer"} train_batches = train_examples.shuffle(num_examples // 4).batch(BATCH_SIZE).map(format_image).prefetch(1) validation_batches = validation_examples.batch(BATCH_SIZE).map(format_image).prefetch(1) test_batches = test_examples.batch(1).map(format_image) ``` Inspect a batch ``` for image_batch, label_batch in train_batches.take(1): pass image_batch.shape ``` ## Defining the model All it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module. For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
``` do_fine_tuning = False #@param {type:"boolean"} print("Building model with", MODULE_HANDLE) model = tf.keras.Sequential([ hub.KerasLayer(MODULE_HANDLE, input_shape=IMAGE_SIZE + (3, ), output_shape=[FV_SIZE], trainable=do_fine_tuning), tf.keras.layers.Dense(num_classes) ]) model.summary() ``` ## Training the model ``` if do_fine_tuning: model.compile( optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9), loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) else: model.compile( optimizer='adam', loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) EPOCHS = 5 hist = model.fit(train_batches, epochs=EPOCHS, validation_data=validation_batches) ``` ## Export the model ``` RPS_SAVED_MODEL = "rps_saved_model" ``` Export the SavedModel ``` tf.saved_model.save(model, RPS_SAVED_MODEL) loaded = tf.saved_model.load(RPS_SAVED_MODEL) print(list(loaded.signatures.keys())) infer = loaded.signatures["serving_default"] print(infer.structured_input_signature) print(infer.structured_outputs) ``` ## Convert with TFLiteConverter ``` converter = tf.lite.TFLiteConverter.from_saved_model(RPS_SAVED_MODEL) converter.optimizations = [tf.lite.Optimize.DEFAULT] tflite_model = converter.convert() with open("converted_model.tflite", "wb") as f: f.write(tflite_model) ``` Test the TFLite model using the Python Interpreter ``` # Load TFLite model and allocate tensors. 
tflite_model_file = 'converted_model.tflite' with open(tflite_model_file, 'rb') as fid: tflite_model = fid.read() interpreter = tf.lite.Interpreter(model_content=tflite_model) interpreter.allocate_tensors() input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] from tqdm import tqdm # Gather results for the randomly sampled test images predictions = [] test_labels, test_imgs = [], [] for img, label in tqdm(test_batches.take(10)): interpreter.set_tensor(input_index, img) interpreter.invoke() predictions.append(interpreter.get_tensor(output_index)) test_labels.append(label.numpy()[0]) test_imgs.append(img) #@title Utility functions for plotting # Utilities for plotting class_names = ['rock', 'paper', 'scissors'] def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) img = np.squeeze(img) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) print(type(predicted_label), type(true_label)) if predicted_label == true_label: color = 'green' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) #@title Visualize the outputs { run: "auto" } index = 8 #@param {type:"slider", min:0, max:9, step:1} plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(index, predictions, test_labels, test_imgs) plt.show() ``` Download the model **NOTE: You might have to run the cell below twice** ``` with open('labels.txt', 'w') as f: f.write('\n'.join(class_names)) ``` # Prepare the test images for download (Optional) This part involves downloading additional test images for the Mobile Apps only in case you need to try out more samples ``` !mkdir -p test_images from PIL import Image for index, (image, label) in enumerate(test_batches.take(50)): image = tf.cast(image *
255.0, tf.uint8) image = tf.squeeze(image).numpy() pil_image = Image.fromarray(image) pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index)) ```
## Preamble ``` import json import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as colors import matplotlib.cm as cmx plt.style.use('ggplot') import qsharp qsharp.packages.add("Microsoft.Quantum.MachineLearning::0.14.2011120240") qsharp.reload() from Microsoft.Quantum.Samples import ( TrainHalfMoonModel, ValidateHalfMoonModel, ClassifyHalfMoonModel ) %matplotlib inline ``` ## Data ``` with open('data.json') as f: data = json.load(f) data ``` ## Training ``` parameter_starting_points = [ [0.060057, 3.00522, 2.03083, 0.63527, 1.03771, 1.27881, 4.10186, 5.34396], [0.586514, 3.371623, 0.860791, 2.92517, 1.14616, 2.99776, 2.26505, 5.62137], [1.69704, 1.13912, 2.3595, 4.037552, 1.63698, 1.27549, 0.328671, 0.302282], [5.21662, 6.04363, 0.224184, 1.53913, 1.64524, 4.79508, 1.49742, 1.545] ] (parameters, bias) = TrainHalfMoonModel.simulate( trainingVectors=data['TrainingData']['Features'], trainingLabels=data['TrainingData']['Labels'], initialParameters=parameter_starting_points, verbose=True ) ``` ## Validation ``` miss_rate = ValidateHalfMoonModel.simulate( validationVectors=data['ValidationData']['Features'], validationLabels=data['ValidationData']['Labels'], parameters=parameters, bias=bias ) print(f"Miss rate: {miss_rate:0.2%}") ``` ## Plotting Classify the validation so that we can plot it. ``` actual_labels = data['ValidationData']['Labels'] classified_labels = ClassifyHalfMoonModel.simulate( samples=data['ValidationData']['Features'], parameters=parameters, bias=bias, tolerance=0.005, nMeasurements=10_000 ) ``` To plot samples, it's helpful to have colors for each. We'll plot four cases: - actually 0, classified as 0 - actually 0, classified as 1 - actually 1, classified as 1 - actually 1, classified as 0 ``` cases = [(0, 0), (0, 1), (1, 1), (1, 0)] ``` We can use these cases to define markers and colormaps for plotting. ``` markers = [ '.' 
if actual == classified else 'x' for (actual, classified) in cases ] colormap = cmx.ScalarMappable(colors.Normalize(vmin=0, vmax=len(cases) - 1)) colors = [colormap.to_rgba(idx_case) for (idx_case, case) in enumerate(cases)] ``` It's also really helpful to have the samples as a NumPy array so that we can find masks for each of the four cases. ``` samples = np.array(data['ValidationData']['Features']) ``` Finally, we loop over the cases above and plot the samples that match each. ``` plt.figure(figsize=(12, 8)) for (idx_case, ((actual, classified), marker, color)) in enumerate(zip(cases, markers, colors)): mask = np.logical_and(np.equal(actual_labels, actual), np.equal(classified_labels, classified)) if not np.any(mask): continue plt.scatter( samples[mask, 0], samples[mask, 1], label=f"Was {actual}, classified {classified}", marker=marker, s=300, c=[color], ) plt.legend() ``` ## Epilogue ``` qsharp.component_versions() ```
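As a side note, the miss rate reported by the validation step earlier is just the fraction of samples whose classified label disagrees with the actual label; a tiny NumPy illustration with made-up labels:

```python
import numpy as np

actual = np.array([0, 0, 1, 1, 1])
classified = np.array([0, 1, 1, 1, 0])

# fraction of disagreements between actual and classified labels
miss_rate = np.mean(actual != classified)
```

With 2 of 5 labels disagreeing, the miss rate is 0.4 — the same quantity `ValidateHalfMoonModel` reports as a percentage.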
# Z-score (Solution) ## Install packages ``` import sys !{sys.executable} -m pip install -r requirements.txt import cvxpy as cvx import numpy as np import pandas as pd import time import os import quiz_helper import matplotlib.pyplot as plt %matplotlib inline plt.style.use('ggplot') plt.rcParams['figure.figsize'] = (14, 8) ``` ### data bundle ``` import os import quiz_helper from zipline.data import bundles os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(), '..', '..','data','module_4_quizzes_eod') ingest_func = bundles.csvdir.csvdir_equities(['daily'], quiz_helper.EOD_BUNDLE_NAME) bundles.register(quiz_helper.EOD_BUNDLE_NAME, ingest_func) print('Data Registered') ``` ### Build pipeline engine ``` from zipline.pipeline import Pipeline from zipline.pipeline.factors import AverageDollarVolume from zipline.utils.calendars import get_calendar universe = AverageDollarVolume(window_length=120).top(500) trading_calendar = get_calendar('NYSE') bundle_data = bundles.load(quiz_helper.EOD_BUNDLE_NAME) engine = quiz_helper.build_pipeline_engine(bundle_data, trading_calendar) ``` ### View Data With the pipeline engine built, let's get the stocks at the end of the period in the universe we're using. We'll use these tickers to generate the returns data for our risk model.
``` universe_end_date = pd.Timestamp('2016-01-05', tz='UTC') universe_tickers = engine\ .run_pipeline( Pipeline(screen=universe), universe_end_date, universe_end_date)\ .index.get_level_values(1)\ .values.tolist() universe_tickers ``` # Get Returns data ``` from zipline.data.data_portal import DataPortal data_portal = DataPortal( bundle_data.asset_finder, trading_calendar=trading_calendar, first_trading_day=bundle_data.equity_daily_bar_reader.first_trading_day, equity_minute_reader=None, equity_daily_reader=bundle_data.equity_daily_bar_reader, adjustment_reader=bundle_data.adjustment_reader) ``` ## Get pricing data helper function ``` def get_pricing(data_portal, trading_calendar, assets, start_date, end_date, field='close'): end_dt = pd.Timestamp(end_date.strftime('%Y-%m-%d'), tz='UTC', offset='C') start_dt = pd.Timestamp(start_date.strftime('%Y-%m-%d'), tz='UTC', offset='C') end_loc = trading_calendar.closes.index.get_loc(end_dt) start_loc = trading_calendar.closes.index.get_loc(start_dt) return data_portal.get_history_window( assets=assets, end_dt=end_dt, bar_count=end_loc - start_loc, frequency='1d', field=field, data_frequency='daily') ``` ## get pricing data into a dataframe ``` returns_df = \ get_pricing( data_portal, trading_calendar, universe_tickers, universe_end_date - pd.DateOffset(years=5), universe_end_date)\ .pct_change()[1:].fillna(0) #convert prices into returns returns_df ``` ## Sector data helper function We'll create an object for you, which defines a sector for each stock. The sectors are represented by integers. We inherit from the Classifier class. 
[Documentation for Classifier](https://www.quantopian.com/posts/pipeline-classifiers-are-here), and the [source code for Classifier](https://github.com/quantopian/zipline/blob/master/zipline/pipeline/classifiers/classifier.py) ``` from zipline.pipeline.classifiers import Classifier from zipline.utils.numpy_utils import int64_dtype class Sector(Classifier): dtype = int64_dtype window_length = 0 inputs = () missing_value = -1 def __init__(self): self.data = np.load('../../data/project_4_sector/data.npy') def _compute(self, arrays, dates, assets, mask): return np.where( mask, self.data[assets], self.missing_value, ) sector = Sector() ``` ## We'll use 2 years of data to calculate the factor **Note:** Going back 2 years falls on a day when the market is closed. The Pipeline package doesn't handle start or end dates that don't fall on days when the market is open. To fix this, we went back 2 extra days to fall on the next day when the market is open. ``` factor_start_date = universe_end_date - pd.DateOffset(years=2, days=2) factor_start_date ``` ## Quiz 1 Create a factor of one year returns, demeaned, and ranked, and then converted to a zscore ## Answer 1 ``` from zipline.pipeline.factors import Returns #TODO # create a pipeline called p p = Pipeline(screen=universe) # create a factor of one year returns, demean by sector, then rank factor = ( Returns(window_length=252, mask=universe). demean(groupby=Sector()). #we use the custom Sector class that we reviewed earlier rank(). zscore() ) # add the factor to the pipeline p.add(factor, 'Momentum_1YR_demean_by_sector_ranked_zscore') ```
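For intuition, the final `zscore()` step standardizes the ranked factor values to zero mean and unit standard deviation on each date; a minimal NumPy equivalent of what it does cross-sectionally:

```python
import numpy as np

def zscore(x):
    # standardize: subtract the mean, divide by the standard deviation
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

z = zscore([1, 2, 3, 4, 5])  # e.g. the ranks of five stocks on one date
```

The standardized values preserve the ordering of the ranks but are centered at zero, which is what makes factors comparable across dates.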
<a href="https://colab.research.google.com/github/suredream/CNN-Sentinel/blob/master/mnist.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Version Control ``` %%bash function auth(){ echo $(grep $1 ~/.auth_git | cut -d'"' -f 4 ) } user=$(auth .user) pwd=$(auth .pwd) token=$(auth .token) email=$(auth .email) name=$(auth .name) function git_clone(){ clone_url="https://"$user":"$pwd$"@github.com/"$user"/"$1 git clone $clone_url tmp && mv tmp/.git . && rm -rf tmp && git reset --hard git config --global user.email "$email" && git config --global user.name "$name" echo "https://github.com/"$user"/"$1 } function git_push() { git add -u && git commit -m "$2" && git push "https://"$token"@github.com/"$user"/"$1".git" } # git_clone learn_torch git pull # git add README.md # git add ex_main.py # git add run_dataclass.py # git_push learn_torch "edit" # git status #%%time #!pip install -qr requirements.txt ``` # Dependency ``` !pip install -qr tfrecord # import torch, tensorflow as tf # print('torch({}), tf({})'.format(torch.__version__, tf.__version__)) # import tensorflow_datasets as tfds # mnist = tfds.load(name='mnist') mnist['test'] ``` # Get Data The MNIST database of handwritten digits has 60,000 training examples, and 10,000 test examples. Each example included in the MNIST database is a 28x28 grayscale image of handwritten digit and its corresponding label(0-9). ``` import torchvision.datasets as dset dset.MNIST?? 
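# The raw MNIST files inspected below use the IDX format: a 4-byte
# big-endian "magic" header whose 3rd byte encodes the element dtype
# (0x08 = unsigned byte) and whose 4th byte is the number of array
# dimensions, followed by one 4-byte big-endian size per dimension.
# A minimal sketch of decoding the magic number (this only assumes
# the published IDX layout, nothing about torchvision internals):
import codecs
magic_images = int(codecs.encode(b'\x00\x00\x08\x03', 'hex'), 16)
print(magic_images)                 # 2051 for image files
print(magic_images % 256)           # number of dimensions: 3
print((magic_images // 256) % 256)  # dtype code: 8 (unsigned byte)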
!ls ../data/MNIST/raw ../data/MNIST/raw/t10k-images-idx3-ubyte ../data/MNIST/raw/t10k-labels-idx1-ubyte import codecs def get_int(b: bytes) -> int: return int(codecs.encode(b, 'hex'), 16) SN3_PASCALVINCENT_TYPEMAP = { 8: (torch.uint8, np.uint8, np.uint8), 9: (torch.int8, np.int8, np.int8), 11: (torch.int16, np.dtype('>i2'), 'i2'), 12: (torch.int32, np.dtype('>i4'), 'i4'), 13: (torch.float32, np.dtype('>f4'), 'f4'), 14: (torch.float64, np.dtype('>f8'), 'f8') } def read_sn3_pascalvincent_tensor(path: str, strict: bool = True) -> torch.Tensor: """Read a SN3 file in "Pascal Vincent" format (Lush file 'libidx/idx-io.lsh'). Argument may be a filename, compressed filename, or file object. """ # read with open(path, "rb") as f: data = f.read() # parse magic = get_int(data[0:4]) nd = magic % 256 ty = magic // 256 assert 1 <= nd <= 3 assert 8 <= ty <= 14 m = SN3_PASCALVINCENT_TYPEMAP[ty] s = [get_int(data[4 * (i + 1): 4 * (i + 2)]) for i in range(nd)] parsed = np.frombuffer(data, dtype=m[1], offset=(4 * (nd + 1))) assert parsed.shape[0] == np.prod(s) or not strict return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s) def read_label_file(path: str) -> torch.Tensor: x = read_sn3_pascalvincent_tensor(path, strict=False) assert(x.dtype == torch.uint8) assert(x.ndimension() == 1) return x.long() def read_image_file(path: str) -> torch.Tensor: x = read_sn3_pascalvincent_tensor(path, strict=False) assert(x.dtype == torch.uint8) assert(x.ndimension() == 3) return x data = read_image_file('../data/MNIST/raw/t10k-images-idx3-ubyte') targets = read_label_file('../data/MNIST/raw/t10k-labels-idx1-ubyte') train_loader = torch.utils.data.DataLoader( dataset=(data, targets), batch_size=batch_size, shuffle=True) from torchvision import datasets, transforms transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)), ]) transform(image) from torchvision import datasets, transforms transform = transforms.Compose([ transforms.ToPILImage(), 
transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)), ]) # data.shape image = data[0,:,:]#.ToPILImage() print(image.shape, type(image)) transform(image) data[0,:,:].values # type(data), type(targets) # data.shape, targets.shape # from torchvision import datasets, transforms # dataset1 = datasets.MNIST('../data', train=True, download=True, # transform=transform) # dataset2 = datasets.MNIST('../data', train=False, # transform=transform) # !aws s3 cp s3://com.climate.production.users/people/jun.xiong/projects/mnist/ . --recursive # loader import torch, torchvision from tfrecord.torch.dataset import TFRecordDataset import matplotlib.pyplot as plt batch_size = 64 tfrecord_path = "train/train.tfrecords.gz" index_path = None description = {"idx":"int", "image": "byte", "digit": "int"} dataset = TFRecordDataset(tfrecord_path, index_path, description, compression_type='gzip') loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size) row = next(iter(loader)) # print(row) # viz data, target = row['image'].reshape(batch_size,1,28,28), row['digit'].reshape(batch_size) img = torchvision.utils.make_grid(data) img = img.numpy().transpose(1,2,0) std = [0.5,0.5,0.5] mean = [0.5,0.5,0.5] img = img*std+mean print([target[i] for i in range(64)]) plt.imshow(img) # https://github.com/vahidk/tfrecord import torch from tfrecord.torch.dataset import TFRecordDataset from torchvision import transforms transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=[0.5,0.5,0.5],std=[0.5,0.5,0.5])]) def get_loader(tfrecord_path): index_path = None description = {"idx":"int", "image": "byte", "digit": "int"} dataset = TFRecordDataset(tfrecord_path, index_path, description, compression_type='gzip', transform=transform) return torch.utils.data.DataLoader(dataset, batch_size=64) train_loader = get_loader("train/train.tfrecords.gz") test_loader = get_loader("val/record/test.tfrecords.gz") row = next(iter(train_loader)) print(row) from __future__ import 
print_function import argparse import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms from torch.optim.lr_scheduler import StepLR if True: train_kwargs = {'batch_size': 64} test_kwargs = {'batch_size': 1000} transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]) dataset1 = datasets.MNIST('../data', train=True, download=True, transform=transform) dataset2 = datasets.MNIST('../data', train=False, transform=transform) train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs) test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs) # for batch_idx, (data, target) in enumerate(train_loader): # break data = next(iter(train_loader)) data = next(iter(train_loader)) data, target = next(iter(train_loader)) data.shape, target.shape data.shape, target.shape img = torchvision.utils.make_grid(data) img = img.numpy().transpose(1,2,0) std = [0.5,0.5,0.5] mean = [0.5,0.5,0.5] img = img*std+mean print([labels[i] for i in range(64)]) plt.imshow(img) ``` # Create Idx ``` # from glob import glob # for f in glob('train/*.tfrecord'): # # !echo ${f} ${f}.idx # !python -m tfrecord.tools.tfrecord2idx train/${f} train/${f}.idx for f in glob('val/*.tfrecord'): # !echo ${f} ${f}.idx !python -m tfrecord.tools.tfrecord2idx val/${f} val/${f}.idx import torch from tfrecord.torch.dataset import MultiTFRecordDataset tfrecord_pattern = "train/{}.tfrecord" index_pattern = "train/{}.tfrecord.idx" splits = { "tfrecord_1": 1, "tfrecord_2": 1, } description = {"idx":"int", "image": "int", "digit": "int"} dataset = MultiTFRecordDataset(tfrecord_pattern, index_pattern, splits, description) loader = torch.utils.data.DataLoader(dataset, batch_size=32) data = next(iter(loader)) print(data) import itertools dict(zip([f.split('/')[1].split('.')[0] for f in glob(f'train/tf*.tfrecord')], itertools.repeat(1))) import torch from tfrecord.torch.dataset import 
MultiTFRecordDataset from glob import glob import itertools # transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=[0.5,0.5,0.5],std=[0.5,0.5,0.5])]) # def get_data_load(folder, batch_size=64): # tfrecord_pattern = f"{folder}/{{}}.tfrecord.gz" # # print(tfrecord_pattern) # index_pattern = f"{folder}/{{}}.tfrecord.idx" # flist = [f.split('/')[1].split('.')[0] for f in glob(f'{folder}/tf*.tfrecord')] # splits = dict(zip(flist, itertools.repeat(1))) # description = {"idx":"int", "image": "int", "digit": "int"} # dataset = MultiTFRecordDataset(tfrecord_pattern, index_pattern, splits, description, transform=transform) # return torch.utils.data.DataLoader(dataset, batch_size=batch_size) tfrecord_path = "/tmp/data.tfrecord.gz" index_path = None description = {"idx":"int", "image": "byte", "label": "float"} dataset = TFRecordDataset(tfrecord_path, index_path, description, ) loader = torch.utils.data.DataLoader(dataset, batch_size=32) train_loader = get_data_load('train') test_loader = get_data_load('val') data = next(iter(train_loader)) print(data) %run -i run_dataclass.py train = MY_MNIST(transform=None) # train[0] for (cnt,i) in enumerate(train): image = i['img'] label = i['target'] ax = plt.subplot(4, 4, cnt+1) ax.axis('off') ax.imshow(image) ax.set_title(label) plt.pause(0.001) if cnt ==15: break import numpy as np import tensorflow as tf !ls -l /content/record/train.tfrecords import torch from tfrecord.torch.dataset import TFRecordDataset tfrecord_path = "/content/record/train.tfrecords" index_path = None description = {"idx":"int", "image": "byte", "digit": "int"} dataset = TFRecordDataset(tfrecord_path, index_path, description) loader = torch.utils.data.DataLoader(dataset, batch_size=32) data = next(iter(loader)) print(data['image'].shape) ``` # Create TFrecord from numpy ``` %run -i run_numpy_array.py import tensorflow.keras.datasets.mnist as mnist # 
https://www.programcreek.com/python/?code=ddbourgin%2Fnumpy-ml%2Fnumpy-ml-master%2Fnumpy_ml%2Ftests%2Ftest_nn.py (x_train, y_train), (x_test, y_test) = mnist.load_data() class DatasetMNIST(torch.utils.data.Dataset): def __init__(self, X, y, transform=None): self.X = X self.y = y self.transform = transform def __len__(self): return self.X.shape[0] def __getitem__(self, index): image = torch.from_numpy(self.X[index, :, :]).reshape(1,28,28) label = self.y[index] if self.transform is not None: image = self.transform(image) return image, label test_set = DatasetMNIST(x_train, y_train) a, b = test_set[0] type(a), type(b) a.shape y_train[0] %run -i run_numpy_array.py (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train.shape, y_train.shape, x_test.shape, y_test.shape x_train.shape[0] %%time def _int64_feature(value): return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) def _bytes_feature(value): return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value])) def write_to_tfrecord(X, y, filename): options = tf.io.TFRecordOptions(compression_type='GZIP') writer = tf.io.TFRecordWriter(filename, options=options) for i in range(X.shape[0]): image_raw = X[i].tobytes() example = tf.train.Example(features=tf.train.Features( feature={ 'idx': _int64_feature(i), 'digit': _int64_feature(y[i]), 'image': _bytes_feature(image_raw) })) writer.write(example.SerializeToString()) writer.close() write_to_tfrecord(x_train, y_train, 'record/train.tfrecords.gz') write_to_tfrecord(x_test, y_test, 'record/test.tfrecords.gz') !aws s3 cp record/test.tfrecords.gz s3://com.climate.production.users/people/jun.xiong/projects/mnist/val/ !aws s3 cp record/train.tfrecords.gz s3://com.climate.production.users/people/jun.xiong/projects/mnist/train/ ``` # Insert ``` class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 32, 3, 1) self.conv2 = nn.Conv2d(32, 64, 3, 1) self.dropout1 = nn.Dropout(0.25) self.dropout2 = nn.Dropout(0.5) 
self.fc1 = nn.Linear(9216, 128) self.fc2 = nn.Linear(128, 10) def forward(self, x): x = self.conv1(x) x = F.relu(x) x = self.conv2(x) x = F.relu(x) x = F.max_pool2d(x, 2) x = self.dropout1(x) x = torch.flatten(x, 1) x = self.fc1(x) x = F.relu(x) x = self.dropout2(x) x = self.fc2(x) output = F.log_softmax(x, dim=1) return output def train(args, model, device, train_loader, optimizer, epoch): model.train() # for batch_idx, (data, target) in enumerate(train_loader): for batch_idx, row in enumerate(train_loader): data, target = row['image'].reshape(64,1,28,28), row['digit'].reshape(64) data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) loss = F.nll_loss(output, target) loss.backward() optimizer.step() if batch_idx % args.log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item())) if args.dry_run: break def test(model, device, test_loader): model.eval() test_loss = 0 correct = 0 with torch.no_grad(): # for data, target in test_loader: for row in test_loader: data, target = row['image'].reshape(64,1,28,28), row['digit'].reshape(64) data, target = data.to(device), target.to(device) output = model(data) test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() test_loss /= len(test_loader.dataset) print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. 
* correct / len(test_loader.dataset))) if True: device = torch.device('cpu') model = Net().to(device) optimizer = optim.Adadelta(model.parameters(), lr=args.lr) scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma) for epoch in range(1, args.epochs + 1): train(args, model, device, train_loader, optimizer, epoch) test(model, device, test_loader) scheduler.step() if args.save_model: torch.save(model.state_dict(), "mnist_cnn.pt") args.epochs data.reshape? transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=[0.5,0.5,0.5],std=[0.5,0.5,0.5])]) data_train = datasets.MNIST(root = "./data/", transform=transform, train = True, download = True) data_test = datasets.MNIST(root="./data/", transform = transform, train = False) data_loader_train = torch.utils.data.DataLoader(dataset=data_train, batch_size = 64, shuffle = True, num_workers=2) data_loader_test = torch.utils.data.DataLoader(dataset=data_test, batch_size = 64, shuffle = True, num_workers=2) print(len(data_train)) images, labels = next(iter(data_loader_train)) img = torchvision.utils.make_grid(images) img = img.numpy().transpose(1,2,0) std = [0.5,0.5,0.5] mean = [0.5,0.5,0.5] img = img*std+mean print([labels[i] for i in range(64)]) plt.imshow(img) import torchvision import matplotlib.pyplot as plt row = next(iter(train_loader)) images, labels = row['image'], row['digit'] img = torchvision.utils.make_grid(images) img = img.numpy().transpose(1,2,0) std = [0.5,0.5,0.5] mean = [0.5,0.5,0.5] img = img*std+mean print([labels[i] for i in range(64)]) plt.imshow(img) data = next(iter(train_loader)) data data['image'].shape MultiTFRecordDataset? 
# from torchvision import datasets, transforms # transform=transforms.Compose([ # transforms.ToTensor(), # transforms.Normalize((0.1307,), (0.3081,)) # ]) # dataset1 = datasets.MNIST('../data', train=True, download=True, transform=transform) # dataset2 = datasets.MNIST('../data', train=False, # transform=transform) !ls -l ../data/MNIST/raw/ from __future__ import print_function import argparse import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms from torch.optim.lr_scheduler import StepLR from tfrecord.torch.dataset import MultiTFRecordDataset class Net1(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 32, 3, 1) self.conv2 = nn.Conv2d(32, 64, 3, 1) self.dropout1 = nn.Dropout(0.25) self.dropout2 = nn.Dropout(0.5) self.fc1 = nn.Linear(9216, 128) self.fc2 = nn.Linear(128, 10) def forward(self, x): x = self.conv1(x) x = F.relu(x) x = self.conv2(x) x = F.relu(x) x = F.max_pool2d(x, 2) x = self.dropout1(x) x = torch.flatten(x, 1) x = self.fc1(x) x = F.relu(x) x = self.dropout2(x) x = self.fc2(x) output = F.log_softmax(x, dim=1) return output class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = torch.nn.Sequential(torch.nn.Conv2d(1,64,kernel_size=3,stride=1,padding=1), torch.nn.ReLU(), torch.nn.Conv2d(64,128,kernel_size=3,stride=1,padding=1), torch.nn.ReLU(), torch.nn.MaxPool2d(stride=2,kernel_size=2)) self.dense = torch.nn.Sequential(torch.nn.Linear(14*14*128,1024), torch.nn.ReLU(), torch.nn.Dropout(p=0.5), torch.nn.Linear(1024, 10)) def forward(self, x): x = self.conv1(x) x = x.view(-1, 14*14*128) x = self.dense(x) return x def train(args, model, device, train_loader, optimizer, epoch): model.train() # for batch_idx, (data, target) in enumerate(train_loader): for row in train_loader: batch_idx, data, target = row['idx'], row['image'], row['digit'] data, target = data.to(device), target.to(device) optimizer.zero_grad() 
output = model(data) loss = F.nll_loss(output, target) loss.backward() optimizer.step() if batch_idx % args.log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item())) if args.dry_run: break def test(model, device, test_loader): model.eval() test_loss = 0 correct = 0 with torch.no_grad(): # for idx, data, target in test_loader: for row in test_loader: batch_idx, data, target = row['idx'], row['image'], row['digit'] data, target = data.to(device), target.to(device) output = model(data) test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() test_loss /= len(test_loader.dataset) print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. 
* correct / len(test_loader.dataset))) from collections import namedtuple d = {'no_cuda':False, 'batch_size':64, 'test_batch_size':1000,'epochs':14, 'lr':1.0, 'gamma':0.7, 'dry_run':False, 'log_interval':10, 'save_model':True} args = namedtuple('args', d.keys())(*d.values()) def main(): # # Training settings # parser = argparse.ArgumentParser(description='PyTorch MNIST Example') # parser.add_argument('--batch-size', type=int, default=64, metavar='N', # help='input batch size for training (default: 64)') # parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N', # help='input batch size for testing (default: 1000)') # parser.add_argument('--epochs', type=int, default=14, metavar='N', # help='number of epochs to train (default: 14)') # parser.add_argument('--lr', type=float, default=1.0, metavar='LR', # help='learning rate (default: 1.0)') # parser.add_argument('--gamma', type=float, default=0.7, metavar='M', # help='Learning rate step gamma (default: 0.7)') # parser.add_argument('--no-cuda', action='store_true', default=False, # help='disables CUDA training') # parser.add_argument('--dry-run', action='store_true', default=False, # help='quickly check a single pass') # parser.add_argument('--seed', type=int, default=1, metavar='S', # help='random seed (default: 1)') # parser.add_argument('--log-interval', type=int, default=10, metavar='N', # help='how many batches to wait before logging training status') # parser.add_argument('--save-model', action='store_true', default=False, # help='For Saving the current Model') # args = parser.parse_args() # args = use_cuda = not args.no_cuda and torch.cuda.is_available() # torch.manual_seed(args.seed) device = torch.device("cuda" if use_cuda else "cpu") train_kwargs = {'batch_size': args.batch_size} test_kwargs = {'batch_size': args.test_batch_size} if use_cuda: cuda_kwargs = {'num_workers': 1, 'pin_memory': True, 'shuffle': True} train_kwargs.update(cuda_kwargs) test_kwargs.update(cuda_kwargs) 
transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]) # dataset1 = datasets.MNIST('../data', train=True, download=True, # transform=transform) # dataset2 = datasets.MNIST('../data', train=False,A # transform=transform) # # train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs) # # test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs) print(Net) model = Net().to(device) optimizer = optim.Adadelta(model.parameters(), lr=args.lr) scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma) for epoch in range(1, args.epochs + 1): train(args, model, device, train_loader, optimizer, epoch) test(model, device, test_loader) scheduler.step() if args.save_model: torch.save(model.state_dict(), "mnist_cnn.pt") # if __name__ == '__main__': main() # s3://com.climate.production.analytics/dsw/scratch/sagemaker/data/mnist-tfrecords/ # https://zhuanlan.zhihu.com/p/77952356 import tensorflow as tf print (tf.__version__) def tfrecord_parser(serialized_example): """Parses a single tf.Example into image and label tensors.""" feature = { "idx": tf.FixedLenFeature([1], tf.int64), "image": tf.FixedLenFeature([28 * 28], tf.int64), "digit": tf.FixedLenFeature([1], tf.int64), } features = tf.parse_single_example(serialized_example, features=feature) # 28 x 28 is size of MNIST example image = tf.cast(tf.reshape(features["image"], [28 * 28]), tf.float32) digit = tf.reshape(features["digit"], [1]) return {"image": image}, digit batch_size = 256 num_shards = 1 shard_index = 0 num_epochs = 1 # tfrecord_glob_pattern = f"*.tfrecord" filenames = ['s3://com.climate.production.analytics/dsw/scratch/sagemaker/data/mnist-tfrecords/train/tfrecord_1.tfrecord'] ds = tf.data.TFRecordDataset(filenames[:]) for x, y in ds.map(read_tfrecord): image = torch.from_numpy(x.numpy()) digit = torch.from_numpy(y.numpy()) break image, digit # volume = torch.from_numpy(x.numpy()) # segmentation = torch.from_numpy(y.numpy()) # return volume, 
segmentation # ds = ( # tf.data.Dataset.list_files(tfrecord_glob_pattern, shuffle=True) # .interleave(tf.data.TFRecordDataset, cycle_length=2) # .shard(num_shards=num_shards, index=shard_index) # .repeat(num_epochs) # .shuffle(buffer_size=100) # .map(tfrecord_parser, num_parallel_calls=4) # .batch(batch_size=batch_size) # ) !aws s3 cp s3://com.climate.production.analytics/dsw/scratch/sagemaker/data/mnist-tfrecords/train/tfrecord_1.tfrecord . import argparse import os import json import ast import tensorflow as tf import logging as _logging from tensorflow.python.platform import tf_logging def model_fn(features, labels, mode, params): # model taken from https://www.kaggle.com/ilufei/mnist-with-tensorflow-dnn-97 layer1 = tf.keras.layers.Dense( params["nr_neurons_first_layer"], activation="relu", input_shape=(params["batch_size"], 784), kernel_initializer=tf.contrib.layers.xavier_initializer(), )(features["image"]) dropped_out = tf.layers.dropout( inputs=layer1, rate=0.4, training=(mode == tf.estimator.ModeKeys.TRAIN) ) layer2 = tf.keras.layers.Dense( 128, activation="relu", kernel_initializer=tf.contrib.layers.xavier_initializer(), )(dropped_out) layer3 = tf.keras.layers.Dense( 64, activation="relu", kernel_initializer=tf.contrib.layers.xavier_initializer() )(layer2) layer4 = tf.keras.layers.Dense( 32, activation="relu", kernel_initializer=tf.contrib.layers.xavier_initializer() )(layer3) layer5 = tf.keras.layers.Dense( 16, activation="relu", kernel_initializer=tf.contrib.layers.xavier_initializer() )(layer4) logits = tf.keras.layers.Dense( 10, kernel_initializer=tf.contrib.layers.xavier_initializer() )(layer5) predictions = tf.argmax(logits, 1) if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec( mode=mode, predictions={"preds": predictions}, export_outputs={ "SIGNATURE_NAME": tf.estimator.export.PredictOutput( {"preds": predictions} ) }, ) cross_entropy = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits) if mode == 
tf.estimator.ModeKeys.TRAIN: optimizer = tf.train.AdamOptimizer(learning_rate=1e-3) train_op = optimizer.minimize( loss=cross_entropy, global_step=tf.train.get_or_create_global_step() ) return tf.estimator.EstimatorSpec( mode=mode, loss=cross_entropy, train_op=train_op ) accuracy = tf.metrics.accuracy( labels=tf.cast(labels, tf.int64), predictions=tf.cast(predictions, tf.int64) ) eval_metric_ops = {"accuracy": accuracy} # Provide an estimator spec for `ModeKeys.EVAL` mode. return tf.estimator.EstimatorSpec( mode=mode, loss=cross_entropy, eval_metric_ops=eval_metric_ops ) def tfrecord_parser(serialized_example): """Parses a single tf.Example into image and label tensors.""" feature = { "idx": tf.FixedLenFeature([1], tf.int64), "image": tf.FixedLenFeature([28 * 28], tf.int64), "digit": tf.FixedLenFeature([1], tf.int64), } features = tf.parse_single_example(serialized_example, features=feature) # 28 x 28 is size of MNIST example image = tf.cast(tf.reshape(features["image"], [28 * 28]), tf.float32) digit = tf.reshape(features["digit"], [1]) return {"image": image}, digit def main(): # tf.logging.set_verbosity(tf.logging.DEBUG) # TF 1.13 and 1.14 handle logging a bit different, so wrapping the logging setup in a try/except block try: tf_logger = tf_logging._get_logger() handler = tf_logger.handlers[0] handler.setFormatter( _logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s") ) except: pass main() from box import Box !ls ../data/MNIST/raw !aws s3 cp s3://com.climate.production.analytics/dsw/scratch/sagemaker/data/mnist-tfrecords/val/ val --recursive MultiTFRecordDataset?? ```
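The TFRecord container used throughout these cells has a simple framing that can be inspected without TensorFlow: each record is an 8-byte little-endian length, a 4-byte masked CRC of that length, the serialized `tf.train.Example` payload, and a 4-byte masked CRC of the payload. A minimal reader sketch (CRCs are skipped rather than verified, and this assumes an uncompressed file, unlike the gzipped records above):

```python
import struct
import tempfile
import os

def iter_tfrecords(path):
    """Yield raw record payloads from an uncompressed TFRecord file.

    Framing per record: uint64 length (little-endian), uint32 length-CRC,
    payload bytes, uint32 payload-CRC. CRCs are ignored in this sketch.
    """
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                return
            (length,) = struct.unpack("<Q", header)
            f.read(4)                  # masked CRC32 of the length (ignored)
            payload = f.read(length)
            f.read(4)                  # masked CRC32 of the payload (ignored)
            yield payload

# Round-trip against a hand-written file with dummy (zero) CRCs
with tempfile.NamedTemporaryFile(delete=False) as f:
    for payload in (b"first", b"second record"):
        f.write(struct.pack("<Q", len(payload)))
        f.write(b"\x00" * 4)           # dummy length CRC
        f.write(payload)
        f.write(b"\x00" * 4)           # dummy payload CRC
    path = f.name

records = list(iter_tfrecords(path))
print(records)
os.remove(path)
```

The payloads yielded here are the serialized `Example` protos that the `description` dicts above (`idx`/`image`/`digit`) are parsed out of.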
# diverse development using pennlinckit ``` import sys factor = sys.argv[1] ``` #### pennlinckit contains data, plotting, brain, network science, and math functions common to neuroscience projects ``` import pennlinckit ``` #### standard libraries ``` import numpy as np import scipy.stats import seaborn as sns import matplotlib.pylab as plt from scipy.stats import pearsonr, spearmanr data = pennlinckit.data.dataset('pnc') data.load_matrices('rest') ``` ## What is in this object? #### You will probably be most interested in the matrices: ``` data.matrix.shape ``` #### You can always double check what dataset you called with: ``` data.source ``` #### All data objects will have a "measures" method, which is a pandas dataframe with all the info you need ``` data.measures.head() data.measures.columns ``` #### Sometimes it's confusing to have all these odd names, which is why we have a data_dict method ``` data.data_dict['overall_psychopathology_4factorv2'] ``` #### Let's only look at subjects that have a matrix for resting-state and cognitive scores ``` data.filter('matrix') data.filter('cognitive') data.matrix.shape ``` #### Let's only look at people who did not move a lot ``` data.filter('==',value=0,column='restRelMeanRMSMotionExclude') data.matrix.shape ``` ## Let's see what regions predict what mental illness symtom factors best ``` def region_predict(data,region,factor,**model_args): data.targets = data.measures[factor].values data.features = data.matrix[:,region] model_args['self'] = data pennlinckit.utils.predict(**model_args) data.matrix[np.isnan(data.matrix)]= 0.0 #the diagonal has np.nan, have to set to zero for sklearn # factor = 'F1_Exec_Comp_Res_Accuracy_RESIDUALIZED' prediction = np.zeros((400,len(data.measures.subject))) for node in range(400): region_predict(data,node,factor,**{'model':'deep','cv':'KFold','folds':5,'neurons':400,'layers':10,'remove_linear_vars':['restRelMeanRMSMotion','sex']}) prediction[node] = data.prediction data.features.shape 
prediction_acc = np.zeros(400) for node in range(data.matrix.shape[-1]): prediction_acc[node] = pearsonr(prediction[node],data.corrected_targets)[0] np.save('/home/mb3152/diverse_development/data/prediction_{0}.npy'.format(factor),prediction) np.save('/home/mb3152/diverse_development/data/prediction_acc_{0}.npy'.format(factor),prediction_acc) np.save('/home/mb3152/diverse_development/data/prediction_regressed_targets_{0}.npy'.format(factor),data.corrected_targets) 1/0 ``` ## I submitted each factor to the cluster (see submit.py) #### Now let's look at the outputs ``` factors = ['mood_4factorv2','psychosis_4factorv2', 'externalizing_4factorv2', 'phobias_4factorv2','overall_psychopathology_4factorv2'] #clincal factors factors = ['F1_Exec_Comp_Res_Accuracy_RESIDUALIZED','F2_Social_Cog_Accuracy_RESIDUALIZED','F3_Memory_Accuracy_RESIDUALIZED'] #cogi factors all_factor_predictions = np.zeros((5,data.matrix.shape[-1],data.matrix.shape[0])) prediction_acc = np.zeros((len(factors),data.matrix.shape[-1])) for fidx, factor in enumerate(factors): prediction_acc[fidx] = np.load('/home/mb3152/diverse_development/data/prediction_acc_{0}.npy'.format(factor)) all_factor_predictions[fidx] = np.load('/home/mb3152/diverse_development/data/prediction_{0}.npy'.format(factor)) # from the adult HCP adult_pc = np.load('/home/mb3152/diverse_development/data/hcp_pc.npy').mean(axis=0) adult_strength = np.load('/home/mb3152/diverse_development/data/hcp_strength.npy').mean(axis=0) pearsonr(prediction_acc.mean(axis=0),adult_pc) pearsonr(prediction_acc.mean(axis=0),adult_strength) high_predict = prediction_acc.mean() + (prediction_acc.std()) flexible_nodes = (prediction_acc>high_predict).sum(axis=0) print(pearsonr(flexible_nodes,adult_pc)) spincorrs = pennlinckit.brain.spin_test(adult_pc,flexible_nodes) spin_stat = pennlinckit.brain.spin_stat(adult_pc,flexible_nodes,spincorrs) import seaborn as sns import matplotlib.pylab as plt from pennlinckit import plotting %matplotlib inline plt.close() 
f,axes = plt.subplots(1,2,figsize=(5.5,3)) sns.regplot(x=flexible_nodes,y=adult_pc,ax=axes[0],truncate=False,x_jitter=.2,scatter_kws={"s": 50,'alpha':0.35}) plt.sca(axes[0]) r,p = pearsonr(adult_pc,flexible_nodes) plt.text(2.25,.025,'r={0},p={1}'.format(np.around(r,2),np.around(p,4))) plt.ylabel('participation coef') plt.xlabel('prediction flex') sns.histplot(spincorrs,ax=axes[1]) plt.sca(axes[1]) plt.vlines(r,0,100,colors='black') plt.tight_layout() sns.despine() plt.savefig('flex.pdf') flex_colors = np.zeros((400,4)) flex_colors[flexible_nodes>=2] = np.array([235,93,104,256])/256. pennlinckit.brain.write_cifti(flex_colors,'flex_nodes') for factor in factors: print (data.data_dict[factor]) flexible_nodes = np.zeros((400)) for high_predict in [0.05,0.06,0.07,0.08]: flexible_nodes =flexible_nodes+ (prediction_acc>high_predict).sum(axis=0) print(pearsonr(flexible_nodes,adult_pc)) flexible_nodes[flexible_nodes>=6] = 6 flex_colors = pennlinckit.brain.make_heatmap(flexible_nodes,cmap=sns.diverging_palette(220, 10,n=1001)) pennlinckit.brain.write_cifti(flex_colors,'flex_nodes_cont') allen = pennlinckit.data.allen_brain_institute() allen_ge = np.corrcoef(allen.expression) diverse_club = adult_pc >= np.percentile(adult_pc,80) rich_club = adult_strength >= np.percentile(adult_strength,80) diverse_club[200:] = False rich_club[200:] = False diverse_ge = allen_ge[np.ix_(diverse_club,diverse_club)].flatten() rich_ge = allen_ge[np.ix_(rich_club,rich_club)].flatten() diverse_ge = diverse_ge[np.isnan(diverse_ge)==False] diverse_ge = diverse_ge[diverse_ge!=1] rich_ge = rich_ge[np.isnan(rich_ge)==False] rich_ge = rich_ge[rich_ge!=1] scipy.stats.ttest_ind(diverse_ge,rich_ge) ```
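The `spin_test`/`spin_stat` comparison above is, at heart, a permutation test: the observed correlation is compared against a null distribution of correlations under reshuffled labels. A minimal numpy sketch of the naive version on synthetic data (note the real spin test permutes by rotating coordinates on the cortical sphere to preserve spatial autocorrelation, which plain shuffling does not):

```python
import numpy as np

def permutation_corr_pvalue(x, y, n_perm=1000, seed=0):
    """Two-sided p-value for corr(x, y) against shuffled-y nulls.

    Unlike a spin test, plain shuffling destroys spatial
    autocorrelation, so this is only an illustration of the logic.
    """
    rng = np.random.default_rng(seed)
    observed = np.corrcoef(x, y)[0, 1]
    null = np.array([np.corrcoef(x, rng.permutation(y))[0, 1]
                     for _ in range(n_perm)])
    pvalue = np.mean(np.abs(null) >= np.abs(observed))
    return observed, pvalue

rng = np.random.default_rng(1)
x = rng.normal(size=400)              # e.g. participation coefficient per node
y = 0.5 * x + rng.normal(size=400)    # correlated by construction
r, p = permutation_corr_pvalue(x, y)
print(round(r, 2), p)
```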
```
import pandas as pd
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D, Bidirectional
from sklearn.model_selection import train_test_split
from keras.utils.np_utils import to_categorical
from keras.callbacks import EarlyStopping
from keras.layers import Dropout

# load numpy arrays from csv files
from numpy import loadtxt
X_train = loadtxt('x_train2.csv', delimiter=',')
Y_train = loadtxt('y_train2.csv', delimiter=',')
# print the array
X_train

# reduce the input length so the model can train on a CPU
X_train = X_train[:, :100]
X_train
Y_train

VOCAB_SIZE = 1254
INPUT_LENGTH = 100  # 1000
EMBEDDING_DIM = 128

from keras import backend as K
from keras.layers import Layer
from keras import initializers, regularizers, constraints

# custom dot product function
def dot_product(x, kernel):
    if K.backend() == 'tensorflow':
        return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
    else:
        return K.dot(x, kernel)

# TODO: find a way to return the attention weight vector a
class AttentionWithContext(Layer):
    def __init__(self,
                 W_regularizer=None, u_regularizer=None, b_regularizer=None,
                 W_constraint=None, u_constraint=None, b_constraint=None,
                 bias=True, **kwargs):
        self.supports_masking = True
        # initialization of all learnable params
        self.init = initializers.get('glorot_uniform')
        # regularizers for params, init as None
        self.W_regularizer = regularizers.get(W_regularizer)
        self.u_regularizer = regularizers.get(u_regularizer)
        self.b_regularizer = regularizers.get(b_regularizer)
        # constraints for params, init as None
        self.W_constraint = constraints.get(W_constraint)
        self.u_constraint = constraints.get(u_constraint)
        self.b_constraint = constraints.get(b_constraint)
        self.bias = bias
        super(AttentionWithContext, self).__init__(**kwargs)

    def build(self, input_shape):
        # assert len(input_shape) == 3
        # weight matrix
        self.W = self.add_weight((input_shape[-1], input_shape[-1],),
                                 initializer=self.init,
                                 name='{}_W'.format(self.name),
                                 regularizer=self.W_regularizer,
                                 constraint=self.W_constraint)
        # bias term
        if self.bias:
            self.b = self.add_weight((input_shape[-1],),
                                     initializer='zero',
                                     name='{}_b'.format(self.name),
                                     regularizer=self.b_regularizer,
                                     constraint=self.b_constraint)
        # context vector
        self.u = self.add_weight((input_shape[-1],),
                                 initializer=self.init,
                                 name='{}_u'.format(self.name),
                                 regularizer=self.u_regularizer,
                                 constraint=self.u_constraint)
        super(AttentionWithContext, self).build(input_shape)

    def compute_mask(self, input, input_mask=None):
        # do not pass the mask to the next layers
        return None

    def call(self, x, mask=None):
        uit = dot_product(x, self.W)
        if self.bias:
            uit += self.b
        uit = K.tanh(uit)
        ait = dot_product(uit, self.u)
        a = K.exp(ait)
        # apply mask after the exp; will be re-normalized next
        if mask is not None:
            # cast the mask to floatX to avoid float64 upcasting in theano
            a *= K.cast(mask, K.floatx())
        # especially in the early stages of training the sum may be almost
        # zero, which results in NaNs; the workaround is to add a very small
        # positive number ε (K.epsilon()) to the sum
        # a /= K.cast(K.sum(a, axis=1, keepdims=True), K.floatx())
        a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
        a = K.expand_dims(a)
        weighted_input = x * a
        return K.sum(weighted_input, axis=1)

    def compute_output_shape(self, input_shape):
        return input_shape[0], input_shape[-1]

# model
def build_model(vocab_size, embedding_dim, input_length):
    model = Sequential()
    model.add(Embedding(vocab_size, embedding_dim, input_length=input_length))
    model.add(SpatialDropout1D(0.2))
    model.add(Bidirectional(LSTM(128, dropout=0.2, recurrent_dropout=0.2, return_sequences=True)))
    model.add(AttentionWithContext())
    model.add(Dense(41, activation='softmax'))
    return model

model = build_model(VOCAB_SIZE, EMBEDDING_DIM, INPUT_LENGTH)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

epochs = 5
batch_size = 64
history = model.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size,
                    validation_split=0.1,
                    callbacks=[EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001)])

example_x = X_train[0]
print(np.shape(example_x))
temp = model.predict(X_train[0:100])
# print(len(temp)), temp
print(temp[0])
for i in temp:
    print(np.argmax(i))
```
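To make the `call` method above easier to follow, here is a minimal numpy sketch of the same attention-with-context computation for a single sequence: `uit = tanh(xW + b)`, `ait = uit · u`, softmax over timesteps with the same epsilon guard, then a weighted sum of the inputs. The shapes and random weights are illustrative only, not the trained parameters:

```python
import numpy as np

# One sequence of T timesteps with d features (e.g. BiLSTM outputs).
rng = np.random.default_rng(0)
T, d = 5, 4
x = rng.normal(size=(T, d))
W = rng.normal(size=(d, d))   # weight matrix (self.W)
b = np.zeros(d)               # bias (self.b)
u = rng.normal(size=d)        # learned context vector (self.u)

uit = np.tanh(x @ W + b)      # per-timestep hidden representation
ait = uit @ u                 # scalar score per timestep
a = np.exp(ait)
a = a / (a.sum() + 1e-7)      # softmax with epsilon, as in the layer
out = (x * a[:, None]).sum(axis=0)  # weighted sum over timesteps

print(a.sum())                # attention weights sum to ~1
print(out.shape)              # one d-dimensional vector per sequence
```

The layer collapses the time axis, which is why `compute_output_shape` returns `(batch, features)` and the `Dense(41)` head can sit directly on top of it.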
# How to use ID Resolver Feature in BTE

## Important relevant modules

```
from biothings_explorer.resolve_ids import syncQuery as query
import nest_asyncio
nest_asyncio.apply()
```

## Generate some sample inputs and convert to curie format

```
ncbigenes = ["85456", "85461", "85462", "8578", "8622", "8630", "8669", "8761", "8798", "8899", "90060", "90102", "90293", "9066", "90874", "91526", "91614", "91775", "9203", "9208", "9209", "92105", "9256", "92610", "92906", "9315", "93183", "9331", "93323", "93589", "93627", "9379", "93953", "9410", "9450", "9470", "9486", "9513", "9562", "9568", "9576", "9605", "9620", "9631", "9635", "9695", "97", "9703", "9705", "9716", "9727", "9766", "9789", "9811", "9818", "9904", "9912", "9942", "9949", "9950", "9953", "9972", "5948", "60343", "63931", "64145", "64411", "64420", "64518", "64784", "65056", "65220", "65244", "6718", "6741", "6883", "6904", "7075", "7110", "7162", "7165", "7289", "7371", "7380", "7518", "7533", "7592", "7629", "7752", "7761", "7803", "7813", "782", "78991", "79002", "79006", "79047", "79056", "79074", "79080", "79085", "79091", "79094", "79149", "7915", "79157", "79165", "79177", "79258", "79570", "79590", "79661", "79672", "79675", "79689", "79750", "79782", "79798", "79802", "79825", "79864", "79891", "79899", "79906", "79908", "79953", "79980", "79989", "80000", "80023", "80097", "80110", "80199", "80341", "8045", "80728", "80774", "80831", "8099", "81627", "81790", "81796", "8273", "83448", "83605", "83642", "83989", "84070", "84074", "84103", "84216", "84612", "84647", "84651", "84668", "84680", "84873", "8498", "85407", "85478", "8562", "8564", "8608", "8704", "8725", "8745", "8853", "8869", "8887", "8894", "8925", "89884", "89944", "90019", "90113", "9033", "90417", "90459", "90956", "914", "91408", "9144", "9149", "91734", "91894", "92086", "92154", "92521", "9275", "92999", "93109", "93408", "93664", "939", "9392", "93973", "93974", "9399", "94122", "9419", "9422", "9436", "9445", "9463",
"9467", "9477", "9506", "951", "9515", "9527", "9532", "954", "9552", "9595", "966", "9671", "9694", "970", "9701", "9717", "9743", "9744", "9764", "9771", "9779", "9783", "9784", "9801", "9821", "9827", "9837", "9875", "9882", "9891", "9948", "105375401", "105375414", "105375416", "105375419", "105375591", "105375592", "105375601", "105375603", "105375610", "105375639", "105375642", "105375670", "105375673", "105375709", "105375812", "105375823", "105375841", "105375855", "105375874", "105375876", "105375897", "105375903", "105375911", "105375916", "105375931", "105375937", "105375951", "105375956", "105375988", "105375990", "105376101", "105376127", "105376136", "105376146", "105376180", "105376183", "105376204", "105376208", "105376240", "105376311", "105376317", "105376323", "105376328", "106783498", "106783508", "105376372", "106799914", "105376391", "105376400", "105376422", "105376455", "105376491", "105376496", "105376548", "105376567", "105376583", "105376603", "105376615", "105376641", "105376642", "105376646", "105376712", "105376714", "105376754", "105376791", "105376818", "105376839", "105376859", "105376860", "105376888", "105377006", "105377031", "105377083", "105377088", "105377123", "105377171", "105377186", "105379321", "105377219", "105377237", "105377259", "105379351", "105379352", "105377283", "105377326", "105379380", "105379392", "105379544", "105379545", "105379556", "105379597", "105379600", "105379634", "105379783", "105379807", "105379882", "105943587", "106478954", "106478955", "106478978", "107161157", "107197953", "107228318", "10737", "107522030", "107546780", "107548112", "107980445", "107983949", "107983959", "107983989", "107984002", "107984011", "107984160", "107984179", "107984196", "107984201", "107984272", "107984364", "107984372", "107984399", "107984426", "107984432", "107984439", "107984442", "107984468", "107984485", "107984495", "107984678", "107984682", "108491830", "108510655", "108684029", "108783654", "108868751", 
"108903149", "109245078", "109616995", "109617011", "109621227", "109623458", "109729135", "110120569", "110120579", "110120592", "110120609", "110120641", "110120669", "110120701", "110120735", "110120764", "110120851", "110120856", "110120930", "110120933", "110120946", "110120951", "110120957", "110120997", "110121027", "110121106", "110121178", "110121182", "110121205", "105377343", "105377349", "105377355", "105377373", "105377376", "105377411", "105377474", "105377504", "105377513", "105377530", "105377537", "105377544", "105377576", "105377599", "105377600", "105377684", "107984747", "107984750", "105377865", "107984830", "107984840", "107984870", "107984880", "107984884", "105377981", "107984915", "105377999", "107984929", "107984950", "105378100", "105378106", "107984999", "107985004", "107985005", "105378199", "105378228", "105378270", "107985059", "107985071", "107985090", "107986230", "105378350", "107985156", "107985158", "107986265", "107986284", "107986306", "107986315", "107985215", "107986317", "107985232", "107985257", "105378476", "105378486", "105378493", "107985295", "105378519", "107985309", "107986369", "107985318", "107986392", "107986394", "107985329", "107986398", "107985337", "105378614", "107985352", "105378615", "107985354", "105378618", "107985361", "105378625", "107985396", "107986443", "107985418", "107986521", "107985471", "107985499", "107985504", "105378792", "105378821", "107985534", "107985537", "107985538", "107986557", "107986567", "107986569", "107986571", "107985568", "105378890", "105379150", "107986846", "105379170", "105379173", "105379176", "105379181", "107985826", "107985840", "105379264", "107985842", "105379275", "107986890", "107985868", "107985871", "107986904", "107986910", "107986931", "107986934", "107985896", "107986949", "107986981", "107986993", "107987011", "107985984", "107985997", "107987022", "107987032", "107987043", "107987073", "107986042", "107986064", "107987109", "107987115", "107987117", 
"107987120", "107986104", "107986124", "107986125", "107987186", "107986138", "107987218", "107986160", "107987223", "107986169", "107987261", "107987350", "107987393", "107987398", "107987415", "107987467", "107987484", "107988048", "107988049", "108004546", "108178984", "108178991", "108228198", "108251796", "108251803", "108281169", "108281187", "106633809", "106635528", "106635533", "106660609", "106677019", "106699567", "106707172", "106736477", "106480735", "106480739", "106480740", "110121242", "110121244", "110121248", "110121249", "110121250", "110121253", "110121259", "110121260", "110121263", "110121264", "110121277", "110121278", "110121280", "110121282", "110121284", "110121285", "110121289", "110121292", "110121294", "110121296", "110121302", "110121305", "110121306", "110121307", "110121308", "110121311", "110121314", "110121315", "110121317", "110121319", "110121322", "110121324", "110121328", "110121332", "110121333", "110121339", "110121348", "110121349", "110121362", "110121363", "110121366", "110121368", "110121370", "110121374", "110121376", "110121379", "110121381", "110121383", "110121385", "110121389", "110121390", "110121392", "110121395", "110121401", "110121402", "110121406", "110121411", "110121413", "110121418", "110121419", "110121420", "110121421", "110121422", "110121423", "110121425", "110121431", "110121433", "110121439", "110121441", "110121443", "110121447", "110121451", "110121455", "110121460", "110121463", "110121465", "110121470", "110121472", "110121473", "110121478", "110121484", "110121485", "110121486", "110121487", "110121489", "110121494", "110121497", "110121499", "110121501", "110255169", "110255170", "110283621", "110366355", "110366357", "110386947", "110386948", "110437700", "110599568", "110599571", "110599579", "110599582", "110599586", "110599592", "110673972", "110740340", "110740341", "110740348", "110806264", "110806276", "110806289", "110806297", "110806301", "110806306", "110841581", "111064646", 
"111082987", "111082989", "111082990", "111082991", "111082994", "111089943", "111089944", "111089947", "111099027", "111099028", "111188143", "111188146", "111188148", "111188152", "111188161", "111188164", "111216272", "111216274", "111216275", "111216277", "111216283", "111216284", "111216287", "111240471", "111240475", "111258505", "111258507", "111258513", "111258517", "111365146", "111365151", "111365155", "111365157", "111365158", "111365160", "111365161", "111365162", "111365164", "111365169", "111365171", "111365172", "111365175", "111365177", "111365179", "111365181", "111365183", "111365184", "111365188", "111365189", "111365191", "111365192", "111365194", "111365205", "111365212", "111365215", "111365216", "111365217", "111365226", "111413009", "111413015", "111413020", "111413021", "111413022", "111413025", "111413030", "111413033", "111413037", "111413039", "111413041", "111413042", "111413047", "111413048", "111429603", "111429606", "111429607", "111429612", "111429618", "111429625", "111464985", "111464986", "111464990", "111464992", "111465007", "111465009", "111465010", "111465015", "111500319", "111501763", "111501769", "111501777", "111501778", "111501779", "111501788", "111501791", "111501793", "111519898", "111556113", "111556116", "111556121", "111556136", "111556138", "111556139", "111556140", "111556144", "111556149", "111556151", "111556152", "111556159", "111556160", "111556163", "111562369", "111562375", "111562376", "111589206", "111589207", "111589208", "111589213", "111589216", "111591502", "111674464", "111674471", "111674474", "111674475", "111674476", "111674477", "111721702", "111721703", "111721704", "111721705", "111721706", "111721707", "111721713", "111753249", "111776215", "111776218", "111776219", "111776220", "111811966", "111811967", "111818956", "111818965", "111822955", "111828504", "111828506", "111828511", "111828520", "111828522", "111828524", "111828527", "111828530", "111832670", "111832674", "111875822", 
"111875823", "111875824", "111875827", "111875828", "111875830", "111875832", "111946223", "111946224", "111946226", "111946227", "111946229", "111946231", "111946237", "111946242", "111946244", "111946246", "111946251", "111982872", "111982873", "111982875", "111982876", "111982878", "111982879", "111982883", "111982884", "111982886", "111982888", "111982891", "111982892", "111982894", "112042783", "112042785", "112067712", "112067713", "112067720", "112081403", "112081411", "112081415", "112136078", "112136097", "112136105", "112136107", "112136108", "112163531", "112163543", "112163546", "112163547", "112163619", "112163620", "112163623", "112163631", "112163646", "112163658", "112163664", "112163680", "112163681", "112267856", "112267866", "112267869", "112267892", "112267899", "112267924", "112267930", "112267936", "112267942", "112694729", "112694768", "112694772", "112695087", "112695096", "112695104", "112695106", "112806073", "112840900", "112840913", "112840919", "112840931", "112840934", "112840938", "112841582", "112841592", "112841604", "112841610", "112872298", "112872299", "112872301", "112872302", "112903834", "112903842", "112935896", "112935900", "112935906", "112935909", "112935917", "112935923", "112935935", "112935937", "112935951", "112935955", "112935957", "112935962", "112935970", "112939922", "112978667", "112978669", "112997545", "112997555", "112997559", "112997561", "112997566", "112997570", "112997571", "112997585", "112997587", "113174972", "113174973", "113174986", "113174988", "113174992", "113175004", "113175012", "113218480", "113218505", "113218506", "113218507", "113218512", "113219439", "113219453", "113219457", "113687192", "113687197", "113687201", "113687205", "113687207", "113743971", "113743975", "113748397", "113748409", "113748412", "113788271", "113788276", "113788279", "113788285", "113788290", "113788297", "113788299", "113839505", "113839509", "113839515", "113839518", "113839523", "113839525", "113839537", 
"113839546", "113839560", "113839574", "113875009", "113875017", "113875021", "113875022", "113875023", "113875026", "113875028", "113875029", "113875032", "113939916", "113939926", "113939938", "113939942", "113939944", "113939945", "113939946", "113939951", "113939962", "113939966", "113939971", "113939977", "113939979", "113939986", "113939989", "114004352", "114004355", "114004357", "114004360", "114004368", "114004372", "114004375", "114004391", "114004398", "114004410", "114022699", "114022707", "114041", "112267957", "112267965", "112267972", "112267980", "112267992", "112267994", "112268020", "112268026", "112268034", "112268037", "112268046", "112268050", "112268062", "112268070", "112268072", "112268073", "112268098", "112268100", "112268101", "112268112", "112268120", "112268146", "112268151", "112268164", "112268186", "112268192", "112268203", "112268204", "112268205", "112268215", "112268218", "112268235", "112268251", "112268261", "116828", "117188", "145195", "145783", "147276", "152024", "153163", "153684", "170508", "170594", "1713", "1714", "192677", "201", "2112", "2141", "221416", "2270", "28327", "28331", "283422", "283553", "283761", "28389", "284581", "28461", "284837", "28505", "28572", "285796", "28580", "28600", "28603", "286184", "28626", "28636", "28637", "28643", "3406", "28680", "28683", "348249", "28721", "28734", "28746", "28781", "3495", "28825", "28827", "28831", "28834", "3719", "28907", "28938", "317648", "3197", "3210", "338342", "4511", "4657", "406216", "414761", "439949", "441098", "441315", "441317", "441711", "497256", "497657", "503519", "503569", "503613", "54719", "553989", "56119", "5640", "56663", "57061", "57290", "57306", "574458", "6377", "63870", "641364", "641433", "641648", "642797", "642924", "642934", "643770", "644093", "644838", "645206", "619351", "619394", "619411", "619471", "619475", "619488", "619518", "619536", "645553", "645591", "677769", "677794", "677796", "677810", "677811", "677836", "692072", 
"692089", "646915", "646927", "64695", "692108", "647107", "649352", "693182", "65072", "651430", "652070", "693220", "693221", "652966", "693227", "6984", "653123", "147727", "1867", "192115", "207107", "2156", "28317", "283214", "283303", "28333", "28385", "283854", "28388", "283888", "28398", "283994", "28409", "28414", "284240", "284294", "284551", "28477", "285045", "28517", "285759", "285941", "286023", "286083", "28660", "340107", "340544", "28667", "28730", "28732", "349160", "28780", "28820", "3501", "28869", "28881", "374890", "28913", "28933", "28937", "2988", "317714", "1275", "3220", "3247", "3261", "338027", "339263", "339535", "339539", "386616", "388406", "388572", "388685", "388699", "389634", "389791", "3905", "260402", "266553", "26765", "26788", "26790", "26792", "26796", "26799", "26801", "26808", "26809", "26827", "26864", "2757", "2759", "392187", "399716", "399959", "400046", "442895", "442901", "4570", "4574", "4576", "465", "474151", "404677", "404716", "406956", "4167", "425057", "431710", "440101", "440742", "440864", "440993", "441369", "493818", "494028", "494446", "494558", "503538", "50829", "50830", "50986", "53368", "53400", "54064", "54072", "54089", "552860", "560", "56801", "57054", "574028", "574043", "574509", "574513", "6395", "643401", "643441", "643749", "6104", "644303", "619207", "619398", "619399", "619409", "619562", "619563", "64595", "6762", "677829", "677844", "677845", "678655", "684959", "6893", "692084", "646903", "646976", "647044", "692109", "692110", "692199", "693139", "693146", "693149", "693163", "693181", "693183", "693199", "693202", "693211", "693234", "57792", "57876", "58156", "59081", "59330", "594837", "595098", "60467", "6053", "654412", "654433", "6066", "7063", "7195", "7207", "7229", "7234", "7237", "723788", "7239", "7241", "727978", "728190", "728327", "728342", "728450", "728715", "728753", "729033", "729224", "729291", "729296", "729461", "729558", "729609", "729856", "729968", "731075", 
"731223", "1483", "148898", "169", "170815", "1784", "179", "1957", "202020", "219731", "221241", "28326", "283332", "283435", "283585", "283624", "28401", "28404", "28408", "284365", "28444", "284454", "28452", "28455", "28457", "28466", "28468", "284788", "28491", "28492", "28503", "285043", "285629", "28564", "28569", "285735", "28606", "286094", "286135", "28615", "28630", "340017", "340113", "340184", "28677", "28694", "347737", "28712", "28714", "28716", "28719", "28725", "28742", "28744", "28770", "28826", "352961", "353127", "3538", "28891", "359822", "28896", "374987", "28943", "28946", "337892", "138948", "140805", "338797", "338862", "140848", "339529", "386627", "387585", "387742", "389676", "390560", "260431", "266691", "266811", "26748", "26766", "26797", "26863", "2728", "282", "282706", "282849", "282980", "283025", "283028", "394261", "399656", "399726", "28306", "28307", "400212", "400221", "400242", "400508", "400553", "400619", "400622", "400627", "400685", "400743", "400800", "400841", "400941", "400945", "401021", "401105", "401321", "401478", "401554", "402483", "442897", "4565", "4579", "474148", "404216", "4096", "413", "431711", "4397", "440028", "440293", "440446", "440600", "440900", "441058", "441313", "441374", "4933", "497634", "50514", "50982", "145624", "148231", "150622", "151658", "152641", "157983", "284632", "28498", "28510", "285205", "28521", "285326", "285370", "28591", "28595", "28620", "28622", "28631", "286370", "286411", "339874", "339988", "28664", "28669", "28705", "28713", "28717", "28753", "349408", "3515", "353347", "28912", "3121", "338005", "338069", "339260", "378881", "387575", "387583", "442893", "442894", "442899", "4429", "449015", "449018", "449483", "450097", "474387", "406215", "4819", "441009", "441086", "441179", "441543", "4937", "4964", "5044", "50608", "53369", "553192", "553195", "56664", "57309", "574040", "574449", "574452", "574475", "574485", "574486", "574492", "642355", "642484", "642633", 
"642648", "643714", "644714", "645321", "645434", "645676", "645922", "645949", "654841", "646023", "646103", "664728", "6685", "677663", "677679", "677834", "677835", "677849", "677885", "144360", "144571", "145946", "146795", "147184", "148645", "149047", "149950", "150142", "150197", "150381", "151534", "157693", "170593", "171513", "1858", "192144", "197187", "2097", "2233", "2234", "28332", "28335", "283416", "283673", "283688", "284215", "28423", "28424", "28429", "284395", "28481", "28509", "28526", "285422", "285500", "28589", "28602", "28612", "28623", "339822", "339902", "28661", "339926", "348761", "3493", "28814", "3519", "28893", "373071", "28921", "28941", "317773", "121775", "128439", "338799", "4549", "4563", "4577", "404714", "406874", "408259", "4104", "414208", "4195", "431712", "440117", "440390", "440894", "441108", "441123", "4936", "53844", "554249", "554279", "55449", "574048", "574078", "574471", "574499", "574517", "6315", "6378", "642757", "642864", "643327", "643339", "644297", "644873", "613206", "619569", "646071", "677764"]

ncibgenes_curies = [('NCBIGene:' + item) for item in ncbigenes]

symbols = ["TNKS1BP1", "TANC1", "FHDC1", "SCARF1", "PDE8B", "HSD17B6", "EIF3J", "PABPC4", "DYRK4", "PRPF4B", "CCDC120", "PHLDB2", "KLHL13", "SYT7", "ZNF697", "ANKRD44", "DEPDC7", "NXPE3", "ZMYM3", "LRRFIP1", "LRRFIP2", "INTS4", "TSPOAP1", "TIFA", "HNRNPLL", "NREP", "PIGM", "B4GALT6", "HAUS8", "CACNA2D4", "TBCK", "NRXN2", "GCNA", "SNRNP40", "LY86", "EIF4E2", "CHST10", "FXR2", "MINPP1", "GABBR2", "SPAG6", "VPS9D1", "CELSR1", "NUP155", "CLCA2", "EDEM1", "ACYP1", "KIAA0100", "ST18", "AQR", "RAB11FIP3", "SUSD6", "SPCS2", "CTIF", "NUP58", "RBM19", "ARHGAP44", "XYLB", "AMMECR1", "GOLGA5", "HS3ST3B1", "NUP153", "Z83844.1", "AC004837.1", "DLEU2L", "LINC00544", "AL365205.1", "FAM182A", "LINC01558", "TTTY7", "RBP2", "FAM3A", "MRPS14", "RBSN", "ARAP3", "SUSD1", "TEKT3", "CRTC3", "GPBP1", "NADK", "SPATS2", "AKR1D1", "SSB", "TAF12", "TBCD", "TIE1", "TMF1", "TPBG",
"TPD52L2", "TULP3", "UCK2", "UPK3A", "XRCC4", "YWHAH", "ZNF41", "ZNF76", "ZNF200", "ZNF214", "PTP4A1", "EVI5", "CACNB1", "PCYOX1L", "TRIR", "METRN", "KCTD15", "PRRG4", "C2orf49", "CCDC86", "SLC25A23", "METTL22", "CHAC1", "ZSCAN5A", "ALDH5A1", "MFSD11", "LENG1", "ZNF576", "MMEL1", "NKAIN1", "MRPL24", "NEIL1", "FN3KRP", "FASTKD1", "STEAP4", "ZNF385D", "LRRC31", "ARMC5", "HHIPL2", "EFCC1", "JHY", "ZNF671", "PRR5L", "MORN1", "BTNL8", "SYNDIG1", "DSN1", "TTC26", "GREB1L", "NRSN2", "MZT2B", "ZNF614", "FUZ", "BPIFB2", "RASSF7", "ARHGAP39", "LIMD2", "APOL5", "CDK2AP1", "TRMT1L", "RNF170", "SLCO5A1", "SLC10A3", "PUS7L", "CCM2", "SELENOO", "FAM172A", "FAM186B", "QRICH2", "C4orf17", "TMEM117", "PARD6B", "PLA2G12B", "SPINK7", "FAM126A", "ACCS", "ADGRG7", "RANBP3", "NKD1", "CCDC65", "DENR", "KMO", "RDH16", "B4GALT2", "URI1", "ADAM23", "ASAP2", "ST3GAL5", "TAX1BP1", "EIF2S2", "HERC1", "LHX4", "GLB1L2", "SYT8", "VWA5B2", "PKD2L1", "KNSTRN", "ERI1", "ADCK2", "CD2", "BTF3L4", "SYNGR2", "DYRK1B", "IDI2", "C11orf52", "GGTLC1", "MTSS2", "SPECC1", "BCL7B", "ZBTB47", "TMEM44", "MYL10", "CADPS2", "CD27", "TGFBRAP1", "ACTR8", "ATP5IF1", "STOML1", "SYTL5", "CRIPT", "ZNF264", "NCR2", "ITM2B", "PICK1", "SH3BP5", "MED20", "PAGE4", "CD37", "STXBP5L", "GOSR1", "BAG2", "ENTPD2", "SPAG7", "CYTIP", "CD59", "WSCD2", "EMC2", "CD70", "PPP6R2", "SEC14L5", "ARHGAP32", "ACAP1", "KIAA0513", "RAPGEF5", "TBC1D5", "RIMS3", "SNX17", "MRPL19", "RB1CC1", "RGP1", "GINS1", "URB1", "TBC1D4", "NUAK1", "WDR1", "LINC01587", "AC000061.1", "AC016026.1", "AL021546.1", "AC009533.1", "AC046185.1", "TTTY1", "AC091132.1", "TTTY6", "AL355922.1", "AC008764.1", "CCDC39", "LINC00525", "TTTY7B", "AL136372.1", "AL136982.1", "TFPI2-DT", "LOC105375414", "LOC105375416", "LOC105375419", "LOC105375591", "LOC105375592", "LOC105375601", "LOC105375603", "LOC105375610", "LOC105375639", "LOC105375642", "LOC105375670", "LOC105375673", "LOC105375709", "LOC105375812", "LINC02847", "LOC105375841", "LOC105375855", "LOC105375874", 
"LOC105375876", "LOC105375897", "LOC105375903", "LOC105375911", "LOC105375916", "LOC105375931", "LOC105375937", "LOC105375951", "LOC105375956", "LOC105375988", "LOC105375990", "LOC105376101", "LOC105376127", "LOC105376136", "LOC105376146", "LOC105376180", "LOC105376183", "LOC105376204", "LOC105376208", "LOC105376240", "LOC105376311", "LOC105376317", "LINC02846", "LOC105376328", "LOC106783498", "LOC106783508", "LOC105376372", "YUHAL", "LOC105376391", "LOC105376400", "LOC105376422", "LOC105376455", "LINC02635", "LOC105376496", "LOC105376548", "LOC105376567", "LOC105376583", "LINC02742", "LOC105376615", "LOC105376641", "LOC105376642", "LOC105376646", "LOC105376712", "LOC105376714", "LOC105376754", "LOC105376791", "LOC105376818", "LOC105376839", "LOC105376859", "LOC105376860", "LOC105376888", "LINC02084", "LINC02033", "LOC105377083", "LOC105377088", "LOC105377123", "LOC105377171", "LOC105377186", "LOC105379321", "LOC105377219", "LOC105377237", "LOC105377259", "LOC105379351", "LOC105379352", "LINC02562", "LOC105377326", "LOC105379380", "LOC105379392", "LOC105379544", "LOC105379545", "LOC105379556", "LOC105379597", "LOC105379600", "LOC105379634", "LOC105379783", "LOC105379807", "LOC105379882", "LOC105943587", "EHMT2-AS1", "C4A-AS1", "FAR1-IT1", "YAM1", "ORI6", "LOC107228318", "RFPL3S", "LOC107522030", "LOC107546780", "LOC107548112", "LOC107980445", "LOC107983949", "LOC107983959", "LOC107983989", "LINC02646", "LOC107984011", "LOC107984160", "LOC107984179", "LOC107984196", "LOC107984201", "LOC107984272", "LOC107984364", "LOC107984372", "LOC107984399", "LINC02695", "LOC107984432", "LINC02457", "LOC107984442", "LOC107984468", "LOC107984485", "LOC107984495", "LOC107984678", "LOC107984682", "LOC108491830", "LOC108510655", "DUPXQ25", "LOC108783654", "GSC-DT", "LOC108903149", "LOC109245078", "SNORD70B", "SNORD28B", "LOC109621227", "SNORD170", "MIR4422HG", "LOC110120569", "LOC110120579", "LOC110120592", "LOC110120609", "LOC110120641", "LOC110120669", "LOC110120701", 
"LOC110120735", "LOC110120764", "LOC110120851", "LOC110120856", "LOC110120930", "LOC110120933", "LOC110120946", "LOC110120951", "LOC110120957", "LOC110120997", "LOC110121027", "LOC110121106", "LOC110121178", "LOC110121182", "LOC110121205", "LOC105377343", "LINC02503", "LINC02173", "ANK2-AS1", "LOC105377376", "LOC105377411", "LOC105377474", "LOC105377504", "LOC105377513", "LOC105377530", "LOC105377537", "LOC105377544", "LOC105377576", "LOC105377599", "LINC02374", "LOC105377684", "LOC107984747", "LOC107984750", "LOC105377865", "LOC107984830", "LOC107984840", "LOC107984870", "LOC107984880", "MMP2-AS1", "LOC105377981", "LOC107984915", "LOC105377999", "LOC107984929", "LOC107984950", "LOC105378100", "LOC105378106", "LOC107984999", "LOC107985004", "LOC107985005", "LOC105378199", "LOC105378228", "LINC02633", "LOC107985059", "LOC107985071", "LOC107985090", "LOC107986230", "LOC105378350", "LOC107985156", "LOC107985158", "LOC107986265", "LOC107986284", "LOC107986306", "LOC107986315", "LOC107985215", "LOC107986317", "LOC107985232", "LINC02602", "LOC105378476", "LOC105378486", "LOC105378493", "LOC107985295", "LOC105378519", "LOC107985309", "LOC107986369", "LOC107985318", "LOC107986392", "LOC107986394", "LOC107985329", "LOC107986398", "LOC107985337", "LOC105378614", "LOC107985352", "LOC105378615", "LOC107985354", "LOC105378618", "LOC107985361", "LOC105378625", "LOC107985396", "LINCADL", "LOC107985418", "LOC107986521", "PEF1-AS1", "LOC107985499", "LOC107985504", "LRRC7-AS1", "LOC105378821", "LOC107985534", "LOC107985537", "SGSM3-AS1", "LOC107986557", "LOC107986567", "LOC107986569", "LOC107986571", "LOC107985568", "LOC105378890", "LOC105379150", "LOC107986846", "LOC105379170", "LOC105379173", "LOC105379176", "CTB-49A3.2", "LOC107985826", "LOC107985840", "LOC105379264", "LOC107985842", "LOC105379275", "LOC107986890", "LOC107985868", "LOC107985871", "LOC107986904", "LOC107986910", "LOC107986931", "LOC107986934", "LOC107985896", "LOC107986949", "LOC107986981", "LOC107986993", 
"LOC107987011", "LOC107985984", "LOC107985997", "LOC107987022", "LOC107987032", "LOC107987043", "LOC107987073", "LOC107986042", "LOC107986064", "LOC107987109", "LOC107987115", "LOC107987117", "LOC107987120", "LOC107986104", "LOC107986124", "LINC02034", "LOC107987186", "LOC107986138", "LOC107987218", "LOC107986160", "LOC107987223", "LOC107986169", "LOC107987261", "LOC107987350", "LOC107987393", "LOC107987398", "LOC107987415", "LOC107987467", "LOC107987484", "LOC107988048", "LOC107988049", "LOC108004546", "LOC108178984", "LOC108178991", "LOC108228198", "LOC108251796", "LOC108251803", "LOC108281169", "LOC108281187", "SNORD144", "SNORA94", "SNORA103", "MIR4432HG", "LOC106677019", "LOC106699567", "LOC106707172", "LOC106736477", "XIAP-AS1", "RHOA-IT1", "GK-IT1", "LOC110121242", "LOC110121244", "LOC110121248", "LOC110121249"]

symbol_curies = [('SYMBOL:' + item) for item in symbols]

mondos = ["MONDO:0003011", "MONDO:0003012", "MONDO:0003013", "MONDO:0003015", "MONDO:0003016", "MONDO:0003017", "MONDO:0003018", "MONDO:0003020", "MONDO:0003021", "MONDO:0003022", "MONDO:0003023", "MONDO:0003024", "MONDO:0003025", "MONDO:0003026", "MONDO:0003027", "MONDO:0003028", "MONDO:0003029", "MONDO:0003030", "MONDO:0003031", "MONDO:0003032", "MONDO:0003033", "MONDO:0003034", "MONDO:0003035", "MONDO:0003038", "MONDO:0003041", "MONDO:0003042", "MONDO:0003043", "MONDO:0003044", "MONDO:0003045", "MONDO:0003047", "MONDO:0003048", "MONDO:0003049", "MONDO:0003051", "MONDO:0003052", "MONDO:0003053", "MONDO:0003054", "MONDO:0003055", "MONDO:0003056", "MONDO:0003057", "MONDO:0003058", "MONDO:0003059", "MONDO:0003060", "MONDO:0003062", "MONDO:0003063", "MONDO:0003064", "MONDO:0003065", "MONDO:0003066", "MONDO:0003068", "MONDO:0003069", "MONDO:0003070", "MONDO:0003071", "MONDO:0003072", "MONDO:0003074", "MONDO:0003075", "MONDO:0003076", "MONDO:0003077", "MONDO:0003078", "MONDO:0003079", "MONDO:0003080", "MONDO:0003082", "MONDO:0003083", "MONDO:0003084", "MONDO:0003086", "MONDO:0003087",
"MONDO:0003088", "MONDO:0003089", "MONDO:0003090", "MONDO:0003091", "MONDO:0003092", "MONDO:0003093", "MONDO:0003094", "MONDO:0003095", "MONDO:0003096", "MONDO:0003097", "MONDO:0003098", "MONDO:0003099", "MONDO:0003100", "MONDO:0003101", "MONDO:0003102", "MONDO:0003103", "MONDO:0003104", "MONDO:0003106", "MONDO:0003107", "MONDO:0003108", "MONDO:0003109", "MONDO:0003110", "MONDO:0003112", "MONDO:0003113", "MONDO:0003114", "MONDO:0003115", "MONDO:0003116", "MONDO:0003118", "MONDO:0003119", "MONDO:0003120", "MONDO:0003121", "MONDO:0003123", "MONDO:0003124", "MONDO:0003125", "MONDO:0003126", "MONDO:0003127", "MONDO:0003128", "MONDO:0003129", "MONDO:0003130", "MONDO:0003131", "MONDO:0003132", "MONDO:0003133", "MONDO:0003134", "MONDO:0003135", "MONDO:0003136", "MONDO:0003137", "MONDO:0003138", "MONDO:0003139", "MONDO:0003140", "MONDO:0003141", "MONDO:0003142", "MONDO:0003144", "MONDO:0003146", "MONDO:0003147", "MONDO:0003148", "MONDO:0003149", "MONDO:0003151", "MONDO:0003152", "MONDO:0003153", "MONDO:0003154", "MONDO:0003156", "MONDO:0003157", "MONDO:0003158", "MONDO:0003160", "MONDO:0003161", "MONDO:0003162", "MONDO:0003163", "MONDO:0003164", "MONDO:0003165", "MONDO:0003166", "MONDO:0003167", "MONDO:0003169", "MONDO:0003170", "MONDO:0003171", "MONDO:0003172", "MONDO:0003173", "MONDO:0003174", "MONDO:0003175", "MONDO:0003176", "MONDO:0003177", "MONDO:0003178", "MONDO:0003179", "MONDO:0003180", "MONDO:0003181", "MONDO:0003182", "MONDO:0003183", "MONDO:0003184", "MONDO:0003185", "MONDO:0003186", "MONDO:0003187", "MONDO:0003188", "MONDO:0003189", "MONDO:0003190", "MONDO:0003191", "MONDO:0003192", "MONDO:0003193", "MONDO:0003194", "MONDO:0003195", "MONDO:0003196", "MONDO:0003197", "MONDO:0003198", "MONDO:0003200", "MONDO:0003201", "MONDO:0003202", "MONDO:0003203", "MONDO:0003204", "MONDO:0003205", "MONDO:0003206", "MONDO:0003207", "MONDO:0003208", "MONDO:0003209", "MONDO:0003210", "MONDO:0003211", "MONDO:0003212", "MONDO:0003213", "MONDO:0003214", "MONDO:0003215", 
"MONDO:0003216", "MONDO:0003217", "MONDO:0003218", "MONDO:0003220", "MONDO:0003221", "MONDO:0003222", "MONDO:0003223", "MONDO:0003224", "MONDO:0003226", "MONDO:0003227", "MONDO:0003228", "MONDO:0003229", "MONDO:0003230", "MONDO:0003231", "MONDO:0003234", "MONDO:0003236", "MONDO:0003237", "MONDO:0003238", "MONDO:0003239", "MONDO:0003241", "MONDO:0003242", "MONDO:0003244", "MONDO:0003245", "MONDO:0003246", "MONDO:0003247", "MONDO:0003248", "MONDO:0003249", "MONDO:0003250", "MONDO:0003251", "MONDO:0003252", "MONDO:0003253", "MONDO:0003254", "MONDO:0003255", "MONDO:0003256", "MONDO:0003257", "MONDO:0003258", "MONDO:0003259", "MONDO:0003260", "MONDO:0003261", "MONDO:0003262", "MONDO:0003263", "MONDO:0003264", "MONDO:0003266", "MONDO:0003267", "MONDO:0003268", "MONDO:0003269", "MONDO:0003270", "MONDO:0003271", "MONDO:0003272", "MONDO:0003273", "MONDO:0003274", "MONDO:0003275", "MONDO:0003276", "MONDO:0003278", "MONDO:0003279", "MONDO:0003280", "MONDO:0003281", "MONDO:0003283", "MONDO:0003284", "MONDO:0003285", "MONDO:0003286", "MONDO:0003287", "MONDO:0003288", "MONDO:0003289", "MONDO:0003290", "MONDO:0003292", "MONDO:0003293", "MONDO:0003294", "MONDO:0003295", "MONDO:0003296", "MONDO:0003297", "MONDO:0003298", "MONDO:0003299", "MONDO:0003300", "MONDO:0003301", "MONDO:0003302", "MONDO:0003303", "MONDO:0003305", "MONDO:0003306", "MONDO:0003307", "MONDO:0003308", "MONDO:0003309", "MONDO:0003310", "MONDO:0003311", "MONDO:0003312", "MONDO:0003313", "MONDO:0003314", "MONDO:0003316", "MONDO:0003317", "MONDO:0003318", "MONDO:0003319", "MONDO:0003320", "MONDO:0003321", "MONDO:0003322", "MONDO:0003323", "MONDO:0003324", "MONDO:0003325", "MONDO:0003326", "MONDO:0003327", "MONDO:0003328", "MONDO:0003330", "MONDO:0003331", "MONDO:0003332", "MONDO:0003333", "MONDO:0003335", "MONDO:0003336", "MONDO:0003337", "MONDO:0003338", "MONDO:0003339", "MONDO:0003340", "MONDO:0003341", "MONDO:0003342", "MONDO:0003344", "MONDO:0003347", "MONDO:0003348", "MONDO:0003349", "MONDO:0003350", 
"MONDO:0003351", "MONDO:0003352", "MONDO:0003353", "MONDO:0003354", "MONDO:0003355", "MONDO:0003356", "MONDO:0003357", "MONDO:0003358", "MONDO:0003359", "MONDO:0003360", "MONDO:0003361", "MONDO:0003362", "MONDO:0003363", "MONDO:0003364", "MONDO:0003365", "MONDO:0003366", "MONDO:0003367", "MONDO:0003368", "MONDO:0003369", "MONDO:0003370", "MONDO:0003371", "MONDO:0003372", "MONDO:0003373", "MONDO:0003374", "MONDO:0003375", "MONDO:0003376", "MONDO:0003377", "MONDO:0003378", "MONDO:0003379", "MONDO:0003380", "MONDO:0003383", "MONDO:0003384", "MONDO:0003385", "MONDO:0003386", "MONDO:0003387", "MONDO:0003388", "MONDO:0003389", "MONDO:0003390", "MONDO:0003391", "MONDO:0003392", "MONDO:0003393", "MONDO:0003394", "MONDO:0003395", "MONDO:0003399", "MONDO:0003400", "MONDO:0003401", "MONDO:0003402", "MONDO:0003403", "MONDO:0003404", "MONDO:0003405", "MONDO:0003407", "MONDO:0003408", "MONDO:0003410", "MONDO:0003411", "MONDO:0003412", "MONDO:0003413", "MONDO:0003414", "MONDO:0003415", "MONDO:0003416", "MONDO:0003417", "MONDO:0003418", "MONDO:0003419", "MONDO:0003420", "MONDO:0003421", "MONDO:0003423", "MONDO:0003426", "MONDO:0003427", "MONDO:0003428", "MONDO:0003429", "MONDO:0003430", "MONDO:0003431", "MONDO:0003432", "MONDO:0003433", "MONDO:0003434", "MONDO:0003435", "MONDO:0003437", "MONDO:0003438", "MONDO:0003439", "MONDO:0003440", "MONDO:0003442", "MONDO:0003443", "MONDO:0003444", "MONDO:0003445", "MONDO:0003446", "MONDO:0003447", "MONDO:0003448", "MONDO:0003449", "MONDO:0003450", "MONDO:0003451", "MONDO:0003453", "MONDO:0003454", "MONDO:0003455", "MONDO:0003456", "MONDO:0003457", "MONDO:0003458", "MONDO:0003459", "MONDO:0003460", "MONDO:0003461", "MONDO:0003462", "MONDO:0003463", "MONDO:0003464", "MONDO:0003465", "MONDO:0003466", "MONDO:0003467", "MONDO:0003468", "MONDO:0003469", "MONDO:0003470", "MONDO:0003471", "MONDO:0003473", "MONDO:0003474", "MONDO:0003475", "MONDO:0003476", "MONDO:0003477", "MONDO:0003478", "MONDO:0003479", "MONDO:0003480", "MONDO:0003481", 
"MONDO:0003482", "MONDO:0003483", "MONDO:0003484", "MONDO:0003485", "MONDO:0003486", "MONDO:0003487", "MONDO:0003488", "MONDO:0003489", "MONDO:0003490", "MONDO:0003491", "MONDO:0003492", "MONDO:0003493", "MONDO:0003494", "MONDO:0003495", "MONDO:0003496", "MONDO:0003497", "MONDO:0003498", "MONDO:0003499", "MONDO:0003500", "MONDO:0003501", "MONDO:0003502", "MONDO:0003503", "MONDO:0003504", "MONDO:0003505", "MONDO:0003506", "MONDO:0003507", "MONDO:0003508", "MONDO:0003509", "MONDO:0003511", "MONDO:0003512", "MONDO:0003513", "MONDO:0003514", "MONDO:0003515", "MONDO:0003516", "MONDO:0003517", "MONDO:0003518", "MONDO:0003519", "MONDO:0003520", "MONDO:0003521", "MONDO:0003522", "MONDO:0003523", "MONDO:0003524", "MONDO:0003526", "MONDO:0003527", "MONDO:0003530", "MONDO:0003531", "MONDO:0003532", "MONDO:0003533", "MONDO:0003534", "MONDO:0003535", "MONDO:0003536", "MONDO:0003537", "MONDO:0003538", "MONDO:0003539", "MONDO:0003540", "MONDO:0003544", "MONDO:0003545", "MONDO:0003547", "MONDO:0003548", "MONDO:0003549", "MONDO:0003550", "MONDO:0003551", "MONDO:0003552", "MONDO:0003553", "MONDO:0003554", "MONDO:0003555", "MONDO:0003556", "MONDO:0003557", "MONDO:0003558", "MONDO:0003559", "MONDO:0003560", "MONDO:0003561", "MONDO:0003562", "MONDO:0003563", "MONDO:0003564", "MONDO:0003565", "MONDO:0003566", "MONDO:0003567", "MONDO:0003568", "MONDO:0003570", "MONDO:0003571", "MONDO:0003572", "MONDO:0003573", "MONDO:0003574", "MONDO:0003575", "MONDO:0003576", "MONDO:0003577", "MONDO:0003578", "MONDO:0003579", "MONDO:0003580", "MONDO:0003581", "MONDO:0003583", "MONDO:0003584", "MONDO:0003585", "MONDO:0003586", "MONDO:0003587", "MONDO:0003588", "MONDO:0003589", "MONDO:0003590", "MONDO:0003591", "MONDO:0003592", "MONDO:0003593", "MONDO:0003594", "MONDO:0003595", "MONDO:0003596", "MONDO:0003597", "MONDO:0003599", "MONDO:0003600", "MONDO:0003601", "MONDO:0003602", "MONDO:0003603", "MONDO:0003604", "MONDO:0003605", "MONDO:0003606", "MONDO:0003607", "MONDO:0003609", "MONDO:0003610", 
"MONDO:0003611", "MONDO:0003612", "MONDO:0003613", "MONDO:0003614", "MONDO:0003616", "MONDO:0003617", "MONDO:0003618", "MONDO:0003621", "MONDO:0003622", "MONDO:0003623", "MONDO:0003624", "MONDO:0003625", "MONDO:0003626", "MONDO:0003627", "MONDO:0003628", "MONDO:0003629", "MONDO:0003630", "MONDO:0003631", "MONDO:0003632", "MONDO:0003633", "MONDO:0003635", "MONDO:0003636", "MONDO:0003637", "MONDO:0003638", "MONDO:0003639", "MONDO:0003640", "MONDO:0003641", "MONDO:0003642", "MONDO:0003643", "MONDO:0003644", "MONDO:0003645", "MONDO:0003647", "MONDO:0003648", "MONDO:0003649", "MONDO:0003650", "MONDO:0003651", "MONDO:0003652", "MONDO:0003653", "MONDO:0003654", "MONDO:0003655", "MONDO:0003657", "MONDO:0003658", "MONDO:0003661", "MONDO:0003662", "MONDO:0003663", "MONDO:0003665", "MONDO:0003666", "MONDO:0003667", "MONDO:0003668", "MONDO:0003669", "MONDO:0003670", "MONDO:0003671", "MONDO:0003672", "MONDO:0003673", "MONDO:0003674", "MONDO:0003675", "MONDO:0003676", "MONDO:0003677", "MONDO:0003678", "MONDO:0003679", "MONDO:0003680", "MONDO:0003681", "MONDO:0003682", "MONDO:0003683", "MONDO:0003684", "MONDO:0003685", "MONDO:0003686", "MONDO:0003687", "MONDO:0003688", "MONDO:0003690", "MONDO:0003691", "MONDO:0003692", "MONDO:0003693", "MONDO:0003694", "MONDO:0003695", "MONDO:0003696", "MONDO:0003697", "MONDO:0003698", "MONDO:0003700", "MONDO:0003701", "MONDO:0003702", "MONDO:0003703", "MONDO:0003704", "MONDO:0003705", "MONDO:0003706", "MONDO:0003707", "MONDO:0003708", "MONDO:0003711", "MONDO:0003712", "MONDO:0003713", "MONDO:0003714", "MONDO:0003715", "MONDO:0003716", "MONDO:0003717", "MONDO:0003718", "MONDO:0003719", "MONDO:0003720", "MONDO:0003721", "MONDO:0003722", "MONDO:0003723", "MONDO:0003724", "MONDO:0003725", "MONDO:0003726", "MONDO:0003727", "MONDO:0003728", "MONDO:0003729", "MONDO:0003730", "MONDO:0003731", "MONDO:0003732", "MONDO:0003733", "MONDO:0003734", "MONDO:0003735", "MONDO:0003736", "MONDO:0003737", "MONDO:0003738", "MONDO:0003739", "MONDO:0003740", 
"MONDO:0003741", "MONDO:0003742", "MONDO:0003743", "MONDO:0003744", "MONDO:0003745", "MONDO:0003746", "MONDO:0003747", "MONDO:0003748", "MONDO:0003750", "MONDO:0003751", "MONDO:0003752", "MONDO:0003753", "MONDO:0003755", "MONDO:0003756", "MONDO:0003758", "MONDO:0003759", "MONDO:0003760", "MONDO:0003761", "MONDO:0003762", "MONDO:0003763", "MONDO:0003764", "MONDO:0003765", "MONDO:0003766", "MONDO:0003767", "MONDO:0003768", "MONDO:0003769", "MONDO:0003770", "MONDO:0003771", "MONDO:0003772", "MONDO:0003773", "MONDO:0003774", "MONDO:0003775", "MONDO:0003776", "MONDO:0003777", "MONDO:0003779", "MONDO:0003782", "MONDO:0003784", "MONDO:0003786", "MONDO:0003787", "MONDO:0003788", "MONDO:0003789", "MONDO:0003790", "MONDO:0003791", "MONDO:0003792", "MONDO:0003793", "MONDO:0003794", "MONDO:0003796", "MONDO:0003797", "MONDO:0003798", "MONDO:0003800", "MONDO:0003801", "MONDO:0003802", "MONDO:0003803", "MONDO:0003804", "MONDO:0003805", "MONDO:0003806", "MONDO:0003807", "MONDO:0003808", "MONDO:0003809", "MONDO:0003810", "MONDO:0003811", "MONDO:0003812", "MONDO:0003813", "MONDO:0003814", "MONDO:0003815", "MONDO:0003816", "MONDO:0003817", "MONDO:0003818", "MONDO:0003819", "MONDO:0003820", "MONDO:0003821", "MONDO:0003822", "MONDO:0003823", "MONDO:0003824", "MONDO:0003826", "MONDO:0003827", "MONDO:0003828", "MONDO:0003829", "MONDO:0003830", "MONDO:0003831", "MONDO:0003832", "MONDO:0003833", "MONDO:0003836", "MONDO:0003837", "MONDO:0003838", "MONDO:0003839", "MONDO:0003840", "MONDO:0003841", "MONDO:0003842", "MONDO:0003843", "MONDO:0003844", "MONDO:0003845", "MONDO:0003846", "MONDO:0003848", "MONDO:0003849", "MONDO:0003850", "MONDO:0003851", "MONDO:0003852", "MONDO:0003853", "MONDO:0003854", "MONDO:0003855", "MONDO:0003856", "MONDO:0003857", "MONDO:0003858", "MONDO:0003859", "MONDO:0003860", "MONDO:0003861", "MONDO:0003862", "MONDO:0003863", "MONDO:0003866", "MONDO:0003867", "MONDO:0003868", "MONDO:0003870", "MONDO:0003871", "MONDO:0003872", "MONDO:0003873", "MONDO:0003874", 
"MONDO:0003875", "MONDO:0003876", "MONDO:0003877", "MONDO:0003878", "MONDO:0003879", "MONDO:0003880", "MONDO:0003881", "MONDO:0003882", "MONDO:0003883", "MONDO:0003884", "MONDO:0003885", "MONDO:0003886", "MONDO:0003887", "MONDO:0003888", "MONDO:0003889", "MONDO:0003890", "MONDO:0003891", "MONDO:0003892", "MONDO:0003893", "MONDO:0003894", "MONDO:0003895", "MONDO:0003896", "MONDO:0003897", "MONDO:0003898", "MONDO:0003899", "MONDO:0003902", "MONDO:0003903", "MONDO:0003904", "MONDO:0003905", "MONDO:0003906", "MONDO:0003907", "MONDO:0003908", "MONDO:0003909", "MONDO:0003910", "MONDO:0003911", "MONDO:0003912", "MONDO:0003913", "MONDO:0003914", "MONDO:0003915", "MONDO:0003917", "MONDO:0003918", "MONDO:0003919", "MONDO:0003920", "MONDO:0003921", "MONDO:0003922", "MONDO:0003923", "MONDO:0003925", "MONDO:0003926", "MONDO:0003927", "MONDO:0003928", "MONDO:0003929", "MONDO:0003930", "MONDO:0003931", "MONDO:0003933", "MONDO:0003934", "MONDO:0003935", "MONDO:0003936", "MONDO:0003938", "MONDO:0003939", "MONDO:0003940", "MONDO:0003941", "MONDO:0003942", "MONDO:0003943", "MONDO:0003944", "MONDO:0003945", "MONDO:0003946", "MONDO:0003948", "MONDO:0003949", "MONDO:0003950", "MONDO:0003951", "MONDO:0003952", "MONDO:0003953", "MONDO:0003954", "MONDO:0003955", "MONDO:0003956", "MONDO:0003957", "MONDO:0003958", "MONDO:0003959", "MONDO:0003960", "MONDO:0003961", "MONDO:0003962", "MONDO:0003963", "MONDO:0003966", "MONDO:0003967", "MONDO:0003968", "MONDO:0003970", "MONDO:0003971", "MONDO:0003972", "MONDO:0003973", "MONDO:0003974", "MONDO:0003975", "MONDO:0003976", "MONDO:0003977", "MONDO:0003978", "MONDO:0003979", "MONDO:0003980", "MONDO:0003981", "MONDO:0003983", "MONDO:0003984", "MONDO:0003985", "MONDO:0003986", "MONDO:0003987", "MONDO:0003988", "MONDO:0003989", "MONDO:0003990", "MONDO:0003991", "MONDO:0003992", "MONDO:0003993", "MONDO:0003994", "MONDO:0003995", "MONDO:0003997", "MONDO:0003998", "MONDO:0003999", "MONDO:0004000", "MONDO:0004002", "MONDO:0004003", "MONDO:0004005", 
"MONDO:0004006", "MONDO:0004007", "MONDO:0004008", "MONDO:0004009", "MONDO:0004010", "MONDO:0004011", "MONDO:0004012", "MONDO:0004013", "MONDO:0004014", "MONDO:0004015", "MONDO:0004016", "MONDO:0004017", "MONDO:0004018", "MONDO:0004019", "MONDO:0004020", "MONDO:0004021", "MONDO:0004022", "MONDO:0004023", "MONDO:0004024", "MONDO:0004025", "MONDO:0004026", "MONDO:0004027", "MONDO:0004028", "MONDO:0004029", "MONDO:0004030", "MONDO:0004031", "MONDO:0004032", "MONDO:0004033", "MONDO:0004034", "MONDO:0004035", "MONDO:0004036", "MONDO:0004037", "MONDO:0004039", "MONDO:0004040", "MONDO:0004041", "MONDO:0004042", "MONDO:0004043", "MONDO:0004044", "MONDO:0004045", "MONDO:0004046", "MONDO:0004047", "MONDO:0004048", "MONDO:0004050", "MONDO:0004051", "MONDO:0004052", "MONDO:0004053", "MONDO:0004054", "MONDO:0004055", "MONDO:0004056", "MONDO:0004057", "MONDO:0004058", "MONDO:0004059", "MONDO:0004060", "MONDO:0004061", "MONDO:0004062", "MONDO:0004063", "MONDO:0004064", "MONDO:0004065", "MONDO:0004066", "MONDO:0004067", "MONDO:0004068", "MONDO:0004070", "MONDO:0004071", "MONDO:0004072", "MONDO:0004073", "MONDO:0004074", "MONDO:0004075", "MONDO:0004076", "MONDO:0004077", "MONDO:0004078", "MONDO:0004079"] mondo_curies = mondos doids = ["DOID:4472", "DOID:4473", "DOID:4490", "DOID:4504", "DOID:4505", "DOID:4510", "DOID:4511", "DOID:4512", "DOID:4513", "DOID:4514", "DOID:4515", "DOID:4517", "DOID:4520", "DOID:4521", "DOID:4522", "DOID:4524", "DOID:4525", "DOID:4527", "DOID:4540", "DOID:4546", "DOID:4547", "DOID:4548", "DOID:4549", "DOID:4550", "DOID:4553", "DOID:4555", "DOID:4560", "DOID:4561", "DOID:4584", "DOID:4587", "DOID:4588", "DOID:4591", "DOID:4593", "DOID:4594", "DOID:4606", "DOID:4607", "DOID:4630", "DOID:4633", "DOID:4636", "DOID:4638", "DOID:4639", "DOID:4640", "DOID:4645", "DOID:4650", "DOID:4651", "DOID:4653", "DOID:4656", "DOID:4664", "DOID:467", "DOID:4675", "DOID:4678", "DOID:4679", "DOID:468", "DOID:4681", "DOID:4682", "DOID:4683", "DOID:4685", "DOID:4686", 
"DOID:4688", "DOID:469", "DOID:4690", "DOID:4691", "DOID:4693", "DOID:4698", "DOID:4699", "DOID:4706", "DOID:4707", "DOID:4708", "DOID:471", "DOID:4716", "DOID:4717", "DOID:472", "DOID:4739", "DOID:4743", "DOID:4749", "DOID:4756", "DOID:4757", "DOID:476", "DOID:4766", "DOID:4767", "DOID:4768", "DOID:4772", "DOID:4777", "DOID:4778", "DOID:4779", "DOID:4780", "DOID:4781", "DOID:4782", "DOID:4783", "DOID:4784", "DOID:4787", "DOID:4788", "DOID:4790", "DOID:4796", "DOID:4812", "DOID:4813", "DOID:482", "DOID:4837", "DOID:4838", "DOID:4846", "DOID:4847", "DOID:4848", "DOID:4855", "DOID:4856", "DOID:4858", "DOID:486", "DOID:4860", "DOID:4863", "DOID:4866", "DOID:4868", "DOID:4871", "DOID:4872", "DOID:4873", "DOID:4876", "DOID:4877", "DOID:4878", "DOID:4879", "DOID:4892", "DOID:4893", "DOID:4894", "DOID:4895", "DOID:4896", "DOID:490", "DOID:4901", "DOID:4902", "DOID:4903", "DOID:4906", "DOID:4910", "DOID:4915", "DOID:4917", "DOID:4918", "DOID:492", "DOID:4922", "DOID:4923", "DOID:4928", "DOID:4930", "DOID:4931", "DOID:4933", "DOID:4934", "DOID:4938", "DOID:4943", "DOID:4948", "DOID:4955", "DOID:4957", "DOID:4970", "DOID:4986", "DOID:4991", "DOID:4993", "DOID:4994", "DOID:4995", "DOID:501", "DOID:502", "DOID:5022", "DOID:5026", "DOID:5031", "DOID:5032", "DOID:5039", "DOID:5040", "DOID:5042", "DOID:5043", "DOID:5044", "DOID:5046", "DOID:5047", "DOID:5048", "DOID:505", "DOID:5056", "DOID:5057", "DOID:5058", "DOID:5059", "DOID:5063", "DOID:5076", "DOID:5083", "DOID:5088", "DOID:5090", "DOID:5093", "DOID:5099", "DOID:5100", "DOID:5102", "DOID:5104", "DOID:5112", "DOID:5118", "DOID:512", "DOID:5123", "DOID:5124", "DOID:5125", "DOID:5126", "DOID:5127", "DOID:5128", "DOID:5129", "DOID:5134", "DOID:5136", "DOID:5137", "DOID:5138", "DOID:5139", "DOID:5140", "DOID:5142", "DOID:5143", "DOID:5146", "DOID:5147", "DOID:5149", "DOID:5150", "DOID:5152", "DOID:5153", "DOID:5155", "DOID:5157", "DOID:5161", "DOID:5166", "DOID:5169", "DOID:5170", "DOID:5171", "DOID:5178", "DOID:5179", 
"DOID:518", "DOID:5182", "DOID:5183", "DOID:5189", "DOID:5193", "DOID:5194", "DOID:5195", "DOID:5196", "DOID:5200", "DOID:5207", "DOID:5208", "DOID:5209", "DOID:5221", "DOID:5222", "DOID:5224", "DOID:5233", "DOID:5236", "DOID:5238", "DOID:5251", "DOID:5253", "DOID:5254", "DOID:5258", "DOID:5259", "DOID:5260", "DOID:5261", "DOID:5262", "DOID:5263", "DOID:5264", "DOID:5265", "DOID:5267", "DOID:5268", "DOID:5271", "DOID:5272", "DOID:5273", "DOID:5274", "DOID:5275", "DOID:5276", "DOID:528", "DOID:5280", "DOID:5282", "DOID:5283", "DOID:5284", "DOID:5285", "DOID:5286", "DOID:5287", "DOID:5288", "DOID:5292", "DOID:5293", "DOID:5296", "DOID:5297", "DOID:5301", "DOID:5302", "DOID:5306", "DOID:5307", "DOID:5308", "DOID:5309", "DOID:5310", "DOID:5313", "DOID:5324", "DOID:533", "DOID:5330", "DOID:5331", "DOID:5341", "DOID:5342", "DOID:5343", "DOID:5344", "DOID:5345", "DOID:5348", "DOID:5349", "DOID:5351", "DOID:5368", "DOID:5370", "DOID:5373", "DOID:5375", "DOID:5376", "DOID:538", "DOID:5382", "DOID:5384", "DOID:5385", "DOID:5387", "DOID:5390", "DOID:5391", "DOID:5393", "DOID:5395", "DOID:5396", "DOID:5398", "DOID:540", "DOID:5401", "DOID:5402", "DOID:5403", "DOID:5414", "DOID:5421", "DOID:5427", "DOID:5432", "DOID:5433", "DOID:5437", "DOID:5438", "DOID:5439", "DOID:5443", "DOID:5444", "DOID:5446", "DOID:5465", "DOID:5467", "DOID:5468", "DOID:5475", "DOID:5476", "DOID:5477", "DOID:5478", "DOID:5479", "DOID:5480", "DOID:5482", "DOID:5484", "DOID:5487", "DOID:5488", "DOID:5492", "DOID:5494", "DOID:5500", "DOID:5501", "DOID:5503", "DOID:5504", "DOID:5505", "DOID:5507", "DOID:5508", "DOID:5509", "DOID:551", "DOID:5510", "DOID:5511", "DOID:5513", "DOID:5522", "DOID:5524", "DOID:5527", "DOID:5529", "DOID:5530", "DOID:5531", "DOID:5532", "DOID:5534", "DOID:5536", "DOID:5537", "DOID:5538", "DOID:5539", "DOID:5540", "DOID:5545", "DOID:5546", "DOID:5547", "DOID:5550", "DOID:5551", "DOID:5553", "DOID:5560", "DOID:5561", "DOID:5563", "DOID:5564", "DOID:5565", "DOID:5566", "DOID:5568", 
"DOID:5569", "DOID:5576", "DOID:5577", "DOID:5579", "DOID:5590", "DOID:5591", "DOID:5592", "DOID:5595", "DOID:5597", "DOID:5599", "DOID:5600", "DOID:5602", "DOID:5603", "DOID:5612", "DOID:5615", "DOID:5623", "DOID:5624", "DOID:5625", "DOID:5626", "DOID:5628", "DOID:5629", "DOID:5630", "DOID:5631", "DOID:5632", "DOID:5634", "DOID:5638", "DOID:5639", "DOID:5641", "DOID:5642", "DOID:5643", "DOID:565", "DOID:5655", "DOID:5658", "DOID:566", "DOID:5660", "DOID:5662", "DOID:5665", "DOID:5670", "DOID:5677", "DOID:5678", "DOID:5681", "DOID:5691", "DOID:5693", "DOID:5694", "DOID:5695", "DOID:5696", "DOID:5697", "DOID:5698", "DOID:5699", "DOID:5700", "DOID:5701", "DOID:5703", "DOID:5704", "DOID:5705", "DOID:5711", "DOID:5712", "DOID:5713", "DOID:5714", "DOID:5716", "DOID:5719", "DOID:572", "DOID:5724", "DOID:5725", "DOID:5726", "DOID:5727", "DOID:5729", "DOID:5730", "DOID:5731", "DOID:5732", "DOID:5740", "DOID:5741", "DOID:5743", "DOID:5747", "DOID:5748", "DOID:5749", "DOID:5750", "DOID:5751", "DOID:5752", "DOID:5757", "DOID:5758", "DOID:5760", "DOID:5761", "DOID:5763", "DOID:5764", "DOID:5767", "DOID:5769", "DOID:5772", "DOID:5774", "DOID:5775", "DOID:5776", "DOID:5781", "DOID:5782", "DOID:5784", "DOID:5789", "DOID:5798", "DOID:580", "DOID:5806", "DOID:5809", "DOID:5815", "DOID:5822", "DOID:5826", "DOID:5829", "DOID:5830", "DOID:5831", "DOID:5838", "DOID:5842", "DOID:5843", "DOID:5846", "DOID:5847", "DOID:5848", "DOID:5849", "DOID:5851", "DOID:5852", "DOID:5853", "DOID:5854", "DOID:5855", "DOID:5866", "DOID:5861", "DOID:5862", "DOID:5867", "DOID:5874", "DOID:5876", "DOID:5877", "DOID:5884", "DOID:5890", "DOID:5893", "DOID:5894", "DOID:5895", "DOID:5896", "DOID:5897", "DOID:5907", "DOID:5908", "DOID:5913", "DOID:5914", "DOID:5915", "DOID:5917", "DOID:5921", "DOID:5922", "DOID:5923", "DOID:5926", "DOID:5948", "DOID:5949", "DOID:5957", "DOID:5958", "DOID:5973", "DOID:5975", "DOID:5976", "DOID:5977", "DOID:5982", "DOID:5983", "DOID:5990", "DOID:5996", "DOID:5997", "DOID:5998", 
"DOID:5999", "DOID:600", "DOID:6001", "DOID:6003", "DOID:6004", "DOID:6015", "DOID:6016", "DOID:6017", "DOID:6018", "DOID:6019", "DOID:602", "DOID:6021", "DOID:6024", "DOID:6025", "DOID:603", "DOID:6032", "DOID:6033", "DOID:6034", "DOID:6037", "DOID:6041", "DOID:6043", "DOID:6048", "DOID:605", "DOID:6052", "DOID:6053", "DOID:6054", "DOID:6059", "DOID:6065", "DOID:6067", "DOID:6082", "DOID:6083", "DOID:6084", "DOID:6085", "DOID:6086", "DOID:6088", "DOID:6089", "DOID:6090", "DOID:6098", "DOID:61", "DOID:6101", "DOID:6102", "DOID:6103", "DOID:6110", "DOID:6112", "DOID:6113", "DOID:6114", "DOID:6115", "DOID:6118", "DOID:6119", "DOID:6139", "DOID:6148", "DOID:6160", "DOID:6161", "DOID:6162", "DOID:6163", "DOID:6166", "DOID:6167", "DOID:6170", "DOID:6190", "DOID:6197", "DOID:6198", "DOID:6199", "DOID:62", "DOID:620", "DOID:6201", "DOID:6203", "DOID:6208", "DOID:6209", "DOID:6210", "DOID:6211", "DOID:6212", "DOID:6214", "DOID:6227", "DOID:6228", "DOID:6229", "DOID:6230", "DOID:6231", "DOID:6232", "DOID:6239", "DOID:6244", "DOID:6249", "DOID:625", "DOID:6256", "DOID:6257", "DOID:6258", "DOID:6259", "DOID:626", "DOID:6274", "DOID:6275", "DOID:6278", "DOID:6284", "DOID:6285", "DOID:6286", "DOID:6291", "DOID:6293", "DOID:6294", "DOID:6297", "DOID:6307", "DOID:6312", "DOID:6313", "DOID:6314", "DOID:6315", "DOID:6316", "DOID:6332", "DOID:6333", "DOID:6334", "DOID:6335", "DOID:6337", "DOID:6339", "DOID:6344", "DOID:6345", "DOID:6370", "DOID:6379", "DOID:6381", "DOID:6386", "DOID:6405", "DOID:6407", "DOID:6408", "DOID:6423", "DOID:6425", "DOID:6438", "DOID:6445", "DOID:6446", "DOID:6448", "DOID:6451", "DOID:6459", "DOID:6460", "DOID:6468", "DOID:6469", "DOID:6474", "DOID:6476", "DOID:6477", "DOID:6481", "DOID:6482", "DOID:6483", "DOID:6484", "DOID:6489", "DOID:6491", "DOID:6492", "DOID:6494", "DOID:6495", "DOID:6501", "DOID:6505", "DOID:6510", "DOID:6511", "DOID:6512", "DOID:6514", "DOID:6517", "DOID:6518", "DOID:6522", "DOID:6523", "DOID:6524", "DOID:6525", "DOID:6530", 
"DOID:6547", "DOID:6548", "DOID:6553", "DOID:6554", "DOID:6559", "DOID:6562", "DOID:6564", "DOID:6566", "DOID:6567", "DOID:6569", "DOID:6571", "DOID:6575", "DOID:6579", "DOID:6581", "DOID:6585", "DOID:6587", "DOID:6594", "DOID:66", "DOID:6603", "DOID:6605", "DOID:6606", "DOID:6607", "DOID:6608", "DOID:6610", "DOID:6613", "DOID:6621", "DOID:6629", "DOID:663", "DOID:6634", "DOID:6639", "DOID:664", "DOID:6641", "DOID:6643", "DOID:6648", "DOID:6654", "DOID:6657", "DOID:6658", "DOID:6676", "DOID:6677", "DOID:6693", "DOID:6696", "DOID:6697", "DOID:6700", "DOID:6703", "DOID:6705", "DOID:6706", "DOID:6721", "DOID:6723", "DOID:6727", "DOID:6733", "DOID:6735", "DOID:6742", "DOID:6752", "DOID:6758", "DOID:6760", "DOID:6762", "DOID:6774", "DOID:6776", "DOID:6777", "DOID:6786", "DOID:6787", "DOID:6788", "DOID:6789", "DOID:6804", "DOID:6809", "DOID:6811", "DOID:6812", "DOID:6837", "DOID:6838", "DOID:6839", "DOID:6841", "DOID:6844", "DOID:6847", "DOID:6848", "DOID:6854", "DOID:6856", "DOID:6857", "DOID:6858", "DOID:6865", "DOID:6867", "DOID:6868", "DOID:6869", "DOID:6871", "DOID:6873", "DOID:6880", "DOID:6888", "DOID:6898", "DOID:6899", "DOID:6901", "DOID:6903", "DOID:6906", "DOID:6929", "DOID:6931", "DOID:6932", "DOID:6933", "DOID:6934", "DOID:6935", "DOID:6936", "DOID:6938", "DOID:6939", "DOID:6947", "DOID:6948", "DOID:6951", "DOID:6958", "DOID:6959", "DOID:6961", "DOID:6969", "DOID:6970", "DOID:6975", "DOID:6976", "DOID:6977", "DOID:698", "DOID:6988", "DOID:6992", "DOID:6993", "DOID:6994", "DOID:6996", "DOID:6997", "DOID:6998", "DOID:7007", "DOID:7013", "DOID:7014", "DOID:7016", "DOID:7017", "DOID:7024", "DOID:7030", "DOID:7031", "DOID:7032", "DOID:7037", "DOID:7039", "DOID:7041", "DOID:7042", "DOID:7045", "DOID:7046", "DOID:7047", "DOID:7048", "DOID:7049", "DOID:7050", "DOID:7051", "DOID:7054", "DOID:7071", "DOID:7077", "DOID:7079", "DOID:7081", "DOID:7086", "DOID:7088", "DOID:7089", "DOID:709", "DOID:7095", "DOID:7097", "DOID:710", "DOID:7103", "DOID:7105", "DOID:711", 
"DOID:712", "DOID:7127", "DOID:7136", "DOID:7138", "DOID:7140", "DOID:7142", "DOID:7152", "DOID:7160", "DOID:7168", "DOID:7169", "DOID:7173", "DOID:7174", "DOID:7175", "DOID:7179", "DOID:7181", "DOID:7187", "DOID:7191", "DOID:720", "DOID:7202", "DOID:7206", "DOID:7207", "DOID:7210", "DOID:7211", "DOID:7212", "DOID:7213", "DOID:7214", "DOID:7221", "DOID:7222", "DOID:7223", "DOID:7224", "DOID:7230", "DOID:7231", "DOID:7233", "DOID:7234", "DOID:7236", "DOID:7237", "DOID:724", "DOID:7241", "DOID:7242", "DOID:7244", "DOID:7246", "DOID:7263", "DOID:7266", "DOID:7267", "DOID:7269", "DOID:728", "DOID:7281", "DOID:7284", "DOID:7289", "DOID:7293", "DOID:7297", "DOID:730", "DOID:7302", "DOID:7312", "DOID:7315", "DOID:7320", "DOID:7326", "DOID:7328", "DOID:7332", "DOID:7333", "DOID:7334", "DOID:734", "DOID:7340", "DOID:7347", "DOID:7350", "DOID:7356", "DOID:736", "DOID:7360", "DOID:7363", "DOID:7378", "DOID:7379", "DOID:738", "DOID:7380", "DOID:7381", "DOID:7388", "DOID:7389", "DOID:7390", "DOID:7398", "DOID:7401", "DOID:7402", "DOID:7408", "DOID:7409", "DOID:7411", "DOID:7426", "DOID:7429", "DOID:7430", "DOID:7435", "DOID:7436", "DOID:7437", "DOID:7438", "DOID:7439", "DOID:7441", "DOID:7444", "DOID:745", "DOID:7459", "DOID:746", "DOID:7460", "DOID:7461", "DOID:7463", "DOID:7465", "DOID:7479", "DOID:7480", "DOID:7482", "DOID:7483", "DOID:7488", "DOID:749", "DOID:7491", "DOID:7492", "DOID:7497", "DOID:7501", "DOID:7502", "DOID:7503", "DOID:7506", "DOID:7511", "DOID:7512", "DOID:7514", "DOID:7515", "DOID:7516", "DOID:7518", "DOID:7519", "DOID:7520", "DOID:7521", "DOID:7522", "DOID:7527", "DOID:7528", "DOID:7531", "DOID:7532", "DOID:7533", "DOID:7537", "DOID:7538", "DOID:7539", "DOID:754", "DOID:7540", "DOID:7541", "DOID:7542", "DOID:7549", "DOID:7553", "DOID:7558", "DOID:7559", "DOID:7565", "DOID:7567", "DOID:7574", "DOID:7577", "DOID:7578", "DOID:7583", "DOID:7584", "DOID:7585", "DOID:7586", "DOID:7587", "DOID:7591", "DOID:7596", "DOID:7598", "DOID:7599", "DOID:7600", 
"DOID:7603", "DOID:7607", "DOID:7609", "DOID:7610", "DOID:7611", "DOID:7612", "DOID:7613", "DOID:7614", "DOID:7615", "DOID:7631", "DOID:7632", "DOID:7634", "DOID:7635", "DOID:7639", "DOID:7642", "DOID:7643", "DOID:7646", "DOID:7650"] doid_curies = doids chembls = ["CHEMBL286494", "CHEMBL1321", "CHEMBL404520", "CHEMBL65794", "CHEMBL373081", "CHEMBL100259", "CHEMBL331378", "CHEMBL279229", "CHEMBL826", "CHEMBL2105527", "CHEMBL566", "CHEMBL1201237", "CHEMBL186720", "CHEMBL2103873", "CHEMBL1540", "CHEMBL46469", "CHEMBL1652", "CHEMBL506110", "CHEMBL1371770", "CHEMBL1236282", "CHEMBL1492", "CHEMBL816", "CHEMBL591665", "CHEMBL1201766", "CHEMBL1165268", "CHEMBL1200685", "CHEMBL1182", "CHEMBL284906", "CHEMBL350221", "CHEMBL590540", "CHEMBL415606", "CHEMBL416146", "CHEMBL114", "CHEMBL517199", "CHEMBL2105060", "CHEMBL3301667", "CHEMBL304902", "CHEMBL14012", "CHEMBL399510", "CHEMBL1251", "CHEMBL1514715", "CHEMBL1476605", "CHEMBL431", "CHEMBL432162", "CHEMBL180101", "CHEMBL305785", "CHEMBL293776", "CHEMBL418971", "CHEMBL82293", "CHEMBL2364611", "CHEMBL4078588", "CHEMBL3663929", "CHEMBL461343", "CHEMBL1097615", "CHEMBL3185236", "CHEMBL253838", "CHEMBL236086", "CHEMBL197672", "CHEMBL1231453", "CHEMBL293405", "CHEMBL12760", "CHEMBL135302", "CHEMBL3989686", "CHEMBL1100", "CHEMBL561639", "CHEMBL170797", "CHEMBL1201170", "CHEMBL223026", "CHEMBL104937", "CHEMBL26", "CHEMBL275544", "CHEMBL445647", "CHEMBL608533", "CHEMBL3806158", "CHEMBL267178", "CHEMBL109005", "CHEMBL511115", "CHEMBL269521", "CHEMBL437526", "CHEMBL286939", "CHEMBL1201096", "CHEMBL2106537", "CHEMBL281812", "CHEMBL3545017", "CHEMBL303984", "CHEMBL1189150", "CHEMBL609728", "CHEMBL2106996", "CHEMBL13254", "CHEMBL11122", "CHEMBL1201213", "CHEMBL253976", "CHEMBL15413", "CHEMBL314691", "CHEMBL16370", "CHEMBL1256722", "CHEMBL273575", "CHEMBL156791", "CHEMBL2338329", "CHEMBL1823241", "CHEMBL3577885", "CHEMBL405862", "CHEMBL227452", "CHEMBL258940", "CHEMBL228057", "CHEMBL39221", "CHEMBL439009", "CHEMBL223339", "CHEMBL307679", 
"CHEMBL1232767", "CHEMBL1234624", "CHEMBL1235413", "CHEMBL404155", "CHEMBL4070364", "CHEMBL1234133", "CHEMBL571987", "CHEMBL2106561", "CHEMBL31962", "CHEMBL278315", "CHEMBL2107361", "CHEMBL491960", "CHEMBL270515", "CHEMBL205821", "CHEMBL41342", "CHEMBL414357", "CHEMBL175083", "CHEMBL3301669", "CHEMBL187105", "CHEMBL492513", "CHEMBL242737", "CHEMBL494072", "CHEMBL3301604", "CHEMBL3301668", "CHEMBL1788401", "CHEMBL225546", "CHEMBL3301618", "CHEMBL503565", "CHEMBL493062", "CHEMBL149082", "CHEMBL3301595", "CHEMBL588", "CHEMBL4297620", "CHEMBL1938870", "CHEMBL3251336", "CHEMBL1938400", "CHEMBL830", "CHEMBL3182477", "CHEMBL2036958", "CHEMBL406821", "CHEMBL258921", "CHEMBL1213270", "CHEMBL211614", "CHEMBL2063705", "CHEMBL190801", "CHEMBL109648", "CHEMBL299233", "CHEMBL2106316", "CHEMBL270995", "CHEMBL21333", "CHEMBL1865258", "CHEMBL204543", "CHEMBL3338194", "CHEMBL1241855", "CHEMBL126279", "CHEMBL2106650", "CHEMBL384759", "CHEMBL228814", "CHEMBL400520", "CHEMBL3989959", "CHEMBL1511", "CHEMBL125381", "CHEMBL3989525", "CHEMBL123154", "CHEMBL13960", "CHEMBL404108", "CHEMBL566757", "CHEMBL2105158", "CHEMBL87647", "CHEMBL329137", "CHEMBL6731", "CHEMBL324168", "CHEMBL577736", "CHEMBL1236872", "CHEMBL73502", "CHEMBL396298", "CHEMBL512172", "CHEMBL52564", "CHEMBL2107729", "CHEMBL1236970", "CHEMBL175198", "CHEMBL328875", "CHEMBL2105664", "CHEMBL2107430", "CHEMBL2107691", "CHEMBL1159973", "CHEMBL451855", "CHEMBL209821", "CHEMBL3911164", "CHEMBL2028665", "CHEMBL484785", "CHEMBL186007", "CHEMBL30816", "CHEMBL559401", "CHEMBL4076467", "CHEMBL4297449", "CHEMBL4297479", "CHEMBL38851", "CHEMBL558752", "CHEMBL148756", "CHEMBL4297313", "CHEMBL66907", "CHEMBL47181", "CHEMBL3187503", "CHEMBL4280145", "CHEMBL155572", "CHEMBL296468", "CHEMBL583207", "CHEMBL445699", "CHEMBL2106236", "CHEMBL113051", "CHEMBL277945", "CHEMBL203321", "CHEMBL618", "CHEMBL1201391", "CHEMBL339231", "CHEMBL92870", "CHEMBL273481", "CHEMBL781", "CHEMBL404150", "CHEMBL2103959", "CHEMBL2104688", "CHEMBL3187812", 
"CHEMBL501515", "CHEMBL495758", "CHEMBL3318007", "CHEMBL170077", "CHEMBL594722", "CHEMBL479843", "CHEMBL332958", "CHEMBL218394", "CHEMBL3916929", "CHEMBL3939295", "CHEMBL468900", "CHEMBL256732", "CHEMBL199441", "CHEMBL52011", "CHEMBL346059", "CHEMBL249120", "CHEMBL1232313", "CHEMBL1235541", "CHEMBL76232", "CHEMBL188442", "CHEMBL399583", "CHEMBL1231562", "CHEMBL24683", "CHEMBL1232236", "CHEMBL589583", "CHEMBL1235782", "CHEMBL220467", "CHEMBL1232889", "CHEMBL308333", "CHEMBL1230893", "CHEMBL178803", "CHEMBL1922094", "CHEMBL220428", "CHEMBL496574", "CHEMBL4209111", "CHEMBL1173", "CHEMBL1084102", "CHEMBL3235620", "CHEMBL238465", "CHEMBL22077", "CHEMBL2110978", "CHEMBL340807", "CHEMBL426084", "CHEMBL2110686", "CHEMBL176", "CHEMBL339996", "CHEMBL604710", "CHEMBL3916243", "CHEMBL356479", "CHEMBL3833401", "CHEMBL84446", "CHEMBL8809", "CHEMBL326523", "CHEMBL3707307", "CHEMBL2063869", "CHEMBL290814", "CHEMBL3039520", "CHEMBL95097", "CHEMBL127643", "CHEMBL123132", "CHEMBL58832", "CHEMBL409153", "CHEMBL3670800", "CHEMBL3786343", "CHEMBL469662", "CHEMBL3219124", "CHEMBL233553", "CHEMBL260091", "CHEMBL2386889", "CHEMBL366460", "CHEMBL365795", "CHEMBL394429", "CHEMBL438497", "CHEMBL233349", "CHEMBL89363", "CHEMBL459177", "CHEMBL2103852", "CHEMBL1235787", "CHEMBL4296681", "CHEMBL3353541", "CHEMBL3099551", "CHEMBL1232777", "CHEMBL1097999", "CHEMBL1236544", "CHEMBL426559", "CHEMBL2151437", "CHEMBL1229967", "CHEMBL1330792", "CHEMBL1231801", "CHEMBL603830", "CHEMBL101309", "CHEMBL1231350", "CHEMBL1801250", "CHEMBL765", "CHEMBL540929", "CHEMBL255134", "CHEMBL2035187", "CHEMBL479880", "CHEMBL559147", "CHEMBL1689772", "CHEMBL1738889", "CHEMBL405730", "CHEMBL178938", "CHEMBL343633", "CHEMBL3707313", "CHEMBL2103764", "CHEMBL1765291", "CHEMBL1927030", "CHEMBL2103855", "CHEMBL1094304", "CHEMBL215303", "CHEMBL1276678", "CHEMBL2103883", "CHEMBL428963", "CHEMBL425181", "CHEMBL217092", "CHEMBL3707281", "CHEMBL410668", "CHEMBL3707389", "CHEMBL167731", "CHEMBL4250860", "CHEMBL3133037", 
"CHEMBL3039529", "CHEMBL53292", "CHEMBL1232207", "CHEMBL23254", "CHEMBL1602127", "CHEMBL1230314", "CHEMBL279956", "CHEMBL1096643", "CHEMBL1235252", "CHEMBL1236482", "CHEMBL3674570", "CHEMBL1234088", "CHEMBL64579", "CHEMBL1233922", "CHEMBL233360", "CHEMBL590753", "CHEMBL365468", "CHEMBL1230617", "CHEMBL272485", "CHEMBL64130", "CHEMBL589370", "CHEMBL1495", "CHEMBL4296682", "CHEMBL253629", "CHEMBL283639", "CHEMBL127592", "CHEMBL430145", "CHEMBL388978", "CHEMBL537968", "CHEMBL4069597", "CHEMBL134529", "CHEMBL561057", "CHEMBL1006", "CHEMBL420910", "CHEMBL267777", "CHEMBL15594", "CHEMBL561", "CHEMBL207456", "CHEMBL1236189", "CHEMBL270672", "CHEMBL1234200", "CHEMBL3410450", "CHEMBL1231667", "CHEMBL360520", "CHEMBL40422", "CHEMBL220808", "CHEMBL1236484", "CHEMBL4285883", "CHEMBL1232960", "CHEMBL2133806", "CHEMBL323202", "CHEMBL1413199", "CHEMBL321944", "CHEMBL8642", "CHEMBL323542", "CHEMBL443052", "CHEMBL1163085", "CHEMBL1399702", "CHEMBL435747", "CHEMBL92915", "CHEMBL206109", "CHEMBL1201354", "CHEMBL1201274", "CHEMBL303934", "CHEMBL680", "CHEMBL343336", "CHEMBL127214", "CHEMBL1371937", "CHEMBL1289494", "CHEMBL174539", "CHEMBL328910", "CHEMBL2107773", "CHEMBL3137321", "CHEMBL2106167", "CHEMBL56053", "CHEMBL279865", "CHEMBL402747", "CHEMBL403741", "CHEMBL304858", "CHEMBL238103", "CHEMBL4279455", "CHEMBL1625607", "CHEMBL1200354", "CHEMBL1161253", "CHEMBL1330", "CHEMBL15023", "CHEMBL1200356", "CHEMBL120563", "CHEMBL275661", "CHEMBL388154", "CHEMBL237370", "CHEMBL1084430", "CHEMBL226267", "CHEMBL3358920", "CHEMBL226403", "CHEMBL77675", "CHEMBL1234647", "CHEMBL2151439", "CHEMBL1232702", "CHEMBL1233085", "CHEMBL1770916", "CHEMBL136737", "CHEMBL1229971", "CHEMBL1628385", "CHEMBL2104765", "CHEMBL215344", "CHEMBL452289", "CHEMBL1160593", "CHEMBL2110732", "CHEMBL2110700", "CHEMBL1625750", "CHEMBL2111047", "CHEMBL1276663", "CHEMBL780", "CHEMBL595439", "CHEMBL1109", "CHEMBL1874750", "CHEMBL177749", "CHEMBL406572", "CHEMBL245807", "CHEMBL607710", "CHEMBL391586", "CHEMBL106258", 
"CHEMBL3039517", "CHEMBL2103882", "CHEMBL467058", "CHEMBL1277001", "CHEMBL1213271", "CHEMBL19215", "CHEMBL1672635", "CHEMBL200849", "CHEMBL425386", "CHEMBL3334567", "CHEMBL573077", "CHEMBL259850", "CHEMBL197550", "CHEMBL400841", "CHEMBL1161861", "CHEMBL2105348", "CHEMBL290960", "CHEMBL153983", "CHEMBL14130", "CHEMBL342672", "CHEMBL316257", "CHEMBL2105399", "CHEMBL438139", "CHEMBL254951", "CHEMBL303958", "CHEMBL2105637", "CHEMBL14227", "CHEMBL2105720", "CHEMBL7087", "CHEMBL2105606", "CHEMBL2107802", "CHEMBL131854", "CHEMBL76725", "CHEMBL242341", "CHEMBL1080884", "CHEMBL282199", "CHEMBL118841", "CHEMBL7010", "CHEMBL4297496", "CHEMBL180570", "CHEMBL4297490", "CHEMBL4297456", "CHEMBL4297274", "CHEMBL4297350", "CHEMBL4297678", "CHEMBL572879", "CHEMBL230006", "CHEMBL589586", "CHEMBL2104168", "CHEMBL405957", "CHEMBL2104708", "CHEMBL2104462", "CHEMBL483790", "CHEMBL2104608", "CHEMBL4227721", "CHEMBL553426", "CHEMBL459505", "CHEMBL86882", "CHEMBL427409", "CHEMBL3305965", "CHEMBL294144", "CHEMBL2111112", "CHEMBL47259", "CHEMBL403328", "CHEMBL410448", "CHEMBL3707306", "CHEMBL3786896", "CHEMBL1089641", "CHEMBL388931", "CHEMBL1702228", "CHEMBL3707239", "CHEMBL379760", "CHEMBL206646", "CHEMBL2107636", "CHEMBL2105721", "CHEMBL204021", "CHEMBL184048", "CHEMBL2105807", "CHEMBL1788396", "CHEMBL1159717", "CHEMBL3219616", "CHEMBL224282", "CHEMBL1608183", "CHEMBL8145", "CHEMBL7983", "CHEMBL1377382", "CHEMBL342914", "CHEMBL3444166", "CHEMBL2048444", "CHEMBL1206382", "CHEMBL3447079", "CHEMBL1493285", "CHEMBL3601469", "CHEMBL1384622", "CHEMBL1859126", "CHEMBL3454064", "CHEMBL1518470", "CHEMBL3668079", "CHEMBL1257109", "CHEMBL1469294", "CHEMBL3731023", "CHEMBL3449636", "CHEMBL1193665", "CHEMBL3669022", "CHEMBL169295", "CHEMBL2326079", "CHEMBL3980345", "CHEMBL4203108", "CHEMBL3261521", "CHEMBL1771162", "CHEMBL4173647", "CHEMBL3452687", "CHEMBL2219854", "CHEMBL1951396", "CHEMBL1338686", "CHEMBL1369696", "CHEMBL1770260", "CHEMBL1463840", "CHEMBL3454128", "CHEMBL4090216", "CHEMBL1193231", 
"CHEMBL2270003", "CHEMBL1609952", "CHEMBL24077", "CHEMBL1364289", "CHEMBL3634100", "CHEMBL2219523", "CHEMBL4165132", "CHEMBL1185515", "CHEMBL1562125", "CHEMBL1518452", "CHEMBL3403481", "CHEMBL1902826", "CHEMBL4090207", "CHEMBL181368", "CHEMBL1458184", "CHEMBL407418", "CHEMBL3310458", "CHEMBL2336733", "CHEMBL87136", "CHEMBL100349", "CHEMBL3729490", "CHEMBL4203290", "CHEMBL1684969", "CHEMBL1172071", "CHEMBL4167979", "CHEMBL295104", "CHEMBL146110", "CHEMBL2414672", "CHEMBL545594", "CHEMBL4069048", "CHEMBL4162598", "CHEMBL1789817", "CHEMBL475688", "CHEMBL2236646", "CHEMBL1207085", "CHEMBL3477925", "CHEMBL2431929", "CHEMBL342573", "CHEMBL3913144", "CHEMBL1337434", "CHEMBL1964970", "CHEMBL49543", "CHEMBL3654691", "CHEMBL4284664", "CHEMBL1364755", "CHEMBL3261805", "CHEMBL1171569", "CHEMBL4225141", "CHEMBL1621450", "CHEMBL161650", "CHEMBL3248613", "CHEMBL2094843", "CHEMBL1376957", "CHEMBL2115117", "CHEMBL249952", "CHEMBL138976", "CHEMBL3447453", "CHEMBL1517806", "CHEMBL2413232", "CHEMBL489437", "CHEMBL4288242", "CHEMBL3397067", "CHEMBL1688205", "CHEMBL3653206", "CHEMBL522115", "CHEMBL3447278", "CHEMBL1097377", "CHEMBL3601892", "CHEMBL608146", "CHEMBL319738", "CHEMBL1621157", "CHEMBL1469599", "CHEMBL398441", "CHEMBL109264", "CHEMBL3394509", "CHEMBL3695476", "CHEMBL3718238", "CHEMBL18008", "CHEMBL95267", "CHEMBL1464913", "CHEMBL4211835", "CHEMBL1951655", "CHEMBL1993212", "CHEMBL231150", "CHEMBL3920788", "CHEMBL4284395", "CHEMBL3669837", "CHEMBL1361722", "CHEMBL4109170", "CHEMBL528615", "CHEMBL1535639", "CHEMBL68065", "CHEMBL1453711", "CHEMBL1486647", "CHEMBL227102", "CHEMBL588741", "CHEMBL2316337", "CHEMBL530162", "CHEMBL1191379", "CHEMBL1702490", "CHEMBL3494466", "CHEMBL467304", "CHEMBL1432855", "CHEMBL4284235", "CHEMBL3493825", "CHEMBL3190458", "CHEMBL3303084", "CHEMBL3943978", "CHEMBL3959889", "CHEMBL1392329", "CHEMBL1740054", "CHEMBL1938962", "CHEMBL1974617", "CHEMBL2138710", "CHEMBL4280347", "CHEMBL190890", "CHEMBL1630267", "CHEMBL586099", "CHEMBL2335530", 
"CHEMBL152018", "CHEMBL3301725", "CHEMBL210240", "CHEMBL3485854", "CHEMBL1630168", "CHEMBL2271836", "CHEMBL4108457", "CHEMBL4280293", "CHEMBL1885439", "CHEMBL2417671", "CHEMBL3775833", "CHEMBL1524706", "CHEMBL1308270", "CHEMBL76685", "CHEMBL551283", "CHEMBL586535", "CHEMBL1800401", "CHEMBL3715528", "CHEMBL1730450", "CHEMBL2273048", "CHEMBL529949", "CHEMBL3715998", "CHEMBL1161339", "CHEMBL3699621", "CHEMBL2237788", "CHEMBL1631571", "CHEMBL3739499", "CHEMBL1504287", "CHEMBL1704457", "CHEMBL3944609", "CHEMBL3094080", "CHEMBL1599391", "CHEMBL117310", "CHEMBL3684475", "CHEMBL3694083", "CHEMBL227119", "CHEMBL530165", "CHEMBL2374122", "CHEMBL2271376", "CHEMBL3303572", "CHEMBL1399015", "CHEMBL394952", "CHEMBL3680324", "CHEMBL3190137", "CHEMBL1800331", "CHEMBL1974837", "CHEMBL364588", "CHEMBL2208381", "CHEMBL4108856", "CHEMBL38795", "CHEMBL3353570", "CHEMBL1535922", "CHEMBL35739", "CHEMBL1325852", "CHEMBL3959931", "CHEMBL1982813", "CHEMBL530168", "CHEMBL4110834", "CHEMBL1554998", "CHEMBL1774983", "CHEMBL1858126", "CHEMBL3741412", "CHEMBL310684", "CHEMBL1370977", "CHEMBL3657955", "CHEMBL474319", "CHEMBL4110510", "CHEMBL2418164", "CHEMBL491426", "CHEMBL1858118", "CHEMBL387205", "CHEMBL1532357", "CHEMBL1317305", "CHEMBL575668", "CHEMBL1882289", "CHEMBL3230843", "CHEMBL1253696", "CHEMBL1935464", "CHEMBL321305", "CHEMBL2440918", "CHEMBL3800737", "CHEMBL1773334", "CHEMBL1516143", "CHEMBL2402870", "CHEMBL284780", "CHEMBL3971142", "CHEMBL2132720", "CHEMBL2041583", "CHEMBL485313", "CHEMBL3219590", "CHEMBL1539932", "CHEMBL211970", "CHEMBL3936430", "CHEMBL3289725", "CHEMBL3290913", "CHEMBL341692", "CHEMBL1886217", "CHEMBL1802552", "CHEMBL132162", "CHEMBL1493786", "CHEMBL4291328", "CHEMBL1540298", "CHEMBL273913", "CHEMBL1237289", "CHEMBL1721615", "CHEMBL1830872", "CHEMBL1734889", "CHEMBL1372752", "CHEMBL1440148", "CHEMBL365193", "CHEMBL186437", "CHEMBL1315418", "CHEMBL479327", "CHEMBL1771745", "CHEMBL4096518", "CHEMBL1524961", "CHEMBL2132221", "CHEMBL1886986", "CHEMBL2311594", 
"CHEMBL1411049", "CHEMBL1499685"] chembl_curies = [('CHEMBL.COMPOUND:' + item) for item in chembls] chebis = ["CHEBI:114219", "CHEBI:116824", "CHEBI:114249", "CHEBI:116828", "CHEBI:116833", "CHEBI:116836", "CHEBI:116842", "CHEBI:116843", "CHEBI:116849", "CHEBI:116858", "CHEBI:116864", "CHEBI:116869", "CHEBI:116871", "CHEBI:116887", "CHEBI:116893", "CHEBI:116910", "CHEBI:116919", "CHEBI:116946", "CHEBI:116949", "CHEBI:116952", "CHEBI:114410", "CHEBI:114419", "CHEBI:114445", "CHEBI:116985", "CHEBI:114465", "CHEBI:116994", "CHEBI:117006", "CHEBI:117028", "CHEBI:114514", "CHEBI:117030", "CHEBI:117037", "CHEBI:114558", "CHEBI:117057", "CHEBI:114583", "CHEBI:117070", "CHEBI:114605", "CHEBI:117109", "CHEBI:114634", "CHEBI:117127", "CHEBI:114651", "CHEBI:117133", "CHEBI:114683", "CHEBI:114689", "CHEBI:117146", "CHEBI:114692", "CHEBI:114695", "CHEBI:117174", "CHEBI:117196", "CHEBI:114708", "CHEBI:117231", "CHEBI:117239", "CHEBI:114731", "CHEBI:114737", "CHEBI:117280", "CHEBI:114771", "CHEBI:117299", "CHEBI:114818", "CHEBI:117304", "CHEBI:114828", "CHEBI:117317", "CHEBI:114882", "CHEBI:114896", "CHEBI:117351", "CHEBI:114916", "CHEBI:117364", "CHEBI:114933", "CHEBI:117375", "CHEBI:114945", "CHEBI:117383", "CHEBI:117400", "CHEBI:115019", "CHEBI:117408", "CHEBI:115023", "CHEBI:117411", "CHEBI:115049", "CHEBI:117439", "CHEBI:115060", "CHEBI:117443", "CHEBI:117455", "CHEBI:115105", "CHEBI:117465", "CHEBI:115126", "CHEBI:115129", "CHEBI:115133", "CHEBI:115146", "CHEBI:115154", "CHEBI:115163", "CHEBI:115166", "CHEBI:115183", "CHEBI:115184", "CHEBI:115196", "CHEBI:115211", "CHEBI:115237", "CHEBI:115244", "CHEBI:115255", "CHEBI:115259", "CHEBI:115266", "CHEBI:117815", "CHEBI:115282", "CHEBI:117830", "CHEBI:115318", "CHEBI:119748", "CHEBI:119757", "CHEBI:117885", "CHEBI:115341", "CHEBI:119805", "CHEBI:115389", "CHEBI:119818", "CHEBI:119850", "CHEBI:115407", "CHEBI:119871", "CHEBI:118010", "CHEBI:115433", "CHEBI:119929", "CHEBI:115460", "CHEBI:119933", "CHEBI:118039", "CHEBI:118047", 
"CHEBI:120007", "CHEBI:115473", "CHEBI:120025", "CHEBI:118066", "CHEBI:118067", "CHEBI:120042", "CHEBI:115544", "CHEBI:115572", "CHEBI:118078", "CHEBI:120070", "CHEBI:118079", "CHEBI:115600", "CHEBI:118150", "CHEBI:120095", "CHEBI:115650", "CHEBI:115656", "CHEBI:115667", "CHEBI:115671", "CHEBI:115697", "CHEBI:115718", "CHEBI:118213", "CHEBI:120166", "CHEBI:120214", "CHEBI:115769", "CHEBI:118257", "CHEBI:118271", "CHEBI:120244", "CHEBI:115812", "CHEBI:118291", "CHEBI:115839", "CHEBI:120267", "CHEBI:115851", "CHEBI:120273", "CHEBI:118320", "CHEBI:120302", "CHEBI:118339", "CHEBI:115902", "CHEBI:120342", "CHEBI:118387", "CHEBI:120376", "CHEBI:11596", "CHEBI:120395", "CHEBI:118413", "CHEBI:120404", "CHEBI:118420", "CHEBI:120411", "CHEBI:118443", "CHEBI:120426", "CHEBI:120429", "CHEBI:118453", "CHEBI:120453", "CHEBI:118464", "CHEBI:120504", "CHEBI:118489", "CHEBI:120514", "CHEBI:118517", "CHEBI:118564", "CHEBI:120535", "CHEBI:120551", "CHEBI:118608", "CHEBI:118617", "CHEBI:120563", "CHEBI:118631", "CHEBI:120603", "CHEBI:118634", "CHEBI:120618", "CHEBI:118640", "CHEBI:120624", "CHEBI:118646", "CHEBI:120634", "CHEBI:118659", "CHEBI:120649", "CHEBI:118671", "CHEBI:120673", "CHEBI:118683", "CHEBI:120685", "CHEBI:120693", "CHEBI:115981", "CHEBI:115984", "CHEBI:120711", "CHEBI:116377", "CHEBI:120730", "CHEBI:116402", "CHEBI:120739", "CHEBI:116409", "CHEBI:120741", "CHEBI:116411", "CHEBI:120747", "CHEBI:120763", "CHEBI:118800", "CHEBI:118809", "CHEBI:118836", "CHEBI:120803", "CHEBI:116541", "CHEBI:116546", "CHEBI:118857", "CHEBI:118859", "CHEBI:120834", "CHEBI:116591", "CHEBI:118864", "CHEBI:116596", "CHEBI:118865", "CHEBI:116607", "CHEBI:118876", "CHEBI:116622", "CHEBI:116628", "CHEBI:120923", "CHEBI:116646", "CHEBI:120958", "CHEBI:118893", "CHEBI:116664", "CHEBI:118894", "CHEBI:118902", "CHEBI:118908", "CHEBI:120962", "CHEBI:116701", "CHEBI:118934", "CHEBI:116706", "CHEBI:116720", "CHEBI:118946", "CHEBI:120985", "CHEBI:116742", "CHEBI:118969", "CHEBI:121037", "CHEBI:121069", 
"CHEBI:118979", "CHEBI:121081", "CHEBI:119003", "CHEBI:11901", "CHEBI:119046", "CHEBI:121158", "CHEBI:121205", "CHEBI:119062", "CHEBI:121230", "CHEBI:119096", "CHEBI:121237", "CHEBI:119103", "CHEBI:121251", "CHEBI:121266", "CHEBI:119120", "CHEBI:121286", "CHEBI:121327", "CHEBI:119153", "CHEBI:119193", "CHEBI:121339", "CHEBI:119214", "CHEBI:121350", "CHEBI:119222", "CHEBI:121357", "CHEBI:121363", "CHEBI:121364", "CHEBI:121372", "CHEBI:121381", "CHEBI:121390", "CHEBI:121436", "CHEBI:121460", "CHEBI:121462", "CHEBI:121471", "CHEBI:121502", "CHEBI:121513", "CHEBI:121515", "CHEBI:121522", "CHEBI:121542", "CHEBI:121544", "CHEBI:121565", "CHEBI:121571", "CHEBI:121580", "CHEBI:121990", "CHEBI:121606", "CHEBI:121999", "CHEBI:121635", "CHEBI:122015", "CHEBI:121648", "CHEBI:122050", "CHEBI:121661", "CHEBI:121666", "CHEBI:122078", "CHEBI:121674", "CHEBI:121691", "CHEBI:122089", "CHEBI:122101", "CHEBI:122102", "CHEBI:122108", "CHEBI:122129", "CHEBI:122173", "CHEBI:122182", "CHEBI:122210", "CHEBI:122211", "CHEBI:122217", "CHEBI:122227", "CHEBI:122238", "CHEBI:122244", "CHEBI:122252", "CHEBI:122269", "CHEBI:122280", "CHEBI:122288", "CHEBI:122296", "CHEBI:122320", "CHEBI:122337", "CHEBI:122354", "CHEBI:122361", "CHEBI:122378", "CHEBI:122380", "CHEBI:122385", "CHEBI:122410", "CHEBI:122432", "CHEBI:122435", "CHEBI:122448", "CHEBI:122450", "CHEBI:122453", "CHEBI:122456", "CHEBI:122474", "CHEBI:122495", "CHEBI:122497", "CHEBI:122520", "CHEBI:122535", "CHEBI:122536", "CHEBI:122540", "CHEBI:122546", "CHEBI:122572", "CHEBI:122593", "CHEBI:122594", "CHEBI:122600", "CHEBI:122609", "CHEBI:122614", "CHEBI:122636", "CHEBI:122644", "CHEBI:122661", "CHEBI:122677", "CHEBI:122683", "CHEBI:122686", "CHEBI:122692", "CHEBI:122700", "CHEBI:122719", "CHEBI:122721", "CHEBI:122729", "CHEBI:122741", "CHEBI:122742", "CHEBI:122748", "CHEBI:122751", "CHEBI:122759", "CHEBI:122767", "CHEBI:122773", "CHEBI:122806", "CHEBI:122810", "CHEBI:122813", "CHEBI:122826", "CHEBI:122832", "CHEBI:122836", "CHEBI:122858", 
"CHEBI:122878", "CHEBI:122880", "CHEBI:122882", "CHEBI:122892", "CHEBI:122893", "CHEBI:122904", "CHEBI:122905", "CHEBI:122906", "CHEBI:122914", "CHEBI:122915", "CHEBI:122918", "CHEBI:122940", "CHEBI:122942", "CHEBI:122945", "CHEBI:122951", "CHEBI:122968", "CHEBI:122969", "CHEBI:122972", "CHEBI:122973", "CHEBI:122996", "CHEBI:122999", "CHEBI:123", "CHEBI:123001", "CHEBI:123003", "CHEBI:123024", "CHEBI:123027", "CHEBI:123034", "CHEBI:123041", "CHEBI:123042", "CHEBI:123050", "CHEBI:123058", "CHEBI:123060", "CHEBI:123063", "CHEBI:123068", "CHEBI:123087", "CHEBI:123088", "CHEBI:123095", "CHEBI:123105", "CHEBI:123106", "CHEBI:123109", "CHEBI:123114", "CHEBI:123117", "CHEBI:123131", "CHEBI:123135", "CHEBI:123138", "CHEBI:123156", "CHEBI:123163", "CHEBI:123165", "CHEBI:123166", "CHEBI:123196", "CHEBI:123203", "CHEBI:123218", "CHEBI:123227", "CHEBI:123274", "CHEBI:123275", "CHEBI:123277", "CHEBI:123289", "CHEBI:123314", "CHEBI:123325", "CHEBI:123347", "CHEBI:123360", "CHEBI:123363", "CHEBI:123366", "CHEBI:123383", "CHEBI:123390", "CHEBI:123395", "CHEBI:123396", "CHEBI:123397", "CHEBI:123407", "CHEBI:123425", "CHEBI:123439", "CHEBI:123454", "CHEBI:123458", "CHEBI:123519", "CHEBI:123520", "CHEBI:123521", "CHEBI:123536", "CHEBI:123541", "CHEBI:123542", "CHEBI:123545", "CHEBI:123549", "CHEBI:123561", "CHEBI:123584", "CHEBI:123589", "CHEBI:123590", "CHEBI:123597", "CHEBI:123606", "CHEBI:123609", "CHEBI:123613", "CHEBI:123624", "CHEBI:123655", "CHEBI:123678", "CHEBI:123691", "CHEBI:123695", "CHEBI:123701", "CHEBI:123702", "CHEBI:123713", "CHEBI:123719", "CHEBI:123745", "CHEBI:123772", "CHEBI:123775", "CHEBI:123779", "CHEBI:123784", "CHEBI:123788", "CHEBI:123794", "CHEBI:123801", "CHEBI:123807", "CHEBI:123818", "CHEBI:123819", "CHEBI:123824", "CHEBI:123831", "CHEBI:123840", "CHEBI:123841", "CHEBI:123850", "CHEBI:123854", "CHEBI:123864", "CHEBI:123871", "CHEBI:123880", "CHEBI:123888", "CHEBI:123889", "CHEBI:123912", "CHEBI:123938", "CHEBI:123959", "CHEBI:123969", "CHEBI:123991", 
"CHEBI:124008", "CHEBI:124010", "CHEBI:124041", "CHEBI:124044", "CHEBI:124046", "CHEBI:124053", "CHEBI:124065", "CHEBI:124071", "CHEBI:124086", "CHEBI:124097", "CHEBI:124112", "CHEBI:124114", "CHEBI:124126", "CHEBI:124134", "CHEBI:124152", "CHEBI:124158", "CHEBI:124177", "CHEBI:124183", "CHEBI:124184", "CHEBI:129201", "CHEBI:129204", "CHEBI:129210", "CHEBI:129228", "CHEBI:129241", "CHEBI:129251", "CHEBI:129268", "CHEBI:129270", "CHEBI:129274", "CHEBI:129279", "CHEBI:129297", "CHEBI:129301", "CHEBI:129306", "CHEBI:129313", "CHEBI:129315", "CHEBI:129328", "CHEBI:129359", "CHEBI:129366", "CHEBI:129411", "CHEBI:129412", "CHEBI:129415", "CHEBI:129439", "CHEBI:129445", "CHEBI:129482", "CHEBI:129483", "CHEBI:129527", "CHEBI:129534", "CHEBI:129536", "CHEBI:129542", "CHEBI:129543", "CHEBI:129571", "CHEBI:129575", "CHEBI:129589", "CHEBI:129592", "CHEBI:129599", "CHEBI:129601", "CHEBI:129607", "CHEBI:129610", "CHEBI:129611", "CHEBI:129618", "CHEBI:129621", "CHEBI:129628", "CHEBI:129637", "CHEBI:129647", "CHEBI:129674", "CHEBI:129693", "CHEBI:129706", "CHEBI:129707", "CHEBI:129711", "CHEBI:129734", "CHEBI:129743", "CHEBI:129773", "CHEBI:129781", "CHEBI:129806", "CHEBI:129810", "CHEBI:129819", "CHEBI:129822", "CHEBI:129824", "CHEBI:129843", "CHEBI:129868", "CHEBI:129872", "CHEBI:129879", "CHEBI:129888", "CHEBI:129901", "CHEBI:129905", "CHEBI:129916", "CHEBI:129939", "CHEBI:129945", "CHEBI:129960", "CHEBI:129968", "CHEBI:129980", "CHEBI:129988", "CHEBI:129997", "CHEBI:130002", "CHEBI:130004", "CHEBI:130010", "CHEBI:130021", "CHEBI:130025", "CHEBI:130030", "CHEBI:130032", "CHEBI:130033", "CHEBI:130039", "CHEBI:130040", "CHEBI:130042", "CHEBI:130067", "CHEBI:130089", "CHEBI:130099", "CHEBI:130108", "CHEBI:130110", "CHEBI:130111", "CHEBI:130122", "CHEBI:130157", "CHEBI:130185", "CHEBI:130189", "CHEBI:130193", "CHEBI:130201", "CHEBI:130207", "CHEBI:130210", "CHEBI:13022", "CHEBI:130221", "CHEBI:130223", "CHEBI:130226", "CHEBI:130262", "CHEBI:130274", "CHEBI:130312", "CHEBI:130336", 
"CHEBI:130338", "CHEBI:130348", "CHEBI:130349", "CHEBI:130384", "CHEBI:130397", "CHEBI:130423", "CHEBI:130436", "CHEBI:130444", "CHEBI:126770", "CHEBI:130450", "CHEBI:130470", "CHEBI:126778", "CHEBI:130478", "CHEBI:126797", "CHEBI:130515", "CHEBI:130516", "CHEBI:126810", "CHEBI:130525", "CHEBI:126823", "CHEBI:130541", "CHEBI:126859", "CHEBI:130547", "CHEBI:130548", "CHEBI:126871", "CHEBI:130607", "CHEBI:130609", "CHEBI:130630", "CHEBI:130634", "CHEBI:126888", "CHEBI:130648", "CHEBI:126912", "CHEBI:126916", "CHEBI:126942", "CHEBI:130673", "CHEBI:130675", "CHEBI:126959", "CHEBI:130680", "CHEBI:126974", "CHEBI:130696", "CHEBI:127037", "CHEBI:130743", "CHEBI:127043", "CHEBI:130771", "CHEBI:130786", "CHEBI:130797", "CHEBI:127084", "CHEBI:127093", "CHEBI:127124", "CHEBI:130872", "CHEBI:130874", "CHEBI:130952", "CHEBI:130954", "CHEBI:127405", "CHEBI:130996", "CHEBI:127435", "CHEBI:127437", "CHEBI:131034", "CHEBI:127442", "CHEBI:127447", "CHEBI:127475", "CHEBI:131111", "CHEBI:131142", "CHEBI:13115", "CHEBI:127533", "CHEBI:131161", "CHEBI:127545", "CHEBI:131182", "CHEBI:127561", "CHEBI:131204", "CHEBI:131205", "CHEBI:131207", "CHEBI:127578", "CHEBI:127583", "CHEBI:127597", "CHEBI:131237", "CHEBI:131280", "CHEBI:131283", "CHEBI:124202", "CHEBI:131285", "CHEBI:124220", "CHEBI:124222", "CHEBI:131304", "CHEBI:131325", "CHEBI:124247", "CHEBI:124258", "CHEBI:131338", "CHEBI:124270", "CHEBI:131341", "CHEBI:124293", "CHEBI:124296", "CHEBI:131370", "CHEBI:124314", "CHEBI:131383", "CHEBI:124326", "CHEBI:131410", "CHEBI:124351", "CHEBI:131414", "CHEBI:124364", "CHEBI:131432", "CHEBI:124384", "CHEBI:124396", "CHEBI:127624", "CHEBI:124423", "CHEBI:127633", "CHEBI:131547", "CHEBI:124442", "CHEBI:127658", "CHEBI:124446", "CHEBI:131555", "CHEBI:127739", "CHEBI:127777", "CHEBI:124479", "CHEBI:131596", "CHEBI:127838", "CHEBI:124542", "CHEBI:127841", "CHEBI:124570", "CHEBI:127854", "CHEBI:131688", "CHEBI:124605", "CHEBI:127879", "CHEBI:124611", "CHEBI:127890", "CHEBI:124619", "CHEBI:131716", 
"CHEBI:127909", "CHEBI:131717", "CHEBI:127917", "CHEBI:124665", "CHEBI:124670", "CHEBI:128298", "CHEBI:124678", "CHEBI:124704", "CHEBI:124708", "CHEBI:128358", "CHEBI:135179", "CHEBI:137463", "CHEBI:135210", "CHEBI:137472", "CHEBI:135228", "CHEBI:137480", "CHEBI:135255", "CHEBI:135273", "CHEBI:134409", "CHEBI:135306", "CHEBI:135310", "CHEBI:134428", "CHEBI:134429", "CHEBI:134443", "CHEBI:134458", "CHEBI:134465", "CHEBI:137571", "CHEBI:135402", "CHEBI:137583", "CHEBI:13759", "CHEBI:135443", "CHEBI:134480", "CHEBI:134481", "CHEBI:137660", "CHEBI:137674", "CHEBI:135478", "CHEBI:134511", "CHEBI:135505", "CHEBI:137710", "CHEBI:135515", "CHEBI:137714", "CHEBI:134557", "CHEBI:134560", "CHEBI:135533", "CHEBI:134561", "CHEBI:137726", "CHEBI:135579", "CHEBI:135583", "CHEBI:137793", "CHEBI:135650", "CHEBI:134659", "CHEBI:135652", "CHEBI:134661", "CHEBI:137928", "CHEBI:135678", "CHEBI:135686", "CHEBI:135689", "CHEBI:135700", "CHEBI:137978", "CHEBI:134730", "CHEBI:135751", "CHEBI:138007", "CHEBI:134741", "CHEBI:134752", "CHEBI:134764", "CHEBI:135795", "CHEBI:134787", "CHEBI:134789", "CHEBI:138061", "CHEBI:38567", "CHEBI:38583", "CHEBI:65424", "CHEBI:38604", "CHEBI:38622", "CHEBI:65455", "CHEBI:38672", "CHEBI:65466", "CHEBI:65467", "CHEBI:38727", "CHEBI:38747", "CHEBI:38757", "CHEBI:65529", "CHEBI:67888", "CHEBI:32564", "CHEBI:65558", "CHEBI:32639", "CHEBI:32645", "CHEBI:65576", "CHEBI:67898", "CHEBI:65606", "CHEBI:32693", "CHEBI:32721", "CHEBI:67959", "CHEBI:32758", "CHEBI:67978", "CHEBI:67986", "CHEBI:32776", "CHEBI:32787", "CHEBI:68030", "CHEBI:65661", "CHEBI:65669", "CHEBI:32840", "CHEBI:32847", "CHEBI:32850", "CHEBI:65724", "CHEBI:3287", "CHEBI:32935", "CHEBI:6808", "CHEBI:68088", "CHEBI:68092", "CHEBI:68100", "CHEBI:68102", "CHEBI:68117", "CHEBI:39051", "CHEBI:68160", "CHEBI:68174", "CHEBI:68194", "CHEBI:68262", "CHEBI:39135", "CHEBI:39180", "CHEBI:39186", "CHEBI:68307", "CHEBI:39188", "CHEBI:68310", "CHEBI:39336", "CHEBI:68324", "CHEBI:65748", "CHEBI:39367", 
"CHEBI:68377", "CHEBI:65759", "CHEBI:32968", "CHEBI:65771", "CHEBI:68439", "CHEBI:65781", "CHEBI:33010", "CHEBI:33026", "CHEBI:65805", "CHEBI:68583", "CHEBI:33077", "CHEBI:65846", "CHEBI:62442", "CHEBI:68604", "CHEBI:68618", "CHEBI:62498", "CHEBI:65891", "CHEBI:68635", "CHEBI:68637", "CHEBI:62517", "CHEBI:65900", "CHEBI:65905", "CHEBI:68888", "CHEBI:68899", "CHEBI:68912", "CHEBI:6592", "CHEBI:68938", "CHEBI:65926", "CHEBI:6895", "CHEBI:68973", "CHEBI:65946", "CHEBI:6899", "CHEBI:65996", "CHEBI:62617", "CHEBI:6906", "CHEBI:62697", "CHEBI:62707", "CHEBI:692", "CHEBI:59492", "CHEBI:62752", "CHEBI:66195", "CHEBI:59516", "CHEBI:62796", "CHEBI:66249", "CHEBI:69226", "CHEBI:69227", "CHEBI:59571", "CHEBI:62841", "CHEBI:59583", "CHEBI:59594", "CHEBI:62866", "CHEBI:59667", "CHEBI:69265", "CHEBI:62879", "CHEBI:69269", "CHEBI:59706", "CHEBI:6928", "CHEBI:59738", "CHEBI:69296", "CHEBI:59767", "CHEBI:59785", "CHEBI:62884", "CHEBI:59898", "CHEBI:3311", "CHEBI:60176", "CHEBI:33119", "CHEBI:69565", "CHEBI:69912", "CHEBI:66717", "CHEBI:69922", "CHEBI:69927", "CHEBI:66737", "CHEBI:69959", "CHEBI:69963", "CHEBI:33590", "CHEBI:66746", "CHEBI:66750", "CHEBI:70009", "CHEBI:63263", "CHEBI:66756", "CHEBI:70025", "CHEBI:66778", "CHEBI:63297", "CHEBI:70049", "CHEBI:63308", "CHEBI:66808", "CHEBI:6332", "CHEBI:66822", "CHEBI:7008", "CHEBI:63410", "CHEBI:70088", "CHEBI:70109", "CHEBI:70114", "CHEBI:63443", "CHEBI:70194", "CHEBI:70204", "CHEBI:66863", "CHEBI:70215", "CHEBI:70223", "CHEBI:70231", "CHEBI:63506", "CHEBI:70245", "CHEBI:70271", "CHEBI:63531", "CHEBI:66888", "CHEBI:66889", "CHEBI:3392", "CHEBI:66896", "CHEBI:33959", "CHEBI:669", "CHEBI:34006", "CHEBI:34022", "CHEBI:63582", "CHEBI:66911", "CHEBI:3403", "CHEBI:63645", "CHEBI:66972", "CHEBI:63652", "CHEBI:34235", "CHEBI:63671", "CHEBI:34253", "CHEBI:34265", "CHEBI:60792", "CHEBI:63985", "CHEBI:34683", "CHEBI:34695", "CHEBI:64037", "CHEBI:6404", "CHEBI:29848"] chebi_curies = chebis print("######################################## Genes 
########################################") print("Length of NCBIGene ID list: {}".format(len(ncibgenes_curies))) print("Sample NCBIGene IDs: {}".format(ncibgenes_curies[:5])) print("Length of Gene Symbol list: {}".format(len(symbol_curies))) print("Sample Gene Symbols: {}".format(symbol_curies[:5])) print("######################################## Chemicals ########################################") print("Length of CHEMBL ID list: {}".format(len(chembl_curies))) print("Sample CHEMBL IDs: {}".format(chembl_curies[:5])) print("Length of CHEBI ID list: {}".format(len(chebi_curies))) print("Sample CHEBI IDs: {}".format(chebi_curies[:5])) print("######################################## Diseases ########################################") print("Length of MONDO ID list: {}".format(len(mondo_curies))) print("Sample MONDO IDs: {}".format(mondo_curies[:5])) print("Length of DOID list: {}".format(len(doid_curies))) print("Sample DOIDs: {}".format(doid_curies[:5])) print("######################################## Total ########################################") print("Total number of ids for resolving {}".format(len(ncibgenes_curies + symbol_curies + chembl_curies + chebi_curies + mondo_curies + doid_curies))) ``` ## Group all IDs based on Input Types ``` input_ids = { 'Gene': ncibgenes_curies + symbol_curies, 'ChemicalSubstance': chembl_curies + chebi_curies, 'Disease': mondo_curies + doid_curies } ``` ## Resolve IDs ``` query(input_ids) ```
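As a quick sanity check, the total reported above can be computed directly from the grouped dictionary instead of re-concatenating the six lists. A minimal sketch with hypothetical stand-in IDs (the real notebook fills these lists from the NCBIGene, gene-symbol, CHEMBL, CHEBI, MONDO, and DOID data built earlier):

```python
# Hypothetical stand-in IDs for illustration only; the real notebook
# uses the full CURIE lists constructed above.
input_ids = {
    'Gene': ['NCBIGene:1017', 'SYMBOL:CDK2'],
    'ChemicalSubstance': ['CHEMBL.COMPOUND:CHEMBL1017', 'CHEBI:29848'],
    'Disease': ['MONDO:0005148'],
}

# Summing per-category lengths avoids building one large temporary list
total = sum(len(ids) for ids in input_ids.values())
print(total)  # → 5
```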
# Time Series Filters ``` %matplotlib inline from __future__ import print_function import pandas as pd import matplotlib.pyplot as plt import statsmodels.api as sm dta = sm.datasets.macrodata.load_pandas().data index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3')) print(index) dta.index = index del dta['year'] del dta['quarter'] print(sm.datasets.macrodata.NOTE) print(dta.head(10)) fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) dta.realgdp.plot(ax=ax); legend = ax.legend(loc = 'upper left'); legend.prop.set_size(20); ``` ### Hodrick-Prescott Filter The Hodrick-Prescott filter separates a time series $y_t$ into a trend $\tau_t$ and a cyclical component $\zeta_t$ $$y_t = \tau_t + \zeta_t$$ The components are determined by minimizing the following quadratic loss function $$\min_{\\{ \tau_{t}\\} }\sum_{t=1}^{T}\zeta_{t}^{2}+\lambda\sum_{t=1}^{T}\left[\left(\tau_{t}-\tau_{t-1}\right)-\left(\tau_{t-1}-\tau_{t-2}\right)\right]^{2}$$ ``` gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(dta.realgdp) gdp_decomp = dta[['realgdp']].copy() gdp_decomp["cycle"] = gdp_cycle gdp_decomp["trend"] = gdp_trend fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) gdp_decomp[["realgdp", "trend"]]["2000-03-31":].plot(ax=ax, fontsize=16); legend = ax.get_legend() legend.prop.set_size(20); ``` ### Baxter-King approximate band-pass filter: Inflation and Unemployment #### Explore the hypothesis that inflation and unemployment are counter-cyclical. The Baxter-King filter is intended to explicitly deal with the periodicity of the business cycle. By applying their band-pass filter to a series, they produce a new series that does not contain fluctuations at frequencies higher or lower than those of the business cycle. Specifically, the BK filter takes the form of a symmetric moving average $$y_{t}^{*}=\sum_{k=-K}^{K}a_ky_{t-k}$$ where $a_{-k}=a_k$ and $\sum_{k=-K}^{K}a_k=0$ to eliminate any trend in the series and render it stationary if the series is I(1) or I(2). 
For completeness, the filter weights are determined as follows $$a_{j} = B_{j}+\theta\text{ for }j=0,\pm1,\pm2,\dots,\pm K$$ $$B_{0} = \frac{\left(\omega_{2}-\omega_{1}\right)}{\pi}$$ $$B_{j} = \frac{1}{\pi j}\left(\sin\left(\omega_{2}j\right)-\sin\left(\omega_{1}j\right)\right)\text{ for }j=0,\pm1,\pm2,\dots,\pm K$$ where $\theta$ is a normalizing constant such that the weights sum to zero. $$\theta=\frac{-\sum_{j=-K}^{K}B_{j}}{2K+1}$$ $$\omega_{1}=\frac{2\pi}{P_{H}}$$ $$\omega_{2}=\frac{2\pi}{P_{L}}$$ $P_L$ and $P_H$ are the periodicity of the low and high cut-off frequencies. Following Burns and Mitchell's work on US business cycles, which suggests cycles last from 1.5 to 8 years, we use $P_L=6$ and $P_H=32$ by default. ``` bk_cycles = sm.tsa.filters.bkfilter(dta[["infl","unemp"]]) ``` * We lose K observations on both ends. It is suggested to use K=12 for quarterly data. ``` fig = plt.figure(figsize=(12,10)) ax = fig.add_subplot(111) bk_cycles.plot(ax=ax, style=['r--', 'b-']); ``` ### Christiano-Fitzgerald approximate band-pass filter: Inflation and Unemployment The Christiano-Fitzgerald filter is a generalization of BK and can thus also be seen as a weighted moving average. However, the CF filter is asymmetric about $t$ as well as using the entire series. The implementation of their filter involves the calculation of the weights in $$y_{t}^{*}=B_{0}y_{t}+B_{1}y_{t+1}+\dots+B_{T-1-t}y_{T-1}+\tilde B_{T-t}y_{T}+B_{1}y_{t-1}+\dots+B_{t-2}y_{2}+\tilde B_{t-1}y_{1}$$ for $t=3,4,\dots,T-2$, where $$B_{j} = \frac{\sin(jb)-\sin(ja)}{\pi j},j\geq1$$ $$B_{0} = \frac{b-a}{\pi},a=\frac{2\pi}{P_{U}},b=\frac{2\pi}{P_{L}}$$ $\tilde B_{T-t}$ and $\tilde B_{t-1}$ are linear functions of the $B_{j}$'s, and the values for $t=1,2,T-1,$ and $T$ are also calculated in much the same way. $P_{U}$ and $P_{L}$ are as described above with the same interpretation. The CF filter is appropriate for series that may follow a random walk. 
``` print(sm.tsa.stattools.adfuller(dta['unemp'])[:3]) print(sm.tsa.stattools.adfuller(dta['infl'])[:3]) cf_cycles, cf_trend = sm.tsa.filters.cffilter(dta[["infl","unemp"]]) print(cf_cycles.head(10)) fig = plt.figure(figsize=(14,10)) ax = fig.add_subplot(111) cf_cycles.plot(ax=ax, style=['r--','b-']); ``` Filtering assumes *a priori* that business cycles exist. Due to this assumption, many macroeconomic modelers seek to build models that match the shape of impulse response functions rather than to replicate properties of filtered series. See VAR notebook.
# NumPy Data Access Using ArcPy ``` import arcpy as ARCPY import arcpy.da as DA inputFC = r'../data/CA_Polygons.shp' fieldNames = ['PCR2000', 'POP2000', 'PERCNOHS'] tab = DA.TableToNumPyArray(inputFC, fieldNames) print(tab) ``` # SSDataObject 1. Environment Settings (Except Extent) 2. Bad Records 3. Error/Warning Messages 4. Localization 5. **Feature Accounting** * Cursors and DataAccess are not assured to read attributes in order. * Keeps track of the shapes and their attributes so that one can create output features w/o post-joins. * Unique ID works with Spatial Weights Formats in ArcGIS, PySAL, R, Matlab, GeoDa, etc. ``` import SSDataObject as SSDO import os as OS inputFC = r'../data/CA_Polygons.shp' fullFC = OS.path.abspath(inputFC) fullPath, fcName = OS.path.split(fullFC) ssdo = SSDO.SSDataObject(inputFC) uniqueIDField = "MYID" fieldNames = ['PCR2010', 'POP2010', 'PERCNOHS'] ssdo.obtainData(uniqueIDField, fieldNames) ssdo = SSDO.SSDataObject(inputFC) ssdo.obtainData("MYID", fieldNames) print(ssdo.fields['POP2010'].data) ``` # Using PANDAS to get that R Feel ``` import pandas as PANDAS df = ssdo.getDataFrame() print(df) ``` # Advanced Analysis [SciPy Example - KMeans] ``` import numpy as NUM import scipy.cluster.vq as CLUST import arcgisscripting as ARC X = df.values  # DataFrame.as_matrix() was removed in pandas 1.0 whiteData = CLUST.whiten(X) centers, distortion = CLUST.kmeans(whiteData, 6) groups = ARC._ss.closest_centroid(whiteData, centers) print(groups) ``` # Max-P Regions Using PySAL ``` import pysal as PYSAL import pysal2ArcGIS as PYSAL_UTILS swmFile = OS.path.join(fullPath, "rook_bin.swm") w = PYSAL_UTILS.swm2Weights(ssdo, swmFile) maxp = PYSAL.region.Maxp(w, X[:,0:2], 3000000., floor_variable = X[:,2]) maxpGroups = NUM.empty((ssdo.numObs,), int) for regionID, orderIDs in enumerate(maxp.regions): maxpGroups[orderIDs] = regionID maxpGroups ``` # SKATER for Comparison ``` import Partition as PART skater = PART.Partition(ssdo, fieldNames, spaceConcept = "GET_SPATIAL_WEIGHTS_FROM_FILE", weightsFile = 
swmFile, kPartitions = 6) print(skater.partition) ARCPY.env.overwriteOutput = True outputFC = r'../data/cluster_output.shp' outK = SSDO.CandidateField('KMEANS', 'LONG', groups + 1) outMax = SSDO.CandidateField('MAXP', 'LONG', maxpGroups + 1) outSKATER = SSDO.CandidateField('SKATER', 'LONG', skater.partitionOutput) outFields = {'KMEANS': outK, 'MAXP': outMax, 'SKATER': outSKATER} appendFields = fieldNames + ["NEW_NAME"] ssdo.output2NewFC(outputFC, outFields, appendFields = appendFields) ```
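The `ARC._ss.closest_centroid` helper used in the KMeans cell above requires an ArcGIS installation; outside that environment, `scipy.cluster.vq.vq` performs the same nearest-centroid assignment. A minimal sketch on synthetic data (the 6 clusters match the `kmeans` call above; the random data is a stand-in for the shapefile attributes):

```python
import numpy as np
from scipy.cluster.vq import whiten, kmeans, vq

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

# Whitening rescales each column to unit variance before clustering
white = whiten(X)
centers, distortion = kmeans(white, 6)

# vq assigns each observation to its nearest centroid
groups, dists = vq(white, centers)
print(groups.shape)  # → (100,)
```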
# Examples of all decoders (except Kalman Filter) In this example notebook, we: 1. Import the necessary packages 2. Load a data file (spike trains and outputs we are predicting) 3. Preprocess the data for use in all decoders 4. Run all decoders and print the goodness of fit 5. Plot example decoded outputs See "Examples_kf_decoder" for a Kalman filter example. <br> Because the Kalman filter utilizes different preprocessing, we don't include an example here, to keep this notebook more understandable. ## 1. Import Packages Below, we import both standard packages and functions from the accompanying .py files ``` #Import standard packages import numpy as np import matplotlib.pyplot as plt %matplotlib inline from scipy import io from scipy import stats import pickle # If you would prefer to load the '.h5' example file rather than the '.pickle' example file, you need the deepdish package # import deepdish as dd #Import function to get the covariate matrix that includes spike history from previous bins from preprocessing_funcs import get_spikes_with_history #Import metrics from metrics import get_R2 from metrics import get_rho #Import decoder functions from decoders import WienerCascadeDecoder from decoders import WienerFilterDecoder from decoders import DenseNNDecoder from decoders import SimpleRNNDecoder from decoders import GRUDecoder from decoders import LSTMDecoder from decoders import XGBoostDecoder from decoders import SVRDecoder ``` ## 2. Load Data The data for this example can be downloaded at this [link](https://www.dropbox.com/sh/n4924ipcfjqc0t6/AACPWjxDKPEzQiXKUUFriFkJa?dl=0&preview=example_data_s1.pickle). It was recorded by Raeed Chowdhury from Lee Miller's lab at Northwestern. The data that we load is in the format described below. We have another example notebook, "Example_format_data", that may be helpful for putting the data in this format. 
Neural data should be a matrix of size "number of time bins" x "number of neurons", where each entry is the firing rate of a given neuron in a given time bin. The output you are decoding should be a matrix of size "number of time bins" x "number of features you are decoding".

```
folder='' #ENTER THE FOLDER THAT YOUR DATA IS IN
# folder='/home/jglaser/Data/DecData/'
# folder='/Users/jig289/Dropbox/Public/Decoding_Data/'

with open(folder+'example_data_s1.pickle','rb') as f:
    neural_data,vels_binned=pickle.load(f,encoding='latin1') #If using python 3
    # neural_data,vels_binned=pickle.load(f) #If using python 2

# #If you would prefer to load the '.h5' example file rather than the '.pickle' example file:
# data=dd.io.load(folder+'example_data_s1.h5')
# neural_data=data['neural_data']
# vels_binned=data['vels_binned']
```

## 3. Preprocess Data

### 3A. User Inputs

The user can define what time period to use spikes from (with respect to the output).

```
bins_before=6 #How many bins of neural data prior to the output are used for decoding
bins_current=1 #Whether to use concurrent time bin of neural data
bins_after=6 #How many bins of neural data after the output are used for decoding
```

### 3B. Format Covariates

#### Format Input Covariates

```
# Format for recurrent neural networks (SimpleRNN, GRU, LSTM)
# Function to get the covariate matrix that includes spike history from previous bins
X=get_spikes_with_history(neural_data,bins_before,bins_after,bins_current)

# Format for Wiener Filter, Wiener Cascade, XGBoost, and Dense Neural Network
#Put in "flat" format, so each "neuron / time" is a single feature
X_flat=X.reshape(X.shape[0],(X.shape[1]*X.shape[2]))
```

#### Format Output Covariates

```
#Set decoding output
y=vels_binned
```

### 3C. Split into training / testing / validation sets

Note that hyperparameters should be determined using a separate validation set.
Then, the goodness of fit should be tested on a testing set (separate from the training and validation sets).

#### User Options

```
#Set what part of data should be part of the training/testing/validation sets
training_range=[0, 0.7]
testing_range=[0.7, 0.85]
valid_range=[0.85,1]
```

#### Split Data

```
num_examples=X.shape[0]

#Note that each range has a buffer of "bins_before" bins at the beginning, and "bins_after" bins at the end
#This makes it so that the different sets don't include overlapping neural data
training_set=np.arange(int(np.round(training_range[0]*num_examples))+bins_before,int(np.round(training_range[1]*num_examples))-bins_after)
testing_set=np.arange(int(np.round(testing_range[0]*num_examples))+bins_before,int(np.round(testing_range[1]*num_examples))-bins_after)
valid_set=np.arange(int(np.round(valid_range[0]*num_examples))+bins_before,int(np.round(valid_range[1]*num_examples))-bins_after)

#Get training data
X_train=X[training_set,:,:]
X_flat_train=X_flat[training_set,:]
y_train=y[training_set,:]

#Get testing data
X_test=X[testing_set,:,:]
X_flat_test=X_flat[testing_set,:]
y_test=y[testing_set,:]

#Get validation data
X_valid=X[valid_set,:,:]
X_flat_valid=X_flat[valid_set,:]
y_valid=y[valid_set,:]
```

### 3D. Process Covariates

We normalize (z-score) the inputs and zero-center the outputs. Parameters for z-scoring (mean/std.) should be determined on the training set only, and then these z-scoring parameters are also used on the testing and validation sets.

```
#Z-score "X" inputs.
X_train_mean=np.nanmean(X_train,axis=0)
X_train_std=np.nanstd(X_train,axis=0)
X_train=(X_train-X_train_mean)/X_train_std
X_test=(X_test-X_train_mean)/X_train_std
X_valid=(X_valid-X_train_mean)/X_train_std

#Z-score "X_flat" inputs.
X_flat_train_mean=np.nanmean(X_flat_train,axis=0)
X_flat_train_std=np.nanstd(X_flat_train,axis=0)
X_flat_train=(X_flat_train-X_flat_train_mean)/X_flat_train_std
X_flat_test=(X_flat_test-X_flat_train_mean)/X_flat_train_std
X_flat_valid=(X_flat_valid-X_flat_train_mean)/X_flat_train_std

#Zero-center outputs
y_train_mean=np.mean(y_train,axis=0)
y_train=y_train-y_train_mean
y_test=y_test-y_train_mean
y_valid=y_valid-y_train_mean
```

## 4. Run Decoders

Note that in this example, we are evaluating the model fit on the validation set.

### 4A. Wiener Filter (Linear Regression)

```
#Declare model
model_wf=WienerFilterDecoder()

#Fit model
model_wf.fit(X_flat_train,y_train)

#Get predictions
y_valid_predicted_wf=model_wf.predict(X_flat_valid)

#Get metric of fit
R2s_wf=get_R2(y_valid,y_valid_predicted_wf)
print('R2s:', R2s_wf)
```

### 4B. Wiener Cascade (Linear Nonlinear Model)

```
#Declare model
model_wc=WienerCascadeDecoder(degree=3)

#Fit model
model_wc.fit(X_flat_train,y_train)

#Get predictions
y_valid_predicted_wc=model_wc.predict(X_flat_valid)

#Get metric of fit
R2s_wc=get_R2(y_valid,y_valid_predicted_wc)
print('R2s:', R2s_wc)
```

### 4C. XGBoost (Extreme Gradient Boosting)

```
#Declare model
model_xgb=XGBoostDecoder(max_depth=3,num_round=200,eta=0.3,gpu=-1)

#Fit model
model_xgb.fit(X_flat_train, y_train)

#Get predictions
y_valid_predicted_xgb=model_xgb.predict(X_flat_valid)

#Get metric of fit
R2s_xgb=get_R2(y_valid,y_valid_predicted_xgb)
print('R2s:', R2s_xgb)
```

### 4D. SVR (Support Vector Regression)

```
#The SVR works much better when the y values are normalized, so we first z-score the y values
#They have previously been zero-centered, so we will just divide by the stdev (of the training set)
y_train_std=np.nanstd(y_train,axis=0)
y_zscore_train=y_train/y_train_std
y_zscore_test=y_test/y_train_std
y_zscore_valid=y_valid/y_train_std

#Declare model
model_svr=SVRDecoder(C=5, max_iter=4000)

#Fit model
model_svr.fit(X_flat_train,y_zscore_train)

#Get predictions
y_zscore_valid_predicted_svr=model_svr.predict(X_flat_valid)

#Get metric of fit
R2s_svr=get_R2(y_zscore_valid,y_zscore_valid_predicted_svr)
print('R2s:', R2s_svr)
```

### 4E. Dense Neural Network

```
#Declare model
model_dnn=DenseNNDecoder(units=400,dropout=0.25,num_epochs=10)

#Fit model
model_dnn.fit(X_flat_train,y_train)

#Get predictions
y_valid_predicted_dnn=model_dnn.predict(X_flat_valid)

#Get metric of fit
R2s_dnn=get_R2(y_valid,y_valid_predicted_dnn)
print('R2s:', R2s_dnn)
```

### 4F. Simple RNN

```
#Declare model
model_rnn=SimpleRNNDecoder(units=400,dropout=0,num_epochs=5)

#Fit model
model_rnn.fit(X_train,y_train)

#Get predictions
y_valid_predicted_rnn=model_rnn.predict(X_valid)

#Get metric of fit
R2s_rnn=get_R2(y_valid,y_valid_predicted_rnn)
print('R2s:', R2s_rnn)
```

### 4G. GRU (Gated Recurrent Unit)

```
#Declare model
model_gru=GRUDecoder(units=400,dropout=0,num_epochs=5)

#Fit model
model_gru.fit(X_train,y_train)

#Get predictions
y_valid_predicted_gru=model_gru.predict(X_valid)

#Get metric of fit
R2s_gru=get_R2(y_valid,y_valid_predicted_gru)
print('R2s:', R2s_gru)
```

### 4H. LSTM (Long Short Term Memory)

```
#Declare model
model_lstm=LSTMDecoder(units=400,dropout=0,num_epochs=5)

#Fit model
model_lstm.fit(X_train,y_train)

#Get predictions
y_valid_predicted_lstm=model_lstm.predict(X_valid)

#Get metric of fit
R2s_lstm=get_R2(y_valid,y_valid_predicted_lstm)
print('R2s:', R2s_lstm)
```

## 5. Make Plots

```
#As an example, I plot an example 1000 values of the x velocity (column index 0), both true and predicted with the Wiener filter
#Note that I add back in the mean value, so that both true and predicted values are in the original coordinates
fig_x_wf=plt.figure()
plt.plot(y_valid[1000:2000,0]+y_train_mean[0],'b')
plt.plot(y_valid_predicted_wf[1000:2000,0]+y_train_mean[0],'r')

#Save figure
# fig_x_wf.savefig('x_velocity_decoding.eps')
```
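Every decoder above is scored with `get_R2` from the accompanying `metrics.py`. For readers without the helper files at hand, the quantity reported is the coefficient of determination computed independently for each output column; the sketch below illustrates that definition (an illustration only, not necessarily the exact code in `metrics.py`):

```python
import numpy as np

def r2_per_column(y_true, y_pred):
    """R^2 computed separately for each output column (e.g. x and y velocity)."""
    ss_res = np.sum((y_true - y_pred) ** 2, axis=0)                   # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true, axis=0)) ** 2, axis=0)  # total sum of squares
    return 1 - ss_res / ss_tot

# Perfect predictions score 1 in every column; predicting the column mean scores 0
y = np.array([[0.0, 1.0], [1.0, 3.0], [2.0, 5.0]])
print(r2_per_column(y, y))  # -> [1. 1.]
```

This is why the notebooks print one R² value per decoded feature rather than a single number.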
``` results = {'nist_mdipfl50_mymodel-hardest_trainacc': [100.0, 97.1, 93.9, 89.9, 87.8, 85.7, 85.0, 83.2, 82.1, 81.0], 'nist_mdipfl50_mymodel-hardest_valacc': [100.0, 96.2, 93.2, 88.3, 87.1, 82.9, 79.9, 81.1, 79.5, 75.1], 'nist_mdipfl50_mymodel_trainacc': [100.0, 95.1, 89.1, 87.6, 84.9, 83.3, 80.0, 76.7, 75.9, 71.2], 'nist_mdipfl50_mymodel_valacc': [100.0, 92.4, 87.3, 86.2, 81.5, 77.6, 75.0, 76.0, 69.6, 70.6], 'nist_mdipfl50_mymodel_nopretrain-hardest_trainacc': [100.0, 97.2, 92.8, 90.1, 87.4, 85.8, 82.7, 82.4, 78.5, 76.6], 'nist_mdipfl50_mymodel_nopretrain-hardest_valacc': [100.0, 94.3, 91.8, 87.3, 83.4, 80.8, 79.3, 77.3, 77.2, 75.5], 'nist_mdipfl50_mymodel_nopretrain_trainacc': [100.0, 82.6, 71.2, 66.5, 58.5, 56.3, 54.5, 51.5, 48.5, 46.0], 'nist_mdipfl50_mymodel_nopretrain_valacc': [100.0, 80.8, 72.4, 66.4, 61.0, 54.7, 54.8, 50.6, 45.6, 42.2], 'nist_mdipfl50_mymodel_siamese_trainacc': [100.0, 98.7, 97.4, 96.2, 95.0, 93.3, 93.4, 92.0, 91.3, 90.2], 'nist_mdipfl50_mymodel_siamese_valacc': [100.0, 97.3, 95.2, 94.0, 90.4, 88.8, 87.1, 88.3, 87.2, 86.2], 'nist_mdipfl50_mymodel_siamese_nopretrain_trainacc': [100.0, 98.6, 96.7, 96.1, 94.0, 93.5, 93.4, 92.7, 92.0, 91.6], 'nist_mdipfl50_mymodel_siamese_nopretrain_valacc': [100.0, 97.7, 95.1, 93.9, 92.7, 92.2, 88.0, 88.1, 85.9, 87.3], 'nist_mymodel-hardest_trainacc': [100.0, 96.2, 93.5, 91.7, 89.4, 87.6, 81.9, 82.1, 83.6, 81.0], 'nist_mymodel-hardest_valacc': [100.0, 95.7, 92.5, 91.1, 86.7, 82.4, 81.0, 80.2, 76.9, 75.3], 'nist_mymodel_trainacc': [100.0, 96.3, 92.4, 90.4, 86.7, 86.2, 83.4, 81.5, 79.7, 77.7], 'nist_mymodel_valacc': [100.0, 94.8, 87.7, 86.0, 83.9, 80.5, 82.0, 75.6, 77.5, 74.2], 'nist_mymodel_nopretrain-hardest_trainacc': [100.0, 96.6, 93.0, 88.5, 87.5, 85.2, 83.2, 80.8, 79.8, 78.8], 'nist_mymodel_nopretrain-hardest_valacc': [100.0, 95.8, 91.0, 87.1, 87.7, 84.5, 80.4, 80.6, 78.3, 76.1], 'nist_mymodel_nopretrain_trainacc': [100.0, 96.5, 93.5, 90.9, 90.0, 86.3, 84.3, 83.8, 84.4, 80.2], 
'nist_mymodel_nopretrain_valacc': [100.0, 94.7, 93.4, 88.0, 86.7, 83.5, 83.3, 79.8, 80.1, 76.7], 'nist_mymodel_siamese_trainacc': [100.0, 99.1, 97.9, 97.6, 96.0, 94.1, 95.1, 92.9, 93.3, 91.9], 'nist_mymodel_siamese_valacc': [100.0, 98.3, 96.4, 95.6, 93.7, 92.2, 90.9, 90.4, 88.8, 89.9], 'nist_mymodel_siamese_nopretrain_trainacc': [100.0, 99.3, 98.1, 97.6, 96.7, 95.1, 96.0, 94.2, 92.3, 91.9], 'nist_mymodel_siamese_nopretrain_valacc': [100.0, 98.2, 97.5, 94.9, 94.7, 92.3, 93.6, 90.1, 87.8, 89.5], 'nist_sq_mymodel-hardest_trainacc': [100.0, 94.1, 88.6, 87.3, 84.3, 82.4, 79.0, 78.4, 74.1, 73.0], 'nist_sq_mymodel-hardest_valacc': [100.0, 94.1, 86.3, 85.9, 81.2, 78.1, 77.6, 71.7, 68.1, 69.8], 'nist_sq_mymodel_trainacc': [100.0, 91.9, 86.6, 80.9, 77.3, 75.2, 73.4, 69.6, 66.0, 65.7], 'nist_sq_mymodel_valacc': [100.0, 90.3, 83.8, 81.3, 77.3, 72.4, 68.2, 68.2, 65.1, 60.4], 'nist_sq_mymodel_nopretrain-hardest_trainacc': [100.0, 93.9, 87.0, 80.0, 75.9, 74.3, 74.0, 70.6, 66.6, 68.9], 'nist_sq_mymodel_nopretrain-hardest_valacc': [100.0, 90.4, 83.0, 80.5, 76.4, 68.9, 70.6, 66.1, 64.2, 64.2], 'nist_sq_mymodel_nopretrain_trainacc': [100.0, 83.2, 71.4, 66.2, 61.6, 52.5, 52.2, 46.7, 43.3, 45.1], 'nist_sq_mymodel_nopretrain_valacc': [100.0, 81.9, 70.3, 65.7, 61.4, 57.2, 52.1, 49.1, 45.0, 46.3], 'nist_sq_mymodel_siamese_trainacc': [100.0, 98.6, 96.2, 95.2, 95.7, 94.5, 93.1, 92.1, 90.8, 89.9], 'nist_sq_mymodel_siamese_valacc': [100.0, 97.5, 94.7, 94.0, 94.5, 90.5, 91.6, 89.7, 88.0, 86.5], 'nist_sq_mymodel_siamese_nopretrain_trainacc': [100.0, 98.1, 93.7, 93.9, 90.1, 89.4, 87.9, 86.8, 84.3, 85.0], 'nist_sq_mymodel_siamese_nopretrain_valacc': [100.0, 94.9, 92.8, 91.1, 88.8, 84.9, 83.3, 84.4, 82.1, 79.8], 'nu_cvpr2016_mdipfl50_mymodel-hardest_trainacc': [100.0, 82.6, 77.6, 67.9, 60.5, 55.4, 53.2, 54.5, 49.8, 47.9], 'nu_cvpr2016_mdipfl50_mymodel-hardest_valacc': [100.0, 85.2, 77.9, 68.8, 63.0, 60.7, 55.9, 53.9, 49.9, 44.5], 'nu_cvpr2016_mdipfl50_mymodel_trainacc': [100.0, 85.4, 78.2, 75.5, 
68.2, 62.0, 61.9, 57.3, 54.1, 49.7], 'nu_cvpr2016_mdipfl50_mymodel_valacc': [100.0, 86.6, 75.6, 69.3, 66.8, 63.3, 58.8, 54.7, 51.9, 52.5], 'nu_cvpr2016_mdipfl50_mymodel_siamese_trainacc': [100.0, 64.1, 50.8, 45.0, 36.6, 34.6, 30.3, 28.4, 25.5, 25.5], 'nu_cvpr2016_mdipfl50_mymodel_siamese_valacc': [100.0, 62.0, 48.5, 42.0, 37.7, 30.5, 26.3, 25.7, 23.1, 19.4], 'nu_cvpr2016_mdipfl50_nasnet-hardest_trainacc': [100.0, 89.4, 78.3, 71.7, 65.3, 64.4, 60.2, 57.3, 53.4, 50.8], 'nu_cvpr2016_mdipfl50_nasnet-hardest_valacc': [100.0, 89.1, 78.9, 71.1, 67.4, 64.4, 57.7, 55.5, 53.9, 49.3], 'nu_cvpr2016_mdipfl50_nasnet_trainacc': [100.0, 87.5, 75.3, 71.0, 67.7, 61.7, 58.6, 56.3, 52.0, 50.3], 'nu_cvpr2016_mdipfl50_nasnet_valacc': [100.0, 87.2, 79.4, 70.7, 64.5, 65.4, 60.0, 56.5, 53.3, 49.8], 'nu_cvpr2016_mdipfl50_nasnet_nopretrain-hardest_trainacc': [100.0, 82.1, 68.9, 63.3, 55.8, 52.5, 49.5, 47.0, 42.6, 42.2], 'nu_cvpr2016_mdipfl50_nasnet_nopretrain-hardest_valacc': [100.0, 80.7, 70.2, 62.1, 58.4, 50.8, 43.6, 43.1, 40.7, 40.4], 'nu_cvpr2016_mdipfl50_nasnet_nopretrain_trainacc': [100.0, 88.0, 80.4, 71.6, 68.2, 65.7, 62.5, 57.2, 54.7, 50.8], 'nu_cvpr2016_mdipfl50_nasnet_nopretrain_valacc': [100.0, 88.4, 80.7, 72.0, 67.6, 65.2, 62.5, 57.3, 52.1, 50.0], 'nu_cvpr2016_mdipfl50_nasnet_siamese_trainacc': [100.0, 82.2, 70.8, 64.7, 58.9, 54.3, 52.6, 47.7, 45.7, 43.3], 'nu_cvpr2016_mdipfl50_nasnet_siamese_valacc': [100.0, 76.6, 69.3, 61.3, 55.4, 52.6, 48.4, 45.5, 43.2, 41.8], 'nu_cvpr2016_mdipfl50_nasnet_siamese_nopretrain_trainacc': [100.0, 80.6, 69.8, 59.4, 53.0, 49.3, 48.6, 43.7, 40.2, 38.7], 'nu_cvpr2016_mdipfl50_nasnet_siamese_nopretrain_valacc': [100.0, 79.7, 68.2, 57.0, 55.3, 50.2, 47.8, 43.0, 41.0, 37.9], 'nu_cvpr2016_mdipfl50_resnet18-hardest_trainacc': [100.0, 94.2, 89.7, 87.5, 83.1, 79.1, 76.8, 77.5, 77.7, 74.2], 'nu_cvpr2016_mdipfl50_resnet18-hardest_valacc': [100.0, 93.2, 89.3, 83.0, 81.2, 78.7, 72.7, 72.0, 73.2, 70.7], 'nu_cvpr2016_mdipfl50_resnet18_trainacc': [100.0, 95.7, 
91.2, 89.1, 85.2, 82.4, 80.5, 79.5, 74.4, 73.8], 'nu_cvpr2016_mdipfl50_resnet18_valacc': [100.0, 94.3, 88.2, 84.5, 82.9, 78.7, 76.1, 74.1, 70.0, 71.1], 'nu_cvpr2016_mdipfl50_resnet18_nopretrain-hardest_trainacc': [100.0, 95.2, 86.4, 83.5, 79.3, 77.4, 73.4, 71.5, 68.4, 66.0], 'nu_cvpr2016_mdipfl50_resnet18_nopretrain-hardest_valacc': [100.0, 92.8, 87.4, 82.4, 80.2, 73.4, 70.4, 69.3, 65.9, 61.7], 'nu_cvpr2016_mdipfl50_resnet18_nopretrain_trainacc': [100.0, 96.8, 92.8, 90.9, 86.0, 86.2, 83.7, 81.6, 80.8, 77.4], 'nu_cvpr2016_mdipfl50_resnet18_nopretrain_valacc': [100.0, 94.8, 90.2, 88.6, 82.9, 81.0, 81.5, 74.9, 74.8, 75.7], 'nu_cvpr2016_mdipfl50_resnet18_siamese_trainacc': [100.0, 92.8, 87.4, 85.2, 78.9, 74.3, 73.4, 72.2, 69.9, 66.9], 'nu_cvpr2016_mdipfl50_resnet18_siamese_valacc': [100.0, 91.8, 83.5, 79.4, 76.1, 74.0, 71.3, 65.8, 66.5, 65.6], 'nu_cvpr2016_mdipfl50_resnet18_siamese_nopretrain_trainacc': [100.0, 83.6, 78.6, 71.2, 71.2, 63.5, 61.3, 59.2, 56.9, 54.1], 'nu_cvpr2016_mdipfl50_resnet18_siamese_nopretrain_valacc': [100.0, 82.9, 76.0, 70.3, 67.7, 66.0, 62.0, 58.1, 56.7, 55.1], 'nu_cvpr2016_resnet18-hardest_trainacc': [100.0, 97.8, 94.4, 93.8, 89.3, 88.7, 87.5, 85.3, 83.0, 82.8], 'nu_cvpr2016_resnet18-hardest_valacc': [100.0, 95.8, 93.4, 91.3, 89.7, 87.8, 85.4, 85.7, 83.1, 82.3], 'nu_cvpr2016_resnet18_trainacc': [100.0, 97.3, 95.7, 93.0, 91.8, 89.1, 87.8, 85.4, 85.1, 83.9], 'nu_cvpr2016_resnet18_valacc': [100.0, 96.1, 93.6, 90.7, 90.4, 85.9, 86.8, 84.9, 82.0, 78.6], 'nu_cvpr2016_resnet18_siamese_trainacc': [100.0, 97.8, 97.9, 95.6, 95.7, 92.8, 93.7, 92.5, 91.4, 89.2], 'nu_cvpr2016_resnet18_siamese_valacc': [100.0, 96.8, 95.8, 92.3, 90.7, 90.0, 87.6, 86.0, 85.0, 83.5]} ``` ## NIST SD19 ``` configs = ["nist_mymodel", "nist_sq_mymodel", "nist_mdipfl50_mymodel"] pretrain = ["", "_nopretrain"] architecture_mining = ["_siamese", "", "-hardest"] rows = [] for c in configs: for p in pretrain: for a in architecture_mining: if a == '-hardest': rows.append(c+p+a) else: 
rows.append(c+a+p) rows for r in rows: if r+"_valacc" not in results: raise Exception('we have a problem: cant find ' + r + '_valacc') print('Original 128x128' if 'nist_mymodel' in r else ('Highlighted 96x96' if 'nist_sq_mymodel' in r else 'MDIPFL50 50x37'), 'No' if '_nopretrain' in r else 'Yes', 'Siamese' if '_siamese' in r else 'Triplet', 'Hardest' if '-hardest' in r else ('.' if '_siamese' in r else 'Semi-hard'), '\t'.join(str(x) for x in results[r+"_valacc"]), sep='\t') ``` ## ATS-CVPR2016 ``` configs = ["nu_cvpr2016_mdipfl50_resnet18"] pretrain = ["", "_nopretrain"] architecture_mining = ["_siamese", "", "-hardest"] rows = [] for c in configs: for p in pretrain: for a in architecture_mining: if a == '-hardest': rows.append(c+p+a) else: rows.append(c+a+p) rows for r in rows: if r+"_valacc" not in results: raise Exception('we have a problem: cant find ' + r + '_valacc') print('No' if '_nopretrain' in r else 'Yes', 'Siamese' if '_siamese' in r else 'Triplet', 'Hardest' if '-hardest' in r else ('.' if '_siamese' in r else 'Semi-hard'), '\t'.join(str(x) for x in results[r+"_valacc"]), sep='\t') ```
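The tab-separated tables above are meant for pasting into a spreadsheet, but the same series can be plotted directly. A minimal sketch, using two validation-accuracy series copied verbatim from the `results` dict above (the dict doesn't label what the ten values index, so the x-axis here is simply the position of each recorded value):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this also runs outside a notebook
import matplotlib.pyplot as plt

# Two validation-accuracy series copied from the results dict above
series = {
    'nist_mymodel_valacc': [100.0, 94.8, 87.7, 86.0, 83.9, 80.5, 82.0, 75.6, 77.5, 74.2],
    'nist_mymodel_siamese_valacc': [100.0, 98.3, 96.4, 95.6, 93.7, 92.2, 90.9, 90.4, 88.8, 89.9],
}

fig, ax = plt.subplots()
for name, values in series.items():
    ax.plot(range(len(values)), values, marker='o', label=name)
ax.set_xlabel('recorded value index')
ax.set_ylabel('validation accuracy (%)')
ax.legend()
fig.savefig('valacc_comparison.png')
```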
# Slider bar decline curve in python

Created by Thomas Martin, PhD candidate at [CoRE](https://core.mines.edu/) at Colorado School of Mines. Personal website is [here](https://tmartin.carrd.co/), and email is thomasmartin@mines.edu.

```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import pylab
import scipy as sp
from scipy.optimize import curve_fit
from matplotlib.widgets import Slider, Button, RadioButtons

cd drive/My Drive/T21_well_bonanza
ls
```

### Wyoming production data, had to change from a .xls to a .csv

```
df = pd.read_csv('RAPI3723253.csv')
df = df.rename(columns={"OIL BBLS":"oilBBLS", "GAS MCF":"gasMCF",
                        "WATER BBLS":"waterBBLS", "Month/Year":"Month_Year"})
df.head()
```

Let's make a quick QC plot (no sliders):

```
plt.figure(figsize=(6,3), dpi=150)
plt.plot(df.index, df.oilBBLS, color='g')
plt.xlabel('Months since 1st production', size=18)
plt.ylabel('BBLs per Month', size=16)
```

# Production Slider

```
#@title String fields
Fluid_type = 'Gas' #@param ["Oil", "Gas", "Water"]

#@title What Months do you want to show { display-mode: "form" }
MinMonth_slider = 1 #@param {type:"slider", min:1, max:323, step:1}
MaxMonth_slider = 152 #@param {type:"slider", min:0, max:323, step:1}

#print(MinMonth_slider)
#print(MaxMonth_slider)

if MinMonth_slider > MaxMonth_slider:
    print('Error! Check min and max month')

def model_func(x, a, k, b):
    return a * np.exp(-k*x) + b

plt.figure(figsize=(10,6), dpi=150)

if Fluid_type == "Oil":
    y = df.oilBBLS[MinMonth_slider:MaxMonth_slider]
    p0 = (1.,1.e-12,1.) # starting search coefficients
    plt.semilogy(df.index[MinMonth_slider:MaxMonth_slider], df.oilBBLS[MinMonth_slider:MaxMonth_slider],
                 color='g', linewidth=2, label='Prod data')
elif Fluid_type == "Gas":
    y = df.gasMCF[MinMonth_slider:MaxMonth_slider]
    p0 = (1.,1.e-12,1.) # starting search coefficients
    plt.semilogy(df.index[MinMonth_slider:MaxMonth_slider], df.gasMCF[MinMonth_slider:MaxMonth_slider],
                 color='r', linewidth=2, label='Prod data')
elif Fluid_type == "Water":
    p0 = (1.,1.e-11,1.) # starting search coefficients
    y = df.waterBBLS[MinMonth_slider:MaxMonth_slider]
    plt.semilogy(df.index[MinMonth_slider:MaxMonth_slider], df.waterBBLS[MinMonth_slider:MaxMonth_slider],
                 color='b', linewidth=2, label='Prod data')

x = df.index[MinMonth_slider:MaxMonth_slider]
opt, pcov = curve_fit(model_func, x, y, p0, maxfev=50000)
a, k, b = opt

x2 = np.linspace(MinMonth_slider, MaxMonth_slider, 20)
y2 = model_func(x2, a, k, b)
plt.plot(x2, y2, linewidth=3, linestyle='--', color='black',
         label='Fit. func: $f(x) = %.3f e^{%.3f x} %+.3f$' % (a,k,b))
plt.legend()
plt.grid(True)
plt.xlabel('Months since 1st production', size=18)

if Fluid_type == "Oil":
    plt.ylim(2,20000)
    plt.ylabel('BBLs per Month', size=16)
elif Fluid_type == "Gas":
    plt.ylim(2,200000)
    plt.ylabel('MCF per Month', size=16)
elif Fluid_type == "Water":
    plt.ylim(2,2000)
    plt.ylabel('BBLs per Month', size=16)

print('Number of Months')
print(MaxMonth_slider - MinMonth_slider)
```
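The `curve_fit` call above is easy to sanity-check on synthetic data: generate production from a known exponential decline, add noise, and confirm the recovered coefficients are close to the truth. A sketch using the same model function (the decline parameters 5000 / 0.05 / 200 are made up for the test, not taken from the Wyoming well):

```python
import numpy as np
from scipy.optimize import curve_fit

def model_func(x, a, k, b):
    # same exponential-decline model fitted in the notebook above
    return a * np.exp(-k * x) + b

rng = np.random.default_rng(0)
x = np.arange(120, dtype=float)         # months on production
y = model_func(x, 5000.0, 0.05, 200.0)  # known decline parameters
y = y + rng.normal(0.0, 20.0, x.size)   # measurement noise

# Data-driven starting guesses are more robust than all-ones starts
p0 = (y.max(), 1e-2, y.min())
opt, pcov = curve_fit(model_func, x, y, p0, maxfev=50000)
a, k, b = opt
print(a, k, b)  # close to 5000, 0.05, 200
```

If the recovered coefficients drift far from the truth here, the starting guesses (`p0`) are the first thing to revisit before blaming the field data.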
Tutorial: VIC 5 Image Driver Parameter Conversion
====

***Converting parameters from ASCII VIC 4 format to netCDF VIC 5 Image Driver format***

This Jupyter Notebook outlines one approach to converting VIC parameters from ASCII to netCDF format. For this tutorial, we'll convert three datasets from ASCII to netCDF:

1. **Livneh et al. (2015) - 1/16th deg. VIC parameters**
    - Description: http://www.colorado.edu/lab/livneh/data
    - Data: ftp://livnehpublicstorage.colorado.edu/public/Livneh.2015.NAmer.Dataset/nldas.vic.params/
    - Citation: Livneh B., E.A. Rosenberg, C. Lin, B. Nijssen, V. Mishra, K.M. Andreadis, E.P. Maurer, and D.P. Lettenmaier, 2013: A long-term hydrologically based dataset of land surface fluxes and states for the conterminous United States: update and extensions, Journal of Climate, 26, 9384–9392.
2. **Global 1/2 deg. VIC parameters**
    - Description: http://www.hydro.washington.edu/SurfaceWaterGroup/Data/vic_global_0.5deg.html
    - Data: ftp://ftp.hydro.washington.edu/pub/HYDRO/data/VIC_param/vic_params_global_0.5deg.tgz
    - Citation: Nijssen, B.N., G.M. O'Donnell, D.P. Lettenmaier and E.F. Wood, 2001: Predicting the discharge of global rivers, J. Clim., 14(15), 3307-3323, doi: 10.1175/1520-0442(2001)014<3307:PTDOGR>2.0.CO;2.
3. **Maurer et al. (2002) - 1/8th deg. VIC parameters**
    - Description: http://www.hydro.washington.edu/SurfaceWaterGroup/Data/VIC_retrospective/index.html
    - Data: http://www.hydro.washington.edu/SurfaceWaterGroup/Data/VIC_retrospective/index.html
    - Citation: Maurer, E.P., A.W. Wood, J.C. Adam, D.P. Lettenmaier, and B. Nijssen, 2002: A long-term hydrologically-based data set of land surface fluxes and states for the conterminous United States, J. Climate 15, 3237-3251.

All of these datasets include the following parameter sets:
- Soil Parameter file
- Vegetation Library file
- Vegetation Parameter file
- Snowbands file

### Outputs

For each of the parameter sets above, we'll be producing two files:

1. VIC 5 Image Driver Input Parameters (netCDF file defining model parameters)
2. VIC 5 Image Driver Domain File (netCDF file defining spatial extent of model domain)

### Python Imports and Setup

```
%matplotlib inline

import os
import getpass
from datetime import datetime
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt

# For more information on tonic, see: https://github.com/UW-Hydro/tonic/
import tonic.models.vic.grid_params as gp

# Metadata to be used later
user = getpass.getuser()
now = datetime.now()

print('python version : %s' % os.sys.version)
print('numpy version : %s' % np.version.full_version)
print('xarray version : %s' % xr.version.version)
print('User : %s' % user)
print('Date/Time : %s' % now)
```

## Set Path Information

```
# Set the path to the datasets here
dpath = './'  # root input data path
opath = './'  # output data path

ldpath = os.path.join(dpath, 'Livneh_0.0625_NLDAS')  # Path to Livneh Parameters
gdpath = os.path.join(dpath, 'Nijssen_0.5_Global')   # Path to Global Parameters
mdpath = os.path.join(dpath, 'Maurer_0.125_NLDAS')   # Path to Maurer Parameters
```

# Livneh Domain File

Along with the VIC model parameters, we also need a domain file that describes the spatial extent and active grid cells in the model domain. The domain file must exactly match the parameters and forcings. The Livneh dataset includes a DEM file in netCDF format that we can use to construct the domain file.

Steps:
1. Open the DEM
2. Set useful global attributes
3. Create the mask/frac variables using the non-missing DEM mask
4. Calculate the grid cell area using `cdo`
5. Add the grid cell area back into the domain dataset
6. Save the domain dataset

```
dom_file = os.path.join(opath, 'domain.vic.global0.0625deg.%s.nc' % now.strftime('%Y%m%d'))

dem = xr.open_dataset(os.path.join(ldpath, 'Composite.DEM.NLDAS.mex.0625.nc'))

dom_ds = xr.Dataset()

# Set global attributes
dom_ds.attrs['title'] = 'VIC domain data'
dom_ds.attrs['Conventions'] = 'CF-1.6'
dom_ds.attrs['history'] = 'created by %s, %s' % (user, now)
dom_ds.attrs['user_comment'] = 'VIC domain data'
dom_ds.attrs['source'] = 'generated from VIC North American 1/16 deg. model parameters, see Livneh et al. (2015) for more information'

# since we have it, put the elevation in the domain file
dom_ds['elev'] = dem['Band1']
dom_ds['elev'].attrs['long_name'] = 'gridcell_elevation'
dom_ds['elev'].attrs['units'] = 'm'

# Get the mask variable
dom_ds['mask'] = dem['Band1'].notnull().astype(int)
dom_ds['mask'].attrs['long_name'] = 'domain mask'
dom_ds['mask'].attrs['comment'] = '0 indicates cell is not active'

# For now, the frac variable is going to be just like the mask
dom_ds['frac'] = dom_ds['mask'].astype(float)
dom_ds['frac'].attrs['long_name'] = 'fraction of grid cell that is active'
dom_ds['frac'].attrs['units'] = '1'

# Save the output domain to a temporary file
dom_ds.to_netcdf('temp.nc')
dom_ds.close()

# This shell command uses cdo to calculate the grid cell area
!cdo -O gridarea temp.nc area.nc
!rm temp.nc

# This step extracts the area from the temporary area.nc file
area = xr.open_dataset('area.nc')['cell_area']
dom_ds['area'] = area

# Write the final domain file
dom_ds.to_netcdf(dom_file)
dom_ds.close()

# Document the domain and plot
print(dom_ds)
dom_ds['mask'].plot()
```

# Livneh Parameters

VIC 5 uses the same parameters as VIC 4. The following steps will read/parse the ASCII-formatted parameter files and construct the netCDF-formatted parameter file. We'll use the domain file constructed in the previous step to help define the spatial grid.

Steps: 1. Read the soil/snow/veg/veglib files 2.
Read the target grid (domain file) 3. Map the parameters to the spatial grid defined by the domain file 4. Write the parameters to a netCDF file. ``` soil_file = os.path.join(ldpath, 'vic.nldas.mexico.soil.txt') snow_file = os.path.join(ldpath, 'vic.nldas.mexico.snow.txt.L13') veg_file = os.path.join(ldpath, 'vic.nldas.mexico.veg.txt') vegl_file = os.path.join(ldpath, 'LDAS_veg_lib') out_file = os.path.join(opath, 'livneh_nldas.mexico_vic_5.0.0_parameters.nc') # Set options that define the shape/type of parameters cols = gp.Cols(nlayers=3, snow_bands=5, organic_fract=False, spatial_frost=False, spatial_snow=False, july_tavg_supplied=False, veglib_fcan=False, veglib_photo=False) n_veg_classes = 11 vegparam_lai = True lai_src = 'FROM_VEGPARAM' # ----------------------------------------------------------------- # # Read the soil parameters soil_dict = gp.soil(soil_file, c=gp.Cols(nlayers=3)) # Read the snow parameters snow_dict = gp.snow(snow_file, soil_dict, c=cols) # Read the veg parameter file veg_dict = gp.veg(veg_file, soil_dict, vegparam_lai=vegparam_lai, lai_src=lai_src, veg_classes=n_veg_classes) # Read the veg library file veg_lib, lib_bare_idx = gp.veg_class(vegl_file, c=cols) # Determine the grid shape target_grid, target_attrs = gp.read_netcdf(dom_file) for old, new in [('lon', 'xc'), ('lat', 'yc')]: target_grid[new] = target_grid.pop(old) target_attrs[new] = target_attrs.pop(old) # Grid all the parameters grid_dict = gp.grid_params(soil_dict, target_grid, version_in='4.1.2.c', vegparam_lai=vegparam_lai, lib_bare_idx=lib_bare_idx, lai_src=lai_src, veg_dict=veg_dict, veglib_dict=veg_lib, snow_dict=snow_dict, lake_dict=None) # Write a netCDF file with all the parameters gp.write_netcdf(out_file, target_attrs, target_grid=target_grid, vegparam_lai=vegparam_lai, lai_src=lai_src, soil_grid=grid_dict['soil_dict'], snow_grid=grid_dict['snow_dict'], veg_grid=grid_dict['veg_dict']) ``` # Global 1/2 deg. Parameters For the global 1/2 deg. 
parameters, we will follow the same steps as for the Livneh case with one exception. We don't have a domain file this time so we'll use `tonic`'s `calc_grid` function to make one for us. ``` soil_file = os.path.join(gdpath, 'global_soil_param_new') snow_file = os.path.join(gdpath, 'global_snowbands_new') veg_file = os.path.join(gdpath, 'global_veg_param_new') vegl_file = os.path.join(gdpath, 'world_veg_lib.txt') out_file = os.path.join(gdpath, 'global_0.5deg.vic_5.0.0_parameters.nc') # Set options that define the shape/type of parameters cols = gp.Cols(nlayers=3, snow_bands=5, organic_fract=False, spatial_frost=False, spatial_snow=False, july_tavg_supplied=False, veglib_fcan=False, veglib_photo=False) n_veg_classes = 11 root_zones = 2 vegparam_lai = True lai_src = 'FROM_VEGPARAM' # ----------------------------------------------------------------- # # Read the soil parameters soil_dict = gp.soil(soil_file, c=cols) # Read the snow parameters snow_dict = gp.snow(snow_file, soil_dict, c=cols) # Read the veg parameter file veg_dict = gp.veg(veg_file, soil_dict, vegparam_lai=vegparam_lai, lai_src=lai_src, veg_classes=n_veg_classes, max_roots=root_zones) # Read the veg library file veg_lib, lib_bare_idx = gp.veg_class(vegl_file, c=cols) # Determine the grid shape target_grid, target_attrs = gp.calc_grid(soil_dict['lats'], soil_dict['lons']) # Grid all the parameters grid_dict = gp.grid_params(soil_dict, target_grid, version_in='4', vegparam_lai=vegparam_lai, lai_src=lai_src, lib_bare_idx=lib_bare_idx, veg_dict=veg_dict, veglib_dict=veg_lib, snow_dict=snow_dict, lake_dict=None) # Write a netCDF file with all the parameters gp.write_netcdf(out_file, target_attrs, target_grid=target_grid, vegparam_lai=vegparam_lai, lai_src=lai_src, soil_grid=grid_dict['soil_dict'], snow_grid=grid_dict['snow_dict'], veg_grid=grid_dict['veg_dict']) ``` # Global Domain File Since the global soil parameters didn't come with a domain file that we could use, we'll construct one using the output 
from `tonic`'s `calc_grid` function. ``` dom_ds = xr.Dataset() # Set global attributes dom_ds.attrs['title'] = 'VIC domain data' dom_ds.attrs['Conventions'] = 'CF-1.6' dom_ds.attrs['history'] = 'created by %s, %s' % (user, now) dom_ds.attrs['user_comment'] = 'VIC domain data' dom_ds.attrs['source'] = 'generated from VIC Global 0.5 deg. model parameters, see Nijssen et al. (2001) for more information' dom_file = os.path.join(opath, 'domain.vic.global0.5deg.%s.nc' % now.strftime('%Y%m%d')) # Get the mask variable dom_ds['mask'] = xr.DataArray(target_grid['mask'], coords={'lat': target_grid['yc'], 'lon': target_grid['xc']}, dims=('lat', 'lon', )) # For now, the frac variable is going to be just like the mask dom_ds['frac'] = dom_ds['mask'].astype(np.float) dom_ds['frac'].attrs['long_name'] = 'fraction of grid cell that is active' dom_ds['frac'].attrs['units'] = '1' # Set variable attributes for k, v in target_attrs.items(): if k == 'xc': k = 'lon' elif k == 'yc': k = 'lat' dom_ds[k].attrs = v # Write temporary file for gridarea calculation dom_ds.to_netcdf('temp.nc') # This step calculates the grid cell area !cdo -O gridarea temp.nc area.nc !rm temp.nc # Extract the area variable area = xr.open_dataset('area.nc').load()['cell_area'] dom_ds['area'] = area # write the domain file dom_ds.to_netcdf(dom_file) dom_ds.close() # document and plot the domain print(dom_ds) dom_ds.mask.plot() !rm area.nc ``` # Maurer 1/8 deg. Parameters Finally, we'll repeat the same steps for the Maurer 1/8 deg. parameters. 
```
soil_file = os.path.join(mdpath, 'soil', 'us_all.soil.wsne')
snow_file = os.path.join(mdpath, 'snow', 'us_all.snowbands.wsne')
veg_file = os.path.join(mdpath, 'veg', 'us_all.veg.wsne')
vegl_file = os.path.join(ldpath, 'LDAS_veg_lib')  # from livneh
out_file = os.path.join(mdpath, 'nldas_0.125deg.vic_5.0.0_parameters.nc')

cols = gp.Cols(nlayers=3,
               snow_bands=5,
               organic_fract=False,
               spatial_frost=False,
               spatial_snow=False,
               july_tavg_supplied=False,
               veglib_fcan=False,
               veglib_photo=False)
n_veg_classes = 11
root_zones = 2
vegparam_lai = True
lai_src = 'FROM_VEGPARAM'

# -----------------------------------------------------------------
# Read the soil parameters
soil_dict = gp.soil(soil_file, c=cols)

# Read the snow parameters
snow_dict = gp.snow(snow_file, soil_dict, c=cols)

# Read the veg parameter file
veg_dict = gp.veg(veg_file, soil_dict,
                  vegparam_lai=vegparam_lai, lai_src=lai_src,
                  veg_classes=n_veg_classes, max_roots=root_zones)

# Read the veg library file
veg_lib, lib_bare_idx = gp.veg_class(vegl_file, c=cols)

# Determine the grid shape
target_grid, target_attrs = gp.calc_grid(soil_dict['lats'], soil_dict['lons'])

# Grid all the parameters
grid_dict = gp.grid_params(soil_dict, target_grid, version_in='4',
                           vegparam_lai=vegparam_lai, lai_src=lai_src,
                           lib_bare_idx=lib_bare_idx,
                           veg_dict=veg_dict, veglib_dict=veg_lib,
                           snow_dict=snow_dict, lake_dict=None)

# Write a netCDF file with all the parameters
gp.write_netcdf(out_file, target_attrs, target_grid=target_grid,
                vegparam_lai=vegparam_lai, lai_src=lai_src,
                soil_grid=grid_dict['soil_dict'],
                snow_grid=grid_dict['snow_dict'],
                veg_grid=grid_dict['veg_dict'])
```

# 1/8 deg CONUS domain file

```
dom_ds = xr.Dataset()

# Set global attributes
dom_ds.attrs['title'] = 'VIC domain data'
dom_ds.attrs['Conventions'] = 'CF-1.6'
dom_ds.attrs['history'] = 'created by %s, %s' % (user, now)
dom_ds.attrs['user_comment'] = 'VIC domain data'
dom_ds.attrs['source'] = 'generated from VIC CONUS 1/8 deg. model parameters, see Maurer et al. (2002) for more information'

dom_file = os.path.join(opath, 'domain.vic.conus0.125deg.%s.nc' % now.strftime('%Y%m%d'))

# Get the mask variable
dom_ds['mask'] = xr.DataArray(target_grid['mask'],
                              coords={'lat': target_grid['yc'],
                                      'lon': target_grid['xc']},
                              dims=('lat', 'lon', ))

# For now, the frac variable is going to be just like the mask
dom_ds['frac'] = dom_ds['mask'].astype(float)  # np.float is deprecated/removed in recent NumPy
dom_ds['frac'].attrs['long_name'] = 'fraction of grid cell that is active'
dom_ds['frac'].attrs['units'] = '1'

# Set variable attributes
for k, v in target_attrs.items():
    if k == 'xc':
        k = 'lon'
    elif k == 'yc':
        k = 'lat'
    dom_ds[k].attrs = v

# Write temporary file for gridarea calculation
dom_ds.to_netcdf('temp.nc')

# This step calculates the grid cell area
!cdo -O gridarea temp.nc area.nc
!rm temp.nc

# Extract the area variable
area = xr.open_dataset('area.nc').load()['cell_area']
dom_ds['area'] = area

# Write the domain file
dom_ds.to_netcdf(dom_file)
dom_ds.close()

# Document and plot the domain
print(dom_ds)
dom_ds.mask.plot()
plt.close('all')
```
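As a cross-check on the `cdo gridarea` step used for both domain files, the cell areas of a regular lat/lon grid can be computed directly. This is a minimal sketch assuming a spherical Earth (the radius value and the `cell_areas` helper are illustrative, not part of `tonic`, VIC, or cdo):

```python
import numpy as np

def cell_areas(lats, lons, radius=6.371e6):
    """Approximate areas (m^2) of regular lat/lon grid cells on a sphere.

    For a cell centered at latitude phi with spacings dlat, dlon:
    area = R^2 * dlon * (sin(phi + dlat/2) - sin(phi - dlat/2))
    """
    dlat = np.deg2rad(abs(lats[1] - lats[0]))
    dlon = np.deg2rad(abs(lons[1] - lons[0]))
    phi = np.deg2rad(np.asarray(lats))
    band = radius ** 2 * dlon * (np.sin(phi + dlat / 2) - np.sin(phi - dlat / 2))
    # every cell in a latitude band has the same area
    return np.repeat(band[:, None], len(lons), axis=1)

# 0.5 deg global grid, cell centers
areas = cell_areas(np.arange(-89.75, 90, 0.5), np.arange(-179.75, 180, 0.5))
```

Summing `areas` over the globe recovers the surface area of the sphere, which makes a handy sanity check against the `cell_area` variable that cdo writes.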
### Predict lung masks and Covid vs non-Covid classification for new patient CXR using Module 1 trained on the V7 lung segmentation database, and Module 2 trained on the HFHS dataset

```
# In[1]:

import os, sys, shutil
from os import listdir
from os.path import isfile, join
import random
import numpy as np
import cv2
import pandas as pd
import json
import datetime
import csv, h5py
import pydicom
from pydicom.data import get_testdata_files

# In[2]:

from MODULES_1.Generators import train_generator_1, val_generator_1, test_generator_1
from MODULES_1.Generators import train_generator_2, val_generator_2, test_generator_2
from MODULES_1.Networks import ResNet_Atrous, Dense_ResNet_Atrous
from MODULES_1.Losses import dice_coeff
from MODULES_1.Losses import tani_loss, tani_coeff, weighted_tani_coeff
from MODULES_1.Losses import weighted_tani_loss, other_metrics
from MODULES_1.Constants import _Params, _Paths
from MODULES_1.Utils import get_class_threshold, get_model_memory_usage

import tensorflow as tf
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model, model_from_json, load_model, clone_model
from tensorflow.python.client import device_lib

import matplotlib.pyplot as plt

# automatic reload of external definitions if changed during testing
%load_ext autoreload
%autoreload 2

# In[3]:

# ### CONSTANTS

HEIGHT, WIDTH, CHANNELS, IMG_COLOR_MODE, MSK_COLOR_MODE, NUM_CLASS, \
KS1, KS2, KS3, DL1, DL2, DL3, NF, NFL, NR1, NR2, DIL_MODE, W_MODE, LS, \
TRAIN_SIZE, VAL_SIZE, TEST_SIZE, DR1, DR2, CLASSES, IMG_CLASS = _Params()

TRAIN_IMG_PATH, TRAIN_MSK_PATH, TRAIN_MSK_CLASS, VAL_IMG_PATH, \
VAL_MSK_PATH, VAL_MSK_CLASS, TEST_IMG_PATH, TEST_MSK_PATH, TEST_MSK_CLASS = _Paths()

# In[4]:

# ### LOAD LUNG SEGMENTATION MODEL FROM PREVIOUS RUN AND COMPILE

model_selection = 'model_' + str(NF) + 'F_' + str(NR1) + 'R1_' + str(NR2) + 'R2'
model_number = '2020-10-16_21_26'  # model number from an earlier run
filepath = 'models/' + model_selection + '_' + model_number + '_all' + '.h5'

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = load_model(filepath, compile=False)
    model.compile(optimizer=Adam(), loss=weighted_tani_loss, metrics=[tani_coeff])

print(model_selection, model_number)

# In[5]

# ### PREDICT MASKS - PLOTS

print(CLASSES)

# Source directory containing COVID patients lung CXR's
dcm_source_img_path = 'new_patient_cxr/image_dcm/'

# Source/target directory containing COVID patients lung DCM CXR's converted to PNG
source_resized_img_path = 'new_patient_cxr/image_resized_equalized_from_dcm/'

# Target directories containing masks predicted for the CXR images in the source directory:
# For V7 database
target_resized_msk_path_binary = 'new_patient_cxr/mask_binary/'
target_resized_msk_path_float = 'new_patient_cxr/mask_float/'
target_img_mask_path = 'new_patient_cxr/image_mask/'

# Remove existing target directories and all their content if already present
pwd = os.getcwd()
root_dir = 'new_patient_cxr'
if root_dir == pwd:
    for root, dirs, files in os.walk(source_resized_img_path):
        for f in files:
            os.unlink(os.path.join(root, f))
        for d in dirs:
            shutil.rmtree(os.path.join(root, d))
    for root, dirs, files in os.walk(target_resized_msk_path_binary):
        for f in files:
            os.unlink(os.path.join(root, f))
        for d in dirs:
            shutil.rmtree(os.path.join(root, d))
    for root, dirs, files in os.walk(target_resized_msk_path_float):
        for f in files:
            os.unlink(os.path.join(root, f))
        for d in dirs:
            shutil.rmtree(os.path.join(root, d))
    for root, dirs, files in os.walk(target_img_mask_path):
        for f in files:
            os.unlink(os.path.join(root, f))
        for d in dirs:
            shutil.rmtree(os.path.join(root, d))

# Create directory that will store the DCM derived png CXR
if not os.path.exists(source_resized_img_path):
    os.makedirs(source_resized_img_path)  # fixed: was the undefined name `source_img_path`

# Create directories that will store the masks on which to train the classification network
if not os.path.exists(target_resized_msk_path_binary):
    os.makedirs(target_resized_msk_path_binary)
if not os.path.exists(target_resized_msk_path_float):
    os.makedirs(target_resized_msk_path_float)
if not os.path.exists(target_img_mask_path):
    os.makedirs(target_img_mask_path)

# get CXR DCM image names from source directory and convert to png
PNG = True
print(f'DCM source: {dcm_source_img_path}')
print(f'PNG source: {source_resized_img_path}')
source_img_names = [f for f in listdir(dcm_source_img_path) if isfile(join(dcm_source_img_path, f))]
for name in source_img_names:
    print(f'DCM image: {name}')
    filename = dcm_source_img_path + name
    dataset = pydicom.dcmread(filename)

    # if 'PatientID' in dataset: PatientID = dataset.PatientID
    # else: PatientID = 'NaN'
    # if 'PatientAge' in dataset: PatientAge = dataset.PatientAge
    # else: PatientAge = 'NaN'
    # if 'PatientSex' in dataset: PatientSex = dataset.PatientSex
    # else: PatientSex = 'NaN'
    # if 'AccessionNumber' in dataset: AccessionNumber = dataset.AccessionNumber
    # else: AccessionNumber = 'NaN'
    # if 'SeriesNumber' in dataset: SeriesNumber = dataset.SeriesNumber
    # else: SeriesNumber = 'NaN'
    # if 'InstanceNumber' in dataset: InstanceNumber = dataset.InstanceNumber
    # else: InstanceNumber = 'NaN'
    # if 'Modality' in dataset: Modality = dataset.Modality
    # else: Modality = 'NaN'
    # if 'SeriesDescription' in dataset: SeriesDescription = dataset.SeriesDescription
    # else: SeriesDescription = 'NaN'
    # if 'BodyPartExamined' in dataset: BodyPartExamined = dataset.BodyPartExamined
    # else: BodyPartExamined = 'NaN'
    # if 'PatientOrientation' in dataset: PatientOrientation = dataset.PatientOrientation
    # else: PatientOrientation = 'NaN'
    # if 'Laterality' in dataset:
    #     if 'ImageLaterality' in dataset: Laterality = dataset.ImageLaterality
    #     else: Laterality = dataset.Laterality
    # else: Laterality = 'NaN'
    # if 'StudyTime' in dataset: StudyTime = dataset.StudyTime
    # else: StudyTime = 'NaN'
    # if 'StudyDate' in dataset: StudyDate = dataset.StudyDate
    # else: StudyDate = 'NaN'
    # if 'CassetteOrientation' in dataset: CassetteOrientation = dataset.CassetteOrientation
    # else: CassetteOrientation = 'NaN'
    # if 'AcquisitionNumber' in dataset: AcquisitionNumber = dataset.AcquisitionNumber
    # else: AcquisitionNumber = 'NaN'
    # if 'WindowCenter' in dataset: WindowCenter = dataset.WindowCenter
    # else: WindowCenter = 'NaN'
    # if 'WindowWidth' in dataset: WindowWidth = dataset.WindowWidth
    # else: WindowWidth = 'NaN'
    # if 'ExposureIndex' in dataset: ExposureIndex = dataset.ExposureIndex
    # else: ExposureIndex = 'NaN'
    # if 'RelativeXRayExposure' in dataset: RelativeXRayExposure = dataset.RelativeXRayExposure
    # else: RelativeXRayExposure = 'NaN'
    # if 'TargetExposureIndex' in dataset: TargetExposureIndex = dataset.TargetExposureIndex
    # else: TargetExposureIndex = 'NaN'
    # if 'DeviationIndex' in dataset: DeviationIndex = dataset.DeviationIndex
    # else: DeviationIndex = 'NaN'
    # if 'Sensitivity' in dataset: Sensitivity = dataset.Sensitivity
    # else: Sensitivity = 'NaN'
    # if 'PixelSpacing' in dataset: PixelSpacing = dataset.PixelSpacing
    # else: PixelSpacing = 'NaN'
    # if 'Rows' in dataset: Rows = dataset.Rows
    # else: Rows = 'NaN'
    # if 'Columns' in dataset: Columns = dataset.Columns
    # else: Columns = 'NaN'
    # if 'PhotometricInterpretation' in dataset: PhotometricInterpretation = dataset.PhotometricInterpretation
    # else: PhotometricInterpretation = 'NaN'

    # Write PNG image
    img = dataset.pixel_array.astype(float)
    minval = np.min(img)
    maxval = np.max(img)
    scaled_img = (img - minval)/(maxval - minval) * 255.0
    WIDTH = 340
    HEIGHT = 300
    # input_img = cv2.imread(source_img_path + name, cv2.IMREAD_GRAYSCALE)
    resized_img = cv2.resize(scaled_img, (WIDTH, HEIGHT), interpolation=cv2.INTER_LINEAR)
    resized_img_8bit = cv2.convertScaleAbs(resized_img, alpha=1.0)
    equalized_img = cv2.equalizeHist(resized_img_8bit)
    if PNG == False:
        new_name = name.replace('.dcm', '.jpg')
    else:
        new_name = name.replace('.dcm', '.png')
    # cv2.imwrite(os.path.join(source_resized_img_path, new_name), resized_img_8bit)
    cv2.imwrite(os.path.join(source_resized_img_path, new_name), equalized_img)
    print(f'PNG image: {new_name}')

print(source_resized_img_path)

# get CXR image names from source directory
source_img_names = [f for f in listdir(source_resized_img_path) if isfile(join(source_resized_img_path, f))]
for name in source_img_names:
    print(f'Image name: {name}')
    if name == '.DS_Store':
        continue
    input_img = cv2.imread(source_resized_img_path + name, cv2.IMREAD_GRAYSCALE)
    scaled_img = input_img/255
    scaled_img = np.expand_dims(scaled_img, axis=[0, -1])
    mask = model(scaled_img).numpy()
    mask_float = np.squeeze(mask[0, :, :, 0])
    mask_binary = (mask_float > 0.5)*1
    mask_float *= 255
    mask_binary *= 255
    cv2.imwrite(target_resized_msk_path_float + name, mask_float)
    cv2.imwrite(target_resized_msk_path_binary + name, mask_binary)
    fig = plt.figure(figsize=(20, 10))
    fig.subplots_adjust(hspace=0.4, wspace=0.2)
    ax = fig.add_subplot(1, 2, 1)
    ax.imshow(np.squeeze(input_img), cmap="gray")
    ax = fig.add_subplot(1, 2, 2)
    ax.imshow(np.squeeze(mask_binary), cmap="gray")
    plt.savefig(target_img_mask_path + name + '_img_and_pred_mask.png')
    plt.close()

print(input_img.shape, mask.shape, mask_float.shape, mask_binary.shape)

# In[6]

# ### PLOT CONTINUOUS MASK

fig = plt.figure(figsize=(20, 10))
fig.subplots_adjust(hspace=0.4, wspace=0.2)
ax = fig.add_subplot(1, 2, 1)
ax.imshow(np.squeeze(input_img), cmap="gray")
ax = fig.add_subplot(1, 2, 2)
ax.imshow(np.squeeze(mask_float[:, :]), cmap="gray")

# In[7]

# ### PLOT BINARY MASK

fig = plt.figure(figsize=(20, 10))
fig.subplots_adjust(hspace=0.4, wspace=0.2)
ax = fig.add_subplot(1, 2, 1)
ax.imshow(np.squeeze(input_img), cmap="gray")
ax = fig.add_subplot(1, 2, 2)
ax.imshow(np.squeeze(mask_binary[:, :]), cmap="gray")

# PREDICTION and HEAT MAP

# In[8]:

from MODULES_2.Generators import get_generator, DataGenerator
from MODULES_2.Networks import WaveletScatteringTransform, ResNet
from MODULES_2.Networks import SelectChannel, TransposeChannel, ScaleByInput, Threshold
from MODULES_2.Losses import other_metrics_binary_class
from MODULES_2.Constants import _Params, _Paths
from MODULES_2.Utils import get_class_threshold, standardize, commonelem_set
from MODULES_2.Utils import _HEAT_MAP_DIFF
from MODULES_2.Utils import get_roc_curve, compute_gradcam, get_roc_curve_sequence, plot_confusion_matrix
from MODULES_2.Utils import get_mean_roc_curve_sequence, get_multi_roc_curve_sequence

from tensorflow.keras.layers import Input, Average, Lambda, Multiply, Add, GlobalAveragePooling2D, Activation
from tensorflow.keras import backend as K
from tensorflow.keras.preprocessing import image
from tensorflow.keras.utils import plot_model
from tensorflow.compat.v1.logging import INFO, set_verbosity

# load_ext autoreload
%reload_ext autoreload
%autoreload 2

# In[9]:

# ### READ NEW PATIENT DATA

new_patient_df = pd.read_csv("new_patient.csv", index_col=0)
n_new_patient = len(new_patient_df)
print(n_new_patient)

# name_df = new_patient_df[new_patient_df['Image'].str.contains(name)]
# name_df.to_csv('new_patient.csv', encoding='utf-8', header='true', index=True)

# In[10]:

# ### RECOVER STANDARDIZATION PARAMETERS FROM MODULE 1

with open('standardization_parameters_V7.json') as json_file:
    standardization_parameters = json.load(json_file)

train_image_mean = standardization_parameters['mean']
train_image_std = standardization_parameters['std']
print(train_image_mean, train_image_std)

# In[11]:

# ### PREPARE STANDARDIZED IMAGES as H5 FILES

# Source directories for images and masks
IMAGE_DIR = "new_patient_cxr/image_resized_equalized_from_dcm/"
MASK_DIR = "new_patient_cxr/mask_float/"

# Target directory for H5 file containing both images and masks
H5_IMAGE_DIR = "new_patient_cxr/H5/"

print(IMAGE_DIR)
print(H5_IMAGE_DIR)

pwd = os.getcwd()
if not os.path.isdir(H5_IMAGE_DIR):
    os.mkdir(H5_IMAGE_DIR)

# Loop over the set of images to predict
for i in range(n_new_patient):
    print(f'{i},index={new_patient_df.index[i]}')
    valid_image_name, valid_pos_label, valid_neg_label, valid_weight = \
        new_patient_df.iloc[i]['Image'], \
        new_patient_df.iloc[i]['Positive'], \
        new_patient_df.iloc[i]['Negative'], \
        new_patient_df.iloc[i]['ClassWeight']

    valid_image = cv2.imread(IMAGE_DIR + valid_image_name, cv2.IMREAD_GRAYSCALE)
    # Resize or equalize if this was not already done during datasets preparation
    # valid_image = cv2.resize(valid_image, (WIDTH, HEIGHT), interpolation=cv2.INTER_LINEAR)
    # valid_image = cv2.equalizeHist(valid_image)
    valid_image = np.expand_dims(valid_image, axis=-1)

    # External learned mask of segmented lungs
    valid_learned_mask = cv2.imread(MASK_DIR + valid_image_name, cv2.IMREAD_GRAYSCALE).astype('float64')
    valid_learned_mask /= 255
    valid_learned_mask = np.expand_dims(valid_learned_mask, axis=-1)

    # Internal thresholded mask
    low_ind = valid_image < 6
    high_ind = valid_image > 225
    valid_thresholded_mask = np.ones_like(valid_image)
    valid_thresholded_mask[low_ind] = 0
    valid_thresholded_mask[high_ind] = 0

    # Combine the two masks
    valid_mask = np.multiply(valid_thresholded_mask, valid_learned_mask)

    # Standardization with training mean and std
    valid_image = valid_image.astype(np.float64)
    valid_image -= train_image_mean
    valid_image /= train_image_std

    with h5py.File(H5_IMAGE_DIR + valid_image_name[:-4] + '.h5', 'w') as hf:
        # Images
        Xset = hf.create_dataset(name='X', data=valid_image,
                                 shape=(HEIGHT, WIDTH, 1), maxshape=(HEIGHT, WIDTH, 1),
                                 compression="gzip", compression_opts=9)
        # Masks
        Mset = hf.create_dataset(name='M', data=valid_mask,
                                 shape=(HEIGHT, WIDTH, 1), maxshape=(HEIGHT, WIDTH, 1),
                                 compression="gzip", compression_opts=9)
        # Labels
        yset = hf.create_dataset(name='y', data=[valid_pos_label, valid_neg_label])
        # Class weights
        wset = hf.create_dataset(name='w', data=valid_weight)

# In[12]:

# ### Generate json dictionary for the new patient names

new_patient_h5_name_list = []
for i in range(n_new_patient):
    # print(f'{i},index={new_patient_df.index[i]}')
    new_patient_image_name = new_patient_df.iloc[i]['Image']
    new_patient_h5_name_list.append(new_patient_image_name[:-4] + '.h5')  # fixed: was `valid_image_name`, which repeats the last image

new_patient_h5_dict = {"new_patient": new_patient_h5_name_list}  # data set

with open(H5_IMAGE_DIR + 'new_patient_dataset.json', 'w') as filehandle:
    json.dump(new_patient_h5_dict, filehandle)

print(new_patient_h5_dict["new_patient"])

# In[13]:

# ### DEVICES

# physical_devices_GPU = tf.config.list_physical_devices('GPU')
# print("Num GPUs:", len(physical_devices_GPU))
# physical_devices_CPU = tf.config.list_physical_devices('CPU')
# print("Num CPUs:", len(physical_devices_CPU))
# local_device_protos = device_lib.list_local_devices()
# print(local_device_protos)

# In[14]:

# ### MODEL AND RUN SELECTION

HEIGHT, WIDTH, CHANNELS, IMG_COLOR_MODE, MSK_COLOR_MODE, NUM_CLASS, \
KS1, KS2, KS3, DL1, DL2, DL3, NF, NFL, NR1, NR2, DIL_MODE, W_MODE, LS, \
SHIFT_LIMIT, SCALE_LIMIT, ROTATE_LIMIT, ASPECT_LIMIT, U_AUG, \
TRAIN_SIZE, VAL_SIZE, DR1, DR2, CLASSES, IMG_CLASS, MSK_FLOAT, MSK_THRESHOLD, \
MRA, MRALEVEL, MRACHANNELS, WAVELET, WAVEMODE, WST, WST_J, WST_L, WST_FIRST_IMG, \
SCALE_BY_INPUT, SCALE_THRESHOLD = _Params()

TRAIN_IMG_PATH, TRAIN_MSK_PATH, TRAIN_MSK_CLASS, VAL_IMG_PATH, \
VAL_MSK_PATH, VAL_MSK_CLASS = _Paths()

# In[15]:

# ### Additional or modified network or fit parameters

NEW_RUN = False
NEW_MODEL_NUMBER = False
UPSAMPLE = False
UPSAMPLE_KERNEL = (2, 2)
KS1 = (3, 3)
KS2 = (3, 3)
KS3 = (3, 3)
WSTCHANNELS = 50
RESNET_DIM_1 = 75
RESNET_DIM_2 = 85
SCALE_BY_INPUT = False
SCALE_THRESHOLD = 0.6
SCALE_TO_SPAN = False
SPAN = 1.0
ATT = 'mh'
HEAD_SIZE = 64
NUM_HEAD = 2
VALUE_ATT = True
BLUR_ATT = False
BLUR_ATT_STD = 0.1
BLUR_SBI = False
BLUR_SBI_STD = 0.1
NR1 = 2
PREP = True
STEM = True
KFOLD = 'Simple'  # 'Simple','Strati','Group'
VAL_SIZE = 15
OPTIMIZER = Adam(learning_rate=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=True)

# In[16]:

model_selection = 'model_' + str(NF) + 'F_' + str(NR1) + 'R1_' + str(NR2) + 'R2'

if NEW_MODEL_NUMBER:
    model_number = str(datetime.datetime.now())[0:10] + '_' + \
                   str(datetime.datetime.now())[11:13] + '_' + \
                   str(datetime.datetime.now())[14:16]
else:
    model_number = '2021-02-16_11_28'

print(f"\n HEIGHT={HEIGHT}\n WIDTH={WIDTH}\n CHANNELS={CHANNELS}\n NUM_CLASS={NUM_CLASS}")
print(f"\n KS1={KS1}\n KS2={KS2}\n KS3={KS3}\n DL1={DL1}\n DL2={DL2}\n DL3={DL3}\n NR1={NR1}\n NF={NF}")
print(f"\n OPT_LR={K.eval(OPTIMIZER.lr)}\n OPT_B1={K.eval(OPTIMIZER.beta_1)}\n OPT_B2={K.eval(OPTIMIZER.beta_2)}")
print(f" OPT_EPS={K.eval(OPTIMIZER.epsilon)}\n OPT_AMS={K.eval(OPTIMIZER.amsgrad)}")
print(f"\n PREP={PREP}\n STEM={STEM}")
print(f"\n WST={WST}\n WST_J={WST_J}\n WST_L={WST_L}\n WST_FIRST_IMG={WST_FIRST_IMG}")
print(f"\n RESNET_DIM_1={RESNET_DIM_1}\n RESNET_DIM_2={RESNET_DIM_2}")
print(f"\n MRACHANNELS={MRACHANNELS}\n MSK_FLOAT={MSK_FLOAT}\n MSK_THRESHOLD={MSK_THRESHOLD}")
print(f"\n TRAIN_SIZE={TRAIN_SIZE}\n VAL_SIZE={VAL_SIZE}")
print(f"\n SCALE_BY_INPUT={SCALE_BY_INPUT}\n SCALE_THRESHOLD={SCALE_THRESHOLD}")
print(f"\n SCALE_TO_SPAN={SCALE_TO_SPAN}\n SPAN={SPAN}")
print(f"\n ATT={ATT}\n HEAD_SIZE={HEAD_SIZE}\n NUM_HEAD={NUM_HEAD}\n VALUE_ATT={VALUE_ATT}")
print(f"\n BLUR_ATT={BLUR_ATT}\n BLUR_ATT_STD={BLUR_ATT_STD}\n BLUR_SBI={BLUR_SBI}\n BLUR_SBI_STD={BLUR_SBI_STD}")
print(f"\n KFOLD={KFOLD}")
print(f"\n NEW_RUN={NEW_RUN}\n NEW_MODEL_NUMBER={NEW_MODEL_NUMBER}")
print(f"\n MODEL={model_selection}\n MODEL_NUMBER={model_number}")

# In[17]:

# ### ENSEMBLE MODEL

K.clear_session()

if SCALE_BY_INPUT:
    loi = 'multiply_2'
else:
    loi = 'multiply_1'

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():

    # MODELS
    wst_model = WaveletScatteringTransform(input_shape=(HEIGHT, WIDTH, CHANNELS),
                                           upsample=UPSAMPLE,
                                           upsample_kernel=UPSAMPLE_KERNEL)

    resnet_model = ResNet(input_shape_1=(RESNET_DIM_1, RESNET_DIM_2, WSTCHANNELS),
                          input_shape_2=(RESNET_DIM_1, RESNET_DIM_2, 1),
                          num_class=NUM_CLASS,
                          ks1=KS1, ks2=KS2, ks3=KS3,
                          dl1=DL1, dl2=DL2, dl3=DL3,
                          filters=NF, resblock1=NR1,
                          r_filters=NFL, resblock2=NR2,
                          dil_mode=DIL_MODE,
                          sp_dropout=DR1, re_dropout=DR2,
                          prep=PREP, stem=STEM,
                          mask_float=MSK_FLOAT, mask_threshold=MSK_THRESHOLD,
                          att=ATT, head_size=HEAD_SIZE, num_heads=NUM_HEAD, value_att=VALUE_ATT,
                          scale_by_input=SCALE_BY_INPUT, scale_threshold=SCALE_THRESHOLD,
                          scale_to_span=SCALE_TO_SPAN, span=SPAN,
                          blur_sbi=BLUR_SBI, blur_sbi_std=BLUR_SBI_STD,
                          return_seq=True)

    # recover individual resnet models
    resnet_model_0 = clone_model(resnet_model)
    resnet_model_0.load_weights('models/' + model_selection + '_' + model_number + '_M0' + '_resnet_weights.h5')
    for layer in resnet_model_0.layers:
        layer.trainable = False
    resnet_model__0 = Model(inputs=[resnet_model_0.inputs],
                            outputs=[resnet_model_0.get_layer(loi).output])

    resnet_model_1 = clone_model(resnet_model)
    resnet_model_1.load_weights('models/' + model_selection + '_' + model_number + '_M1' + '_resnet_weights.h5')
    for layer in resnet_model_1.layers:
        layer.trainable = False
    resnet_model__1 = Model(inputs=[resnet_model_1.inputs],
                            outputs=[resnet_model_1.get_layer(loi).output])

    resnet_model_2 = clone_model(resnet_model)
    resnet_model_2.load_weights('models/' + model_selection + '_' + model_number + '_M2' + '_resnet_weights.h5')
    for layer in resnet_model_2.layers:
        layer.trainable = False
    resnet_model__2 = Model(inputs=[resnet_model_2.inputs],
                            outputs=[resnet_model_2.get_layer(loi).output])

    resnet_model_3 = clone_model(resnet_model)
    resnet_model_3.load_weights('models/' + model_selection + '_' + model_number + '_M3' + '_resnet_weights.h5')
    for layer in resnet_model_3.layers:
        layer.trainable = False
    resnet_model__3 = Model(inputs=[resnet_model_3.inputs],
                            outputs=[resnet_model_3.get_layer(loi).output])

    resnet_model_4 = clone_model(resnet_model)
    resnet_model_4.load_weights('models/' + model_selection + '_' + model_number + '_M4' + '_resnet_weights.h5')
    for layer in resnet_model_4.layers:
        layer.trainable = False
    resnet_model__4 = Model(inputs=[resnet_model_4.inputs],
                            outputs=[resnet_model_4.get_layer(loi).output])

    resnet_model_5 = clone_model(resnet_model)
    resnet_model_5.load_weights('models/' + model_selection + '_' + model_number + '_M5' + '_resnet_weights.h5')
    for layer in resnet_model_5.layers:
        layer.trainable = False
    resnet_model__5 = Model(inputs=[resnet_model_5.inputs],
                            outputs=[resnet_model_5.get_layer(loi).output])

    # GRAPH 1
    wst_input_1 = Input(shape=(HEIGHT, WIDTH, CHANNELS))
    wst_input_2 = Input(shape=(HEIGHT, WIDTH, CHANNELS))
    wst_output_1 = wst_model([wst_input_1, wst_input_2])
    y0 = resnet_model__0(wst_output_1)
    y1 = resnet_model__1(wst_output_1)
    y2 = resnet_model__2(wst_output_1)
    y3 = resnet_model__3(wst_output_1)
    y4 = resnet_model__4(wst_output_1)
    y5 = resnet_model__5(wst_output_1)
    d3 = Average()([y0, y1, y2, y3, y4, y5])
    d3 = GlobalAveragePooling2D()(d3)
    resnet_output = Activation("softmax", name='softmax')(d3)
    ensemble_model = Model([wst_input_1, wst_input_2], resnet_output, name='ensemble_wst_resnet')
    ensemble_model.compile(optimizer=Adam(),
                           loss=tf.keras.losses.CategoricalCrossentropy(),
                           metrics=[tf.keras.metrics.CategoricalAccuracy()])

print(wst_model.name + ' model selected')
print(ensemble_model.name + ' model selected')

# ensemble_model.summary()
# plot_model(wst_model, show_shapes=True, show_layer_names=False,
#            to_file='saved_images/' + model_selection + '_' + model_number + '_wst_architecture.png')
# plot_model(resnet_model, show_shapes=True, show_layer_names=False,
#            to_file='saved_images/' + model_selection + '_' + model_number + '_resnet_architecture.png')
# plot_model(ensemble_model, show_shapes=True, show_layer_names=False,
#            to_file='saved_images/' + model_selection + '_' + model_number + '_ensemble_wst_resnet_architecture.png')

# In[18]:

# ### GENERATOR for 1 IMAGE at a time

datadir = H5_IMAGE_DIR
dataset = new_patient_h5_dict

valid_1_generator = DataGenerator(dataset["new_patient"], datadir,
                                  augment=False, shuffle=False, standard=False,
                                  batch_size=1,
                                  dim=(HEIGHT, WIDTH, MRACHANNELS),
                                  mask_dim=(HEIGHT, WIDTH, 1),
                                  mlDWT=False, mralevel=MRALEVEL,
                                  wave=WAVELET, wavemode=WAVEMODE, verbose=0)

print(dataset)

# In[19]:

# ### PREDICT

valid_y_true = []
valid_y_pred = []
for i in range(len(dataset["new_patient"])):
    x_m, y, w = valid_1_generator.__getitem__(i)
    valid_y_true.append(y[0].tolist())
    y_pred = ensemble_model(x_m).numpy().tolist()
    valid_y_pred.append(y_pred[0])

valid_y_true = np.array(valid_y_true)
valid_y_pred = np.array(valid_y_pred)
print(valid_y_true, valid_y_pred)

valid_pos_list = np.array(dataset["new_patient"])[valid_y_true[:, 0] == 1].tolist()
valid_neg_list = np.array(dataset["new_patient"])[valid_y_true[:, 1] == 1].tolist()
new_patient_list = np.array(dataset["new_patient"]).tolist()
print(new_patient_list, valid_pos_list, valid_neg_list)

# In[20]:

# ### HEAT MAPS

pwd = os.getcwd()
os.system('mkdir new_patient_gradcam_WST_RESNET')
gradcam_path = os.path.join(pwd, 'new_patient_gradcam_WST_RESNET/')
# os.system('mkdir large_set_gradcam_valid_WST_RESNET/negative')
# OUT_IMAGE_DIR = os.path.join(gradcam_path,'negative/')
OUT_IMAGE_DIR = gradcam_path
print(H5_IMAGE_DIR)
print(OUT_IMAGE_DIR)

if valid_y_true[:, 0] == 1:
    LABEL = "POSITIVE"
else:
    LABEL = "NEGATIVE"

FIG_SIZE = (16, 20)
_HEAT_MAP_DIFF(ensemble_model, generator=valid_1_generator, layer='average',
               labels=['Positive score', 'Negative score'], header='LABELED: ' + LABEL, figsize=FIG_SIZE,
               image_dir=H5_IMAGE_DIR, out_image_dir=OUT_IMAGE_DIR,
               img_list=new_patient_list, first_img=0, last_img=len(new_patient_list),
               img_width=WIDTH, img_height=HEIGHT, display=True)

# In[21]:

# ### RUN CXR WITH ALL MODELS IN MEMORY

# ### GENERATE LUNG MASKS

# get CXR image names from source directory
source_img_names = [f for f in listdir(source_resized_img_path) if isfile(join(source_resized_img_path, f))]
for name in source_img_names:
    print(f"CXR: {name}")
    if name == '.DS_Store':
        continue
    input_img = cv2.imread(source_resized_img_path + name, cv2.IMREAD_GRAYSCALE)
    scaled_img = input_img/255
    scaled_img = np.expand_dims(scaled_img, axis=[0, -1])
    mask = model(scaled_img).numpy()
    mask_float = np.squeeze(mask[0, :, :, 0])
    mask_binary = (mask_float > 0.5)*1
    mask_float *= 255
    mask_binary *= 255
    cv2.imwrite(target_resized_msk_path_float + name, mask_float)
    cv2.imwrite(target_resized_msk_path_binary + name, mask_binary)

# ### READ PATIENT DATA AND STANDARDIZATION PARAMETERS

new_patient_df = pd.read_csv("new_patient.csv", index_col=0)
n_new_patient = len(new_patient_df)
# print(n_new_patient)

with open('standardization_parameters_V7.json') as json_file:
    standardization_parameters = json.load(json_file)
train_image_mean = standardization_parameters['mean']
train_image_std = standardization_parameters['std']
# print(train_image_mean, train_image_std)

# ### PREPARE H5 FILES

# Loop over the set of images to predict. In this case 'valid_pos_label' and 'valid_neg_label'
# are the tentative radiologist assignments already in the dataframe.
for i in range(n_new_patient):
    # print(f'{i},index={new_patient_df.index[i]}')
    valid_image_name, valid_pos_label, valid_neg_label, valid_weight = \
        new_patient_df.iloc[i]['Image'], \
        new_patient_df.iloc[i]['Positive'], \
        new_patient_df.iloc[i]['Negative'], \
        new_patient_df.iloc[i]['ClassWeight']

    valid_image = cv2.imread(IMAGE_DIR + valid_image_name, cv2.IMREAD_GRAYSCALE)
    # Resize or equalize if this was not already done during datasets preparation
    # valid_image = cv2.resize(valid_image, (WIDTH, HEIGHT), interpolation=cv2.INTER_LINEAR)
    # valid_image = cv2.equalizeHist(valid_image)
    valid_image = np.expand_dims(valid_image, axis=-1)

    # External learned mask of segmented lungs
    valid_learned_mask = cv2.imread(MASK_DIR + valid_image_name, cv2.IMREAD_GRAYSCALE).astype('float64')
    valid_learned_mask /= 255
    valid_learned_mask = np.expand_dims(valid_learned_mask, axis=-1)

    # Internal thresholded mask
    low_ind = valid_image < 6
    high_ind = valid_image > 225
    valid_thresholded_mask = np.ones_like(valid_image)
    valid_thresholded_mask[low_ind] = 0
    valid_thresholded_mask[high_ind] = 0

    # Combine the two masks
    valid_mask = np.multiply(valid_thresholded_mask, valid_learned_mask)

    # Standardization with training mean and std
    valid_image = valid_image.astype(np.float64)
    valid_image -= train_image_mean
    valid_image /= train_image_std

    with h5py.File(H5_IMAGE_DIR + valid_image_name[:-4] + '.h5', 'w') as hf:
        # Images
        Xset = hf.create_dataset(name='X', data=valid_image,
                                 shape=(HEIGHT, WIDTH, 1), maxshape=(HEIGHT, WIDTH, 1),
                                 compression="gzip", compression_opts=9)
        # Masks
        Mset = hf.create_dataset(name='M', data=valid_mask,
                                 shape=(HEIGHT, WIDTH, 1), maxshape=(HEIGHT, WIDTH, 1),
                                 compression="gzip", compression_opts=9)
        # Labels
        yset = hf.create_dataset(name='y', data=[valid_pos_label, valid_neg_label])
        # Class weights
        wset = hf.create_dataset(name='w', data=valid_weight)

# ### GENERATE JSON DICTIONARY FOR NEW PATIENT NAMES

new_patient_h5_name_list = []
for i in range(n_new_patient):
    new_patient_image_name = new_patient_df.iloc[i]['Image']
    new_patient_h5_name_list.append(new_patient_image_name[:-4] + '.h5')  # fixed: was `valid_image_name`
new_patient_h5_dict = {"new_patient": new_patient_h5_name_list}

with open(H5_IMAGE_DIR + 'new_patient_dataset.json', 'w') as filehandle:
    json.dump(new_patient_h5_dict, filehandle)
# print(new_patient_h5_dict["new_patient"])

# ### PREDICT

valid_y_true = []
valid_y_pred = []
for i in range(len(dataset["new_patient"])):
    x_m, y, w = valid_1_generator.__getitem__(i)
    valid_y_true.append(y[0].tolist())
    y_pred = ensemble_model(x_m).numpy().tolist()
    valid_y_pred.append(y_pred[0])
valid_y_true = np.array(valid_y_true)
valid_y_pred = np.array(valid_y_pred)

new_patient_list = np.array(dataset["new_patient"]).tolist()
# print(new_patient_list)

# ### HEAT MAPS

pwd = os.getcwd()
os.system('mkdir new_patient_gradcam_WST_RESNET')
gradcam_path = os.path.join(pwd, 'new_patient_gradcam_WST_RESNET/')
OUT_IMAGE_DIR = gradcam_path
# print(H5_IMAGE_DIR)
# print(OUT_IMAGE_DIR)

if valid_y_true[:, 0] == 1:
    LABEL = "POSITIVE"
else:
    LABEL = "NEGATIVE"
print(f"Tentative radiologist assignment: {LABEL}")

FIG_SIZE = (16, 20)
_HEAT_MAP_DIFF(ensemble_model, generator=valid_1_generator, layer='average',
               labels=['Positive score', 'Negative score'], header='LABELED: ' + LABEL, figsize=FIG_SIZE,
               image_dir=H5_IMAGE_DIR, out_image_dir=OUT_IMAGE_DIR,
               img_list=new_patient_list, first_img=0, last_img=len(new_patient_list),
               img_width=WIDTH, img_height=HEIGHT, display=True)
```
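The ensemble wiring above feeds each fold model's output through a single `Average()` layer before one final softmax. The same averaging logic can be mimicked in plain NumPy; this is an illustrative sketch (`softmax` and `ensemble_predict` are hypothetical helpers, not part of MODULES_2):

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(member_scores):
    """Average the fold models' pre-softmax class scores,
    then apply a single softmax (Average -> softmax)."""
    return softmax(np.mean(member_scores, axis=0))

# three hypothetical fold models scoring one image on 2 classes
scores = np.array([[2.0, 0.5],
                   [1.5, 1.0],
                   [2.5, 0.0]])
probs = ensemble_predict(scores)
```

Averaging before the softmax (as the Keras graph does) is not identical to averaging per-model probabilities, but both are standard ways to combine k-fold members.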
```
# import libraries
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
import collections
```

source: http://snap.stanford.edu/data/twitch-social-networks.html

Three Twitch networks from different countries.

```
# Read datasets

# GB
infile = 'data/musae_ENGB_edges.csv'
GB = nx.read_edgelist(infile, delimiter=',')

# France
infile = 'data/musae_FR_edges.csv'
FR = nx.read_edgelist(infile, delimiter=',')

# Portugal
infile = 'data/musae_PTBR_edges.csv'
PT = nx.read_edgelist(infile, delimiter=',')

# Number of nodes/number of edges
print('GB', GB.number_of_nodes(), GB.number_of_edges())
print('FR', FR.number_of_nodes(), FR.number_of_edges())
print('PT', PT.number_of_nodes(), PT.number_of_edges())

# network density
print(nx.density(GB))
print(nx.density(FR))
print(nx.density(PT))

# is connected?
print(nx.is_connected(GB))
print(nx.is_connected(FR))
print(nx.is_connected(PT))
```

# Degree distribution visualization

```
# Degree list
kkGB = [GB.degree(u) for u in GB.nodes()]
maxDegreeGB = max(kkGB)
minDegreeGB = min(kkGB)
averageDegreeGB = np.mean(kkGB)
stdDegreeGB = np.std(kkGB)
print(maxDegreeGB, minDegreeGB, averageDegreeGB, stdDegreeGB)

kkFR = [FR.degree(u) for u in FR.nodes()]
maxDegreeFR = max(kkFR)
minDegreeFR = min(kkFR)
averageDegreeFR = np.mean(kkFR)
stdDegreeFR = np.std(kkFR)
print(maxDegreeFR, minDegreeFR, averageDegreeFR, stdDegreeFR)

kkPT = [PT.degree(u) for u in PT.nodes()]
maxDegreePT = max(kkPT)
minDegreePT = min(kkPT)
averageDegreePT = np.mean(kkPT)
stdDegreePT = np.std(kkPT)
print(maxDegreePT, minDegreePT, averageDegreePT, stdDegreePT)
```

### Plotting the degree distribution

```
# a function for log binning for distributions
def logBinning(degreeList, nbin):
    kmin = min(degreeList)
    kmax = max(degreeList)
    logBins = np.logspace(np.log10(kmin), np.log10(kmax), num=nbin)
    logBinDensity, binedges = np.histogram(degreeList, bins=logBins, density=True)
    logBins = np.delete(logBins, -1)
    return logBinDensity, logBins
```

These are the three degree distributions; they look quite similar, and all three are heterogeneous.
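Before applying `logBinning` to the Twitch data, it can be exercised on a synthetic heavy-tailed sample. The function is restated here so the snippet is self-contained; the Pareto-distributed "degrees" are purely illustrative:

```python
import numpy as np

def logBinning(degreeList, nbin):
    # density histogram on logarithmically spaced bins;
    # returns (densities, left bin edges)
    kmin = min(degreeList)
    kmax = max(degreeList)
    logBins = np.logspace(np.log10(kmin), np.log10(kmax), num=nbin)
    logBinDensity, binedges = np.histogram(degreeList, bins=logBins, density=True)
    logBins = np.delete(logBins, -1)
    return logBinDensity, logBins

rng = np.random.default_rng(0)
sample = np.floor(rng.pareto(2.0, 10000)) + 1  # synthetic heavy-tailed "degrees"
y, x = logBinning(sample, 10)
```

With `density=True` the returned heights integrate to 1 over the bins, so P(k) curves can be compared across networks of different sizes.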
``` y,x=logBinning(np.array(kkGB),10) plt.loglog(x,y,'o',label='GB',markersize=10) y,x=logBinning(np.array(kkFR),10) plt.loglog(x,y,'s',label='FR',markersize=10) y,x=logBinning(np.array(kkPT),10) plt.loglog(x,y,'*',label='PT',markersize=10) plt.xlabel('k',size=15) plt.ylabel('P(k)',size=15) plt.legend() plt.show() ``` # Clustering spectrum On va faire des moyennes de clustering pour certaines classes de valeurs. - En bleu : tous les points. - En orange : la moyenne par classe de degrés k (pas par k !) ``` ccGB=[nx.clustering(GB,u) for u in GB.nodes()] ccFR=[nx.clustering(FR,u) for u in FR.nodes()] ccPT=[nx.clustering(PT,u) for u in PT.nodes()] #For 1 country xx=[u for (u,v) in zip(kkGB,ccGB) if v>0] #j'enlève les 0 car on ne peut pas faire log0 yy=[v for (u,v) in zip(kkGB,ccGB) if v>0] plt.loglog(xx,yy,'o',alpha=0.1) logBins=np.logspace(np.log2(np.min(xx)),np.log2(np.max(xx)),base=2,num=15) ybin,xbin,binnumber=scipy.stats.binned_statistic(xx,yy,statistic='mean',bins=logBins) plt.loglog(xbin[:-1],ybin,'o',label='HU',markersize=10) plt.xlabel('k',size=15) plt.ylabel('c(k)',size=15) plt.show() #For all countries xx=[u for (u,v) in zip(kkGB,ccGB) if v>0] yy=[v for (u,v) in zip(kkGB,ccGB) if v>0] logBins=np.logspace(np.log2(np.min(xx)),np.log2(np.max(xx)),base=2,num=15) ybin,xbin,binnumber=scipy.stats.binned_statistic(xx,yy,statistic='mean',bins=logBins) plt.loglog(xbin[:-1],ybin,'o',label='GB',markersize=10) xx=[u for (u,v) in zip(kkFR,ccFR) if v>0] yy=[v for (u,v) in zip(kkFR,ccFR) if v>0] logBins=np.logspace(np.log2(np.min(xx)),np.log2(np.max(xx)),base=2,num=15) ybin,xbin,binnumber=scipy.stats.binned_statistic(xx,yy,statistic='mean',bins=logBins) plt.loglog(xbin[:-1],ybin,'s',label='FR',markersize=10) xx=[u for (u,v) in zip(kkPT,ccPT) if v>0] yy=[v for (u,v) in zip(kkPT,ccPT) if v>0] logBins=np.logspace(np.log2(np.min(xx)),np.log2(np.max(xx)),base=2,num=15) ybin,xbin,binnumber=scipy.stats.binned_statistic(xx,yy,statistic='mean',bins=logBins) 
plt.loglog(xbin[:-1], ybin, '*', label='PT', markersize=10)
plt.xlabel('k', size=15)
plt.ylabel('c(k)', size=15)
plt.legend()
plt.show()
```

# Degree mixing

I pick the third node of GB (node number 980) as the ego.

```
ego = list(GB.nodes())[2]
ego

neighEgo = list(GB.neighbors(ego))
neighEgo

degreeNeighEgo = [GB.degree(v) for v in neighEgo]
degreeNeighEgo

np.mean(degreeNeighEgo)
```

knn (average nearest-neighbour degree) for every node, and for the whole network.

```
knnGB = [np.mean([GB.degree(v) for v in GB.neighbors(u)]) for u in GB.nodes()]
knnFR = [np.mean([FR.degree(v) for v in FR.neighbors(u)]) for u in FR.nodes()]
knnPT = [np.mean([PT.degree(v) for v in PT.neighbors(u)]) for u in PT.nodes()]

# For 1 country
xx = [u for (u, v) in zip(kkGB, knnGB) if v > 0]
yy = [v for (u, v) in zip(kkGB, knnGB) if v > 0]
plt.loglog(xx, yy, 'o', alpha=0.1)

# with np.linspace instead of np.logspace the bins would be linear
logBins = np.logspace(np.log2(np.min(xx)), np.log2(np.max(xx)), base=2, num=15)
ybin, xbin, binnumber = scipy.stats.binned_statistic(xx, yy, statistic='mean', bins=logBins)
plt.loglog(xbin[:-1], ybin, 'o', label='GB', markersize=10)
plt.xlabel('k', size=15)
plt.ylabel('knn(k)', size=15)
plt.show()
```

- GB: low-degree nodes are strongly connected to high-degree ones (disassortative).
- France: mostly a disassortative regime, but with an assortative part.
- Portugal: somewhere in between.

We do not know enough about how Twitch usage differs between France and Portugal to interpret this.
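To see what the per-node knn quantity means on a case that can be checked by hand, here is the same computation on a tiny toy graph. This is my own example, not part of the notebook; it assumes networkx and numpy are available:

```
import networkx as nx
import numpy as np

# star graph: hub node 0 connected to leaves 1, 2, 3
G = nx.star_graph(3)

# mean degree of each node's neighbours, as computed for the Twitch networks above
knn = {u: float(np.mean([G.degree(v) for v in G.neighbors(u)])) for u in G.nodes()}
print(knn)  # → {0: 1.0, 1: 3.0, 2: 3.0, 3: 3.0}
```

The hub only sees degree-1 leaves while every leaf only sees the degree-3 hub, which is the extreme disassortative pattern the GB curve hints at.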
``` #For all countries xx=[u for (u,v) in zip(kkGB,knnGB) if v>0] yy=[v for (u,v) in zip(kkGB,knnGB) if v>0] logBins=np.logspace(np.log2(np.min(xx)),np.log2(np.max(xx)),base=2,num=15) ybin,xbin,binnumber=scipy.stats.binned_statistic(xx,yy,statistic='mean',bins=logBins) plt.loglog(xbin[:-1],ybin,'o',label='GB',markersize=10) xx=[u for (u,v) in zip(kkFR,knnFR) if v>0] yy=[v for (u,v) in zip(kkFR,knnFR) if v>0] logBins=np.logspace(np.log2(np.min(xx)),np.log2(np.max(xx)),base=2,num=15) ybin,xbin,binnumber=scipy.stats.binned_statistic(xx,yy,statistic='mean',bins=logBins) plt.loglog(xbin[:-1],ybin,'s',label='FR',markersize=10) xx=[u for (u,v) in zip(kkPT,knnPT) if v>0] yy=[v for (u,v) in zip(kkPT,knnPT) if v>0] logBins=np.logspace(np.log2(np.min(xx)),np.log2(np.max(xx)),base=2,num=15) ybin,xbin,binnumber=scipy.stats.binned_statistic(xx,yy,statistic='mean',bins=logBins) plt.loglog(xbin[:-1],ybin,'*',label='PT',markersize=10) plt.xlabel('k',size=15) plt.ylabel('knn(k)',size=15) plt.legend() plt.show() ```
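networkx also ships helpers that compute the knn(k) spectrum and a single scalar mixing summary directly, which can be used to cross-check the binned curves above. A small standalone sketch on a toy disassortative graph (my own addition, not part of the notebook's analysis):

```
import networkx as nx

G = nx.star_graph(5)  # extreme disassortative case: hub connected only to leaves

# knn(k): average neighbour degree as a function of node degree k
knn_k = nx.average_degree_connectivity(G)
print(knn_k)  # the degree-5 hub sees degree-1 leaves, and vice versa

# scalar summary: Pearson correlation of the degrees at the two ends of each edge
r = nx.degree_assortativity_coefficient(G)
print(round(r, 3))  # negative for disassortative mixing
```

A single coefficient hides the shape of the curve (the FR network's mixed regime, for instance), so the spectrum and the scalar are best read together.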
```
import json
import os
import geopandas as gpd
import matplotlib.pyplot as plt
import pandas as pd
import osmnx as ox
import numpy as np

%matplotlib inline

cities = ['adelaide', 'auckland', 'baltimore', 'bangkok', 'barcelona',
          'belfast', 'bern', 'chennai', 'mexico_city', 'cologne',
          'ghent', 'graz', 'hanoi', 'hong_kong', 'lisbon',
          'melbourne', 'odense', 'olomouc', 'sao_paulo', 'phoenix',
          'seattle', 'sydney', 'valencia', 'vic']

process_folder = '../../process'
pop_col = ["pop_ghs_2015"]
dest_col = ["destinations"]
filenames_filepath = "./groundtruthing.csv"
np.random.seed(24)

filenames = {}

for city in cities:
    print(f"start {city}")
    process_config_path = f"../../process/configuration/{city}.json"
    with open(process_config_path) as json_file:
        config = json.load(json_file)

    input_folder = os.path.join(process_folder, config['folder'])
    gpkg_input = os.path.join(input_folder, config['geopackagePath'])

    pop = gpd.read_file(gpkg_input, layer='pop_ghs_2015')
    dests = gpd.read_file(gpkg_input, layer='destinations')
    fresh_food = dests[dests['dest_name_full'].str.contains('Fresh Food / Market')]

    gdf_study_area = gpd.read_file(gpkg_input, layer="urban_study_region")
    study_area = gdf_study_area["geometry"].iloc[0]
    crs = gdf_study_area.crs
    if pop.crs != crs:
        pop = pop.to_crs(crs)
    if fresh_food.crs != crs:
        fresh_food = fresh_food.to_crs(crs)

    import warnings
    warnings.filterwarnings("ignore", "GeoSeries.notna", UserWarning)  # temp warning suppression

    pop_clipped = gpd.clip(pop, study_area)
    fresh_food_clipped = gpd.clip(fresh_food, study_area)

    joined_freshfood = gpd.sjoin(fresh_food_clipped, pop_clipped, how='left', op='within')
    ordered_joined_freshfood = joined_freshfood.sort_values('pop_est')
    split_joined_freshfood = np.array_split(ordered_joined_freshfood, 5)

    q1_dests = split_joined_freshfood[0]
    q2_dests = split_joined_freshfood[1]
    q3_dests = split_joined_freshfood[2]
    q4_dests = split_joined_freshfood[3]
    q5_dests = split_joined_freshfood[4]
    q1_dests['quantile'] = 1
    q2_dests['quantile'] = 2
    q3_dests['quantile'] = 3
    q4_dests['quantile'] = 4
    q5_dests['quantile'] = 5

    q1_sample_dests = q1_dests.sample(10)
    q2_sample_dests = q2_dests.sample(10)
    q3_sample_dests = q3_dests.sample(10)
    q4_sample_dests = q4_dests.sample(10)
    q5_sample_dests = q5_dests.sample(10)

    sample_dests = [q1_sample_dests, q2_sample_dests, q3_sample_dests, q4_sample_dests, q5_sample_dests]
    final_sample_dests = pd.concat(sample_dests)

    final_sample_dests = final_sample_dests.to_crs(epsg=4326)  # the {'init': 'epsg:4326'} form is deprecated
    final_sample_dests['lat'] = final_sample_dests.geometry.y
    final_sample_dests['lon'] = final_sample_dests.geometry.x
    final_sample_dests = final_sample_dests.set_index('osm_id')

    print(f"{city} shape below")
    print(final_sample_dests.shape)

    for index, row in final_sample_dests.iterrows():
        filenames[index] = {}
        city_name = city
        hexagon_pop_quantile = row['quantile']
        latitude = row['lat']
        longitude = row['lon']
        google_maps_screenshot = f"{latitude}_{longitude}_{city}_google_maps_image"
        google_satellite_screenshot = f"{latitude}_{longitude}_{city}_google_satellite_image"
        google_street_view_screenshot = f"{latitude}_{longitude}_{city}_google_street_view_image"

        # record the metadata and empty assessment fields for this destination
        filenames[index]["Hexagon_Pop_Quintile"] = hexagon_pop_quantile
        filenames[index]["City_Name"] = city_name
        filenames[index]["Latitude"] = latitude
        filenames[index]["Longitude"] = longitude
        filenames[index]["Google_Maps_Date"] = ""
        filenames[index]["Google_Maps_Assessment"] = ""
        filenames[index]["Google_Maps_Screenshot"] = google_maps_screenshot
        filenames[index]["Google_Satellite_Date"] = ""
        filenames[index]["Google_Satellite_Assessment"] = ""
        filenames[index]["Google_Satellite_Screenshot"] = google_satellite_screenshot
        filenames[index]["Google_Street_View_Date"] = ""
        filenames[index]["Google_Street_View_Assessment"] = ""
        filenames[index]["Google_Street_View_Screenshot"] = google_street_view_screenshot
        filenames[index]["Assessment"] = ""
        filenames[index]["Comments"] = ""

    print(ox.ts(), f"finished names for {city}")

# turn the filename records into a dataframe and save to disk
df_filenames = pd.DataFrame(filenames).T
df_filenames.to_csv(filenames_filepath, index=True, encoding="utf-8")
print(ox.ts(), f'all done, saved filenames to disk at "{filenames_filepath}"')
```
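The quintile split above sorts by `pop_est` and uses `np.array_split`, which yields five equal-sized chunks rather than true population quintile bins when values are tied. `pandas.qcut` expresses the same intent in one call. A minimal sketch on toy data (my own example; column names are illustrative, not from the notebook's geopackages):

```
import numpy as np
import pandas as pd

df = pd.DataFrame({"pop_est": np.arange(1, 51)})  # 50 toy destinations

# label each row with its population quintile, 1 (lowest) .. 5 (highest)
df["quantile"] = pd.qcut(df["pop_est"], q=5, labels=[1, 2, 3, 4, 5])

# 10 rows per quintile here, so sampling 10 from each is well defined
print(df["quantile"].value_counts().sort_index())

sample = df.groupby("quantile", observed=True).sample(10, random_state=24)
print(sample.shape)  # 50 rows: one full quintile's worth from each group
```

`groupby(...).sample` (pandas >= 1.1) also replaces the five separate `.sample(10)` calls with one stratified draw.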
# Sorting Algorithms

```
from IPython.display import Image
Image("complexity.png")
```

## 1. Selection Sort

```
# Implementation
class SelectionSort(object):
    def sort(self, data):
        for i in range(0, len(data) - 1):
            min_index = self.min_index(i + 1, data)
            if data[min_index] < data[i]:
                data[i], data[min_index] = data[min_index], data[i]
        return data

    def min_index(self, index, data):
        # index of the smallest element in data[index:]
        min_index = index
        for i in range(index + 1, len(data)):
            if data[i] < data[min_index]:
                min_index = i
        return min_index


sorter = SelectionSort()
print(sorter.sort([5, 4, 3, 2, 1]))
```

## 2. Merge Sort

```
# Implementation
class MergeSort(object):
    def sort(self, data):
        if len(data) <= 1:
            return data
        max_size = len(data)
        mid = int(max_size / 2)
        data[:mid] = self.sort(data[:mid])
        data[mid:] = self.sort(data[mid:])
        # merge the two sorted halves in place by shifting
        i = 0
        j = mid
        while j < max_size:
            if data[i] >= data[j]:
                les = data[j]
                data[i + 1: j + 1] = data[i: j]
                data[i] = les
                i = i + 1
                j = j + 1
            else:
                i = i + 1
        return data


sorter = MergeSort()
print(sorter.sort([5, 2, 3, 2, 1]))
```

## 3. Insertion Sort

```
# Implementation
class InsertSort(object):
    def sort(self, data):
        max_size = len(data)
        key = 0
        for i in range(1, max_size):
            # walk the pivot to the insertion position for data[i]
            pivot = key
            if data[i] < data[pivot]:
                while data[i] <= data[pivot]:
                    if pivot > 0:
                        pivot = pivot - 1
                    else:
                        break
            else:
                while data[i] > data[pivot]:
                    if pivot < max_size - 1:
                        pivot = pivot + 1
                    else:
                        break
            les = data[i]
            data[pivot + 1: i + 1] = data[pivot: i]
            data[pivot] = les
        return data


sorter = InsertSort()
print(sorter.sort([3, 2, 5, 4, 1]))
```

## 4.
Quick Sort

```
# Implementation
class QuickSort(object):
    def sort(self, data):
        max_size = len(data)
        if max_size > 1:
            pivot_index = int(max_size / 2) + (max_size % 2)
            pivot = data[pivot_index]
            i = 0
            j = max_size - 1
            # partition around the pivot value
            while i <= j:
                while (data[i] < pivot) and (i < max_size):
                    i = i + 1
                while (data[j] > pivot) and (j > 0):
                    j = j - 1
                if i <= j:
                    data[i], data[j] = data[j], data[i]
                    i = i + 1
                    j = j - 1
            # recurse on the two partitions
            if j > 0:
                data[:j + 1] = self.sort(data[:j + 1])
            if i < max_size:
                data[i:] = self.sort(data[i:])
        return data


sorter = QuickSort()
print(sorter.sort([3, 2, 5, 4, 1]))
```
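A quick way to gain confidence in any of these sorters is a randomized comparison against Python's built-in `sorted`. This check is my own addition; it re-declares the SelectionSort class from above so the snippet is self-contained, and the same harness works for the other three classes:

```
import random

# standalone copy of the SelectionSort class from above
class SelectionSort(object):
    def sort(self, data):
        for i in range(0, len(data) - 1):
            min_index = self.min_index(i + 1, data)
            if data[min_index] < data[i]:
                data[i], data[min_index] = data[min_index], data[i]
        return data

    def min_index(self, index, data):
        min_index = index
        for i in range(index + 1, len(data)):
            if data[i] < data[min_index]:
                min_index = i
        return min_index


# property check: the sorter must agree with sorted() on random inputs,
# including duplicates and the empty list
sorter = SelectionSort()
random.seed(0)
for _ in range(100):
    data = [random.randint(0, 50) for _ in range(random.randint(0, 20))]
    assert sorter.sort(list(data)) == sorted(data)
print("all random cases match sorted()")
```

Randomized agreement with a trusted oracle catches off-by-one and duplicate-handling bugs that a single hand-picked input like `[5, 4, 3, 2, 1]` can miss.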