# Argparse

## What is Argparse?

[Argparse Tutorial](https://www.pythonforbeginners.com/argparse/argparse-tutorial)

Argparse is a parser for command-line options, arguments, and subcommands. The library makes it easy to write user-friendly command-line interfaces by:

* defining what arguments the program requires
* figuring out how to parse those out of `sys.argv`

The argparse module automatically generates help and usage messages and issues errors when users give the program invalid arguments.

## Getting Started

See [argparse_1.py](argparse_1.py)

```python
# import the library
import argparse

# initialize the parser
parser = argparse.ArgumentParser()

# parse arguments from sys.argv
parser.parse_args()
```

```
# run the code and ask for help
!python argparse_1.py --help
```

### Positional Arguments

See [argparse_2.py](argparse_2.py)

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("name")
parser.add_argument("age")
parser.add_argument("city")
args = parser.parse_args()
print(args.name, args.age, args.city)
```

```
!python argparse_2.py --help
!python argparse_2.py Ben 37 Pittsburgh
!python argparse_2.py Brian "??" Pittsburgh
```

### Extending the help text

See [argparse_3.py](argparse_3.py)

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("name", help="the name of the person you want to find")
parser.add_argument("age", help="the age of the person you'd like to find")
parser.add_argument("city", help="the city you'd like to search")
args = parser.parse_args()
```

```
!python argparse_3.py --help
```

### Changing the default argument type

Argparse treats all arguments as strings by default. You can change the expected data type when you add each argument.
See [argparse_4.py](argparse_4.py)

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("name", help="the name of the person you want to find")
parser.add_argument("age", help="the age of the person you'd like to find", type=int)
parser.add_argument("city", help="the city you'd like to search")
args = parser.parse_args()
```

```
!python argparse_4.py Ben abc Pittsburgh
!python argparse_4.py Ben 37 Pittsburgh
```

### Optional arguments

See [argparse_5.py](argparse_5.py)

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", help="increase output verbosity", action="store_true")
args = parser.parse_args()
if args.verbose:
    print("verbosity turned on")
```

An optional argument (or option) is, by default, given `None` as its value when it is not supplied on the command line.

* For the `--verbose` option, only two values are actually useful: `True` or `False`.
* The keyword `action` is given the value `"store_true"`, which means that if the option is specified, `True` is assigned to `args.verbose`.
* With `store_true`, not specifying the option yields `False` rather than `None`.
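The `store_true` behavior can also be checked without running a script, by passing an explicit argument list to `parse_args` (a small sketch; the list replaces `sys.argv`):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", help="increase output verbosity", action="store_true")

# parse_args accepts an explicit list instead of reading sys.argv
print(parser.parse_args([]).verbose)             # default is False, not None
print(parser.parse_args(["--verbose"]).verbose)  # True when the flag is given
```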
```
!python argparse_5.py --help
!python argparse_5.py --verbose
!python argparse_5.py
```

### Short options

See [argparse_6.py](argparse_6.py)

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-n", "--name", help="the name of the person you want to find")
parser.add_argument("-a", "--age", help="the age of the person you'd like to find", type=int)
parser.add_argument("-c", "--city", help="the city you'd like to search")
parser.add_argument("-v", "--verbose", help="increase output verbosity", action="store_true")
args = parser.parse_args()

if args.verbose:
    print(f"Searching for {args.name} {args.age} years of age in or around {args.city}")
else:
    print(f"Searching for {args.name}")
```

```
!python argparse_6.py --help
!python argparse_6.py -n Ben -a 37 -c Pittsburgh
!python argparse_6.py --name Ben --age 37 --city Pittsburgh
!python argparse_6.py --name Ben --age 37 --city Pittsburgh --verbose
```
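The introduction also mentions subcommands. The example scripts above don't use them, but a minimal sketch with `add_subparsers` looks like this (the `find` and `list` commands are made up for illustration):

```python
import argparse

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest="command")

# "find" subcommand with its own positional argument
find_parser = subparsers.add_parser("find", help="find a person")
find_parser.add_argument("name")

# "list" subcommand with an option of its own
list_parser = subparsers.add_parser("list", help="list everyone")
list_parser.add_argument("--city")

# each subcommand only sees its own arguments
args = parser.parse_args(["find", "Ben"])
print(args.command, args.name)  # find Ben
```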
<a href="https://colab.research.google.com/github/kumiori/mec647/blob/main/MEC647_Fracture_5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
%%capture
import sys

try:
    import google.colab  # noqa: F401
except ImportError:
    import ufl  # noqa: F401
    import dolfinx  # noqa: F401
else:
    try:
        import ufl
        import dolfinx
    except ImportError:
        !wget "https://fem-on-colab.github.io/releases/fenicsx-install.sh" -O "/tmp/fenicsx-install.sh" && bash "/tmp/fenicsx-install.sh";
        import ufl  # noqa: F401
        import dolfinx  # noqa: F401

try:
    import pyvista
except ImportError:
    !{sys.executable} -m pip install --upgrade pyvista itkwidgets;
    import pyvista  # noqa: F401
    from pyvista.utilities import xvfb

try:
    import gmsh
except ImportError:
    !{sys.executable} -m pip install gmsh
    import gmsh

%%capture
!sudo apt install libgl1-mesa-glx xvfb;
!{sys.executable} -m pip install pythreejs;
!{sys.executable} -m pip install ipygany;
!{sys.executable} -m pip install --upgrade PyYAML
# !pip install --ignore-installed PyYAML
```

# Fracture

Let $\Omega \subset (0, L)^D$, with $D=1, 2, 3$ and $L$ finite, being (one of) the characteristic lengths of the specimen. For any

- displacement field $u \in V_t := H^1(\Omega, \mathbb{R}^n) + \text{bcs}(t)$, with $n=1, 2$, or $3$, and
- damage field $\alpha \in H^1(\Omega, \mathbb{R})$,

consider the energy $E_\ell(u, \alpha)$ defined as

$$ E_\ell(u, \alpha) = \frac{1}{2}\int_\Omega a(\alpha) W(u)\, dx + \underbrace{\frac{G_c}{c_w} \int_\Omega \left(\frac{1}{\ell} w(\alpha) + \ell |\nabla \alpha|^2 \right) dx}_{\text{Surface energy}} - \int_\Omega f \cdot u\, dx$$

In practice, $\ell \ll L$. Above, $W$ is the elastic energy density, reading (in linearised elasticity) as

$$ W(u) = A e(u) : e(u) $$

where $A$ is the fourth-order elasticity tensor; in the isotropic and homogeneous case, it corresponds to a linear combination with two coefficients, say $A_0$, the stiffness (dimensional), and $\nu$, the Poisson ratio (non-dimensional).
Further, $w(\alpha)$ corresponds to the energy dissipated to damage the specimen homogeneously; the gradient term accounts for spatial variations.

**Keypoint:** these two terms are weighted by $\ell$, a parameter that is homogeneous to a length and is understood as a *material* quantity (as opposed to a *numerical* one).

Define $D(\alpha_0) := \left\{ \alpha \in H^1(\Omega),\ \alpha \geq \alpha_0 \right\}$, for some $\alpha_0(x) \geq 0$ pointwise.

We solve two types of problems (by increasing difficulty):

- **The static problem**: given a load (boundary conditions) and an initial state of damage $\alpha_0$, what are the equilibrium displacement and repartition of damage? In other terms:
$$\operatorname{loc\,min} \left\{ E_\ell(u, \alpha):\ u \in V_t,\ \alpha \in D(\alpha_0) \right\}.$$
- **The evolution problem**: given a load **history** (boundary conditions as a function of $t$) and an initial state of damage $\alpha_0$, what is the *evolution* of the equilibrium displacement and repartition of damage, i.e. the map $t \mapsto (u_t, \alpha_t)$, such that
  - (Irreversibility) $t \mapsto \alpha_t$ is non-decreasing,
  - (Stability) $(u_t, \alpha_t) = \operatorname{arg\,loc\,min} \left\{ E_\ell(v, \beta):\ (v, \beta) \in V_t \times D(\alpha_t) \right\}$

### Parameters

In the energy above:

- Two elasticity parameters, such as
  - $A_0$, the stiffness of the sound material
  - $\nu$, the Poisson ratio
  - equivalently, $\mu, \lambda$, the Lamé parameters
- Two fracture/damage parameters:
  - $\ell$, the internal damage length
  - $G_c$, the material toughness

### Back-of-the-envelope computation

1. Show that the energy above can be written as a function of only two non-dimensional parameters (e.g. $\nu, \tilde\ell$), by dimensional analysis.
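As a hint for the exercise, here is one possible rescaling (a sketch only, assuming $f = 0$ and writing $\tilde A := A/A_0$, which depends only on $\nu$). Set

$$ x = L \tilde x, \qquad u = u_0 \tilde u \ \text{ with } \ u_0 = \sqrt{\frac{G_c L}{A_0}}, \qquad \tilde\ell = \frac{\ell}{L}, $$

so that, dividing the energy by $G_c L^{D-1}$,

$$ \frac{E_\ell(u, \alpha)}{G_c L^{D-1}} = \frac{1}{2}\int_{\tilde\Omega} a(\alpha)\, \tilde A \tilde e(\tilde u) : \tilde e(\tilde u)\, d\tilde x + \frac{1}{c_w} \int_{\tilde\Omega} \left(\frac{1}{\tilde\ell} w(\alpha) + \tilde\ell\, |\tilde\nabla \alpha|^2\right) d\tilde x, $$

leaving only the non-dimensional pair $(\nu, \tilde\ell)$.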
```
# library includes
import json
import logging
import os
import sys
from pathlib import Path

import numpy as np
import yaml
import matplotlib.pyplot as plt

from mpi4py import MPI
import petsc4py
from petsc4py import PETSc

import ufl
import dolfinx
import dolfinx.io
import dolfinx.plot
from dolfinx import log
from dolfinx.io import XDMFFile
from dolfinx.fem import (
    Constant,
    Function,
    FunctionSpace,
    assemble_scalar,
    dirichletbc,
    form,
    locate_dofs_geometrical,
    set_bc,
)

logging.basicConfig(level=logging.INFO)
```
# T81-558: Applications of Deep Neural Networks

**Module 11: Natural Language Processing and Speech Recognition**

* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).

# Module Video Material

Main video lecture:

* [Module 11, Part 1: Chatbots and NLP](https://www.youtube.com/watch?v=bv_iVVrlfbU)
* [Module 11, Part 2: End to End Networks](https://www.youtube.com/watch?v=qN9hHlZKIL4)
* [Module 11, Part 3: Word2Vec](https://www.youtube.com/watch?v=Ae3GVw5nTYU)

# Helpful Functions

You will see these at the top of every module. They are simply a set of reusable functions that we will make use of; each is explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions.

```
import base64
import os

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
from sklearn import preprocessing


# Encode text values to dummy variables (i.e. [1,0,0],[0,1,0],[0,0,1] for red, green, blue)
def encode_text_dummy(df, name):
    dummies = pd.get_dummies(df[name])
    for x in dummies.columns:
        dummy_name = f"{name}-{x}"
        df[dummy_name] = dummies[x]
    df.drop(name, axis=1, inplace=True)


# Encode text values to a single dummy variable. The new columns (which do not replace the
# old) will have a 1 at every location where the original column (name) matches each of the
# target_values. One column is added for each target value.
def encode_text_single_dummy(df, name, target_values):
    for tv in target_values:
        l = list(df[name].astype(str))
        l = [1 if str(x) == str(tv) else 0 for x in l]
        name2 = f"{name}-{tv}"
        df[name2] = l


# Encode text values to indexes (i.e. [1],[2],[3] for red, green, blue).
def encode_text_index(df, name):
    le = preprocessing.LabelEncoder()
    df[name] = le.fit_transform(df[name])
    return le.classes_


# Encode a numeric column as z-scores
def encode_numeric_zscore(df, name, mean=None, sd=None):
    if mean is None:
        mean = df[name].mean()
    if sd is None:
        sd = df[name].std()
    df[name] = (df[name] - mean) / sd


# Convert all missing values in the specified column to the median
def missing_median(df, name):
    med = df[name].median()
    df[name] = df[name].fillna(med)


# Convert all missing values in the specified column to the default
def missing_default(df, name, default_value):
    df[name] = df[name].fillna(default_value)


# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df, target):
    result = []
    for x in df.columns:
        if x != target:
            result.append(x)
    # find out the type of the target column. Is it really this hard? :(
    target_type = df[target].dtypes
    target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type
    # Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
    if target_type in (np.int64, np.int32):
        # Classification
        dummies = pd.get_dummies(df[target])
        return df[result].values.astype(np.float32), dummies.values.astype(np.float32)
    # Regression
    return df[result].values.astype(np.float32), df[[target]].values.astype(np.float32)


# Nicely formatted time string
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60
    return f"{h}:{m:>02}:{s:>05.2f}"


# Regression chart.
def chart_regression(pred, y, sort=True):
    t = pd.DataFrame({'pred': pred, 'y': y.flatten()})
    if sort:
        t.sort_values(by=['y'], inplace=True)
    plt.plot(t['y'].tolist(), label='expected')
    plt.plot(t['pred'].tolist(), label='prediction')
    plt.ylabel('output')
    plt.legend()
    plt.show()


# Remove all rows where the specified column is +/- sd standard deviations
def remove_outliers(df, name, sd):
    drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))]
    df.drop(drop_rows, axis=0, inplace=True)


# Encode a column to a range between normalized_low and normalized_high.
def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1,
                         data_low=None, data_high=None):
    if data_low is None:
        data_low = min(df[name])
        data_high = max(df[name])
    df[name] = ((df[name] - data_low) / (data_high - data_low)) \
        * (normalized_high - normalized_low) + normalized_low


# This function submits an assignment. You can submit an assignment as often as you like;
# only the final submission counts. The parameters are as follows:
# data - Pandas dataframe output.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 1.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part
#   of its name. The number must match your assignment number. For example "_class2" for
#   class assignment #2.
def submit(data, key, no, source_file=None):
    if source_file is None and '__file__' not in globals():
        raise Exception('Must specify a filename when a Jupyter notebook.')
    if source_file is None:
        source_file = __file__
    suffix = '_class{}'.format(no)
    if suffix not in source_file:
        raise Exception('{} must be part of the filename.'.format(suffix))
    with open(source_file, "rb") as image_file:
        encoded_python = base64.b64encode(image_file.read()).decode('ascii')
    ext = os.path.splitext(source_file)[-1].lower()
    if ext not in ['.ipynb', '.py']:
        raise Exception("Source file is {}; must be .py or .ipynb".format(ext))
    r = requests.post(
        "https://api.heatonresearch.com/assignment-submit",
        headers={'x-api-key': key},
        json={'csv': base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"),
              'assignment': no, 'ext': ext, 'py': encoded_python})
    if r.status_code == 200:
        print("Success: {}".format(r.text))
    else:
        print("Failure: {}".format(r.text))
```

# Chat Bots

Using the above code you can create your own primitive chat bots. A somewhat famous video on YouTube from Cornell University shows what happens [when two chat bots converse](https://www.youtube.com/watch?v=WnzlbyTZsQY).

Other interesting chat-bot-type technology:

* [CleverBot](http://www.cleverbot.com/)
* [Computer Science Paper Generator](https://pdos.csail.mit.edu/archive/scigen/)

### Other Resources

* [Word Net](http://wordnet.princeton.edu/)
* [bAbI Datasets](https://research.fb.com/downloads/babi/)

# End-To-End Memory Networks

The original source papers for End-to-End Memory Networks:

* Jason Weston, Antoine Bordes, Sumit Chopra, Tomas Mikolov, Alexander M.
Rush, ["Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks"](http://arxiv.org/abs/1502.05698)
* Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus, ["End-To-End Memory Networks"](http://arxiv.org/abs/1503.08895)

Other useful links for End-To-End Memory Networks:

* [bAbI Datasets](https://research.fb.com/downloads/babi/)
* [Keras End-To-End Memory Networks](https://github.com/fchollet/keras/blob/master/examples/babi_memnn.py)
* [Online JavaScript Demo of End-to-End Memory Networks](http://yerevann.com/dmn-ui/#/)

## Imports and Utility Functions

The following imports are needed to create the end-to-end memory network. Neither Keras nor TensorFlow directly supports End-to-End Memory Networks (yet), so it is necessary to create them using existing tools. Several functions defined here read the bAbI dataset that we use for training.

```
from keras.models import Sequential, Model
from keras.layers.embeddings import Embedding
from keras.layers import Input, Activation, Dense, Permute, Dropout, add, dot, concatenate
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from keras.models import load_model
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from functools import reduce
import pickle
import tarfile
import numpy as np
import re
import os
import time


def tokenize(sent):
    '''Return the tokens of a sentence including punctuation.

    >>> tokenize('Bob dropped the apple. Where is the apple?')
    ['Bob', 'dropped', 'the', 'apple', '.', 'Where', 'is', 'the', 'apple', '?']
    '''
    return [x.strip() for x in re.split('(\W+)', sent) if x.strip()]


def parse_stories(lines, only_supporting=False):
    '''Parse stories provided in the bAbI tasks format.

    If only_supporting is true, only the sentences that support the answer are kept.
    '''
    data = []
    story = []
    for line in lines:
        line = line.decode('utf-8').strip()
        nid, line = line.split(' ', 1)
        nid = int(nid)
        if nid == 1:
            story = []
        if '\t' in line:
            q, a, supporting = line.split('\t')
            q = tokenize(q)
            substory = None
            if only_supporting:
                # Only select the related substory
                supporting = map(int, supporting.split())
                substory = [story[i - 1] for i in supporting]
            else:
                # Provide all the substories
                substory = [x for x in story if x]
            data.append((substory, q, a))
            story.append('')
        else:
            sent = tokenize(line)
            story.append(sent)
    return data


def get_stories(f, only_supporting=False, max_length=None):
    '''Given a file name, read the file, retrieve the stories, and then convert
    the sentences into a single story.

    If max_length is supplied, any stories longer than max_length tokens will be discarded.
    '''
    data = parse_stories(f.readlines(), only_supporting=only_supporting)
    flatten = lambda data: reduce(lambda x, y: x + y, data)
    data = [(flatten(story), q, answer) for story, q, answer in data
            if not max_length or len(flatten(story)) < max_length]
    return data


def vectorize_stories(data):
    inputs, queries, answers = [], [], []
    for story, query, answer in data:
        inputs.append([word_idx[w] for w in story])
        queries.append([word_idx[w] for w in query])
        answers.append(word_idx[answer])
    return (pad_sequences(inputs, maxlen=story_maxlen),
            pad_sequences(queries, maxlen=query_maxlen),
            np.array(answers))
```

## Getting the Data

The data is downloaded from the Internet, if needed. As you can see below, this dataset contains stories and questions about those stories. The computer is not learning these specific stories, but rather how to read a story and answer a question about that story. Consider the first story, "Mary moved to the bathroom. John went to the hallway." The computer is not learning that Mary is in the bathroom or John is in the hallway; this changes per story.
Rather, the computer is learning to parse the story and extract information about individual people and their locations. The computer is learning to read, at least in a limited sense.

```
try:
    path = get_file('babi-tasks-v1-2.tar.gz',
                    origin='https://s3.amazonaws.com/text-datasets/babi_tasks_1-20_v1-2.tar.gz')
except:
    print('Error downloading dataset, please download it manually:\n'
          '$ wget http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz\n'
          '$ mv tasks_1-20_v1-2.tar.gz ~/.keras/datasets/babi-tasks-v1-2.tar.gz')
    raise

tar = tarfile.open(path)

challenges = {
    # QA1 with 10,000 samples
    'single_supporting_fact_10k': 'tasks_1-20_v1-2/en-10k/qa1_single-supporting-fact_{}.txt',
    # QA2 with 10,000 samples
    'two_supporting_facts_10k': 'tasks_1-20_v1-2/en-10k/qa2_two-supporting-facts_{}.txt',
}
challenge_type = 'single_supporting_fact_10k'
challenge = challenges[challenge_type]

print('Extracting stories for the challenge:', challenge_type)
train_stories = get_stories(tar.extractfile(challenge.format('train')))
test_stories = get_stories(tar.extractfile(challenge.format('test')))

# See what the data looks like
for i in range(5):
    print("Story: {}".format(' '.join(train_stories[i][0])))
    print("Query: {}".format(' '.join(train_stories[i][1])))
    print("Answer: {}".format(train_stories[i][2]))
    print("---")
```

## Building the Vocabulary

This type of neural network can only deal with a fixed vocabulary. The words are indexed, and each becomes a number. Words not in the training vocabulary will not be recognized.
```
vocab = set()
for story, q, answer in train_stories + test_stories:
    vocab |= set(story + q + [answer])
vocab = sorted(vocab)

# Reserve 0 for masking via pad_sequences
vocab_size = len(vocab) + 1
story_maxlen = max(map(len, (x for x, _, _ in train_stories + test_stories)))
query_maxlen = max(map(len, (x for _, x, _ in train_stories + test_stories)))

print('-')
print('Vocab size:', vocab_size, 'unique words')
print('Story max length:', story_maxlen, 'words')
print('Query max length:', query_maxlen, 'words')
print('Number of training stories:', len(train_stories))
print('Number of test stories:', len(test_stories))
print('-')
print('Here\'s what a "story" tuple looks like (input, query, answer):')
print(train_stories[0])
print('-')

for s in list(enumerate(vocab)):
    print(s)
```

## Building the Training and Test Data

The training data that is actually sent to the neural network is the vectorized representation of the sentences: each word is replaced by its vocab number. Additionally, there are two parts to the input (x) data: story and query. The answer (y) is always a single vocab word number. This is a classification network; any of the vocab words could potentially be the answer. Stories can be at most 68 words and questions at most 4. Both of these limits are automatically determined from the training data.
```
print('Vectorizing the word sequences...')
word_idx = dict((c, i + 1) for i, c in enumerate(vocab))
inputs_train, queries_train, answers_train = vectorize_stories(train_stories)
inputs_test, queries_test, answers_test = vectorize_stories(test_stories)

print('-')
print('inputs: integer tensor of shape (samples, max_length)')
print('inputs_train shape:', inputs_train.shape)
print('inputs_test shape:', inputs_test.shape)
print('-')
print('queries: integer tensor of shape (samples, max_length)')
print('queries_train shape:', queries_train.shape)
print('queries_test shape:', queries_test.shape)
print('-')
print('answers: binary (1 or 0) tensor of shape (samples, vocab_size)')
print('answers_train shape:', answers_train.shape)
print('answers_test shape:', answers_test.shape)
print('-')

# See an individual training element.
print("Story (x): {}".format(inputs_train[0]))
print("Question (x): {}".format(queries_train[0]))
print("Answer: {}".format(answers_train[0]))
```

## Compile the Neural Network

```
print('Compiling...')

# placeholders
input_sequence = Input((story_maxlen,))
question = Input((query_maxlen,))

# encoders
# embed the input sequence into a sequence of vectors
input_encoder_m = Sequential()
input_encoder_m.add(Embedding(input_dim=vocab_size, output_dim=64))
input_encoder_m.add(Dropout(0.3))
# output: (samples, story_maxlen, embedding_dim)

# embed the input into a sequence of vectors of size query_maxlen
input_encoder_c = Sequential()
input_encoder_c.add(Embedding(input_dim=vocab_size, output_dim=query_maxlen))
input_encoder_c.add(Dropout(0.3))
# output: (samples, story_maxlen, query_maxlen)

# embed the question into a sequence of vectors
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_size, output_dim=64,
                               input_length=query_maxlen))
question_encoder.add(Dropout(0.3))
# output: (samples, query_maxlen, embedding_dim)

# encode input sequence and questions (which are indices)
# to sequences of dense vectors
input_encoded_m = input_encoder_m(input_sequence)
input_encoded_c = input_encoder_c(input_sequence)
question_encoded = question_encoder(question)

# compute a 'match' between the first input vector sequence
# and the question vector sequence
# shape: `(samples, story_maxlen, query_maxlen)`
match = dot([input_encoded_m, question_encoded], axes=(2, 2))
match = Activation('softmax')(match)

# add the match matrix with the second input vector sequence
response = add([match, input_encoded_c])  # (samples, story_maxlen, query_maxlen)
response = Permute((2, 1))(response)  # (samples, query_maxlen, story_maxlen)

# concatenate the match matrix with the question vector sequence
answer = concatenate([response, question_encoded])

# the original paper uses a matrix multiplication for this reduction step.
# we choose to use an RNN instead.
answer = LSTM(32)(answer)  # (samples, 32)

# one regularization layer -- more would probably be needed.
answer = Dropout(0.3)(answer)
answer = Dense(vocab_size)(answer)  # (samples, vocab_size)
# we output a probability distribution over the vocabulary
answer = Activation('softmax')(answer)

# build the final model
model = Model([input_sequence, question], answer)
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
print("Done.")
```

## Train the Neural Network

It will take some time (probably up to half an hour) to train this network on a CPU. The network is saved afterward. If you've previously saved the neural network, you can skip this step and load it in the next step.
```
start_time = time.time()

# train
model.fit([inputs_train, queries_train], answers_train,
          batch_size=32,
          epochs=120,
          validation_data=([inputs_test, queries_test], answers_test))

# save
save_path = "./dnn/"

# save entire network to HDF5 (save everything, suggested)
model.save(os.path.join(save_path, "chatbot.h5"))

# save the vocab too; indexes must be the same
pickle.dump(vocab, open(os.path.join(save_path, "vocab.pkl"), "wb"))

elapsed_time = time.time() - start_time
print("Elapsed time: {}".format(hms_string(elapsed_time)))

# Load the model, if it exists; load vocab too
save_path = "./dnn/"
model = load_model(os.path.join(save_path, "chatbot.h5"))
vocab = pickle.load(open(os.path.join(save_path, "vocab.pkl"), "rb"))
```

## Evaluate Accuracy

We evaluate the accuracy using the same technique as previous classification networks.

```
pred = model.predict([inputs_test, queries_test])

# See what the predictions look like; they are just probabilities of each class.
print(pred)

# Use argmax to turn those into actual predictions. The class (word) with the highest
# probability is the answer.
pred = np.argmax(pred, axis=1)
print(pred)

score = metrics.accuracy_score(answers_test, pred)
print("Final accuracy: {}".format(score))
```

## Ad Hoc Query

You might want to create your own stories and questions.

```
print("Remember, I only know these words: {}".format(vocab))
print()

story = "Daniel went to the hallway. Mary went to the bathroom. Daniel went to the bedroom."
query = "Where is Sandra?"

adhoc_stories = (tokenize(story), tokenize(query), '?')
adhoc_train, adhoc_query, adhoc_answer = vectorize_stories([adhoc_stories])
pred = model.predict([adhoc_train, adhoc_query])
print(pred[0])
pred = np.argmax(pred, axis=1)
print("Answer: {}({})".format(vocab[pred[0] - 1], pred))
```

# Word2Vec

Word2vec is a group of related models that are used to produce word embeddings.
These models are shallow, two-layer neural networks trained to reconstruct the linguistic contexts of words. Word2vec takes as its input a large corpus of text and produces a vector space, typically of several hundred dimensions, with each unique word in the corpus being assigned a corresponding vector in the space. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located in close proximity to one another in the space.

Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). [Efficient estimation of word representations in vector space](https://arxiv.org/abs/1301.3781). arXiv preprint arXiv:1301.3781.

![Word2Vec](https://pbs.twimg.com/media/C7jJxIjWkAA8E_s.jpg)

[Trust Word2Vec](https://twitter.com/DanilBaibak/status/844647217885581312)

### Suggested Software for Word2Vec

* [GoogleNews Vectors](https://code.google.com/archive/p/word2vec/), [GitHub Mirror](https://github.com/mmihaltz/word2vec-GoogleNews-vectors)
* [Python Gensim](https://radimrehurek.com/gensim/)

```
try:
    path = get_file('GoogleNews-vectors-negative300.bin.gz',
                    origin='https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz')
except:
    print('Error downloading')
    raise
print(path)

import gensim

# Note that the path above refers to the downloaded file; you can also download the
# GoogleNews Vectors manually (see suggested software above).
model = gensim.models.KeyedVectors.load_word2vec_format(path, binary=True)
```

Word2vec makes each word a vector. We are using 300-number vectors, as can be seen for the word "hello".

```
w = model['hello']
print(len(w))
print(w)
```

The code below shows the distance between two words.
```
import numpy as np

w1 = model['king']
w2 = model['queen']
dist = np.linalg.norm(w1 - w2)
print(dist)
```

This shows the classic word2vec equation: **queen = (king - man) + woman**

```
model.most_similar(positive=['woman', 'king'], negative=['man'])
```

The following code shows which item does not belong with the others.

```
model.doesnt_match("house garage store dog".split())
```

The following code shows the similarity between two words.

```
model.similarity('iphone', 'android')
```

The following code shows which words are most similar to the given one.

```
model.most_similar('dog')
```

# More on LSTM

* [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)
* [LSTM Music](https://www.youtube.com/watch?v=0VTI1BBLydE)
* [Natural Language Processing from Scratch](https://arxiv.org/abs/1103.0398)
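As an aside on the word2vec section above: gensim's `similarity` reports cosine similarity rather than Euclidean distance. A small self-contained sketch of that computation, on made-up toy vectors (not real GoogleNews embeddings, which have 300 dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    # cosine of the angle between two vectors: (a . b) / (|a| |b|)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy 3-d "embeddings", made up purely for illustration
king = np.array([0.9, 0.1, 0.4])
queen = np.array([0.8, 0.3, 0.5])

print(cosine_similarity(king, queen))  # close to 1: the toy vectors point in similar directions
```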
# Syntactic Analysis with [deplacy](https://koichiyasuoka.github.io/deplacy/)

## Using [Trankit](https://github.com/nlp-uoregon/trankit)

```
!pip install deplacy trankit transformers
import trankit
nlp = trankit.Pipeline("traditional-chinese")
doc = nlp("希望是附麗於存在的,有存在,便有希望,有希望,便是光明。")
import deplacy
deplacy.render(doc)
deplacy.serve(doc, port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```

## Using [Stanza](https://stanfordnlp.github.io/stanza)

```
!pip install deplacy stanza
import stanza
stanza.download("zh-hant")
nlp = stanza.Pipeline("zh-hant")
doc = nlp("希望是附麗於存在的,有存在,便有希望,有希望,便是光明。")
import deplacy
deplacy.render(doc)
deplacy.serve(doc, port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```

## Using [UDPipe 2](http://ufal.mff.cuni.cz/udpipe/2)

```
!pip install deplacy
def nlp(t):
    import urllib.request, urllib.parse, json
    with urllib.request.urlopen("https://lindat.mff.cuni.cz/services/udpipe/api/process?model=zh_gsd&tokenizer&tagger&parser&data=" + urllib.parse.quote(t)) as r:
        return json.loads(r.read())["result"]
doc = nlp("希望是附麗於存在的,有存在,便有希望,有希望,便是光明。")
import deplacy
deplacy.render(doc)
deplacy.serve(doc, port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```

## Using [esupar](https://github.com/KoichiYasuoka/esupar)

```
!pip install deplacy esupar
import esupar
nlp = esupar.load("zh")
doc = nlp("希望是附麗於存在的,有存在,便有希望,有希望,便是光明。")
import deplacy
deplacy.render(doc)
deplacy.serve(doc, port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```

## Using [NLP-Cube](https://github.com/Adobe/NLP-Cube)

```
!pip install deplacy nlpcube
from cube.api import Cube
nlp = Cube()
nlp.load("zh")
doc = nlp("希望是附麗於存在的,有存在,便有希望,有希望,便是光明。")
import deplacy
deplacy.render(doc)
deplacy.serve(doc, port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```

## Using [spacy-udpipe](https://github.com/TakeLab/spacy-udpipe)

```
!pip install deplacy spacy-udpipe
import spacy_udpipe
spacy_udpipe.download("zh")
nlp = spacy_udpipe.load("zh")
doc = nlp("希望是附麗於存在的,有存在,便有希望,有希望,便是光明。")
import deplacy
deplacy.render(doc)
deplacy.serve(doc, port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```

## Using [UD-Chinese](https://pypi.org/project/udchinese)

```
!pip install deplacy udchinese
import udchinese
nlp = udchinese.load()
doc = nlp("希望是附麗於存在的,有存在,便有希望,有希望,便是光明。")
import deplacy
deplacy.render(doc)
deplacy.serve(doc, port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```

## Using [spaCy](https://spacy.io/)

```
!pip install deplacy
!sudo pip install -U spacy
!sudo python -m spacy download zh_core_web_trf
import pkg_resources, imp
imp.reload(pkg_resources)
import spacy
nlp = spacy.load("zh_core_web_trf")
doc = nlp("希望是附麗於存在的,有存在,便有希望,有希望,便是光明。")
import deplacy
deplacy.render(doc)
deplacy.serve(doc, port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```

## Using [DDParser](https://github.com/baidu/DDParser)

```
!pip install deplacy ddparser
from ddparser import DDParser
ddp = DDParser(use_pos=True)
nlp = lambda t: "".join(["\n".join(["\t".join([str(i + 1), w, w, p, p, "_", str(h), d, "_", "SpaceAfter=No"]) for i, (w, p, h, d) in enumerate(zip(s["word"], s["postag"], s["head"], s["deprel"]))]) + "\n\n" for s in ddp.parse(t)])
doc = nlp("希望是附麗於存在的,有存在,便有希望,有希望,便是光明。")
import deplacy
deplacy.render(doc)
deplacy.serve(doc, port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
```
``` import os os.environ["CUDA_VISIBLE_DEVICES"] = "0" import torchfly torchfly.set_random_seed(123) import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import DataLoader, Dataset from torch.nn.utils.rnn import pad_sequence import numpy as np import regex as re import random import itertools import tqdm import time from torchfly.utils.model_utils import get_pretrained_states try: from torch.utils.tensorboard import SummaryWriter except: from tensorboardX import SummaryWriter from apex import amp from allennlp.training.checkpointer import Checkpointer # from pytorch_transformers import AdamW, WarmupLinearSchedule, GPT2Tokenizer from transformers import AdamW from transformers import WarmupLinearSchedule from torchfly.text.tokenizers import UnifiedBPETokenizer from torchfly.modules.losses import SequenceFocalLoss, SequenceCrossEntropyLoss from torchfly.modules.transformers import GPT2SimpleLM, UnifiedGPT2SmallConfig from cam676_eval.cam676_eval import clean_sentence, entities, entity_dict, success_f1_metric, bleu_metric # set tokenizer tokenizer = UnifiedBPETokenizer() tokenizer.sep_token = "None" # add speicial tokens in the same order as Roberta # tokenizer.add_tokens(["<s>", "<pad>", "</s>", "<unk>", "<mask>"]) ''' class GPT2SmallConfig: vocab_size = 50257 + len(tokenizer.added_tokens_encoder) n_special = len(tokenizer.added_tokens_encoder) n_positions = 1024 n_ctx = 1024 n_embd = 768 n_layer = 12 n_head = 12 resid_pdrop = 0.1 embd_pdrop = 0.1 attn_pdrop = 0.1 layer_norm_epsilon = 1e-5 initializer_range = 0.02 gradient_checkpointing = False class GPT2MediumConfig: vocab_size = len(tokenizer.added_tokens_encoder) n_special = len(tokenizer.added_tokens_encoder) n_positions = 1024 n_ctx = 1024 n_embd = 1024 n_layer = 24 n_head = 16 resid_pdrop = 0.1 embd_pdrop = 0.1 attn_pdrop = 0.1 layer_norm_epsilon = 1e-5 initializer_range = 0.02 gradient_checkpointing = True ''' model_A = GPT2SimpleLM(UnifiedGPT2SmallConfig) model_B = 
GPT2SimpleLM(UnifiedGPT2SmallConfig) # model_A.load_state_dict(torch.load("../../../Checkpoint/best.th")) # model_B.load_state_dict(torch.load("../../../Checkpoint/best.th")) model_A.load_state_dict(get_pretrained_states("unified-gpt2-small")) model_B.load_state_dict(get_pretrained_states("unified-gpt2-small")) def align_keep_indices(batch_keep_indices): prev = batch_keep_indices[1] new_batch_keep_indices = [prev] for i in range(1, len(batch_keep_indices)): curr = batch_keep_indices[i] new = [] for idx in curr: new.append(prev.index(idx)) new_batch_keep_indices.append(new) prev = curr return new_batch_keep_indices class CamRestDataset(Dataset): def __init__(self, data, tokenizer): self.data = data self.tokenizer = tokenizer self.bos = tokenizer.encode("<s>") self.user_bos = tokenizer.encode("A:") self.system_bos = tokenizer.encode("B:") self.eos = [628, 198] def __len__(self): return len(self.data) def __getitem__(self, index): full_dialog = self.data[index] full_dialog_tokens = [] cur_pos = 0 for turn_dialog in full_dialog: # user user_tokens = self.user_bos + tokenizer.encode(turn_dialog['user']) + self.eos user_pos = torch.arange(cur_pos, cur_pos + len(user_tokens)) cur_pos = user_pos[-1] + 1 # belief span belief_tokens = self.bos + \ tokenizer.encode(";".join(turn_dialog['bspan_inform'][1:])) + \ self.eos belief_pos = torch.arange(cur_pos, cur_pos + len(belief_tokens)) cur_pos = belief_pos[-1] # system if np.random.rand() < 0.04: turn_dialog["degree"] = 0 database = tokenizer.encode(str(turn_dialog["degree"])) # database_pos = torch.LongTensor([1023]) system_tokens = self.system_bos + \ tokenizer.encode(turn_dialog['replaced_response']) + \ self.eos system_pos = torch.arange(cur_pos, cur_pos + len(system_tokens) + 1) cur_pos = system_pos[-1] + 1 # concat database and response system_tokens = database + system_tokens # system_pos = torch.cat([database_pos, system_pos], dim=0) user_tokens = torch.LongTensor(user_tokens) system_tokens = 
torch.LongTensor(system_tokens) belief_tokens = torch.LongTensor(belief_tokens) full_dialog_tokens.append((user_tokens, user_pos, system_tokens, system_pos, belief_tokens, belief_pos)) return full_dialog_tokens class Collate_Function: """This function handles batch collate. """ def __init__(self, tokenizer): self.tokenizer = tokenizer self.pad = self.tokenizer.encode("<pad>")[0] def __call__(self, unpacked_data): max_turn_len = max([len(item) for item in unpacked_data]) batch_dialogs = [] batch_keep_indices = [] for turn_num in range(max_turn_len): keep_indices = [] for batch_idx in range(len(unpacked_data)): if turn_num < len(unpacked_data[batch_idx]): keep_indices.append(batch_idx) user_tokens = pad_sequence([unpacked_data[idx][turn_num][0] for idx in keep_indices], batch_first=True, padding_value=self.pad) user_pos = pad_sequence([unpacked_data[idx][turn_num][1] for idx in keep_indices], batch_first=True, padding_value=0) system_tokens = pad_sequence([unpacked_data[idx][turn_num][2] for idx in keep_indices], batch_first=True, padding_value=self.pad) system_pos = pad_sequence([unpacked_data[idx][turn_num][3] for idx in keep_indices], batch_first=True, padding_value=0) belief_tokens = pad_sequence([unpacked_data[idx][turn_num][4] for idx in keep_indices], batch_first=True, padding_value=self.pad) belief_pos = pad_sequence([unpacked_data[idx][turn_num][5] for idx in keep_indices], batch_first=True, padding_value=0) user_mask = (user_tokens != self.pad).byte() system_mask = (system_tokens != self.pad).byte() belief_mask = (belief_tokens != self.pad).byte() batch_dialogs.append((user_tokens, user_pos, user_mask, system_tokens, system_pos, system_mask, belief_tokens, belief_pos, belief_mask)) batch_keep_indices.append(keep_indices) # align keep indices # batch_keep_indices = align_keep_indices(batch_keep_indices) return batch_dialogs, batch_keep_indices def calculate_loss(logits, target, mask): logits = logits[:, :-1].contiguous() target = target[:, 1:].contiguous() 
    mask = mask[:, 1:].contiguous().float()
    loss = criterion(logits, target, mask, label_smoothing=0.02, reduce=True)
    return loss

def filter_past(past, keep_indices):
    past = [item[:, keep_indices] for item in past]
    return past

def replace_punc(x):
    x = x.replace("<", "").replace(">", "")
    return x.replace(".", " .").replace(",", " .").replace("?", " ?").replace("?", " ?")

train_data = torch.load("../data/DataProcess/train_data.pkl")
val_data = torch.load("../data/DataProcess/val_data.pkl")
test_data = torch.load("../data/DataProcess/test_data.pkl")

indices = np.arange(len(train_data))
np.random.shuffle(indices)

# subsample 200 training dialogs (the comment in the original notebook said
# "use all data", but the slice below keeps only the first 200 shuffled indices)
indices = indices[: 200]
train_data = [train_data[idx] for idx in indices]

train_dataset = CamRestDataset(train_data, tokenizer)
val_dataset = CamRestDataset(val_data, tokenizer)
test_dataset = CamRestDataset(test_data, tokenizer)

train_batch_size = 1
collate_func = Collate_Function(tokenizer)
train_dataloader = DataLoader(dataset=train_dataset,
                              shuffle=True,
                              batch_size=train_batch_size,
                              collate_fn=collate_func)

eval_batch_size = 16
val_dataloader = DataLoader(dataset=val_dataset,
                            shuffle=False,
                            batch_size=eval_batch_size,
                            collate_fn=collate_func)
test_dataloader = DataLoader(dataset=test_dataset,
                             shuffle=False,
                             batch_size=eval_batch_size,
                             collate_fn=collate_func)

criterion = SequenceFocalLoss(gamma=0.0, beta=0.0)

device = torch.device("cuda")
model_A = model_A.to(device)
model_B = model_A  # the two speakers share one set of weights
```

## Training

```
if not os.path.isdir("Checkpoint"):
    os.makedirs("Checkpoint")

checkpointer = Checkpointer(serialization_dir="Checkpoint",
                            keep_serialized_model_every_num_seconds=3600*2,
                            num_serialized_models_to_keep=10)

# optimizer
num_epochs = 10
num_gradients_accumulation = 1
num_train_optimization_steps = len(train_dataset) * num_epochs // train_batch_size // num_gradients_accumulation

param_optimizer = list(model_A.named_parameters()) + list(model_B.named_parameters())
no_decay = ['ln', 'bias', 'LayerNorm']
optimizer_grouped_parameters = [ {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01}, {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0} ] optimizer = AdamW(optimizer_grouped_parameters, lr=5e-5, correct_bias=False) scheduler = WarmupLinearSchedule(optimizer, warmup_steps=500, t_total=num_train_optimization_steps) # [model_A, model_B], optimizer = amp.initialize([model_A, model_B], optimizer, opt_level="O0") user_weight = 1.0 def train_one_iter(batch_dialogs, batch_keep_indices, update_count, fp16=False): aligned_batch_keep_indices = align_keep_indices(batch_keep_indices) mask = torch.ByteTensor([]).to(device) prev_batch_size = batch_dialogs[0][0].shape[0] past = None all_logits = [] target = [] total_loss = 0 for turn_num in range(len(batch_keep_indices)): # data send to gpu dialogs = batch_dialogs[turn_num] dialogs = [item.to(device) for item in dialogs] user_tokens, user_pos, user_mask, \ system_tokens, system_pos, system_mask, \ belief_tokens, belief_pos, belief_mask = dialogs # filtering algorithm keep_indices = aligned_batch_keep_indices[turn_num] if len(keep_indices) != prev_batch_size: past = filter_past(past, keep_indices) mask = mask[keep_indices, :] # User Utterance mask = torch.cat([mask, user_mask], dim=-1) logits, past = model_A(user_tokens, position_ids=user_pos, mask=mask, past=past) all_logits.append(logits) target.append(user_tokens) # A_loss = calculate_loss(logits, user_tokens, user_mask) # System Response mask = torch.cat([mask, system_mask], dim=-1) logits, past = model_B(system_tokens, position_ids=system_pos, mask=mask, past=past) all_logits.append(logits) target.append(system_tokens) # B_loss = calculate_loss(logits, system_tokens, system_mask) # tail # total_loss = total_loss + user_weight * A_loss + B_loss prev_batch_size = user_tokens.shape[0] # breakpoint all_logits = torch.cat(all_logits, dim=1) all_logits = all_logits[:, 
:-1].contiguous() target = torch.cat(target, dim=1) target = target[:, 1:].contiguous() target_mask = torch.ones_like(target).float() total_loss = criterion(all_logits, target, target_mask, label_smoothing=0.02, reduce=True) # gradient accumulation total_loss /= len(batch_keep_indices) total_loss /= num_gradients_accumulation if fp16: with amp.scale_loss(total_loss, optimizer) as scaled_loss: scaled_loss.backward() else: total_loss.backward() record_loss = total_loss.item() * num_gradients_accumulation perplexity = np.exp(record_loss) return record_loss, perplexity def validate(dataloader, data): model_A.eval() model_B.eval() temperature = 0.5 all_response = [] for batch_dialogs, batch_keep_indices in tqdm.notebook.tqdm(dataloader): aligned_batch_keep_indices = align_keep_indices(batch_keep_indices) past = None generated_responses = [[] for i in range(batch_dialogs[0][0].shape[0])] mask = torch.ByteTensor([]).to(device) prev_batch_size = batch_dialogs[0][0].shape[0] with torch.no_grad(): for turn_num in range(len(batch_keep_indices)): # data send to gpu dialogs = batch_dialogs[turn_num] dialogs = [item.to(device) for item in dialogs] user_tokens, user_pos, user_mask, \ system_tokens, system_pos, system_mask, \ belief_tokens, belief_pos, belief_mask = dialogs # batch filtering algorithm keep_indices = aligned_batch_keep_indices[turn_num] if len(keep_indices) != prev_batch_size: past = filter_past(past, keep_indices) mask = mask[keep_indices, :] # define some initials cur_batch_size = user_tokens.shape[0] flags = np.ones(cur_batch_size) generated_tokens = [[] for i in range(cur_batch_size)] # feed in user mask = torch.cat([mask, user_mask], dim=-1) _, past = model_A(user_tokens, position_ids=user_pos, mask=mask, past=past) # response generation response = [] # first three tokens prev_input = system_tokens[:, :3] cur_pos = system_pos[:, :3] temp_past = past temp_mask = F.pad(mask, pad=(0,3), value=1) # feed into B logits, temp_past = model_B(prev_input, 
position_ids=cur_pos, mask=temp_mask, past=temp_past) # set current position cur_pos = cur_pos[:, -1].unsqueeze(1) + 1 for i in range(50): logits = logits[:, -1, :] / temperature prev_tokens = torch.argmax(logits, dim=-1) np_prev_tokens = prev_tokens.cpu().numpy() # nucleus sampling # logits = top_filtering(logits, top_k=100, top_p=0.7) # probs = F.softmax(logits, -1) # prev_input = torch.multinomial(probs, num_samples=1) # add to generated tokens list count = 0 for idx, value in enumerate(flags): if value != 0: generated_tokens[idx].append(np_prev_tokens[count]) count += 1 # filtering algorithm if np.any(np_prev_tokens == 628): # set flags 0 count = 0 for idx, value in enumerate(flags): if value == 1: if np_prev_tokens[count] == 628: flags[idx] = 0 count += 1 # compute which one to keep keep_indices = np.argwhere(np_prev_tokens != 628).squeeze(1) # filter prev_tokens = prev_tokens[keep_indices.tolist()] cur_pos = cur_pos[keep_indices.tolist(), :] temp_mask = temp_mask[keep_indices.tolist(), :] temp_past = [item[:, keep_indices.tolist()] for item in temp_past] np_prev_tokens = np_prev_tokens[keep_indices.tolist()] if np.all(flags == 0): break # prepare for the next token temp_mask = F.pad(temp_mask, pad=(0, 1), value=1) logits, temp_past = model_B(prev_tokens.view(-1, 1), position_ids=cur_pos, mask=temp_mask, past=temp_past) cur_pos = cur_pos + 1 # real system_tokens feed in mask = torch.cat([mask, system_mask], dim=-1) _, past = model_B(system_tokens, position_ids=system_pos, mask=mask, past=past) # inject into generated_responses_list decoded_responses = [tokenizer.decode(item).replace("\n", "") for item in generated_tokens] count = 0 for idx in batch_keep_indices[turn_num]: generated_responses[idx].append(decoded_responses[count]) count += 1 # add to the final responses for item in generated_responses: all_response.extend(item) # Stage 2 # prepare for metric eval dialog_data = [] count = 0 all_results = [] for i in range(len(data)): raw_dialog = data[i] for 
turn_num in range(len(raw_dialog)): replaced_response = clean_sentence( replace_punc(raw_dialog[turn_num]["replaced_response"].lower().replace("slot", "SLOT")), entity_dict) generated_response = clean_sentence(replace_punc(all_response[count].lower().replace("slot", "SLOT")), entity_dict) dialog_data.append({"dial_id": raw_dialog[turn_num]["dial_id"], "turn_num": raw_dialog[turn_num]["turn_num"], "response": replaced_response, "generated_response":generated_response }) count += 1 sccuess_f1 = success_f1_metric(dialog_data) bleu = bleu_metric(dialog_data) return {"bleu": bleu, "sccuess_f1": sccuess_f1 } update_count = 0 progress_bar = tqdm.tqdm_notebook start = time.time() for ep in range(num_epochs): "Training" pbar = progress_bar(train_dataloader) model_A.train() model_B.train() for batch_dialogs, batch_keep_indices in pbar: record_loss, perplexity = train_one_iter(batch_dialogs, batch_keep_indices, update_count, fp16=False) update_count += 1 if update_count % num_gradients_accumulation == num_gradients_accumulation - 1: # update for gradient accumulation scheduler.step() # torch.nn.utils.clip_grad_norm_(model_A.parameters(), 5.0) # torch.nn.utils.clip_grad_norm_(model_B.parameters(), 5.0) optimizer.step() optimizer.zero_grad() # speed measure end = time.time() speed = train_batch_size * num_gradients_accumulation / (end - start) start = end # show progress pbar.set_postfix(loss=record_loss, perplexity=perplexity, speed=speed) "Evaluation" print(f"Epoch {ep} Validation") eval_res = validate(val_dataloader, val_data) print(eval_res) print(f"Epoch {ep} Test") eval_res = validate(test_dataloader, test_data) print(eval_res) checkpointer.save_checkpoint(ep, [model_A.state_dict(), model_A.state_dict()], {"None": None}, True ) ```
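The training loop above divides `total_loss` by `num_gradients_accumulation` and only calls `optimizer.step()` / `optimizer.zero_grad()` once per accumulation window. Stripped of the model- and dialogue-specific code, that pattern reduces to the following sketch; the one-parameter linear model and the data are invented purely to make the mechanics visible:

```python
# Toy 1-D linear model y = w*x trained with gradient accumulation:
# gradients from several micro-batches are summed (pre-scaled), and one
# parameter update is applied per accumulation window.
num_gradients_accumulation = 4
lr = 0.01
w = 0.0
grad_accum = 0.0

# hypothetical micro-batches of (x, y) pairs whose true slope is 2
data = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)], [(0.5, 1.0)]]

for step, batch in enumerate(data):
    # mean-squared-error gradient for this micro-batch
    grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    # scale so the accumulated gradient matches one large-batch step
    grad_accum += grad / num_gradients_accumulation
    if (step + 1) % num_gradients_accumulation == 0:
        w -= lr * grad_accum   # optimizer.step()
        grad_accum = 0.0       # optimizer.zero_grad()

print(w)  # w has moved from 0.0 toward the true slope 2.0
```

The notebook triggers the update when `update_count % num_gradients_accumulation == num_gradients_accumulation - 1`, which is the same condition as `(step + 1) % num_gradients_accumulation == 0` here.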
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W3D3_ReinforcementLearningForGames/W3D3_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Tutorial 1: Learn to play games with RL

**Week 3, Day 3: Reinforcement Learning for Games**

**By Neuromatch Academy**

__Content creators:__ Mandana Samiei, Raymond Chua, Tim Lilicrap, Blake Richards

__Content reviewers:__ Arush Tagade, Lily Cheng, Melvin Selim Atay

__Content editors:__ Melvin Selim Atay, Spiros Chavlis

__Production editors:__ Namrata Bafna, Spiros Chavlis

---
# Tutorial Objectives

In this tutorial, you will learn how to implement a game loop and improve the performance of a random player. The specific objectives for this tutorial:

* Understand the format of two-player games
* Learn about value networks and policy networks
* Learn about Monte Carlo Tree Search (MCTS) and compare its performance to policy-based and value-based players

```
# @title Video 0: Introduction
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id=id
      src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id=f"BV1kq4y1H7MQ", width=854, height=480, fs=1)
  print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id=f"v4wafEsgopE", width=854, height=480, fs=1, rel=0)
  print("Video available at https://youtube.com/watch?v=" + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)

# @title Tutorial slides
# @markdown These are the slides for the videos in the tutorial
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/3zn9w/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```

---
# Setup

In this section, we have:

1. **Import cell**: imports all libraries you use in the tutorial
2. **Hidden Figure settings cell**: sets up the plotting style (copy exactly)
3. **Hidden Plotting functions cell**: contains all functions used to create plots throughout the tutorial (so students don't waste time looking at boilerplate matplotlib, but can look here if they wish to). Please use only matplotlib for plotting, for consistency.
4. **Hidden Helper functions cell**: This should contain functions that students have previously used or that are very simple. Any helper functions that are being used for the first time and are important should be placed directly above the relevant text or exercise (see Section 1.1 for an example)

```
# @title Clone a repo from github and import modules
# @markdown Run this cell!
!git clone https://github.com/raymondchua/nma_rl_games.git

import sys
sys.path.append('/content/nma_rl_games/alpha-zero')

import Arena
from utils import *
from Game import Game
from MCTS import MCTS
from NeuralNet import NeuralNet
from othello.OthelloPlayers import *
from othello.OthelloLogic import Board
from othello.OthelloGame import OthelloGame
from othello.pytorch.NNet import NNetWrapper as NNet

# @title Install dependencies
!pip install tqdm --quiet
!pip install coloredlogs --quiet

# Imports
from __future__ import print_function
import os
import math
import time
import torch
import random
import logging
import argparse
import coloredlogs
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from pickle import Pickler, Unpickler
from tqdm.notebook import tqdm
from torchvision import datasets, transforms
from collections import deque
from random import shuffle

log = logging.getLogger(__name__)
coloredlogs.install(level='INFO')  # Change this to DEBUG to see more info.

# @title Set random seed
# @markdown Executing `set_seed(seed=seed)` you are setting the seed

# For DL it's critical to set the random seed so that students can have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html

# Call `set_seed` function in the exercises to ensure reproducibility.
import random import torch def set_seed(seed=None, seed_torch=True): if seed is None: seed = np.random.choice(2 ** 32) random.seed(seed) np.random.seed(seed) if seed_torch: torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True print(f'Random seed {seed} has been set.') # In case that `DataLoader` is used def seed_worker(worker_id): worker_seed = torch.initial_seed() % 2**32 np.random.seed(worker_seed) random.seed(worker_seed) # @title Set device (GPU or CPU). Execute `set_device()` # especially if torch modules used. # inform the user if the notebook uses GPU or CPU. def set_device(): device = "cuda" if torch.cuda.is_available() else "cpu" if device != "cuda": print("WARNING: For this notebook to perform best, " "if possible, in the menu under `Runtime` -> " "`Change runtime type.` select `GPU` ") else: print("GPU is enabled in this notebook.") return device SEED = 2021 set_seed(seed=SEED) DEVICE = set_device() args = dotdict({ 'numIters': 1, # in training setting this was 1000 and num of episodes=100 'numEps': 1, # Number of complete self-play games to simulate during a new iteration. 'tempThreshold': 15, # To control exploration and exploitation 'updateThreshold': 0.6, # During arena playoff, new neural net will be accepted if threshold or more of games are won. 'maxlenOfQueue': 200, # Number of game examples to train the neural networks. 'numMCTSSims': 15, # Number of games moves for MCTS to simulate. 'arenaCompare': 10, # Number of games to play during arena play to determine if new net will be accepted. 
'cpuct': 1, 'maxDepth':5, # Maximum number of rollouts 'numMCsims': 5, # Number of monte carlo simulations 'mc_topk': 3, # top k actions for monte carlo rollout 'checkpoint': './temp/', 'load_model': False, 'load_folder_file': ('/dev/models/8x100x50','best.pth.tar'), 'numItersForTrainExamplesHistory': 20, # define neural network arguments 'lr': 0.001, # lr: learning rate 'dropout': 0.3, 'epochs': 10, 'batch_size': 64, 'cuda': torch.cuda.is_available(), 'num_channels': 512, }) ``` --- # Section 1: Create a game/agent loop for RL ``` # @title Video 1: A game loop for RL from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1iw411979L", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"s4BK_yrknf4", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` ***Goal***: How to setup a game environment with multiple players for reinforcement learning experiments. 
***Exercise***: * Build an agent that plays random moves * Connect with connect 4 game * Generate games including wins and losses ``` class OthelloGame(Game): square_content = { -1: "X", +0: "-", +1: "O" } @staticmethod def getSquarePiece(piece): return OthelloGame.square_content[piece] def __init__(self, n): self.n = n def getInitBoard(self): # return initial board (numpy board) b = Board(self.n) return np.array(b.pieces) def getBoardSize(self): # (a,b) tuple return (self.n, self.n) def getActionSize(self): # return number of actions, n is the board size and +1 is for no-op action return self.n*self.n + 1 def getNextState(self, board, player, action): # if player takes action on board, return next (board,player) # action must be a valid move if action == self.n*self.n: return (board, -player) b = Board(self.n) b.pieces = np.copy(board) move = (int(action/self.n), action%self.n) b.execute_move(move, player) return (b.pieces, -player) def getValidMoves(self, board, player): # return a fixed size binary vector valids = [0]*self.getActionSize() b = Board(self.n) b.pieces = np.copy(board) legalMoves = b.get_legal_moves(player) if len(legalMoves)==0: valids[-1]=1 return np.array(valids) for x, y in legalMoves: valids[self.n*x+y]=1 return np.array(valids) def getGameEnded(self, board, player): # return 0 if not ended, 1 if player 1 won, -1 if player 1 lost # player = 1 b = Board(self.n) b.pieces = np.copy(board) if b.has_legal_moves(player): return 0 if b.has_legal_moves(-player): return 0 if b.countDiff(player) > 0: return 1 return -1 def getCanonicalForm(self, board, player): # return state if player==1, else return -state if player==-1 return player*board def getSymmetries(self, board, pi): # mirror, rotational assert(len(pi) == self.n**2+1) # 1 for pass pi_board = np.reshape(pi[:-1], (self.n, self.n)) l = [] for i in range(1, 5): for j in [True, False]: newB = np.rot90(board, i) newPi = np.rot90(pi_board, i) if j: newB = np.fliplr(newB) newPi = np.fliplr(newPi) l += 
[(newB, list(newPi.ravel()) + [pi[-1]])] return l def stringRepresentation(self, board): return board.tostring() def stringRepresentationReadable(self, board): board_s = "".join(self.square_content[square] for row in board for square in row) return board_s def getScore(self, board, player): b = Board(self.n) b.pieces = np.copy(board) return b.countDiff(player) @staticmethod def display(board): n = board.shape[0] print(" ", end="") for y in range(n): print(y, end=" ") print("") print("-----------------------") for y in range(n): print(y, "|", end="") # print the row # for x in range(n): piece = board[y][x] # get the piece to print print(OthelloGame.square_content[piece], end=" ") print("|") print("-----------------------") ``` ## Section 1.1: Create a random player ### Coding Exercise 1.1: Implement a random player ``` class RandomPlayer(): def __init__(self, game): self.game = game def play(self, board): ################################################# ## TODO for students: ## ## 1. Please compute the valid moves using gerValidMoves(). ## ## 2. Compute the probability over actions.## ## 3. Pick a random action based on the probability computed above.## # Fill out function and remove ## raise NotImplementedError("Implement the random player") ################################################# valids = ... prob = ... a = ... return a # to_remove solution class RandomPlayer(): def __init__(self, game): self.game = game def play(self, board): valids = self.game.getValidMoves(board, 1) prob = valids/valids.sum() a = np.random.choice(self.game.getActionSize(), p=prob) return a ``` ## Section 1.2. Initiate the game board ``` # Display the board game = OthelloGame(6) board = game.getInitBoard() game.display(board) # observe the game board size print('Board size = {}' .format(game.getBoardSize())) # observe the action size print('Action size = {}'.format(game.getActionSize())) ``` ## Section 1.3. 
Create two random agents to play against each other ``` # define the random player player1 = RandomPlayer(game).play # player 1 is a random player player2 = RandomPlayer(game).play # player 2 is a random player # define number of games num_games = 20 # start the competition arena = Arena.Arena(player1, player2 , game, display=None) # to see the steps of the competition set "display=OthelloGame.display" result = arena.playGames(num_games, verbose=False) # return ( number of games won by player1, num of games won by player2, num of games won by nobody) ``` ``` Arena.playGames (1): 100%|██████████| 10/10 [00:00<00:00, 20.29it/s] Arena.playGames (2): 100%|██████████| 10/10 [00:00<00:00, 22.65it/s] ``` ## Section 1.4. Compute win rate for the random player (player 1) ``` print("\nNumber of games won by player1 = {},\nNumber of games won by player2 = {} out of {} games" .format(result[0], result[1], num_games)) win_rate_player1 = result[0]/num_games print('\nWin rate for player 1 over 20 games: {}%'.format(win_rate_player1*100)) ``` ``` Number of games won by player1 = 11, Number of games won by player2 = 9 out of 20 games Win rate for player 1 over 20 games: 55.00000000000001% ``` --- # Section 2: Train a value function from expert game data **Goal:** Learn how to train a value function from a dataset of games played by an expert. **Exercise:** * Load a dataset of expert generated games. * Train a network to minimize MSE for win/loss predictions given board states sampled throughout the game. This will be done on a very small number of games. We will provide a network trained on a larger dataset. 
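The exercise above asks for a network trained to minimize MSE between a predicted value and the win/loss outcome. As a warm-up for the full `ValueNetwork` defined later in this section, here is that objective in miniature: a linear value function regressed onto outcomes z ∈ {-1, +1} with plain SGD. The two "boards" and their labels are invented for illustration, not taken from the expert dataset:

```python
# Regress a scalar value v(board) = w . board onto the game outcome z
# by stochastic gradient descent on the squared error (v - z)^2.
dataset = [
    ([1, 1, 0, -1], +1.0),   # hypothetical position that led to a win
    ([-1, -1, 0, 1], -1.0),  # its mirror image, which led to a loss
]

w = [0.0, 0.0, 0.0, 0.0]
lr = 0.1

for epoch in range(200):
    for board, z in dataset:
        v = sum(wi * xi for wi, xi in zip(w, board))  # linear value estimate
        err = v - z                                   # d(MSE)/dv up to a factor 2
        w = [wi - lr * 2 * err * xi for wi, xi in zip(w, board)]

# after training, the prediction for the winning board approaches +1
pred = sum(wi * xi for wi, xi in zip(w, dataset[0][0]))
print(round(pred, 3))  # → 1.0
```

The real `ValueNetwork` replaces the linear map with `OthelloNNet` and plain SGD with Adam, but the loss being minimized is the same.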
```
# @title Video 2: Train a value function
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id=id
      src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id=f"BV1jf4y157xQ", width=854, height=480, fs=1)
  print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id=f"RVo6rVP9iC0", width=854, height=480, fs=1, rel=0)
  print("Video available at https://youtube.com/watch?v=" + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```

## Section 2.1. Load expert data

```
def loadTrainExamples(folder, filename):
  trainExamplesHistory = []
  modelFile = os.path.join(folder, filename)
  examplesFile = modelFile + ".examples"
  if not os.path.isfile(examplesFile):
    print(f'File "{examplesFile}" with trainExamples not found!')
    r = input("Continue? [y|n]")
    if r != "y":
      sys.exit()
  else:
    print("File with train examples found. Loading it...")
    with open(examplesFile, "rb") as f:
      trainExamplesHistory = Unpickler(f).load()
    print('Loading done!')
    # examples based on the model were already collected (loaded)
  return trainExamplesHistory

path = F"/content/nma_rl_games/alpha-zero/pretrained_models/data/"
loaded_games = loadTrainExamples(folder=path, filename='checkpoint_1.pth.tar')
```

## Section 2.2.
Define the Neural Network Architecture for Othello ### Coding Exercise 2.2: Implement the NN `OthelloNNet` for Othello ``` class OthelloNNet(nn.Module): def __init__(self, game, args): # game params self.board_x, self.board_y = game.getBoardSize() self.action_size = game.getActionSize() self.args = args super(OthelloNNet, self).__init__() self.conv1 = nn.Conv2d(1, args.num_channels, 3, stride=1, padding=1) self.conv2 = nn.Conv2d(args.num_channels, args.num_channels, 3, stride=1, padding=1) self.conv3 = nn.Conv2d(args.num_channels, args.num_channels, 3, stride=1) self.conv4 = nn.Conv2d(args.num_channels, args.num_channels, 3, stride=1) self.bn1 = nn.BatchNorm2d(args.num_channels) self.bn2 = nn.BatchNorm2d(args.num_channels) self.bn3 = nn.BatchNorm2d(args.num_channels) self.bn4 = nn.BatchNorm2d(args.num_channels) self.fc1 = nn.Linear(args.num_channels * (self.board_x - 4) * (self.board_y - 4), 1024) self.fc_bn1 = nn.BatchNorm1d(1024) self.fc2 = nn.Linear(1024, 512) self.fc_bn2 = nn.BatchNorm1d(512) self.fc3 = nn.Linear(512, self.action_size) self.fc4 = nn.Linear(512, 1) def forward(self, s): # s: batch_size x board_x x board_y s = s.view(-1, 1, self.board_x, self.board_y) # batch_size x 1 x board_x x board_y s = F.relu(self.bn1(self.conv1(s))) # batch_size x num_channels x board_x x board_y s = F.relu(self.bn2(self.conv2(s))) # batch_size x num_channels x board_x x board_y s = F.relu(self.bn3(self.conv3(s))) # batch_size x num_channels x (board_x-2) x (board_y-2) s = F.relu(self.bn4(self.conv4(s))) # batch_size x num_channels x (board_x-4) x (board_y-4) s = s.view(-1, self.args.num_channels * (self.board_x - 4) * (self.board_y - 4)) s = F.dropout(F.relu(self.fc_bn1(self.fc1(s))), p=self.args.dropout, training=self.training) # batch_size x 1024 s = F.dropout(F.relu(self.fc_bn2(self.fc2(s))), p=self.args.dropout, training=self.training) # batch_size x 512 pi = self.fc3(s) # batch_size x action_size v = self.fc4(s) # batch_size x 1 
    #################################################
    ## TODO for students: Please compute a probability distribution
    ## over 'pi' using log softmax (for numerical stability)
    # Fill out function and remove
    raise NotImplementedError("Calculate the probability distribution and the value")
    #################################################
    # return a probability distribution over actions at the current state
    # and the value of the current state
    return ..., ...


# to_remove solution
class OthelloNNet(nn.Module):

  def __init__(self, game, args):
    # game params
    self.board_x, self.board_y = game.getBoardSize()
    self.action_size = game.getActionSize()
    self.args = args

    super(OthelloNNet, self).__init__()
    self.conv1 = nn.Conv2d(1, args.num_channels, 3, stride=1, padding=1)
    self.conv2 = nn.Conv2d(args.num_channels, args.num_channels, 3, stride=1, padding=1)
    self.conv3 = nn.Conv2d(args.num_channels, args.num_channels, 3, stride=1)
    self.conv4 = nn.Conv2d(args.num_channels, args.num_channels, 3, stride=1)

    self.bn1 = nn.BatchNorm2d(args.num_channels)
    self.bn2 = nn.BatchNorm2d(args.num_channels)
    self.bn3 = nn.BatchNorm2d(args.num_channels)
    self.bn4 = nn.BatchNorm2d(args.num_channels)

    self.fc1 = nn.Linear(args.num_channels * (self.board_x - 4) * (self.board_y - 4), 1024)
    self.fc_bn1 = nn.BatchNorm1d(1024)
    self.fc2 = nn.Linear(1024, 512)
    self.fc_bn2 = nn.BatchNorm1d(512)
    self.fc3 = nn.Linear(512, self.action_size)
    self.fc4 = nn.Linear(512, 1)

  def forward(self, s):
    # s: batch_size x board_x x board_y
    s = s.view(-1, 1, self.board_x, self.board_y)  # batch_size x 1 x board_x x board_y
    s = F.relu(self.bn1(self.conv1(s)))  # batch_size x num_channels x board_x x board_y
    s = F.relu(self.bn2(self.conv2(s)))  # batch_size x num_channels x board_x x board_y
    s = F.relu(self.bn3(self.conv3(s)))  # batch_size x num_channels x (board_x-2) x (board_y-2)
    s = F.relu(self.bn4(self.conv4(s)))  # batch_size x num_channels x (board_x-4) x (board_y-4)
    s = s.view(-1, self.args.num_channels * (self.board_x - 4) * (self.board_y - 4))
    s = F.dropout(F.relu(self.fc_bn1(self.fc1(s))), p=self.args.dropout, training=self.training)  # batch_size x 1024
    s = F.dropout(F.relu(self.fc_bn2(self.fc2(s))), p=self.args.dropout, training=self.training)  # batch_size x 512
    pi = self.fc3(s)  # batch_size x action_size
    v = self.fc4(s)  # batch_size x 1
    # return a probability distribution over actions at the current state
    # and the value of the current state
    return F.log_softmax(pi, dim=1), torch.tanh(v)
```

## Section 2.3. Define the Value network

During training, the ground truth values are loaded from the **MCTS simulations** stored in 'checkpoint_x.pth.tar.examples'.

### Coding Exercise 2.3: Implement the `ValueNetwork`

```
class ValueNetwork(NeuralNet):

  def __init__(self, game):
    self.nnet = OthelloNNet(game, args)
    self.board_x, self.board_y = game.getBoardSize()
    self.action_size = game.getActionSize()
    if args.cuda:
      self.nnet.cuda()

  def train(self, games):
    """
    examples: list of examples, each example is of form (board, pi, v)
    """
    optimizer = optim.Adam(self.nnet.parameters())
    for examples in games:
      for epoch in range(args.epochs):
        print('EPOCH ::: ' + str(epoch + 1))
        self.nnet.train()
        v_losses = []  # to store the losses per epoch
        batch_count = int(len(examples) / args.batch_size)  # len(examples)=200, batch_size=64, batch_count=3
        t = tqdm(range(batch_count), desc='Training Value Network')
        for _ in t:
          sample_ids = np.random.randint(len(examples), size=args.batch_size)
          # read the ground truth information from MCTS simulation using the loaded examples
          boards, pis, vs = list(zip(*[examples[i] for i in sample_ids]))  # length of boards, pis, vs = 64
          boards = torch.FloatTensor(np.array(boards).astype(np.float64))
          target_vs = torch.FloatTensor(np.array(vs).astype(np.float64))

          # predict
          if args.cuda:  # to run on GPU if available
            boards, target_vs = boards.contiguous().cuda(), target_vs.contiguous().cuda()

          #################################################
          ## TODO for students:
          ## 1. Compute the value predicted by OthelloNNet()
          ## 2. First implement the loss_v() function below and then
          ##    use it to update the value loss.
          # Fill out function and remove
          raise NotImplementedError("Compute the output")
          #################################################
          # compute output
          _, out_v = ...
          l_v = ...  # total loss

          # record loss
          v_losses.append(l_v.item())
          t.set_postfix(Loss_v=l_v.item())

          # compute gradient and do SGD step
          optimizer.zero_grad()
          l_v.backward()
          optimizer.step()

  def predict(self, board):
    """
    board: np array with board
    """
    # timing
    start = time.time()

    # preparing input
    board = torch.FloatTensor(board.astype(np.float64))
    if args.cuda:
      board = board.contiguous().cuda()
    board = board.view(1, self.board_x, self.board_y)
    self.nnet.eval()
    with torch.no_grad():
      _, v = self.nnet(board)
    return v.data.cpu().numpy()[0]

  def loss_v(self, targets, outputs):
    #################################################
    ## TODO for students: Please compute the mean squared error and return it.
    # Fill out function and remove
    raise NotImplementedError("Calculate the loss")
    #################################################
    # mean squared error (MSE)
    return ...

  def save_checkpoint(self, folder='checkpoint', filename='checkpoint.pth.tar'):
    filepath = os.path.join(folder, filename)
    if not os.path.exists(folder):
      print("Checkpoint Directory does not exist! Making directory {}".format(folder))
      os.mkdir(folder)
    else:
      print("Checkpoint Directory exists!")
    torch.save({'state_dict': self.nnet.state_dict()}, filepath)
    print("Model saved!")

  def load_checkpoint(self, folder='checkpoint', filename='checkpoint.pth.tar'):
    # https://github.com/pytorch/examples/blob/master/imagenet/main.py#L98
    filepath = os.path.join(folder, filename)
    if not os.path.exists(filepath):
      raise FileNotFoundError("No model in path {}".format(filepath))
    map_location = None if args.cuda else 'cpu'
    checkpoint = torch.load(filepath, map_location=map_location)
    self.nnet.load_state_dict(checkpoint['state_dict'])


# to_remove solution
class ValueNetwork(NeuralNet):

  def __init__(self, game):
    self.nnet = OthelloNNet(game, args)
    self.board_x, self.board_y = game.getBoardSize()
    self.action_size = game.getActionSize()
    if args.cuda:
      self.nnet.cuda()

  def train(self, games):
    """
    examples: list of examples, each example is of form (board, pi, v)
    """
    optimizer = optim.Adam(self.nnet.parameters())
    for examples in games:
      for epoch in range(args.epochs):
        print('EPOCH ::: ' + str(epoch + 1))
        self.nnet.train()
        v_losses = []  # to store the losses per epoch
        batch_count = int(len(examples) / args.batch_size)  # len(examples)=200, batch_size=64, batch_count=3
        t = tqdm(range(batch_count), desc='Training Value Network')
        for _ in t:
          sample_ids = np.random.randint(len(examples), size=args.batch_size)
          # read the ground truth information from MCTS simulation using the loaded examples
          boards, pis, vs = list(zip(*[examples[i] for i in sample_ids]))  # length of boards, pis, vs = 64
          boards = torch.FloatTensor(np.array(boards).astype(np.float64))
          target_vs = torch.FloatTensor(np.array(vs).astype(np.float64))

          # predict
          if args.cuda:  # to run on GPU if available
            boards, target_vs = boards.contiguous().cuda(), target_vs.contiguous().cuda()

          # compute output
          _, out_v = self.nnet(boards)
          l_v = self.loss_v(target_vs, out_v)  # total loss

          # record loss
          v_losses.append(l_v.item())
          t.set_postfix(Loss_v=l_v.item())

          # compute gradient and do SGD step
          optimizer.zero_grad()
          l_v.backward()
          optimizer.step()

  def predict(self, board):
    """
    board: np array with board
    """
    # timing
    start = time.time()

    # preparing input
    board = torch.FloatTensor(board.astype(np.float64))
    if args.cuda:
      board = board.contiguous().cuda()
    board = board.view(1, self.board_x, self.board_y)
    self.nnet.eval()
    with torch.no_grad():
      _, v = self.nnet(board)
    return v.data.cpu().numpy()[0]

  def loss_v(self, targets, outputs):
    # mean squared error (MSE)
    return torch.sum((targets - outputs.view(-1)) ** 2) / targets.size()[0]

  def save_checkpoint(self, folder='checkpoint', filename='checkpoint.pth.tar'):
    filepath = os.path.join(folder, filename)
    if not os.path.exists(folder):
      print("Checkpoint Directory does not exist! Making directory {}".format(folder))
      os.mkdir(folder)
    else:
      print("Checkpoint Directory exists!")
    torch.save({'state_dict': self.nnet.state_dict()}, filepath)
    print("Model saved!")

  def load_checkpoint(self, folder='checkpoint', filename='checkpoint.pth.tar'):
    # https://github.com/pytorch/examples/blob/master/imagenet/main.py#L98
    filepath = os.path.join(folder, filename)
    if not os.path.exists(filepath):
      raise FileNotFoundError("No model in path {}".format(filepath))
    map_location = None if args.cuda else 'cpu'
    checkpoint = torch.load(filepath, map_location=map_location)
    self.nnet.load_state_dict(checkpoint['state_dict'])
```

## Section 2.4. Train the value network and observe the MSE loss progress

Only run this cell if you do not have access to the pretrained models in the rl_for_games repository.

```
game = OthelloGame(6)
vnet = ValueNetwork(game)
vnet.train(loaded_games)
```

---

# Section 3: Use a trained value network to play games

**Goal**: Learn how to use a value function in order to make a player that works better than a random player.
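The ranking step at the heart of this section can be sketched without the game engine. The values below are hypothetical network outputs for three candidate actions; because the network scores a position from the perspective of the player who moves next (the opponent, in a two-player zero-sum game), the sign is flipped before ranking:

```python
# Hypothetical value-network outputs, action -> v(next state).
# Each value is from the opponent's perspective after we play the move,
# so we negate it before ranking (as the exercise solution does).
predicted_values = {4: 0.8, 11: -0.3, 19: 0.1}

candidates = [(-v, a) for a, v in predicted_values.items()]
candidates.sort()               # ascending: smallest negated value first
best_action = candidates[0][1]  # action of the top-ranked candidate
print(best_action)  # → 4
```

Sorting tuples compares the negated value first, so `candidates[0]` is the move whose post-move value is best for us under this sign convention.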
**Exercise:**
* Sample random valid moves and use the value function to rank them
* Choose the best move as the action and play it
* Show that doing so beats the random player

**Hint:** You might need to change the sign of the value based on the player.

```
# @title Video 3: Play games using a value function
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id = id
      src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id="BV1u54y1J7E6", width=854, height=480, fs=1)
  print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="HreQzd7iusI", width=854, height=480, fs=1, rel=0)
  print("Video available at https://youtube.com/watch?v=" + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```

## Coding Exercise 3: Value-based player

```
model_save_name = 'ValueNetwork.pth.tar'
path = "/content/nma_rl_games/alpha-zero/pretrained_models/models/"
game = OthelloGame(6)
vnet = ValueNetwork(game)
vnet.load_checkpoint(folder=path, filename=model_save_name)

class ValueBasedPlayer():

  def __init__(self, game, vnet):
    self.game = game
    self.vnet = vnet

  def play(self, board):
    valids = self.game.getValidMoves(board, 1)
    candidates = []
    max_num_actions = 3
    va = np.where(valids)[0]
    va_list = va.tolist()
    shuffle(va_list)

    #################################################
    ## TODO for students: Return the next board state using getNextState(),
    ## then predict the value of the next state using the value network, and
    ## finally add the value and the action as a tuple to the candidate list.
    ## Note that you need to reverse the sign of the value: in a two-player
    ## zero-sum game the player to move flips every turn, so the value
    ## returned by the network is computed from the perspective of the
    ## opponent (the next player to move).
    # Fill out function and remove
    raise NotImplementedError("Implement the value-based player")
    #################################################
    for a in va_list:
      nextBoard, _ = ...
      value = ...
      candidates += ...
      if len(candidates) == max_num_actions:
        break

    candidates.sort()
    return candidates[0][1]

# playing games between a value-based player and a random player
num_games = 20
player1 = ValueBasedPlayer(game, vnet).play
player2 = RandomPlayer(game).play
arena = Arena.Arena(player1, player2, game, display=OthelloGame.display)
## Uncomment the code below to check your code!
# result = arena.playGames(num_games, verbose=False)
# print(result)


# to_remove solution
class ValueBasedPlayer():

  def __init__(self, game, vnet):
    self.game = game
    self.vnet = vnet

  def play(self, board):
    valids = self.game.getValidMoves(board, 1)
    candidates = []
    max_num_actions = 3
    va = np.where(valids)[0]
    va_list = va.tolist()
    shuffle(va_list)

    for a in va_list:
      # return next board state using the getNextState() function
      nextBoard, _ = self.game.getNextState(board, 1, a)
      # predict the value of the next state using the value network
      value = self.vnet.predict(nextBoard)
      # add the value and the action as a tuple to the candidate list;
      # note the sign of the value is flipped based on the player
      candidates += [(-value, a)]
      if len(candidates) == max_num_actions:
        break

    candidates.sort()
    return candidates[0][1]

# playing games between a value-based player and a random player
num_games = 20
player1 = ValueBasedPlayer(game, vnet).play
player2 = RandomPlayer(game).play
arena = Arena.Arena(player1, player2, game, display=OthelloGame.display)
## Uncomment the code below to check your code!
result = arena.playGames(num_games, verbose=False)
print(result)
```

```
Arena.playGames (1): 100%|██████████| 10/10 [00:01<00:00,  6.42it/s]
Arena.playGames (2): 100%|██████████| 10/10 [00:01<00:00,  7.49it/s]
(15, 5, 0)
```

**Result of pitting a value-based player against a random player**

```
print("\nNumber of games won by player1 = {}, \nNumber of games won by player2 = {}, out of {} games"
      .format(result[0], result[1], num_games))
win_rate_player1 = result[0] / num_games  # result[0] is the number of times that player 1 wins
print('\nWin rate for player 1 over {} games: {}%'.format(num_games, win_rate_player1 * 100))
```

```
Number of games won by player1 = 15,
Number of games won by player2 = 5, out of 20 games

Win rate for player 1 over 20 games: 75.0%
```

---

# Section 4: Train a policy network from expert game data

**Goal**: Learn how to train a policy network via supervised learning / behavioural cloning.

**Exercise**:
* Train a network to predict the next move in an expert dataset by maximizing the log likelihood of the next action.
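The objective above reduces to cross-entropy between the expert's move distribution and the network's log-probabilities: maximizing the log likelihood of the expert action is minimizing $-\sum_a \pi(a)\log p(a)$, averaged over the batch. A pure-Python sketch with made-up logits (the notebook does the same with `F.log_softmax` and tensors):

```python
import math

def log_softmax(logits):
  # numerically stable log-softmax: subtract the max before exponentiating
  m = max(logits)
  log_z = m + math.log(sum(math.exp(x - m) for x in logits))
  return [x - log_z for x in logits]

# hypothetical network logits over 4 actions, and a one-hot expert target
logits = [2.0, 0.5, -1.0, 0.1]
target = [1.0, 0.0, 0.0, 0.0]  # the expert played action 0

log_probs = log_softmax(logits)
nll = -sum(t * lp for t, lp in zip(target, log_probs))
print(nll)  # equals -log p(action 0), so it is > 0
```

With a one-hot target the loss is just the negative log-probability the network assigns to the expert's move; with a soft target (e.g. MCTS visit counts) the same expression becomes full cross-entropy.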
```
# @title Video 4: Train a policy network
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id = id
      src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id="BV1tg411M7Rg", width=854, height=480, fs=1)
  print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="DVSJE2d9tNI", width=854, height=480, fs=1, rel=0)
  print("Video available at https://youtube.com/watch?v=" + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```

## Coding Exercise 4: Implement `PolicyNetwork`

```
class PolicyNetwork(NeuralNet):

  def __init__(self, game):
    self.nnet = OthelloNNet(game, args)
    self.board_x, self.board_y = game.getBoardSize()
    self.action_size = game.getActionSize()
    if args.cuda:
      self.nnet.cuda()

  def train(self, games):
    """
    examples: list of examples, each example is of form (board, pi, v)
    """
    optimizer = optim.Adam(self.nnet.parameters())
    for examples in games:
      for epoch in range(args.epochs):
        print('EPOCH ::: ' + str(epoch + 1))
        self.nnet.train()
        pi_losses = []
        batch_count = int(len(examples) / args.batch_size)
        t = tqdm(range(batch_count), desc='Training Policy Network')
        for _ in t:
          sample_ids = np.random.randint(len(examples), size=args.batch_size)
          boards, pis, _ = list(zip(*[examples[i] for i in sample_ids]))
          boards = torch.FloatTensor(np.array(boards).astype(np.float64))
          target_pis = torch.FloatTensor(np.array(pis))

          # predict
          if args.cuda:
            boards, target_pis = boards.contiguous().cuda(), target_pis.contiguous().cuda()

          #################################################
          ## TODO for students:
          ## 1. Compute the policy (pi) predicted by OthelloNNet()
          ## 2. Implement the loss_pi() function below and then use it
          ##    to update the policy loss.
          # Fill out function and remove
          raise NotImplementedError("Compute the output")
          #################################################
          # compute output
          out_pi, _ = ...
          l_pi = ...

          # record loss
          pi_losses.append(l_pi.item())
          t.set_postfix(Loss_pi=l_pi.item())

          # compute gradient and do SGD step
          optimizer.zero_grad()
          l_pi.backward()
          optimizer.step()

  def predict(self, board):
    """
    board: np array with board
    """
    # timing
    start = time.time()

    # preparing input
    board = torch.FloatTensor(board.astype(np.float64))
    if args.cuda:
      board = board.contiguous().cuda()
    board = board.view(1, self.board_x, self.board_y)
    self.nnet.eval()
    with torch.no_grad():
      pi, _ = self.nnet(board)
    return torch.exp(pi).data.cpu().numpy()[0]

  def loss_pi(self, targets, outputs):
    #################################################
    ## TODO for students: To implement the loss function, please compute
    ## and return the negative log likelihood of the targets.
    # Fill out function and remove
    raise NotImplementedError("Compute the loss")
    #################################################
    return ...

  def save_checkpoint(self, folder='checkpoint', filename='checkpoint.pth.tar'):
    filepath = os.path.join(folder, filename)
    if not os.path.exists(folder):
      print("Checkpoint Directory does not exist! Making directory {}".format(folder))
      os.mkdir(folder)
    else:
      print("Checkpoint Directory exists!")
    torch.save({'state_dict': self.nnet.state_dict()}, filepath)
    print("Model saved!")

  def load_checkpoint(self, folder='checkpoint', filename='checkpoint.pth.tar'):
    # https://github.com/pytorch/examples/blob/master/imagenet/main.py#L98
    filepath = os.path.join(folder, filename)
    if not os.path.exists(filepath):
      raise FileNotFoundError("No model in path {}".format(filepath))
    map_location = None if args.cuda else 'cpu'
    checkpoint = torch.load(filepath, map_location=map_location)
    self.nnet.load_state_dict(checkpoint['state_dict'])

game = OthelloGame(6)
## we use the same actor-critic network to output a policy
# pnet = PolicyNetwork(game)
# pnet.train(loaded_games)


# to_remove solution
class PolicyNetwork(NeuralNet):

  def __init__(self, game):
    self.nnet = OthelloNNet(game, args)
    self.board_x, self.board_y = game.getBoardSize()
    self.action_size = game.getActionSize()
    if args.cuda:
      self.nnet.cuda()

  def train(self, games):
    """
    examples: list of examples, each example is of form (board, pi, v)
    """
    optimizer = optim.Adam(self.nnet.parameters())
    for examples in games:
      for epoch in range(args.epochs):
        print('EPOCH ::: ' + str(epoch + 1))
        self.nnet.train()
        pi_losses = []
        batch_count = int(len(examples) / args.batch_size)
        t = tqdm(range(batch_count), desc='Training Policy Network')
        for _ in t:
          sample_ids = np.random.randint(len(examples), size=args.batch_size)
          boards, pis, _ = list(zip(*[examples[i] for i in sample_ids]))
          boards = torch.FloatTensor(np.array(boards).astype(np.float64))
          target_pis = torch.FloatTensor(np.array(pis))

          # predict
          if args.cuda:
            boards, target_pis = boards.contiguous().cuda(), target_pis.contiguous().cuda()

          # compute output
          out_pi, _ = self.nnet(boards)
          l_pi = self.loss_pi(target_pis, out_pi)

          # record loss
          pi_losses.append(l_pi.item())
          t.set_postfix(Loss_pi=l_pi.item())

          # compute gradient and do SGD step
          optimizer.zero_grad()
          l_pi.backward()
          optimizer.step()

  def predict(self, board):
    """
    board: np array with board
    """
    # timing
    start = time.time()

    # preparing input
    board = torch.FloatTensor(board.astype(np.float64))
    if args.cuda:
      board = board.contiguous().cuda()
    board = board.view(1, self.board_x, self.board_y)
    self.nnet.eval()
    with torch.no_grad():
      pi, _ = self.nnet(board)
    return torch.exp(pi).data.cpu().numpy()[0]

  def loss_pi(self, targets, outputs):
    # loss function: compute and return the negative log likelihood of the targets
    return -torch.sum(targets * outputs) / targets.size()[0]

  def save_checkpoint(self, folder='checkpoint', filename='checkpoint.pth.tar'):
    filepath = os.path.join(folder, filename)
    if not os.path.exists(folder):
      print("Checkpoint Directory does not exist! Making directory {}".format(folder))
      os.mkdir(folder)
    else:
      print("Checkpoint Directory exists!")
    torch.save({'state_dict': self.nnet.state_dict()}, filepath)
    print("Model saved!")

  def load_checkpoint(self, folder='checkpoint', filename='checkpoint.pth.tar'):
    # https://github.com/pytorch/examples/blob/master/imagenet/main.py#L98
    filepath = os.path.join(folder, filename)
    if not os.path.exists(filepath):
      raise FileNotFoundError("No model in path {}".format(filepath))
    map_location = None if args.cuda else 'cpu'
    checkpoint = torch.load(filepath, map_location=map_location)
    self.nnet.load_state_dict(checkpoint['state_dict'])

game = OthelloGame(6)
## we use the same actor-critic network to output a policy
pnet = PolicyNetwork(game)
pnet.train(loaded_games)
```

### Train the policy network

Only run this cell if you do not have access to the pretrained models in the rl_for_games repository.

```
game = OthelloGame(6)
pnet = PolicyNetwork(game)
pnet.train(loaded_games)
```

---

# Section 5: Use a trained policy network to play games

**Goal**: Learn how to use a policy network to play games.

**Exercise:**
* Use the policy network to give probabilities for the next move.
* Build a player that takes the move given the maximum probability by the network.
* Compare this to another player that samples moves according to the probability distribution output by the network.
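Both players rely on the same preprocessing: mask the network's probabilities with the legal-move indicator, renormalize, then either take the arg-max (greedy) or sample. A sketch with hypothetical probabilities in plain Python (the notebook uses NumPy arrays):

```python
import random

action_probs = [0.50, 0.20, 0.25, 0.05]  # hypothetical network output
valids = [0, 1, 1, 0]                    # only actions 1 and 2 are legal

vap = [p * v for p, v in zip(action_probs, valids)]  # mask invalid moves
total = sum(vap)
vap = [p / total for p in vap]           # renormalize over legal moves

greedy_action = vap.index(max(vap))      # max-probability player
sampled_action = random.choices(range(len(vap)), weights=vap)[0]
print(vap, greedy_action)  # [0.0, 0.444..., 0.555..., 0.0] 2
```

Note the renormalized distribution concentrates all mass on legal moves, so the sampling player can never pick an illegal action.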
```
# @title Video 5: Play games using a policy network
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id = id
      src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id="BV1DU4y1n7gD", width=854, height=480, fs=1)
  print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="hhhBmSXIZGY", width=854, height=480, fs=1, rel=0)
  print("Video available at https://youtube.com/watch?v=" + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```

## Coding Exercise 5: Implement the `PolicyBasedPlayer`

```
model_save_name = 'PolicyNetwork.pth.tar'
path = "/content/nma_rl_games/alpha-zero/pretrained_models/models/"
game = OthelloGame(6)
pnet = PolicyNetwork(game)
pnet.load_checkpoint(folder=path, filename=model_save_name)

class PolicyBasedPlayer():

  def __init__(self, game, pnet, greedy=True):
    self.game = game
    self.pnet = pnet
    self.greedy = greedy

  def play(self, board):
    valids = self.game.getValidMoves(board, 1)
    #################################################
    ## TODO for students:
    ## 1. Compute the action probabilities using the policy network pnet()
    ## 2. Mask invalid moves using the valids variable and the action
    ##    probabilities computed above.
    ## 3. Compute the sum over valid actions and store it in sum_vap.
    # Fill out function and remove
    raise NotImplementedError("Define the play")
    #################################################
    action_probs = ...
    vap = ...  # masking invalid moves
    sum_vap = ...
    if sum_vap > 0:
      vap /= sum_vap  # renormalize
    else:
      # if all valid moves were masked we make all valid moves equally probable
      print("All valid moves were masked, doing a workaround.")
      vap = vap + valids
      vap /= np.sum(vap)

    if self.greedy:
      # greedy policy player
      a = np.where(vap == np.max(vap))[0][0]
    else:
      # sample-based policy player
      a = np.random.choice(self.game.getActionSize(), p=vap)
    return a

# playing games
num_games = 20
player1 = PolicyBasedPlayer(game, pnet, greedy=True).play
player2 = RandomPlayer(game).play
arena = Arena.Arena(player1, player2, game, display=OthelloGame.display)
## Uncomment below to test!
# result = arena.playGames(num_games, verbose=False)
# print(result)


# to_remove solution
class PolicyBasedPlayer():

  def __init__(self, game, pnet, greedy=True):
    self.game = game
    self.pnet = pnet
    self.greedy = greedy

  def play(self, board):
    valids = self.game.getValidMoves(board, 1)
    action_probs = self.pnet.predict(board)
    vap = action_probs * valids  # masking invalid moves
    sum_vap = np.sum(vap)

    if sum_vap > 0:
      vap /= sum_vap  # renormalize
    else:
      # if all valid moves were masked we make all valid moves equally probable
      print("All valid moves were masked, doing a workaround.")
      vap = vap + valids
      vap /= np.sum(vap)

    if self.greedy:
      # greedy policy player
      a = np.where(vap == np.max(vap))[0][0]
    else:
      # sample-based policy player
      a = np.random.choice(self.game.getActionSize(), p=vap)
    return a

# playing games
num_games = 20
player1 = PolicyBasedPlayer(game, pnet, greedy=True).play
player2 = RandomPlayer(game).play
arena = Arena.Arena(player1, player2, game, display=OthelloGame.display)
## Uncomment below to test!
result = arena.playGames(num_games, verbose=False)
print(result)
```

```
Arena.playGames (1): 100%|██████████| 10/10 [00:01<00:00,  9.61it/s]
Arena.playGames (2): 100%|██████████| 10/10 [00:00<00:00, 11.36it/s]
(17, 3, 0)
```

```
win_rate_player1 = result[0] / num_games
print('\n Win rate for player 1 over {} games: {}%'.format(num_games, win_rate_player1 * 100))
```

```
Win rate for player 1 over 20 games: 85.0%
```

## Section 5.1. Comparing a player that samples from the action probabilities versus the policy player that returns the maximum probability

There is often randomness in the results, because we run the players for only a small number of games (20, due to compute and time costs), so you might not get the expected result every time. To better measure the strength of the players, run more games!

```
num_games = 20
game = OthelloGame(6)
player1 = PolicyBasedPlayer(game, pnet, greedy=False).play
player2 = RandomPlayer(game).play
arena = Arena.Arena(player1, player2, game, display=OthelloGame.display)
result = arena.playGames(num_games, verbose=False)
print(result)
```

```
Arena.playGames (1): 100%|██████████| 10/10 [00:01<00:00,  9.68it/s]
Arena.playGames (2): 100%|██████████| 10/10 [00:00<00:00, 11.29it/s]
(13, 7, 0)
```

```
win_rate_player1 = result[0] / num_games
print('\n Win rate for player 1 over {} games: {}%'.format(num_games, win_rate_player1 * 100))
```

```
Win rate for player 1 over 20 games: 65.0%
```

## Section 5.2. Compare greedy policy-based player versus value-based player

```
num_games = 20
game = OthelloGame(6)
player1 = PolicyBasedPlayer(game, pnet).play
player2 = ValueBasedPlayer(game, vnet).play
arena = Arena.Arena(player1, player2, game, display=OthelloGame.display)
result = arena.playGames(num_games, verbose=False)
print(result)
```

```
Arena.playGames (1): 100%|██████████| 10/10 [00:01<00:00,  5.57it/s]
Arena.playGames (2): 100%|██████████| 10/10 [00:01<00:00,  5.94it/s]
(14, 6, 0)
```

```
win_rate_player1 = result[0] / num_games
print('\n Win rate for player 1 over {} games: {}%'.format(num_games, win_rate_player1 * 100))
```

```
Win rate for player 1 over 20 games: 70.0%
```

## Section 5.3. Compare greedy policy-based player versus sample-based policy player

```
num_games = 20
game = OthelloGame(6)
player1 = PolicyBasedPlayer(game, pnet).play  # greedy player
player2 = PolicyBasedPlayer(game, pnet, greedy=False).play  # sample-based player
arena = Arena.Arena(player1, player2, game, display=OthelloGame.display)
result = arena.playGames(num_games, verbose=False)
print(result)
```

```
Arena.playGames (1): 100%|██████████| 10/10 [00:01<00:00,  7.76it/s]
Arena.playGames (2): 100%|██████████| 10/10 [00:01<00:00,  9.08it/s]
(14, 6, 0)
```

```
win_rate_player1 = result[0] / num_games
print('\n Win rate for player 1 over {} games: {}%'.format(num_games, win_rate_player1 * 100))
```

```
Win rate for player 1 over 20 games: 70.0%
```

---

# Section 6: Plan using Monte Carlo rollouts

**Goal**: Teach the students the core idea behind using simulated rollouts to understand the future and value actions.

**Exercise**:
* Build a loop to run Monte Carlo simulations using the policy network.
* Use this to obtain better estimates of the value of moves.
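The rollout loop this exercise builds can be stated abstractly: repeatedly sample an action from the policy, step the environment, and return the value estimate at the horizon (or the negated game outcome if a terminal state is hit first). A toy sketch on a hypothetical one-dimensional "game" (the names `policy`, `step`, and `terminal_value` are placeholders, not the notebook's API):

```python
import random

def rollout(state, policy, step, terminal_value, max_depth=5):
  """One Monte Carlo rollout: follow `policy` for up to `max_depth` steps."""
  v = 0.0
  for _ in range(max_depth):
    tv = terminal_value(state)
    if tv is not None:
      return -tv  # terminal: sign flips for the player to move
    probs = policy(state)
    a = random.choices(range(len(probs)), weights=probs)[0]
    state, v = step(state, a)  # environment transition + value estimate
  return v  # horizon reached: return the last value estimate

# toy game: state is an int, terminal (value +1) once it reaches 3
value = rollout(
    state=0,
    policy=lambda s: [0.5, 0.5],             # two actions, uniform policy
    step=lambda s, a: (s + a + 1, 0.1 * s),  # deterministic toy dynamics
    terminal_value=lambda s: 1.0 if s >= 3 else None,
)
print(value)  # → -1.0 (the state always reaches 3 within the depth limit)
```

The state grows by at least 1 per step, so the terminal condition always fires before the depth limit here; in Othello the rollout instead usually stops at `maxDepth` and returns the network's value estimate.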
``` # @title Video 6: Play using Monte-Carlo rollouts from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1MM4y1T77C", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"EpoIjzytpxQ", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` ## Coding Exercise 6: `MonteCarlo` ``` class MonteCarlo(): def __init__(self, game, nnet, args): self.game = game self.nnet = nnet self.args = args self.Ps = {} # stores initial policy (returned by neural net) self.Es = {} # stores game.getGameEnded ended for board s # call this rollout def simulate(self, canonicalBoard): """ This function performs one monte carlo rollout """ s = self.game.stringRepresentation(canonicalBoard) init_start_state = s temp_v = 0 isfirstAction = None for i in range(self.args.maxDepth): # maxDepth if s not in self.Es: self.Es[s] = self.game.getGameEnded(canonicalBoard, 1) if self.Es[s] != 0: # terminal state temp_v= -self.Es[s] break self.Ps[s], v = self.nnet.predict(canonicalBoard) valids = self.game.getValidMoves(canonicalBoard, 1) self.Ps[s] = self.Ps[s] * valids # masking invalid moves sum_Ps_s = np.sum(self.Ps[s]) if sum_Ps_s > 0: self.Ps[s] /= sum_Ps_s # renormalize else: # if all valid moves were masked make all valid moves equally probable # NB! 
All valid moves may be masked if either your NNet architecture is insufficient or you've get overfitting or something else. # If you have got dozens or hundreds of these messages you should pay attention to your NNet and/or training process. log.error("All valid moves were masked, doing a workaround.") self.Ps[s] = self.Ps[s] + valids self.Ps[s] /= np.sum(self.Ps[s]) ################################################# ## TODO for students: Take a random action. ## 1. Take the random action. ## 2. Find the next state and the next player from the environment. ## 3. Get the canonical form of the next state. # Fill out function and remove raise NotImplementedError("Take the action, find the next state") ################################################# a = ... next_s, next_player = self.game.getNextState(..., ..., ...) next_s = self.game.getCanonicalForm(..., ...) s = self.game.stringRepresentation(next_s) temp_v = v return temp_v # to_remove solution class MonteCarlo(): def __init__(self, game, nnet, args): self.game = game self.nnet = nnet self.args = args self.Ps = {} # stores initial policy (returned by neural net) self.Es = {} # stores game.getGameEnded ended for board s # call this rollout def simulate(self, canonicalBoard): """ This function performs one monte carlo rollout """ s = self.game.stringRepresentation(canonicalBoard) init_start_state = s temp_v = 0 isfirstAction = None for i in range(self.args.maxDepth): # maxDepth if s not in self.Es: self.Es[s] = self.game.getGameEnded(canonicalBoard, 1) if self.Es[s] != 0: # terminal state temp_v= -self.Es[s] break self.Ps[s], v = self.nnet.predict(canonicalBoard) valids = self.game.getValidMoves(canonicalBoard, 1) self.Ps[s] = self.Ps[s] * valids # masking invalid moves sum_Ps_s = np.sum(self.Ps[s]) if sum_Ps_s > 0: self.Ps[s] /= sum_Ps_s # renormalize else: # if all valid moves were masked make all valid moves equally probable # NB! 
All valid moves may be masked if either your NNet architecture is insufficient or you've get overfitting or something else. # If you have got dozens or hundreds of these messages you should pay attention to your NNet and/or training process. log.error("All valid moves were masked, doing a workaround.") self.Ps[s] = self.Ps[s] + valids self.Ps[s] /= np.sum(self.Ps[s]) # Take a random action a = np.random.choice(self.game.getActionSize(), p=self.Ps[s]) # Find the next state and the next player next_s, next_player = self.game.getNextState(canonicalBoard, 1, a) next_s = self.game.getCanonicalForm(next_s, next_player) s = self.game.stringRepresentation(next_s) temp_v = v return temp_v ``` --- # Section 7: Use Monte Carlo simulations to play games **Goal:** Teach students how to use simple Monte Carlo planning to play games. ``` # @title Video 7: Play with planning from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Kg411M78Y", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"-KV8DvNjn5Q", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` ## Coding Exercise 7: Monte-Carlo simulations * Incorporate Monte Carlo simulations into an agent. * Run the resulting player versus the random, value-based, and policy-based players. 
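Before wiring Monte Carlo into the Othello classes below, the core estimate — score a position by averaging the returns of many random rollouts — can be sketched on a toy game. Everything here (`rollout`, `mc_value`, the coin-flip game) is illustrative and not part of the tutorial's API:

```python
import random

def rollout(win_prob, rng):
    # Play one random game to the end and return its outcome (+1 win, -1 loss).
    return 1 if rng.random() < win_prob else -1

def mc_value(win_prob, num_rollouts, seed=0):
    # Monte Carlo value estimate: the mean return over many random rollouts.
    rng = random.Random(seed)
    return sum(rollout(win_prob, rng) for _ in range(num_rollouts)) / num_rollouts

# With enough rollouts the estimate approaches the true value 2*p - 1 (here 0.5).
print(mc_value(0.75, 10_000))
```

The same averaging appears in `MonteCarloBasedPlayer.play`, where `self.mc.simulate` plays the role of `rollout` and `np.mean(values)` plays the role of the sum above.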
``` # Load MC model from the repository mc_model_save_name = 'MC.pth.tar' path = F"/content/nma_rl_games/alpha-zero/pretrained_models/models/" class MonteCarloBasedPlayer(): def __init__(self, game, nnet, args): self.game = game self.nnet = nnet self.args = args ################################################# ## TODO for students: Instantiate the Monte Carlo class. # Fill out function and remove raise NotImplementedError("Use Monte Carlo!") ################################################# self.mc = ... self.K = self.args.mc_topk def play(self, canonicalBoard): self.qsa = [] s = self.game.stringRepresentation(canonicalBoard) Ps, v = self.nnet.predict(canonicalBoard) valids = self.game.getValidMoves(canonicalBoard, 1) Ps = Ps * valids # masking invalid moves sum_Ps_s = np.sum(Ps) if sum_Ps_s > 0: Ps /= sum_Ps_s # renormalize else: # if all valid moves were masked make all valid moves equally probable # NB! All valid moves may be masked if either your NNet architecture is insufficient or you've get overfitting or something else. # If you have got dozens or hundreds of these messages you should pay attention to your NNet and/or training process. log = logging.getLogger(__name__) log.error("All valid moves were masked, doing a workaround.") Ps = Ps + valids Ps /= np.sum(Ps) num_valid_actions = np.shape(np.nonzero(Ps))[1] if num_valid_actions < self.K: top_k_actions = np.argpartition(Ps,-num_valid_actions)[-num_valid_actions:] else: top_k_actions = np.argpartition(Ps,-self.K)[-self.K:] # to get actions that belongs to top k prob ################################################# ## TODO for students: ## 1. For each action in the top-k actions ## 2. Get the next state using getNextState() function. You can find the implementation of this function in Section 1 in OthelloGame() class. ## 3. Get the canonical form of the getNextState(). 
# Fill out function and remove raise NotImplementedError("Loop for the top actions") ################################################# for action in ...: next_s, next_player = self.game.getNextState(..., ..., ...) next_s = self.game.getCanonicalForm(..., ...) values = [] # do some rollouts for rollout in range(self.args.numMCsims): value = self.mc.simulate(canonicalBoard) values.append(value) # average out values avg_value = np.mean(values) self.qsa.append((avg_value, action)) self.qsa.sort(key=lambda a: a[0]) self.qsa.reverse() best_action = self.qsa[0][1] return best_action def getActionProb(self, canonicalBoard, temp=1): if self.game.getGameEnded(canonicalBoard, 1) != 0: return np.zeros((self.game.getActionSize())) else: action_probs = np.zeros((self.game.getActionSize())) best_action = self.play(canonicalBoard) action_probs[best_action] = 1 return action_probs game = OthelloGame(6) rp = RandomPlayer(game).play # all players num_games = 20 # Feel free to change this number n1 = NNet(game) # nNet players n1.load_checkpoint(folder=path, filename=mc_model_save_name) args1 = dotdict({'numMCsims': 10, 'maxRollouts':5, 'maxDepth':5, 'mc_topk': 3}) ## Uncomment below to check Monte Carlo agent! 
# mc1 = MonteCarloBasedPlayer(game, n1, args1) # n1p = lambda x: np.argmax(mc1.getActionProb(x)) # arena = Arena.Arena(n1p, rp, game, display=OthelloGame.display) # MC_result = arena.playGames(num_games, verbose=False) # print("\n Number of games won by player1 = {}, num of games won by player2 = {}, out of {} games" .format(MC_result[0], MC_result[1], num_games)) # win_rate_player1 = MC_result[0]/num_games # print('\n Win rate for player 1 over {} games: {}%'.format(num_games, win_rate_player1*100)) # to_remove solution class MonteCarloBasedPlayer(): def __init__(self, game, nnet, args): self.game = game self.nnet = nnet self.args = args self.mc = MonteCarlo(game, nnet, args) self.K = self.args.mc_topk def play(self, canonicalBoard): self.qsa = [] s = self.game.stringRepresentation(canonicalBoard) Ps, v = self.nnet.predict(canonicalBoard) valids = self.game.getValidMoves(canonicalBoard, 1) Ps = Ps * valids # masking invalid moves sum_Ps_s = np.sum(Ps) if sum_Ps_s > 0: Ps /= sum_Ps_s # renormalize else: # if all valid moves were masked make all valid moves equally probable # NB! All valid moves may be masked if either your NNet architecture is insufficient or you've got overfitting or something else. # If you have got dozens or hundreds of these messages you should pay attention to your NNet and/or training process.
log = logging.getLogger(__name__) log.error("All valid moves were masked, doing a workaround.") Ps = Ps + valids Ps /= np.sum(Ps) num_valid_actions = np.shape(np.nonzero(Ps))[1] if num_valid_actions < self.K: top_k_actions = np.argpartition(Ps,-num_valid_actions)[-num_valid_actions:] else: top_k_actions = np.argpartition(Ps,-self.K)[-self.K:] # to get actions that belongs to top k prob for action in top_k_actions: next_s, next_player = self.game.getNextState(canonicalBoard, 1, action) next_s = self.game.getCanonicalForm(next_s, next_player) values = [] # do some rollouts for rollout in range(self.args.numMCsims): value = self.mc.simulate(canonicalBoard) values.append(value) # average out values avg_value = np.mean(values) self.qsa.append((avg_value, action)) self.qsa.sort(key=lambda a: a[0]) self.qsa.reverse() best_action = self.qsa[0][1] return best_action def getActionProb(self, canonicalBoard, temp=1): if self.game.getGameEnded(canonicalBoard, 1) != 0: return np.zeros((self.game.getActionSize())) else: action_probs = np.zeros((self.game.getActionSize())) best_action = self.play(canonicalBoard) action_probs[best_action] = 1 return action_probs game = OthelloGame(6) rp = RandomPlayer(game).play # all players num_games = 20 # Feel free to change this number n1 = NNet(game) # nNet players n1.load_checkpoint(folder=path, filename=mc_model_save_name) args1 = dotdict({'numMCsims': 10, 'maxRollouts':5, 'maxDepth':5, 'mc_topk': 3}) ## Uncomment below to check Monte Carlo agent! 
mc1 = MonteCarloBasedPlayer(game, n1, args1) n1p = lambda x: np.argmax(mc1.getActionProb(x)) arena = Arena.Arena(n1p, rp, game, display=OthelloGame.display) MC_result = arena.playGames(num_games, verbose=False) print("\n Number of games won by player1 = {}, num of games won by player2 = {}, out of {} games" .format(MC_result[0], MC_result[1], num_games)) win_rate_player1 = MC_result[0]/num_games print('\n Win rate for player 1 over {} games: {}%'.format(num_games, win_rate_player1*100)) ``` ``` Win rate for player 1 over 20 games: 45.0% ``` --- # Section 8: Plan using Monte Carlo Tree Search **Goal:** Teach students to understand the core ideas behind Monte Carlo Tree Search. ``` # @title Video 8: Plan with MCTS from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV11v411n7gg", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"tKBcMtoEzQA", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` ## Coding Exercise 8: MCTS planner * Plug together pre-built Selection, Expansion & Backpropagation code to complete an MCTS planner. * Deploy the MCTS planner to understand an interesting position, producing value estimates and action counts. ``` class MCTS(): """ This class handles the MCTS tree.
""" def __init__(self, game, nnet, args): self.game = game self.nnet = nnet self.args = args self.Qsa = {} # stores Q values for s,a (as defined in the paper) self.Nsa = {} # stores #times edge s,a was visited self.Ns = {} # stores #times board s was visited self.Ps = {} # stores initial policy (returned by neural net) self.Es = {} # stores game.getGameEnded ended for board s self.Vs = {} # stores game.getValidMoves for board s def search(self, canonicalBoard): """ This function performs one iteration of MCTS. It is recursively called till a leaf node is found. The action chosen at each node is one that has the maximum upper confidence bound as in the paper. Once a leaf node is found, the neural network is called to return an initial policy P and a value v for the state. This value is propagated up the search path. In case the leaf node is a terminal state, the outcome is propagated up the search path. The values of Ns, Nsa, Qsa are updated. NOTE: the return values are the negative of the value of the current state. This is done since v is in [-1,1] and if v is the value of a state for the current player, then its value is -v for the other player. Returns: v: the negative of the value of the current canonicalBoard """ s = self.game.stringRepresentation(canonicalBoard) if s not in self.Es: self.Es[s] = self.game.getGameEnded(canonicalBoard, 1) if self.Es[s] != 0: # terminal node return -self.Es[s] if s not in self.Ps: # leaf node self.Ps[s], v = self.nnet.predict(canonicalBoard) valids = self.game.getValidMoves(canonicalBoard, 1) self.Ps[s] = self.Ps[s] * valids # masking invalid moves sum_Ps_s = np.sum(self.Ps[s]) if sum_Ps_s > 0: self.Ps[s] /= sum_Ps_s # renormalize else: # if all valid moves were masked make all valid moves equally probable # NB! All valid moves may be masked if either your NNet architecture is insufficient or you've get overfitting or something else. 
# If you have got dozens or hundreds of these messages you should pay attention to your NNet and/or training process. log = logging.getLogger(__name__) log.error("All valid moves were masked, doing a workaround.") self.Ps[s] = self.Ps[s] + valids self.Ps[s] /= np.sum(self.Ps[s]) self.Vs[s] = valids self.Ns[s] = 0 return -v valids = self.Vs[s] cur_best = -float('inf') best_act = -1 ################################################# ## TODO for students: ## Implement the highest upper confidence bound depending whether we observed the state-action pair which is stored in self.Qsa[(s, a)]. You can find the formula in the slide 52 in video 8 above. # Fill out function and remove raise NotImplementedError("Complete the for loop") ################################################# # pick the action with the highest upper confidence bound for a in range(self.game.getActionSize()): if valids[a]: if (s, a) in self.Qsa: u = ... + ... * ... * math.sqrt(...) / (1 + ...) else: u = ... * ... * math.sqrt(... + 1e-8) if u > cur_best: cur_best = u best_act = a a = best_act next_s, next_player = self.game.getNextState(canonicalBoard, 1, a) next_s = self.game.getCanonicalForm(next_s, next_player) v = self.search(next_s) if (s, a) in self.Qsa: self.Qsa[(s, a)] = (self.Nsa[(s, a)] * self.Qsa[(s, a)] + v) / (self.Nsa[(s, a)] + 1) self.Nsa[(s, a)] += 1 else: self.Qsa[(s, a)] = v self.Nsa[(s, a)] = 1 self.Ns[s] += 1 return -v def getNsa(self): return self.Nsa # to_remove solution class MCTS(): """ This class handles the MCTS tree. 
""" def __init__(self, game, nnet, args): self.game = game self.nnet = nnet self.args = args self.Qsa = {} # stores Q values for s,a (as defined in the paper) self.Nsa = {} # stores #times edge s,a was visited self.Ns = {} # stores #times board s was visited self.Ps = {} # stores initial policy (returned by neural net) self.Es = {} # stores game.getGameEnded ended for board s self.Vs = {} # stores game.getValidMoves for board s def search(self, canonicalBoard): """ This function performs one iteration of MCTS. It is recursively called till a leaf node is found. The action chosen at each node is one that has the maximum upper confidence bound as in the paper. Once a leaf node is found, the neural network is called to return an initial policy P and a value v for the state. This value is propagated up the search path. In case the leaf node is a terminal state, the outcome is propagated up the search path. The values of Ns, Nsa, Qsa are updated. NOTE: the return values are the negative of the value of the current state. This is done since v is in [-1,1] and if v is the value of a state for the current player, then its value is -v for the other player. Returns: v: the negative of the value of the current canonicalBoard """ s = self.game.stringRepresentation(canonicalBoard) if s not in self.Es: self.Es[s] = self.game.getGameEnded(canonicalBoard, 1) if self.Es[s] != 0: # terminal node return -self.Es[s] if s not in self.Ps: # leaf node self.Ps[s], v = self.nnet.predict(canonicalBoard) valids = self.game.getValidMoves(canonicalBoard, 1) self.Ps[s] = self.Ps[s] * valids # masking invalid moves sum_Ps_s = np.sum(self.Ps[s]) if sum_Ps_s > 0: self.Ps[s] /= sum_Ps_s # renormalize else: # if all valid moves were masked make all valid moves equally probable # NB! All valid moves may be masked if either your NNet architecture is insufficient or you've get overfitting or something else. 
# If you have got dozens or hundreds of these messages you should pay attention to your NNet and/or training process. log = logging.getLogger(__name__) log.error("All valid moves were masked, doing a workaround.") self.Ps[s] = self.Ps[s] + valids self.Ps[s] /= np.sum(self.Ps[s]) self.Vs[s] = valids self.Ns[s] = 0 return -v valids = self.Vs[s] cur_best = -float('inf') best_act = -1 # pick the action with the highest upper confidence bound for a in range(self.game.getActionSize()): if valids[a]: if (s, a) in self.Qsa: u = self.Qsa[(s, a)] + self.args.cpuct * self.Ps[s][a] * math.sqrt(self.Ns[s]) / (1 + self.Nsa[(s, a)]) else: u = self.args.cpuct * self.Ps[s][a] * math.sqrt(self.Ns[s] + 1e-8) if u > cur_best: cur_best = u best_act = a a = best_act next_s, next_player = self.game.getNextState(canonicalBoard, 1, a) next_s = self.game.getCanonicalForm(next_s, next_player) v = self.search(next_s) if (s, a) in self.Qsa: self.Qsa[(s, a)] = (self.Nsa[(s, a)] * self.Qsa[(s, a)] + v) / (self.Nsa[(s, a)] + 1) self.Nsa[(s, a)] += 1 else: self.Qsa[(s, a)] = v self.Nsa[(s, a)] = 1 self.Ns[s] += 1 return -v def getNsa(self): return self.Nsa ``` --- # Section 9: Use MCTS to play games **Goal:** Teach the students how to use the results of an MCTS to play games. **Exercise:** * Plug the MCTS planner into an agent. * Play games against other agents. * Explore the contributions of prior network, value function, number of simulations / time to play, and explore/exploit parameters. 
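The upper-confidence selection rule used in `MCTS.search` can be checked in isolation. This is a sketch with made-up numbers; `ucb` mirrors the `(s, a) in self.Qsa` branch, `ucb_unvisited` mirrors the other branch, and `cpuct` plays the role of `self.args.cpuct`:

```python
import math

def ucb(q, prior, n_state, n_edge, cpuct=1.0):
    # Upper confidence bound for an edge (s, a) that has been visited n_edge times,
    # with its parent state visited n_state times.
    return q + cpuct * prior * math.sqrt(n_state) / (1 + n_edge)

def ucb_unvisited(prior, n_state, cpuct=1.0):
    # Bound used when (s, a) has never been tried, so no Q estimate exists yet.
    return cpuct * prior * math.sqrt(n_state + 1e-8)

# Two edges with identical Q and prior: the rarely-visited one keeps a large
# exploration bonus, so MCTS prefers it.
often = ucb(q=0.2, prior=0.4, n_state=100, n_edge=50)
rarely = ucb(q=0.2, prior=0.4, n_state=100, n_edge=1)
print(often, rarely)
```

As visit counts grow, the bonus term shrinks and the rule gradually trusts the averaged Q values instead of the network's prior.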
``` # @title Video 9: Play with MCTS from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1ng411M7Gz", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"ejG3kN_leRk", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` ## Coding Exercise 9: Agent that uses an MCTS planner * Plug the MCTS planner into an agent. * Play games against other agents. * Explore the contributions of prior network, value function, number of simulations / time to play, and explore/exploit parameters. ``` class MonteCarloTreeSearchBasedPlayer(): def __init__(self, game, nnet, args): self.game = game self.nnet = nnet self.args = args def play(self, canonicalBoard, temp=1): for i in range(self.args.numMCTSSims): ################################################# ## TODO for students: # Run MCTS search function. # Fill out function and remove raise NotImplementedError("Plug the planner") ################################################# ... s = self.game.stringRepresentation(canonicalBoard) ################################################# ## TODO for students: # Call the Nsa function from MCTS class and store it in the self.Nsa # Fill out function and remove raise NotImplementedError("Compute Nsa (number of times edge s,a was visited)") ################################################# self.Nsa = ... 
self.counts = [self.Nsa[(s, a)] if (s, a) in self.Nsa else 0 for a in range(self.game.getActionSize())] if temp == 0: bestAs = np.array(np.argwhere(self.counts == np.max(self.counts))).flatten() bestA = np.random.choice(bestAs) probs = [0] * len(self.counts) probs[bestA] = 1 return probs self.counts = [x ** (1. / temp) for x in self.counts] self.counts_sum = float(sum(self.counts)) probs = [x / self.counts_sum for x in self.counts] return np.argmax(probs) def getActionProb(self, canonicalBoard, temp=1): action_probs = np.zeros((self.game.getActionSize())) best_action = self.play(canonicalBoard) action_probs[best_action] = 1 return action_probs # Load MCTS model from the repository mcts_model_save_name = 'MCTS.pth.tar' path = F"/content/nma_rl_games/alpha-zero/pretrained_models/models/" game = OthelloGame(6) rp = RandomPlayer(game).play # all players num_games = 20 # games n1 = NNet(game) # nnet players n1.load_checkpoint(folder=path, filename=mcts_model_save_name) args1 = dotdict({'numMCTSSims': 50, 'cpuct':1.0}) ## Uncomment below to check your agent! 
# mcts1 = MonteCarloTreeSearchBasedPlayer(game, n1, args1) # n1p = lambda x: np.argmax(mcts1.getActionProb(x, temp=0)) # arena = Arena.Arena(n1p, rp, game, display=OthelloGame.display) # MCTS_result = arena.playGames(num_games, verbose=False) # print("\nNumber of games won by player1 = {}, num of games won by player2 = {}, out of {} games" .format(MCTS_result[0], MCTS_result[1], num_games)) # win_rate_player1 = MCTS_result[0]/num_games # print('\nWin rate for player 1 over {} games: {}%'.format(num_games, win_rate_player1*100)) # to_remove solution class MonteCarloTreeSearchBasedPlayer(): def __init__(self, game, nnet, args): self.game = game self.nnet = nnet self.args = args self.mcts = MCTS(game, nnet, args) def play(self, canonicalBoard, temp=1): for i in range(self.args.numMCTSSims): self.mcts.search(canonicalBoard) s = self.game.stringRepresentation(canonicalBoard) self.Nsa = self.mcts.getNsa() self.counts = [self.Nsa[(s, a)] if (s, a) in self.Nsa else 0 for a in range(self.game.getActionSize())] if temp == 0: bestAs = np.array(np.argwhere(self.counts == np.max(self.counts))).flatten() bestA = np.random.choice(bestAs) probs = [0] * len(self.counts) probs[bestA] = 1 return probs self.counts = [x ** (1. / temp) for x in self.counts] self.counts_sum = float(sum(self.counts)) probs = [x / self.counts_sum for x in self.counts] return np.argmax(probs) def getActionProb(self, canonicalBoard, temp=1): action_probs = np.zeros((self.game.getActionSize())) best_action = self.play(canonicalBoard) action_probs[best_action] = 1 return action_probs # Load MCTS model from the repository mcts_model_save_name = 'MCTS.pth.tar' path = F"/content/nma_rl_games/alpha-zero/pretrained_models/models/" game = OthelloGame(6) rp = RandomPlayer(game).play # all players num_games = 20 # games n1 = NNet(game) # nnet players n1.load_checkpoint(folder=path, filename=mcts_model_save_name) args1 = dotdict({'numMCTSSims': 50, 'cpuct':1.0}) ## Uncomment below to check your agent! 
mcts1 = MonteCarloTreeSearchBasedPlayer(game, n1, args1) n1p = lambda x: np.argmax(mcts1.getActionProb(x, temp=0)) arena = Arena.Arena(n1p, rp, game, display=OthelloGame.display) MCTS_result = arena.playGames(num_games, verbose=False) print("\nNumber of games won by player1 = {}, num of games won by player2 = {}, out of {} games" .format(MCTS_result[0], MCTS_result[1], num_games)) win_rate_player1 = MCTS_result[0]/num_games print('\nWin rate for player 1 over {} games: {}%'.format(num_games, win_rate_player1*100)) ``` ``` Number of games won by player1 = 19, num of games won by player2 = 1, out of 20 games Win rate for player 1 over 20 games: 95.0% ``` --- # Section 10: Ethical aspects ``` # @title Video 10: Unstoppable opponents from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV19M4y1K75Z", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"4LKZwDP_Qac", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` --- # Summary In this tutorial, you have learned how to implement a game loop and improve the performance of a random player. More specifically, you are now able to understand the format of two-player games. We learned about value-based and policy-based players, and we compared them with the MCTS method.
``` # @title Video 11: Outro from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1w64y167qd", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"8JcHw-2cwtM", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ```
# Forecasting II: state space models This tutorial covers state space modeling with the [pyro.contrib.forecast](http://docs.pyro.ai/en/latest/contrib.forecast.html) module. This tutorial assumes the reader is already familiar with [SVI](http://pyro.ai/examples/svi_part_ii.html), [tensor shapes](http://pyro.ai/examples/tensor_shapes.html), and [univariate forecasting](http://pyro.ai/examples/forecasting_i.html). See also: - [Forecasting I: univariate, heavy tailed](http://pyro.ai/examples/forecasting_i.html) - [Forecasting III: hierarchical models](http://pyro.ai/examples/forecasting_iii.html) #### Summary - Pyro's [ForecastingModel](http://docs.pyro.ai/en/latest/contrib.forecast.html#pyro.contrib.forecast.forecaster.ForecastingModel) can combine regression, variational inference, and exact inference. - To model a linear-Gaussian dynamical system, use a [GaussianHMM](http://docs.pyro.ai/en/latest/distributions.html#gaussianhmm) `noise_dist`. - To model a heavy-tailed linear dynamical system, use [LinearHMM](http://docs.pyro.ai/en/latest/distributions.html#linearhmm) with heavy-tailed distributions. - To enable inference with [LinearHMM](http://docs.pyro.ai/en/latest/distributions.html#linearhmm), use a [LinearHMMReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.hmm.LinearHMMReparam) reparameterizer. 
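For intuition about the exact inference that `GaussianHMM` performs, here is a minimal scalar Kalman-filter log-likelihood for a 1-D linear-Gaussian state space model. This is an illustrative sequential sketch written for this tutorial, not Pyro's implementation (which uses a parallel scan and supports multivariate states); all names in it are made up:

```python
import math

def kalman_loglik(ys, a, c, q, r, m0=0.0, p0=1.0):
    """Log-likelihood of observations ys under
        x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q),  x_0 ~ N(m0, p0)
        y_t = c * x_t + v_t,      v_t ~ N(0, r)."""
    m, p, ll = m0, p0, 0.0
    for y in ys:
        # Predict the hidden state one step forward.
        m, p = a * m, a * a * p + q
        # Innovation: predicted observation has mean c*m and variance s.
        s = c * c * p + r
        ll += -0.5 * (math.log(2 * math.pi * s) + (y - c * m) ** 2 / s)
        # Kalman update with gain k.
        k = c * p / s
        m += k * (y - c * m)
        p *= 1 - k * c
    return ll

print(kalman_loglik([0.1, -0.2, 0.3], a=0.9, c=1.0, q=0.5, r=0.1))
```

The five quantities `(m0, p0)`, `a`, `q`, `c`, `r` correspond to the five pieces of data a `GaussianHMM` needs (initial distribution, transition matrix, transition noise, observation matrix, observation noise).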
``` import math import torch import pyro import pyro.distributions as dist import pyro.poutine as poutine from pyro.contrib.examples.bart import load_bart_od from pyro.contrib.forecast import ForecastingModel, Forecaster, eval_crps from pyro.infer.reparam import LinearHMMReparam, StableReparam, SymmetricStableReparam from pyro.ops.tensor_utils import periodic_repeat from pyro.ops.stats import quantile import matplotlib.pyplot as plt %matplotlib inline assert pyro.__version__.startswith('1.5.1') pyro.set_rng_seed(20200305) ``` ## Intro to state space models In the [univariate tutorial](http://pyro.ai/examples/forecasting_i.html) we saw how to model time series as regression plus a local level model, using variational inference. This tutorial covers a different way to model time series: state space models and exact inference. Pyro's forecasting module allows these two paradigms to be combined, for example modeling seasonality with regression, including a slow global trend, and using a state-space model for short-term local trend. Pyro implements a few state space models, but the most important are the [GaussianHMM](http://docs.pyro.ai/en/latest/distributions.html#gaussianhmm) distribution and its heavy-tailed generalization the [LinearHMM](http://docs.pyro.ai/en/latest/distributions.html#linearhmm) distribution. Both of these model a linear dynamical system with hidden state; both are multivariate, and both allow learning of all process parameters. On top of these the [pyro.contrib.timeseries](http://docs.pyro.ai/en/latest/contrib.timeseries.html) module implements a variety of multivariate Gaussian Process models that compile down to `GaussianHMM`s. Pyro's inference for `GaussianHMM` uses parallel-scan Kalman filtering, allowing fast analysis of very long time series. Similarly, Pyro's inference for `LinearHMM` uses entirely parallel auxiliary variable methods to reduce to a `GaussianHMM`, which then permits parallel-scan inference. 
Thus both methods allow parallelization of long time series analysis, even for a single univariate time series. Let's again look at the [BART train](https://www.bart.gov/about/reports/ridership) ridership dataset: ``` dataset = load_bart_od() print(dataset.keys()) print(dataset["counts"].shape) print(" ".join(dataset["stations"])) data = dataset["counts"].sum([-1, -2]).unsqueeze(-1).log1p() print(data.shape) plt.figure(figsize=(9, 3)) plt.plot(data, 'b.', alpha=0.1, markeredgewidth=0) plt.title("Total hourly ridership over nine years") plt.ylabel("log(# rides)") plt.xlabel("Hour after 2011-01-01") plt.xlim(0, len(data)); plt.figure(figsize=(9, 3)) plt.plot(data) plt.title("Total hourly ridership over one month") plt.ylabel("log(# rides)") plt.xlabel("Hour after 2011-01-01") plt.xlim(len(data) - 24 * 30, len(data)); ``` ## GaussianHMM Let's start by modeling hourly seasonality together with a local linear trend, where we model seasonality via regression and local linear trend via a [GaussianHMM](http://docs.pyro.ai/en/latest/distributions.html#gaussianhmm). This noise model includes a mean-reverting hidden state (an [Ornstein-Uhlenbeck process](https://en.wikipedia.org/wiki/Ornstein%E2%80%93Uhlenbeck_process)) plus Gaussian observation noise. ``` T0 = 0 # beginning T2 = data.size(-2) # end T1 = T2 - 24 * 7 * 2 # train/test split means = data[:T1 // (24 * 7) * 24 * 7].reshape(-1, 24 * 7).mean(0) class Model1(ForecastingModel): def model(self, zero_data, covariates): duration = zero_data.size(-2) # We'll hard-code the periodic part of this model, learning only the local model. prediction = periodic_repeat(means, duration, dim=-1).unsqueeze(-1) # On top of this mean prediction, we'll learn a linear dynamical system. # This requires specifying five pieces of data, on which we will put structured priors. 
init_dist = dist.Normal(0, 10).expand([1]).to_event(1) timescale = pyro.sample("timescale", dist.LogNormal(math.log(24), 1)) # Note timescale is a scalar but we need a 1x1 transition matrix (hidden_dim=1), # thus we unsqueeze twice using [..., None, None]. trans_matrix = torch.exp(-1 / timescale)[..., None, None] trans_scale = pyro.sample("trans_scale", dist.LogNormal(-0.5 * math.log(24), 1)) trans_dist = dist.Normal(0, trans_scale.unsqueeze(-1)).to_event(1) # Note the obs_matrix has shape hidden_dim x obs_dim = 1 x 1. obs_matrix = torch.tensor([[1.]]) obs_scale = pyro.sample("obs_scale", dist.LogNormal(-2, 1)) obs_dist = dist.Normal(0, obs_scale.unsqueeze(-1)).to_event(1) noise_dist = dist.GaussianHMM( init_dist, trans_matrix, trans_dist, obs_matrix, obs_dist, duration=duration) self.predict(noise_dist, prediction) ``` We can then train the model on many years of data. Note that because we are being variational about only time-global variables, and exactly integrating out time-local variables (via `GaussianHMM`), stochastic gradients are very low variance; this allows us to use a large learning rate and few steps. ``` %%time pyro.set_rng_seed(1) pyro.clear_param_store() covariates = torch.zeros(len(data), 0) # empty forecaster = Forecaster(Model1(), data[:T1], covariates[:T1], learning_rate=0.1, num_steps=400) for name, value in forecaster.guide.median().items(): if value.numel() == 1: print("{} = {:0.4g}".format(name, value.item())) ``` Plotting forecasts of the next two weeks of data, we see mostly reasonable forecasts, but an anomaly on Christmas when rides were overpredicted. This is to be expected, as we have not modeled yearly seasonality or holidays. 
``` samples = forecaster(data[:T1], covariates, num_samples=100) samples.clamp_(min=0) # apply domain knowledge: the samples must be positive p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1) crps = eval_crps(samples, data[T1:]) print(samples.shape, p10.shape) plt.figure(figsize=(9, 3)) plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3) plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast') plt.plot(torch.arange(T1 - 24 * 7, T2), data[T1 - 24 * 7: T2], 'k-', label='truth') plt.title("Total hourly ridership (CRPS = {:0.3g})".format(crps)) plt.ylabel("log(# rides)") plt.xlabel("Hour after 2011-01-01") plt.xlim(T1 - 24 * 7, T2) plt.text(78732, 3.5, "Christmas", rotation=90, color="green") plt.legend(loc="best"); ``` Next let's change the model to use heteroskedastic observation noise, depending on the hour of week. ``` class Model2(ForecastingModel): def model(self, zero_data, covariates): duration = zero_data.size(-2) prediction = periodic_repeat(means, duration, dim=-1).unsqueeze(-1) init_dist = dist.Normal(0, 10).expand([1]).to_event(1) timescale = pyro.sample("timescale", dist.LogNormal(math.log(24), 1)) trans_matrix = torch.exp(-1 / timescale)[..., None, None] trans_scale = pyro.sample("trans_scale", dist.LogNormal(-0.5 * math.log(24), 1)) trans_dist = dist.Normal(0, trans_scale.unsqueeze(-1)).to_event(1) obs_matrix = torch.tensor([[1.]]) # To model heteroskedastic observation noise, we'll sample obs_scale inside a plate, # then repeat to full duration. This is the only change from Model1. 
with pyro.plate("hour_of_week", 24 * 7, dim=-1): obs_scale = pyro.sample("obs_scale", dist.LogNormal(-2, 1)) obs_scale = periodic_repeat(obs_scale, duration, dim=-1) obs_dist = dist.Normal(0, obs_scale.unsqueeze(-1)).to_event(1) noise_dist = dist.GaussianHMM( init_dist, trans_matrix, trans_dist, obs_matrix, obs_dist, duration=duration) self.predict(noise_dist, prediction) %%time pyro.set_rng_seed(1) pyro.clear_param_store() covariates = torch.zeros(len(data), 0) # empty forecaster = Forecaster(Model2(), data[:T1], covariates[:T1], learning_rate=0.1, num_steps=400) for name, value in forecaster.guide.median().items(): if value.numel() == 1: print("{} = {:0.4g}".format(name, value.item())) ``` Note this gives us a much longer timescale and thereby more accurate short-term predictions: ``` samples = forecaster(data[:T1], covariates, num_samples=100) samples.clamp_(min=0) # apply domain knowledge: the samples must be positive p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1) crps = eval_crps(samples, data[T1:]) plt.figure(figsize=(9, 3)) plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3) plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast') plt.plot(torch.arange(T1 - 24 * 7, T2), data[T1 - 24 * 7: T2], 'k-', label='truth') plt.title("Total hourly ridership (CRPS = {:0.3g})".format(crps)) plt.ylabel("log(# rides)") plt.xlabel("Hour after 2011-01-01") plt.xlim(T1 - 24 * 7, T2) plt.text(78732, 3.5, "Christmas", rotation=90, color="green") plt.legend(loc="best"); plt.figure(figsize=(9, 3)) plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3) plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast') plt.plot(torch.arange(T1 - 24 * 7, T2), data[T1 - 24 * 7: T2], 'k-', label='truth') plt.title("Total hourly ridership (CRPS = {:0.3g})".format(crps)) plt.ylabel("log(# rides)") plt.xlabel("Hour after 2011-01-01") plt.xlim(T1 - 24 * 2, T1 + 24 * 4) plt.legend(loc="best"); ``` ## Heavy-tailed modeling with LinearHMM 
Next let's change our model to a linear-[Stable](http://docs.pyro.ai/en/latest/distributions.html#pyro.distributions.Stable) dynamical system, exhibiting learnable heavy tailed behavior in both the process noise and observation noise. As we've already seen in the [univariate tutorial](http://pyro.ai/examples/forecasting_i.html), this will require special handling of stable distributions by [poutine.reparam()](http://docs.pyro.ai/en/latest/poutine.html#pyro.poutine.handlers.reparam). For state space models, we combine [LinearHMMReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.hmm.LinearHMMReparam) with other reparameterizers like [StableReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.stable.StableReparam) and [SymmetricStableReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.stable.SymmetricStableReparam). All reparameterizers preserve behavior of the generative model, and only serve to enable inference via auxiliary variable methods. ``` class Model3(ForecastingModel): def model(self, zero_data, covariates): duration = zero_data.size(-2) prediction = periodic_repeat(means, duration, dim=-1).unsqueeze(-1) # First sample the Gaussian-like parameters as in previous models. init_dist = dist.Normal(0, 10).expand([1]).to_event(1) timescale = pyro.sample("timescale", dist.LogNormal(math.log(24), 1)) trans_matrix = torch.exp(-1 / timescale)[..., None, None] trans_scale = pyro.sample("trans_scale", dist.LogNormal(-0.5 * math.log(24), 1)) obs_matrix = torch.tensor([[1.]]) with pyro.plate("hour_of_week", 24 * 7, dim=-1): obs_scale = pyro.sample("obs_scale", dist.LogNormal(-2, 1)) obs_scale = periodic_repeat(obs_scale, duration, dim=-1) # In addition to the Gaussian parameters, we will learn a global stability # parameter to determine tail weights, and an observation skew parameter. 
stability = pyro.sample("stability", dist.Uniform(1, 2).expand([1]).to_event(1)) skew = pyro.sample("skew", dist.Uniform(-1, 1).expand([1]).to_event(1)) # Next we construct stable distributions and a linear-stable HMM distribution. trans_dist = dist.Stable(stability, 0, trans_scale.unsqueeze(-1)).to_event(1) obs_dist = dist.Stable(stability, skew, obs_scale.unsqueeze(-1)).to_event(1) noise_dist = dist.LinearHMM( init_dist, trans_matrix, trans_dist, obs_matrix, obs_dist, duration=duration) # Finally we use a reparameterizer to enable inference. rep = LinearHMMReparam(None, # init_dist is already Gaussian. SymmetricStableReparam(), # trans_dist is symmetric. StableReparam()) # obs_dist is asymmetric. with poutine.reparam(config={"residual": rep}): self.predict(noise_dist, prediction) ``` Note that since this model introduces auxiliary variables that are learned by variational inference, gradients are higher variance and we need to train for longer. ``` %%time pyro.set_rng_seed(1) pyro.clear_param_store() covariates = torch.zeros(len(data), 0) # empty forecaster = Forecaster(Model3(), data[:T1], covariates[:T1], learning_rate=0.1) for name, value in forecaster.guide.median().items(): if value.numel() == 1: print("{} = {:0.4g}".format(name, value.item())) samples = forecaster(data[:T1], covariates, num_samples=100) samples.clamp_(min=0) # apply domain knowledge: the samples must be positive p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1) crps = eval_crps(samples, data[T1:]) plt.figure(figsize=(9, 3)) plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3) plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast') plt.plot(torch.arange(T1 - 24 * 7, T2), data[T1 - 24 * 7: T2], 'k-', label='truth') plt.title("Total hourly ridership (CRPS = {:0.3g})".format(crps)) plt.ylabel("log(# rides)") plt.xlabel("Hour after 2011-01-01") plt.xlim(T1 - 24 * 7, T2) plt.text(78732, 3.5, "Christmas", rotation=90, color="green") plt.legend(loc="best"); 
plt.figure(figsize=(9, 3)) plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3) plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast') plt.plot(torch.arange(T1 - 24 * 7, T2), data[T1 - 24 * 7: T2], 'k-', label='truth') plt.title("Total hourly ridership (CRPS = {:0.3g})".format(crps)) plt.ylabel("log(# rides)") plt.xlabel("Hour after 2011-01-01") plt.xlim(T1 - 24 * 2, T1 + 24 * 4) plt.legend(loc="best"); ```
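To build intuition for the `stability` parameter inferred above: stability=2 recovers Gaussian noise, while stability=1 gives Cauchy noise, whose tails carry far more mass. A pure-Python sketch comparing tail frequencies (a standard Cauchy can be sampled via the inverse CDF; the sample size and threshold here are illustrative):

```python
import math
import random

rng = random.Random(0)
n = 100_000
# stability = 2 corresponds to Gaussian noise...
gauss = [rng.gauss(0.0, 1.0) for _ in range(n)]
# ...while stability = 1 corresponds to Cauchy noise (inverse-CDF sampling)
cauchy = [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]

frac_gauss = sum(abs(x) > 10 for x in gauss) / n
frac_cauchy = sum(abs(x) > 10 for x in cauchy) / n
print(frac_gauss, frac_cauchy)  # Gaussian: essentially 0; Cauchy: ~0.06
```

A stability estimate between 1 and 2 lets the model absorb occasional extreme ridership shocks without inflating the everyday noise scale.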

``` import matplotlib import matplotlib.pyplot as plt import numpy as np import pandas as pd import psutil import os plot_dir = '/Users/m216613/Downloads' # Figure 1 - compare test accuracy # creating the dataset acc_compare_dic = {'D1':89, 'I1':84, 'R1':85, 'V1':88, 'D2':78, 'D3':89, 'D4':87, 'D5':89, 'D6':87, 'D7':88, 'D8':89, 'D9':88, 'D10':89, 'D11':88, 'D12':90, 'O1':90} model_names = sorted(acc_compare_dic, key=acc_compare_dic.get, reverse=True) acc_values = list() for i in model_names: acc_values.append(acc_compare_dic[i]) #print(acc_values) fig1 = plt.figure(figsize = (8, 8)) # creating the bar plot rects = plt.bar(model_names, acc_values, width = 0.4) plt.xlabel("Model Names") plt.ylabel("Patch-Level Validation Accuracy (%)") plt.title("Classification Performance \n [Patch-Level] \n Comparison Test on BACH") ax = plt.gca() ax.tick_params(axis='x', colors='blue') ax.tick_params(axis='y', colors='red') my_colors = 'rgbkymc' def autolabel(rects): """ Attach a text label above each bar displaying its height """ for rect in rects: height = rect.get_height() plt.text(rect.get_x() + rect.get_width()/2., 1.01*height, '%d' % int(height), ha='center', va='bottom') autolabel(rects) plt.tight_layout() # save plot plt.savefig(os.path.join(plot_dir, 'BACH_Compare_Test_Figure1.png')) # visualize plot plt.show() # Figure 2 - compare test accuracy # creating the dataset acc_compare_dic = {'C1':88, 'C2':87, 'C3':81, 'C4':78, 'C5':79, 'C6':82, 'C7':85, 'C8':75, 'C9':82, 'C10':81, 'C11':82, 'C12':85, 'C13':78, 'C14':84, 'C15':87, 'C16':68, 'C17':87, 'C18':50, 'C19':85, 'C20':87, 'C21':50, 'C22':68, 'C23':85, 'C24':50, 'C25':81, 'C26':82, 'D1':94, 'O1':94} model_names = sorted(acc_compare_dic, key=acc_compare_dic.get, reverse=True) acc_values = list() for i in model_names: acc_values.append(acc_compare_dic[i]) #print(acc_values) fig1 = plt.figure(figsize = (8, 8)) # creating the bar plot rects = plt.bar(model_names, acc_values, width = 0.4) plt.xlabel("Model Names") 
plt.ylabel("Slide-Level Validation Accuracy (%)") plt.title("Classification Performance \n [Slide-Level] \n Comparison Test on BACH") ax = plt.gca() ax.tick_params(axis='x', colors='blue') ax.tick_params(axis='y', colors='red') my_colors = 'rgbkymc' def autolabel(rects): """ Attach a text label above each bar displaying its height """ for rect in rects: height = rect.get_height() plt.text(rect.get_x() + rect.get_width()/2., 1.01*height, '%d' % int(height), ha='center', va='bottom') autolabel(rects) plt.tight_layout() # save plot plt.savefig(os.path.join(plot_dir, 'BACH_Compare_Test_Figure2.png')) # visualize plot plt.show() model_vars = pd.read_csv('bach.csv') model_vars.head(2) model_vars.pivot(index='Model Index', columns='Learning Rate', values='Epoch').head(2) acc_compare_dic = {'D1':89, 'I1':84, 'R1':85, 'V1':88, 'D2':78, 'D3':89, 'D4':87, 'D5':89, 'D6':87, 'D7':88, 'D8':89, 'D9':88, 'D10':89, 'D11':88, 'D12':90, 'O1':90} model_names = sorted(acc_compare_dic, key=acc_compare_dic.get, reverse=True) acc_values = list() for i in model_names: acc_values.append(acc_compare_dic[i]) fig = plt.figure(figsize = (24, 16)) plt.subplot(2, 2, 1) rects1 = plt.bar(model_names, acc_values, width = 0.5) plt.xlabel("Model Names") plt.ylabel("Patch-Level Validation Accuracy (%)") plt.title("Classification Performance \n [Patch-Level] \n Comparison Test on BACH") ax = plt.gca() ax.tick_params(axis='x', colors='blue') ax.tick_params(axis='y', colors='red') my_colors = 'rgbkymc' def autolabel(rects): """ Attach a text label above each bar displaying its height """ for rect in rects: height = rect.get_height() plt.text(rect.get_x() + rect.get_width()/2., 1.01*height, '%d' % int(height), ha='center', va='bottom') autolabel(rects1) acc_compare_dic = {'C1':88, 'C2':87, 'C3':81, 'C4':78, 'C5':79, 'C6':82, 'C7':85, 'C8':75, 'C9':82, 'C10':81, 'C11':82, 'C12':85, 'C13':78, 'C14':84, 'C15':87, 'C16':68, 'C17':87, 'C18':50, 'C19':85, 'C20':87, 'C21':50, 'C22':68, 'C23':85, 'C24':50, 
'C25':81, 'C26':82, 'D1':94, 'O1':94} model_names = sorted(acc_compare_dic, key=acc_compare_dic.get, reverse=True) acc_values = list() for i in model_names: acc_values.append(acc_compare_dic[i]) plt.subplot(2, 2, 2) rects2 = plt.bar(model_names, acc_values, width = 0.8) plt.xlabel("Model Names") plt.ylabel("Slide-Level Validation Accuracy (%)") plt.title("Classification Performance \n [Slide-Level] \n Comparison Test on BACH") ax = plt.gca() ax.tick_params(axis='x', colors='blue') ax.tick_params(axis='y', colors='red') autolabel(rects2) plt.savefig(os.path.join(plot_dir, 'BACH_Compare_Test_Figure.png')) # visualize plot plt.show() ```
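A small aside on the idiom used to order the bars in the figures above: sorting a dict's keys by their values, then reading the values back in that order. A minimal sketch with hypothetical scores:

```python
# Sort model names by their accuracy, best first — the same idiom
# used to order the bars in the comparison figures above.
acc = {'D1': 89, 'I1': 84, 'R1': 85, 'V1': 88}
names = sorted(acc, key=acc.get, reverse=True)
values = [acc[n] for n in names]
print(names)   # ['D1', 'V1', 'R1', 'I1']
print(values)  # [89, 88, 85, 84]
```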
# Urban Networks II Overview of today's topics: - Network modeling and analysis in a study site - Simulating commutes - Network efficiency - Network perturbation - Comparative network analysis - Urban accessibility ``` import geopandas as gpd import matplotlib.pyplot as plt import networkx as nx import numpy as np import osmnx as ox import pandana import pandas as pd from shapely.geometry import Point # consistent randomization np.random.seed(0) # configure OSMnx cache_folder = '../../data/cache2' ox.config(log_console=True, use_cache=True, cache_folder=cache_folder) ``` ## 1. Model a study site First, we will identify a study site, model its street network, and calculate some simple indicators. ``` # create a study site: geocode city hall, convert coords to shapely geometry, # project geometry to UTM, buffer by 5km, project back to lat-lng latlng_coords = ox.geocode('Los Angeles City Hall') latlng_point = Point(latlng_coords[1], latlng_coords[0]) latlng_point_proj, crs = ox.projection.project_geometry(latlng_point) polygon_proj = latlng_point_proj.buffer(5000) polygon, crs = ox.projection.project_geometry(polygon_proj, crs=crs, to_latlong=True) polygon # model the street network within study site # your parameterization makes assumptions about your interests here G = ox.graph_from_polygon(polygon, network_type='drive', truncate_by_edge=True) fig, ax = ox.plot_graph(G, node_size=0, edge_color='w', edge_linewidth=0.3) # add speeds and travel times G = ox.add_edge_speeds(G) G = ox.add_edge_travel_times(G) # study site area in km^2 polygon_proj.area / 1e6 # how many intersections does it contain? street_counts = pd.Series(dict(G.nodes(data='street_count'))) intersect_count = len(street_counts[street_counts > 2]) intersect_count # what's the intersection density? 
intersect_count / (polygon_proj.area / 1e6) # now clean up the intersections and re-calculate clean_intersects = ox.consolidate_intersections(ox.project_graph(G), rebuild_graph=False, tolerance=10) clean_intersect_count = len(clean_intersects) clean_intersect_count # what's the cleaned intersection density? clean_intersect_count / (polygon_proj.area / 1e6) ``` ## 2. Simulate commutes We'll use a random sample of LEHD LODES data to get home/work coordinates. This is an imperfect proxy for "true" work locations from a payroll enumeration. You can read more about LODES and its limitations [here](https://doi.org/10.1080/21681376.2018.1455535). These data are processed in a separate [notebook](process-lodes.ipynb) to keep the data easy on your CPU and memory for this lecture. Our trip simulation will use naive assumptions about travel time (e.g., free flow, no congestion, rough imputation of speed limits) for simplicity, but these can be enriched with effort. ``` od = pd.read_csv('../../data/od.csv').sample(1000) od.shape od # get home/work network nodes home_nodes = ox.get_nearest_nodes(G, X=od['home_lng'], Y=od['home_lat'], method='balltree') work_nodes = ox.get_nearest_nodes(G, X=od['work_lng'], Y=od['work_lat'], method='balltree') def calc_path(G, orig, dest, weight='travel_time'): try: return ox.shortest_path(G, orig, dest, weight) except nx.exception.NetworkXNoPath: # if path cannot be solved return None %%time paths = [calc_path(G, orig, dest) for orig, dest in zip(home_nodes, work_nodes)] len(paths) # filter out any nulls (ie, not successfully solved) paths = [path for path in paths if path is not None] len(paths) # plot 100 routes fig, ax = ox.plot_graph_routes(G, routes=paths[0:100], node_size=0, edge_linewidth=0.2, orig_dest_size=0, route_colors='c', route_linewidth=2, route_alpha=0.2) # now it's your turn # how do these routes change if we minimize distance traveled instead? # what kinds of streets get more/fewer trips assigned to them? ``` ## 3. 
Network efficiency How "efficient" are our commuter's routes? That is, how does their distance traveled compare to straight-line distances from home to work? ``` def calc_efficiency(G, route, attr='length'): # sum the edge lengths in the route trip_distance = sum(ox.utils_graph.get_route_edge_attributes(G, route=route, attribute=attr)) # fast vectorized great-circle distance calculator gc_distance = ox.distance.great_circle_vec(lat1=G.nodes[route[0]]['y'], lng1=G.nodes[route[0]]['x'], lat2=G.nodes[route[-1]]['y'], lng2=G.nodes[route[-1]]['x']) return gc_distance / trip_distance # calculate each trip's efficiency and make a pandas series trip_efficiency = pd.Series([calc_efficiency(G, path) for path in paths]) # the straight-line distance is what % of each network distance traveled? trip_efficiency trip_efficiency.describe() # now it's your turn # what if i were instead interested in how much longer trips are than straight-line would be? ``` ## 4. Network perturbation Oh no! There's been an earthquake! The earthquake has knocked out 10% of the street network. Let's simulate that perturbation and see how routes have to change. 
``` # randomly knock-out 10% of the network's nodes frac = 0.10 n = int(len(G.nodes) * frac) nodes_to_remove = pd.Series(G.nodes).sample(n).index G_per = G.copy() G_per.remove_nodes_from(nodes_to_remove) # get home/work network nodes again, calculate routes, drop nulls home_nodes_per = ox.get_nearest_nodes(G_per, X=od['home_lng'], Y=od['home_lat'], method='balltree') work_nodes_per = ox.get_nearest_nodes(G_per, X=od['work_lng'], Y=od['work_lat'], method='balltree') paths_per = [calc_path(G_per, orig, dest) for orig, dest in zip(home_nodes_per, work_nodes_per)] paths_per = [path for path in paths_per if path is not None] len(paths_per) # calculate each trip's efficiency and make a pandas series trip_efficiency_per = pd.Series([calc_efficiency(G_per, path) for path in paths_per]) trip_efficiency_per.describe() ``` How many routes are now disconnected? How did trip efficiency change? ``` # what % of formerly solvable routes are now unsolvable? 1 - (len(paths_per) / len(paths)) # knocking out x% of the network made (solvable) trips what % less efficient? 1 - (trip_efficiency_per.mean() / trip_efficiency.mean()) # plot n routes apiece, before (cyan) and after (yellow) perturbation n = 100 all_paths = paths[:n] + paths_per[:n] colors = ['c'] * n + ['y'] * n # shuffle the order, so you don't just plot new atop old paths_colors = pd.DataFrame({'path': all_paths, 'color': colors}).sample(frac=1) fig, ax = ox.plot_graph_routes(G, routes=paths_colors['path'], node_size=0, edge_linewidth=0.2, orig_dest_size=0, route_colors=paths_colors['color'], route_linewidth=2, route_alpha=0.3) ``` Central LA performs relatively well because it has a relatively dense and gridlike network that offers multiple redundancy options. 1. What if you conduct this analysis in a disconnected, dendritic suburb on the urban fringe? 2. What if you model a walkable network rather than a drivable one? 3. What if the network perturbation isn't a spatially random process? 
Take these questions as prompts for self-paced exercise. For example, let's say the LA river has flooded. Use OSMnx to attach elevations to all the nodes in our street network, then knock out the 10% at the lowest elevation (i.e., around the river). How does that change network characteristics like connectivity and efficiency? Or, model a coastal town like Miami Beach, then knock out the network nodes below some sea-level rise threshold. What happens? What neighborhoods are most affected? What communities live in those vulnerable places? ``` # now it's your turn # use the prompts above to conduct a self-directed analysis of network perturbation # either using elevation/flooding or any of the 3 prompts above ``` ## 5. Compare places to each other Here we'll model and analyze a set of sub-sites within a study area to compare their characteristics. ``` # study area within 1/2 mile of SF Civic Center latlng_coords = ox.geocode('Civic Center, San Francisco, CA, USA') latlng_point = Point(latlng_coords[1], latlng_coords[0]) latlng_point_proj, crs = ox.projection.project_geometry(latlng_point) polygon_proj = latlng_point_proj.buffer(800) sf_polygon, crs = ox.projection.project_geometry(polygon_proj, crs=crs, to_latlong=True) # get the tracts that intersect the study area polygon tracts = gpd.read_file('../../data/tl_2020_06_tract/').set_index('GEOID') mask = tracts.intersects(sf_polygon) cols = ['ALAND', 'geometry'] sf_tracts = tracts.loc[mask, cols] sf_tracts.head() ``` Let's use a custom filter to model "surface streets." You get to pick what to include and exclude, using the [Overpass Query Language](https://wiki.openstreetmap.org/wiki/Overpass_API/Overpass_QL).
``` # build a custom filter cf1 = '["highway"~"residential|living_street|tertiary|secondary|primary"]' cf2 = '["service"!~"alley|driveway|emergency_access|parking|parking_aisle|private"]' cf3 = '["area"!~"yes"]' custom_filter = cf1 + cf2 + cf3 custom_filter # model the street network across all the study sub-sites G_all = ox.graph_from_polygon(sf_tracts.unary_union, custom_filter=custom_filter) len(G_all.nodes) %%time # calculate clean intersection counts per tract intersect_counts = {} for label, geom in zip(sf_tracts.index, sf_tracts['geometry']): G_tmp = ox.graph_from_polygon(geom, custom_filter=custom_filter) clean_intersects = ox.consolidate_intersections(ox.project_graph(G_tmp), rebuild_graph=False) intersect_counts[label] = len(clean_intersects) # calculate intersection density per km^2 sf_tracts['intersect_count'] = pd.Series(intersect_counts) sf_tracts['intersect_density'] = sf_tracts['intersect_count'] / (sf_tracts['ALAND'] / 1e6) sf_tracts['intersect_density'].describe() # plot the tracts and the network plt.style.use('dark_background') fig, ax = plt.subplots(figsize=(6, 6)) ax.axis('off') ax.set_title('Intersection density (per km2)') ax = sf_tracts.plot(ax=ax, column='intersect_density', cmap='Reds_r', legend=True, legend_kwds={'shrink': 0.8}) fig, ax = ox.plot_graph(G_all, ax=ax, node_size=0, edge_color='#111111') fig.savefig('map.png', dpi=300, facecolor='#111111', bbox_inches='tight') ``` Our simplified, naive assumptions in this analysis have some shortcomings that result in analytical problems. How would you improve it? 1. Periphery effects? 2. Incorrect study site sizes? 3. What are we counting and not counting here? ``` # now it's your turn # how would you improve this analysis to make it more meaningful and interpretable? ``` ## 6. Urban accessibility If you're interested in isochrone mapping, see the [OSMnx examples](https://github.com/gboeing/osmnx-examples) for a demonstration.
Here, we'll analyze food deserts in central LA using OSMnx and [Pandana](https://udst.github.io/pandana/). Pandana uses contraction hierarchies for imprecise but very fast shortest path calculation. ``` # specify some parameters for the analysis walk_time = 20 # max walking horizon in minutes walk_speed = 4.5 # km per hour # model the walkable network within our original study site G_walk = ox.graph_from_polygon(polygon, network_type='walk') fig, ax = ox.plot_graph(G_walk, node_size=0, edge_color='w', edge_linewidth=0.3) # set a uniform walking speed on every edge for u, v, data in G_walk.edges(data=True): data['speed_kph'] = walk_speed G_walk = ox.add_edge_travel_times(G_walk) # extract node/edge GeoDataFrames, retaining only necessary columns (for pandana) nodes = ox.graph_to_gdfs(G_walk, edges=False)[['x', 'y']] edges = ox.graph_to_gdfs(G_walk, nodes=False).reset_index()[['u', 'v', 'travel_time']] # get all the "fresh food" stores on OSM within the study site # you could load any amenities DataFrame, but we'll get ours from OSM tags = {'shop': ['grocery', 'greengrocer', 'supermarket']} amenities = ox.geometries_from_bbox(north=nodes['y'].max(), south=nodes['y'].min(), east=nodes['x'].max(), west=nodes['x'].min(), tags=tags) amenities.shape # construct the pandana network model network = pandana.Network(node_x=nodes['x'], node_y=nodes['y'], edge_from=edges['u'], edge_to=edges['v'], edge_weights=edges[['travel_time']]) # extract (approximate, unprojected) centroids from the amenities' geometries centroids = amenities.centroid # specify a max travel distance for this analysis # then set the amenities' locations on the network maxdist = walk_time * 60 # minutes -> seconds, to match travel_time units network.set_pois(category='grocery', maxdist=maxdist, maxitems=3, x_col=centroids.x, y_col=centroids.y) # calculate travel time to nearest amenity from each node in network distances = network.nearest_pois(distance=maxdist, category='grocery', num_pois=3)
distances.astype(int).head() # plot distance to nearest amenity fig, ax = ox.plot_graph(G_walk, node_size=0, edge_linewidth=0.1, edge_color='gray', show=False, close=False) sc = ax.scatter(x=nodes['x'], y=nodes['y'], c=distances[1], s=1, cmap='inferno_r') ax.set_title(f'Walking time to nearest grocery store') plt.colorbar(sc, shrink=0.7).outline.set_edgecolor('none') ``` This tells us about the travel time to the nearest amenities, from each node in the network. What if we're instead interested in how many amenities we can reach within our time horizon? ``` # set a variable on the network, using the amenities' nodes node_ids = network.get_node_ids(centroids.x, centroids.y) network.set(node_ids, name='grocery') # aggregate the variable to all the nodes in the network # when counting, the decay doesn't matter (but would for summing) access = network.aggregate(distance=maxdist, type='count', decay='linear', name='grocery') # let's cap it at 5, assuming no further utility from a larger choice set access = access.clip(upper=5) access.describe() # plot amenity count within your walking horizon fig, ax = ox.plot_graph(G_walk, node_size=0, edge_linewidth=0.1, edge_color='gray', show=False, close=False) sc = ax.scatter(x=nodes['x'], y=nodes['y'], c=access, s=1, cmap='inferno') ax.set_title(f'Grocery stores within a {walk_time} minute walk') plt.colorbar(sc, shrink=0.7).outline.set_edgecolor('none') # now it's your turn # map walking time to nearest school in our study site, capped at 30 minutes # what kinds of communities have better/worse walking access to schools? # see documentation at https://wiki.openstreetmap.org/wiki/Tag:amenity=school ```
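Under the hood, queries like `nearest_pois` and `aggregate` boil down to shortest-path searches from every network node (pandana accelerates these with contraction hierarchies). A pure-Python sketch of the same accessibility question on a toy graph, using plain Dijkstra (the graph and POI locations are made up for illustration):

```python
import heapq

def travel_times(graph, source):
    """Dijkstra: travel time from source to every reachable node.
    graph maps node -> list of (neighbor, edge_travel_time)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def pois_within(graph, source, poi_nodes, maxdist):
    """Count POI nodes reachable from source within maxdist seconds."""
    dist = travel_times(graph, source)
    return sum(1 for p in poi_nodes if dist.get(p, float('inf')) <= maxdist)

# toy network: 4 nodes in a line, 300 seconds of walking per edge
toy = {0: [(1, 300)], 1: [(0, 300), (2, 300)], 2: [(1, 300), (3, 300)], 3: [(2, 300)]}
print(pois_within(toy, 0, poi_nodes={2, 3}, maxdist=20 * 60))  # → 2
```

Pandana answers this query for every node in the real network at once, which is why the per-node maps above are feasible.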
# Penalised Regression ## YouTube Videos 1. **Scikit Learn Linear Regression:** https://www.youtube.com/watch?v=EvnpoUTXA0E 2. **Scikit Learn Linear Penalise Regression:** https://www.youtube.com/watch?v=RhsEAyDBkTQ ## Introduction We often do not want the coefficients/weights to be too large. Hence we augment the loss function with a penalty term to discourage large values of $w$. \begin{align} \mathcal{L} & = \sum_{i=1}^N (y_i-f(x_i|w,b))^2 + \alpha \sum_{j=1}^D w_j^2 + \beta \sum_{j=1}^D |w_j| \end{align} where $f(x_i|w,b) = wx_i+b$. The values of $\alpha$ and $\beta$ are positive (or zero), with higher values forcing the weights closer to zero. ## Lesson Structure 1. The task of this lesson is to infer the weights given the data (observations $y$ and inputs $x$). 2. We will be using the module `sklearn.linear_model`. ``` import numpy as np from sklearn.linear_model import LinearRegression import matplotlib.pyplot as plt %matplotlib inline # In order to reproduce the exact same numbers we need to set the seed for random number generators: np.random.seed(1) ``` A normally distributed random variable looks as follows: ``` e = np.random.randn(10000,1) plt.hist(e,100) #histogram with 100 bins plt.ylabel('y') plt.xlabel('x') plt.title('Histogram of Normally Distributed Numbers') plt.show() ``` Generate observations $y$ given feature (design) matrix $X$ according to: $$ y = Xw + \xi\\ \xi_i \sim \mathcal{N}(0,\sigma^2) $$ In this particular case, $w$ is a 100-dimensional vector where 90% of the numbers are zero, i.e. only 10 of the numbers are non-zero.
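Before generating data, it can help to see the penalised objective from the introduction written out directly. A toy pure-Python version for the 1-D model $f(x) = wx + b$ (illustrative only; the sklearn models below handle this internally):

```python
def penalised_loss(xs, ys, w, b, alpha=0.0, beta=0.0):
    """Squared error plus ridge (alpha) and lasso (beta) penalties,
    for the 1-D model f(x) = w*x + b."""
    squared_error = sum((y - (w * x + b)) ** 2 for x, y in zip(xs, ys))
    return squared_error + alpha * w ** 2 + beta * abs(w)

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 2.0]
print(penalised_loss(xs, ys, w=1.0, b=0.0))                       # perfect fit, no penalty: 0.0
print(penalised_loss(xs, ys, w=1.0, b=0.0, alpha=2.0, beta=1.0))  # penalties add cost for |w| > 0: 3.0
```

The second call shows the trade-off the models below optimise: non-zero weights must reduce the squared error by more than the penalty they incur.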
``` # Generate the data N = 40 # Number of observations D = 100 # Dimensionality x = np.random.randn(N,D) # get random observations of x w_true = np.zeros((D,1)) # create a weight vector of zeros idx = np.random.choice(100,10,replace=False) # randomly choose 10 of those weights w_true[idx] = np.random.randn(10,1) # populate them with 10 random weights e = np.random.randn(N,1) # noise vector y = np.matmul(x,w_true) + e # generate observations # create validation set: N_test = 50 x_test = np.random.randn(50,D) y_test_true = np.matmul(x_test,w_true) model = LinearRegression() model.fit(x,y) # plot the true vs estimated coefficients plt.plot(np.arange(100),np.squeeze(model.coef_)) plt.plot(np.arange(100),w_true) plt.legend(["Estimated","True"]) plt.title('Estimated Weights') plt.show() ``` One way of testing how good your model is, is to look at metrics. In the case of regression, Mean Squared Error (MSE) is a common metric, defined as: $$ \frac{1}{N}\sum_{i=1}^N \xi_i^2$$ where $\xi_i = y_i-f(x_i|w,b)$. Furthermore, it is best to look at the MSE on a validation set, rather than on the training dataset that we used to train the model. ``` y_est = model.predict(x_test) mse = np.mean(np.square(y_test_true-y_est)) print(mse) ``` Ridge regression penalises the weights via the $\alpha$ parameter defined at the top: the larger **the square of the weights**, the higher the loss. ``` from sklearn.linear_model import Ridge model = Ridge(alpha=5.0,fit_intercept = False) model.fit(x,y) # plot the true vs estimated coefficients plt.plot(np.arange(100),np.squeeze(model.coef_)) plt.plot(np.arange(100),w_true) plt.legend(["Estimated","True"]) plt.show() ``` This model is slightly better than the one without any penalty on the weights.
``` y_est = model.predict(x_test) mse = np.mean(np.square(y_test_true-y_est)) print(mse) ``` Lasso is a model that encourages weights to go to zero exactly, as opposed to Ridge regression, which encourages small weights. ``` from sklearn.linear_model import Lasso model = Lasso(alpha=0.1,fit_intercept = False) model.fit(x,y) # plot the true vs estimated coefficients plt.plot(np.arange(100),np.squeeze(model.coef_)) plt.plot(np.arange(100),w_true) plt.legend(["Estimated","True"]) plt.title('Lasso regression weight inference') plt.show() ``` The MSE is significantly better than for both of the above models. ``` y_est = model.predict(x_test)[:,None] mse = np.mean(np.square(y_test_true-y_est)) print(mse) ``` Automated Relevance Determination (ARD) regression is similar to lasso in that it encourages zero weights. However, the advantage is that you do not need to set the penalisation parameters $\alpha$ and $\beta$ in this model. ``` from sklearn.linear_model import ARDRegression model = ARDRegression(fit_intercept = False) model.fit(x,y) # plot the true vs estimated coefficients plt.plot(np.arange(100),np.squeeze(model.coef_)) plt.plot(np.arange(100),w_true) plt.legend(["Estimated","True"]) plt.show() y_est = model.predict(x_test)[:,None] mse = np.mean(np.square(y_test_true-y_est)) print(mse) ``` ### Note: Rerun the above after setting N=400 ## Inverse Problems The following section is optional and you may skip it. It is not necessary for understanding Deep Learning. Inverse problems are those where, given the outputs, you are required to infer the inputs. A typical example is X-rays: given the X-ray sensor readings, the algorithm needs to build an image of an individual's bone structure. See [here](http://scikit-learn.org/stable/auto_examples/applications/plot_tomography_l1_reconstruction.html#sphx-glr-auto-examples-applications-plot-tomography-l1-reconstruction-py) for an example of l1 regularisation applied to a compressed sensing problem (it has a resemblance to the X-ray problem).
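Returning to the contrast between the penalties: for a single coordinate, the l2 penalty merely rescales the least-squares solution toward zero, while the l1 penalty soft-thresholds it, which is why lasso (and the l1 reconstruction linked above) produces exact zeros. A sketch of the two coordinate-wise updates (a standard result, not sklearn's actual solver):

```python
def ridge_update(z, alpha):
    # l2 penalty rescales the unpenalised solution z toward zero
    return z / (1.0 + alpha)

def lasso_update(z, beta):
    # l1 penalty soft-thresholds: coordinates with |z| <= beta become exactly 0
    if z > beta:
        return z - beta
    if z < -beta:
        return z + beta
    return 0.0

print(ridge_update(0.05, 1.0))  # small but non-zero: 0.025
print(lasso_update(0.05, 0.1))  # exactly zero: 0.0
print(lasso_update(2.0, 0.1))   # large weights survive, slightly shrunk: 1.9
```

This matches what the weight plots above show: ridge shrinks everything a little, while lasso zeroes out the small coefficients and keeps the large ones.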
# Welcome to the PyLibSIMBA walkthrough We will go through the simple steps it takes to submit a message to a blockchain, and verify the message arrived. ## First, let's import PyLibSIMBA This is already installed in the notebook, so we can do ``` import pylibsimba ``` ## Test 1 - Generate a Wallet This holds the public and private keys used to sign a transaction, and creates an address to send the transactions from. We will import a helper class and generate a wallet. ### Advanced usage: If you create a contract on simbachain.com that uses a blockchain requiring currency, ensure that a wallet with that address has the correct balance (via their website, spigot etc.), and initialise the Wallet object above with the mnemonic associated with that wallet. The Wallet object will then have the same address and can be used to sign transactions on that blockchain. ``` from pylibsimba.wallet import Wallet ``` Now we can generate a wallet to use on the SIMBAChain test blockchain. ``` wallet = Wallet(None) wallet.generate_wallet() addr = wallet.get_address() print(addr) ``` ## Test 2 - Get the SIMBAChain object. For this step we need to import a few things. Specifically, we need the ability to get a SIMBAChain object. Most of the outputs are in JSON format, so we import that too. ``` from pylibsimba import get_simba_instance import json ``` Now we can get a SIMBAChain instance, which is used for all interactions with the SIMBAChain API. This requires a few parameters: * url : The API URL * We'll use the test endpoint from simbachain.com * wallet : The Wallet to use * This is the wallet we just made above * api_key : (Optional) The API key * This is a test API key pre-generated for this example. * Please register at simbachain.com to gain the ability to create your own contracts and access them via your own API key. * management_key : (Optional) The Management API key * As for the api_key, but unused for now.
```
simba = get_simba_instance(
    'https://api.simbachain.com/v1/libSimba-SimbaChat-Quorum/',
    wallet,
    '04d1729f7144873851a745d2ae85639f55c8e3de5aea626a2bcd0055c01ba6fc',
    '')
```

### Using the SIMBAChain instance

Let's check the balance of our wallet. To do this we simply call get_balance() on the simba instance we've just created. The output is a dict, so we'll use json to dump it to a string.

```
balance = simba.get_balance()
print("Balance: {}".format(json.dumps(balance, indent=4)))
```

The output shows the balance as -1, but we can also see that ```"poa": true```, meaning this is a Proof of Authority blockchain, so no currency is required.

## Test 3 - Calling a method on a Smart Contract via SIMBAChain.com

First, we need the parameters to call the method with. As it's an example method, the params are given below. (We'll look at how to get the parameters for a different method later.)

```
method_params = {
    'assetId': "0xbad65ff688a28efdd17d979c12f0ab2e2de305dbc8a2aa6be45ed644da822cfb",
    'name': "A Test Room",
    'createdBy': "PyLibSIMBA",
}
```

Call the method with the parameters. That's it!

```
resp = simba.call_method('createRoom', method_params)
print("Successfully submitted? {}".format(resp.transaction_id))
```

### Did it work?

There is a delay between submitting a transaction and it appearing on the blockchain. We have the option to "send and forget", or to check whether it has "deployed" successfully. To be completely sure, we can run wait_for_success_or_error() to check. Be aware this can take some time, and a variety of errors can be thrown if a problem is detected. Please check the documentation on Exceptions to learn about the types of errors caught by the PyLibSIMBA SDK.

```
try:
    final_resp = simba.wait_for_success_or_error(resp.transaction_id)
    print("Successful? {}".format(final_resp))
except Exception as e1:
    print("Failure! {}".format(e1))
```

## Test 4 - Calling a method and submitting files

Similar to above, we need some method parameters, and we'll add some files too.

```
method_params = {
    'assetId': "A Test Room",
    'chatRoom': "A Test Room",
    'message': "Hello World",
    'sentBy': "PyLibSIMBA"
}
files = {
    "test file 1.txt": open("test_files/test file 1.txt", 'rb'),
    "test file 2.txt": open("test_files/test file 2.txt", 'rb')
}
```

Instead of call_method() we use call_method_with_file(). To be sure it worked, we'll add a wait again. This operation will take several seconds to complete.

N.B. Ensure the files exist in the given location, or the method will return an error.

```
try:
    resp = simba.call_method_with_file('sendMessage', method_params, files)
    print("Successfully submitted? {}".format(resp.transaction_id))
    resp = simba.wait_for_success_or_error(resp.transaction_id)
    print("Successfully deployed? {}".format(resp))
except Exception as e1:
    print("Something went wrong: {}".format(e1))

!ls ../tests
```

## Test 5 - A list of transactions for the method "createRoom"

We can get a list of all transactions for the method "createRoom". The result is a PagedResponse class, which holds information about the number of results and a way to 'page' through them. We can also filter so that we only see the transactions we have created, using createdBy_exact. The output from this can be extensive and very detailed, so we'll use json.dumps() to "pretty print" it.

```
method_params = {
    'createdBy_exact': "PyLibSIMBA"
}
result_pages = simba.get_method_transactions('createRoom', method_params)
print("Number of results for transaction {}: {}".format('createRoom', result_pages.count()))
print("Got data : \n{}".format(
    json.dumps(
        result_pages.data(), indent=4
    )
))
```

## Test 6 - Get an existing example transaction object by the transaction ID

Use the example transaction ID given, or one from the methods called above.
``` transaction_id = "97b56a4dd3ff4fe7820f46a7101f72e2" txn = simba.get_transaction(transaction_id) print("Transaction : \n{}".format( json.dumps(txn, indent=4) )) ``` ## Test 7 - Get the Transaction Metadata object for an existing example, by the transaction hash. ``` transaction_hash = "0x7565461be84259d5e365c2c3225696a6d74245f1eca1ecc050b1fedd5a4a1f4d" txn_metadata = simba.get_bundle_metadata_for_transaction(transaction_hash) print("Transaction Metadata: \n{}".format(json.dumps(txn_metadata, indent=4))) ``` ## Test 8 - Get a bundle of files from a given transaction, from the transaction hash Writes the bundle to "the_bundle.zip". This implementation sets stream=True so that the requests module doesn't download the whole bundle into memory first. We can also check for errors with raise_for_status() ``` transaction_hash = "0x7565461be84259d5e365c2c3225696a6d74245f1eca1ecc050b1fedd5a4a1f4d" req = simba.get_bundle_for_transaction(transaction_hash, stream=True) req.raise_for_status() ``` ### Writing the "bundle" file to disk The bundle of files is actually a zip file with all of the files submitted in the transaction, along with some metadata used to check their validity. Writing the file is done as usual with the *python requests* package. To check the file was written to disk, we do a isfile() check ``` output_file = 'the_bundle.zip' with open('the_bundle.zip', 'wb') as f: for chunk in req.iter_content(chunk_size=8192): if chunk: # filter out keep-alive new chunks f.write(chunk) import os print("Wrote file {}: {}".format( output_file, os.path.isfile(os.path.abspath(output_file))) ) ``` ## Test 9 - Get the first file from a bundle for a given transaction, from the transaction hash Writes the file to "file_0.txt" Very similar to the example above, but specifying an index from the list of files submitted in the transaction. 
``` transaction_hash = "0x7565461be84259d5e365c2c3225696a6d74245f1eca1ecc050b1fedd5a4a1f4d" req = simba.get_file_from_bundle_for_transaction(transaction_hash, 0, stream=True) req.raise_for_status() output_file = 'file_0.txt' with open(output_file, 'wb') as f: for chunk in req.iter_content(chunk_size=8192): if chunk: # filter out keep-alive new chunks f.write(chunk) print("Wrote file {}: {}".format( output_file, os.path.isfile(os.path.abspath(output_file))) ) ``` ## Test 10 - Get a file by name, from a bundle for a given transaction, from the transaction hash Again, as above, but getting the file by name. Writes the file to "File1.txt" ``` transaction_hash = "0x7565461be84259d5e365c2c3225696a6d74245f1eca1ecc050b1fedd5a4a1f4d" filename = "File1.txt" req = simba.get_file_from_bundle_by_name_for_transaction(transaction_hash, filename, stream=True) req.raise_for_status() output_file = 'File1.txt' with open(output_file, 'wb') as f: for chunk in req.iter_content(chunk_size=8192): if chunk: # filter out keep-alive new chunks f.write(chunk) print("Wrote file \n{}: {}".format( output_file, os.path.isfile(os.path.abspath(output_file))) ) ``` ## Test 12 - Get the organisations this user belongs to This is useful when performing more low-level API calls. ``` paged_response = simba.get_organisations() for org in paged_response.data(): print(org['id']) ``` ## Test 13 - Push arbitrary solidity code to simbachain.com The create_contract() method takes four parameters: * The simba object * The path to a file containing the solidity code * The name to call the Smart Contract. This will be shown in the dashboard. * An organisation id. A user can be a member of multiple organisations, so this is the organisation the Smart Contract will be associated with. A contract name must be unique within an organisation, so we will add a timestamp to the name so this test organisation will accept it. 
```
from datetime import datetime
from pylibsimba import util

response = util.create_contract(
    simba,
    '../tests/example.sol',
    'example_contract_{}'.format(datetime.now().isoformat()),
    '5cd5cef4cabb4b009e00b6b3ff45ee08'
)
print("Wrote contract : \n{}".format(
    json.dumps(response.json(), indent=4)
))
```

## Get in touch!

If you have any issues with the demo above, please let us know via the GitHub issues page, <https://github.com/SIMBAChain/PyLibSIMBA/issues>
``` import sys sys.path.append("..") import json experiment = 'cifar' directory_results = 'results/' with open(directory_results+experiment+'-BBI.json', 'r') as f: bestrun_BBI = json.load(f) with open(directory_results+experiment+'-sgd.json', 'r') as f: bestrun_sgd = json.load(f) with open(directory_results+'best-parameters-'+experiment+'.json', 'r') as f: bestpar = json.load(f) with open(directory_results+'scanning-parameters-'+experiment+'.json', 'r') as f: scanningpar = json.load(f) print('Parameters used for this scan:\n') for key in list(scanningpar): print(key,":", scanningpar[key]) print('Results at the end of the long run with the best parameters:\n') for key in list(bestrun_sgd): if key!= 'epoch': print(key) print("\tsgd:", bestrun_sgd[key][-1]) print("\tBBI:", bestrun_BBI[key][-1]) from IPython.display import Image optimizers = ['sgd','BBI'] opt = optimizers[0] print(opt+"\n","Best parameters: ", bestpar[opt]) Image(filename=directory_results+experiment+'-'+opt+'.png') opt = optimizers[1] print(opt+"\n","Best parameters: ", bestpar[opt]) Image(filename=directory_results+experiment+'-'+opt+'.png') # Here we can run the experiment more times to get statistics """ This file is the main starting point for all experiments """ import torch import torch.nn as nn import torch.optim as optim import torch.backends.cudnn as cudnn import os import argparse import sys from inflation import BBI import numpy as np from experiments.cifar import cifar from run_experiment_hyperopt import * from hyperopt import hp, tpe, Trials, fmin experiment = "cifar" problem_number = None #fixed BBI parameters threshold_BBI = 2000 threshold0_BBI = 100 consEn_BBI = True nFixedBounces_BBI = 100 deltaEn = 0.0 seed = 42 def run_experiment_sgd_name_seed(epochs = 2, name = "sgd", seed = seed, stepsize = 1e-3, rho = .99): param_list = ["main.py", experiment, "--optimizer", "sgd", "--lr", str(stepsize), "--rho", str(rho), "--epochs", str(epochs), "--seed", str(seed), "--progress", "false", 
"--device", "cuda", "-n", name] if experiment == "PDE_PoissonD": param_list.append("--problem") param_list.append(str(problem_number)) return run_experiment(param_list) def run_experiment_BBI_name(epochs = 2, name = "BBI", stepsize = 1e-3, threshold = threshold_BBI, threshold0 = threshold0_BBI, consEn = consEn_BBI, nFixedBounces = nFixedBounces_BBI, deltaEn = deltaEn): param_list = ["main.py", experiment, "--optimizer", "BBI", "--lr", str(stepsize), "--epochs", str(epochs),"--seed", str(seed), "--threshold", str(threshold), "--threshold0", str(threshold0), "--nFixedBounces", str(nFixedBounces), "--deltaEn", str(deltaEn), "--consEn", str(consEn), "--progress", "false","--device", "cuda", "-n", name] if experiment == "PDE_PoissonD": param_list.append("--problem") param_list.append(str(problem_number)) return run_experiment(param_list) best_par_sgd = bestpar['sgd'] epochs_check = 150 nruns = 10 seeds = [42, 27, 313, 5, 99, 429,42892, 318,242984,1042042,4209420,2,48,488429,19428,4289,1568,5920,2381,5502,48572,2385,111,234,4456,5,7,343,64,12,73] nruns_start = 0 for i in range(nruns_start,nruns): print(i) run_experiment_sgd_name_seed(epochs=epochs_check,name = "sgd-"+str(i), seed = seeds[i], **best_par_sgd) best_par_BBI = bestpar['BBI'] epochs_check = 150 nruns = 10 nruns_start = 0 for i in range(nruns_start,nruns): print(i) run_experiment_BBI_name(epochs=epochs_check,name = "BBI-"+str(i), **best_par_BBI) #These are the new runs nruns = 10 all_results_sgd = [] all_results_bbi = [] for i in range(nruns): with open(directory_results+'sgd-'+str(i)+'.json', 'r') as f: all_results_sgd.append(json.load(f)) with open(directory_results+'BBI-'+str(i)+'.json', 'r') as f: all_results_bbi.append(json.load(f)) accs_sgd = [] accs_bbi = [] #These are the new runs for i in range(nruns): accs_sgd.append(all_results_sgd[i]['acc test'][-1]) accs_bbi.append(all_results_bbi[i]['acc test'][-1]) # This is the previous run accs_sgd.append(bestrun_sgd['acc test'][-1]) 
accs_bbi.append(bestrun_BBI['acc test'][-1]) #These are not the same runs in the paper (less statistics), but the results are comparable. print("SGD:") print(accs_sgd) print("\tMean: ", torch.mean(torch.tensor(accs_sgd)).item()) print("\tStd: ", torch.std(torch.tensor(accs_sgd)).item()) print("BBI:") print(accs_bbi) print("\tMean: ", torch.mean(torch.tensor(accs_bbi)).item()) print("\tStd: ", torch.std(torch.tensor(accs_bbi)).item()) ```
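The mean and standard deviation reported above use torch; for reference, the same summary can be computed with the standard library alone (`statistics.stdev` is the sample standard deviation, matching `torch.std`'s default unbiased estimator). The accuracy values below are made up for illustration, not real run results:

```python
import statistics

accs = [0.91, 0.93, 0.92]  # illustrative accuracies, not results from the paper
print("Mean:", statistics.mean(accs))
print("Std:", statistics.stdev(accs))
```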
##### Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License"); ``` # Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== ``` # TF-Hub Action Recognition Model <table align="left"><td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/action_recognition_with_tf_hub.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab </a> </td><td> <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/action_recognition_with_tf_hub.ipynb"> <img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td></table> This Colab demonstrates use of action recognition from video data using the [tfhub.dev/deepmind/i3d-kinetics-400/1](https://tfhub.dev/deepmind/i3d-kinetics-400/1) module. The underlying model is described in the paper "[Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset](https://arxiv.org/abs/1705.07750)" by Joao Carreira and Andrew Zisserman. The paper was posted on arXiv in May 2017, and was published as a CVPR 2017 conference paper. The source code is publicly available on [github](https://github.com/deepmind/kinetics-i3d). 
"Quo Vadis" introduced a new architecture for video classification, the Inflated 3D Convnet or I3D. This architecture achieved state-of-the-art results on the UCF101 and HMDB51 datasets by fine-tuning these models. I3D models pre-trained on Kinetics also placed first in the CVPR 2017 [Charades challenge](http://vuchallenge.org/charades.html).

The original module was trained on the [kinetics-400 dataset](https://deepmind.com/research/open-source/open-source-datasets/kinetics/) and knows about 400 different actions. Labels for these actions can be found in the [label map file](https://github.com/deepmind/kinetics-i3d/blob/master/data/label_map.txt).

In this Colab we will use it to recognize activities in videos from the UCF101 dataset.

# Setting up the environment

```
# Install the necessary python packages.
!pip install -q "tensorflow>=1.7" "tensorflow-hub" "imageio"

#@title Import the necessary modules

# TensorFlow and TF-Hub modules.
from absl import logging

import tensorflow as tf
import tensorflow_hub as hub

logging.set_verbosity(logging.ERROR)

# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import cv2
import numpy as np

# Some modules to display an animation using imageio.
import imageio from IPython import display from urllib import request # requires python3 #@title Helper functions for the UCF101 dataset # Utilities to fetch videos from UCF101 dataset UCF_ROOT = "http://crcv.ucf.edu/THUMOS14/UCF101/UCF101/" _VIDEO_LIST = None _CACHE_DIR = tempfile.mkdtemp() def list_ucf_videos(): """Lists videos available in UCF101 dataset.""" global _VIDEO_LIST if not _VIDEO_LIST: index = request.urlopen(UCF_ROOT).read().decode("utf-8") videos = re.findall("(v_[\w_]+\.avi)", index) _VIDEO_LIST = sorted(set(videos)) return list(_VIDEO_LIST) def fetch_ucf_video(video): """Fetchs a video and cache into local filesystem.""" cache_path = os.path.join(_CACHE_DIR, video) if not os.path.exists(cache_path): urlpath = request.urljoin(UCF_ROOT, video) print("Fetching %s => %s" % (urlpath, cache_path)) data = request.urlopen(urlpath).read() open(cache_path, "wb").write(data) return cache_path # Utilities to open video files using CV2 def crop_center_square(frame): y, x = frame.shape[0:2] min_dim = min(y, x) start_x = (x // 2) - (min_dim // 2) start_y = (y // 2) - (min_dim // 2) return frame[start_y:start_y+min_dim,start_x:start_x+min_dim] def load_video(path, max_frames=0, resize=(224, 224)): cap = cv2.VideoCapture(path) frames = [] try: while True: ret, frame = cap.read() if not ret: break frame = crop_center_square(frame) frame = cv2.resize(frame, resize) frame = frame[:, :, [2, 1, 0]] frames.append(frame) if len(frames) == max_frames: break finally: cap.release() return np.array(frames) / 255.0 def animate(images): converted_images = np.clip(images * 255, 0, 255).astype(np.uint8) imageio.mimsave('./animation.gif', converted_images, fps=25) with open('./animation.gif','rb') as f: display.display(display.Image(data=f.read(), height=300)) #@title Get the kinetics-400 labels # Get the kinetics-400 action labels from the GitHub repository. 
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt" with request.urlopen(KINETICS_URL) as obj: labels = [line.decode("utf-8").strip() for line in obj.readlines()] print("Found %d labels." % len(labels)) ``` # Using the UCF101 dataset ``` # Get the list of videos in the dataset. ucf_videos = list_ucf_videos() categories = {} for video in ucf_videos: category = video[2:-12] if category not in categories: categories[category] = [] categories[category].append(video) print("Found %d videos in %d categories." % (len(ucf_videos), len(categories))) for category, sequences in categories.items(): summary = ", ".join(sequences[:2]) print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary)) # Get a sample cricket video. sample_video = load_video(fetch_ucf_video("v_CricketShot_g04_c02.avi")) print("sample_video is a numpy array of shape %s." % str(sample_video.shape)) animate(sample_video) # Run the i3d model on the video and print the top 5 actions. # First add an empty dimension to the sample video as the model takes as input # a batch of videos. model_input = np.expand_dims(sample_video, axis=0) # Create the i3d model and get the action probabilities. with tf.Graph().as_default(): i3d = hub.Module("https://tfhub.dev/deepmind/i3d-kinetics-400/1") input_placeholder = tf.placeholder(shape=(None, None, 224, 224, 3), dtype=tf.float32) logits = i3d(input_placeholder) probabilities = tf.nn.softmax(logits) with tf.train.MonitoredSession() as session: [ps] = session.run(probabilities, feed_dict={input_placeholder: model_input}) print("Top 5 actions:") for i in np.argsort(ps)[::-1][:5]: print("%-22s %.2f%%" % (labels[i], ps[i] * 100)) ```
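The top-5 selection in the last cell relies on a common NumPy idiom: sort ascending with `argsort`, reverse, then slice. A tiny standalone version of that pattern:

```python
import numpy as np

probs = np.array([0.05, 0.60, 0.10, 0.25])
top3 = np.argsort(probs)[::-1][:3]  # indices of the three largest values, largest first
print(top3)
```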
<a href="https://colab.research.google.com/github/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessEVAL.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> **BUTTERFLY - Processing of the FASTQ files for the EVAL dataset.** 1. Download and build kallisto and bustools from source. 2. Download the genome FASTA file and build a kallisto index 3. Download the FASTQ files and process with kallisto 4. Process the output from kallisto with bustools (the butterfly branch) **1. Download and build kallisto and bustools from source** ``` # Install dependencies needed for build !apt update !apt install -y cmake !apt-get install autoconf #Need to download and build htslib to be able to build kallisto !cd /usr/bin && wget https://github.com/samtools/htslib/releases/download/1.9/htslib-1.9.tar.bz2 &&tar -vxjf htslib-1.9.tar.bz2 && cd htslib-1.9 && make #clone the kallisto repo, build and install !rm -r temporary #if the code is run more than once !mkdir temporary !cd temporary && git clone https://github.com/pachterlab/kallisto.git !cd temporary/kallisto && git checkout v0.46.2 && mkdir build && cd build && cmake .. && make !chmod +x temporary/kallisto/build/src/kallisto !mv temporary/kallisto/build/src/kallisto /usr/local/bin/ #clone the bustools repo, build and install !cd temporary && rm -r * !git clone https://github.com/BUStools/bustools.git !mv bustools/ temporary/ !cd temporary/bustools && git checkout butterfly && mkdir build && cd build && cmake .. && make !chmod +x temporary/bustools/build/src/bustools !mv temporary/bustools/build/src/bustools /usr/local/bin/ !kallisto version ``` **2. Download the genome FASTA file and build a kallisto index** ``` #Download fasta and build kallisto index !wget "ftp://ftp.ensembl.org/pub/release-96/fasta/mus_musculus/cdna/Mus_musculus.GRCm38.cdna.all.fa.gz" -O mouse.fa.gz !kallisto index -i Mus_musculus.GRCm38.cdna.all.idx mouse.fa.gz ``` **3. 
Download the FASTQ files and process with kallisto** ``` #Process the files using kallisto !rm A_R1.gz A_R2.gz B_R1.gz B_R2.gz C_R1.gz C_R2.gz D_R1.gz D_R2.gz !mkfifo A_R1.gz A_R2.gz B_R1.gz B_R2.gz C_R1.gz C_R2.gz D_R1.gz D_R2.gz !curl -Ls "https://zenodo.org/record/4575502/files/Cortex1.CCKVLANXX.10X_2A.unmapped.1.fastq.gz?download=1" > A_R1.gz & curl -Ls "https://zenodo.org/record/4575502/files/Cortex1.CCKVLANXX.10X_2A.unmapped.2.fastq.gz?download=1" > A_R2.gz & curl -Ls "https://zenodo.org/record/4575502/files/Cortex1.CCKVLANXX.10X_2B.unmapped.1.fastq.gz?download=1" > B_R1.gz & curl -Ls "https://zenodo.org/record/4575502/files/Cortex1.CCKVLANXX.10X_2B.unmapped.2.fastq.gz?download=1" > B_R2.gz & curl -Ls "https://zenodo.org/record/4575502/files/Cortex1.CCKVLANXX.10X_2C.unmapped.1.fastq.gz?download=1" > C_R1.gz & curl -Ls "https://zenodo.org/record/4575502/files/Cortex1.CCKVLANXX.10X_2C.unmapped.2.fastq.gz?download=1" > C_R2.gz & curl -Ls "https://zenodo.org/record/4575502/files/Cortex1.CCKVLANXX.10X_2D.unmapped.1.fastq.gz?download=1" > D_R1.gz & curl -Ls "https://zenodo.org/record/4575502/files/Cortex1.CCKVLANXX.10X_2D.unmapped.2.fastq.gz?download=1" > D_R2.gz & kallisto bus -i Mus_musculus.GRCm38.cdna.all.idx -o bus_output/ -x 10xv2 -t 2 A_R1.gz A_R2.gz B_R1.gz B_R2.gz C_R1.gz C_R2.gz D_R1.gz D_R2.gz ``` **4. Process the output from kallisto with bustools (the butterfly branch)** ``` #get the whitelist ![ -d "GRNP_2020" ] && rm -r GRNP_2020 #in case the code is run several times !git clone https://github.com/pachterlab/GRNP_2020.git !cd GRNP_2020/whitelists && unzip 10xv2_whitelist.zip !cp GRNP_2020/tr2g/Mouse/* bus_output/. 
!cd GRNP_2020/whitelists && ls !bustools correct -w GRNP_2020/whitelists/10xv2_whitelist.txt -p bus_output/output.bus | bustools sort -T tmp/ -t 2 -o bus_output/sort.bus - #collapse !bustools collapse -o bus_output/coll -t bus_output/transcripts.txt -g bus_output/transcripts_to_genes.txt -e bus_output/matrix.ec bus_output/sort.bus #umicorrect !bustools umicorrect -o bus_output/umicorr.bus bus_output/coll.bus #convert to text !bustools text -o bus_output/bug.txt bus_output/umicorr.bus !ls -l !cd bus_output && ls -l ```
# Executing pages of your book ✨**experimental**✨

Sometimes you'd like to execute each page's content before building its HTML - this ensures that the outputs are up-to-date and that they still run. Jupyter Book has the ability to execute any code in your content when you build each page's HTML.

## Default execution behavior

By default, Jupyter Book will decide whether to execute your book's content based on the types of files used to store your content. Any Jupyter Notebooks will **not be executed**; Jupyter Book assumes that the notebook outputs are already present in the `ipynb` file (see below for how to change this behavior). In addition, **raw markdown** files will also not be executed, as Jupyter Book assumes that any code blocks were meant for viewing only, not running. However, any **[Jupytext](jupytext.html) content will be executed** when the page's HTML is built. This can be `.py`, `.md`, or `.Rmd` files that have jupytext YAML front-matter.

It's possible to **execute your book content at build time**, and there are a few convenience functions and options for this, which we'll cover below.

## Run all your book's content when building: **`jupyter-book build --execute`**

If you'd like to run each page of your book (where possible) when building the HTML for your book pages, you may use the `--execute` flag when you build each page's HTML. This will cause each `.ipynb` and jupytext-formatted page to be **executed** when it is built. In this case, the source content files will not be modified, but the outputs will be placed in each page's HTML.

Remember that the HTML for each page is cached until you update your content. The first time you run `jupyter-book build --execute`, each page of your book will be run and converted to HTML.
However, after this, only the pages that have been updated will be executed and converted to HTML. So while this will take a long time the first time you run it (since all your book content will need to run), it should be incrementally faster in subsequent builds.

Here's a quick summary of this behavior:

| Type | Default | `--execute` |
|---|---|---|
| Jupyter Notebooks (`.ipynb`) | Doesn't execute | Executes |
| Raw markdown (`.md`) | Doesn't execute | Executes |
| Jupytext pages (`.md`, `.Rmd`, `.py`) | Executes | Executes |

## Run a single page in-place: **`jupyter-book run`**

Jupyter Book also provides a convenience command-line function that executes a single Jupyter Notebook and stores the outputs in the same `ipynb` file. This allows you to quickly execute pages of your book and store the outputs in-line. You can use it with a path to a single book page like so:

```
jupyter-book run path/to/notebook.ipynb
```

and it will simply execute the notebook in-place. You can also specify a folder with a **collection** of pages as the first argument:

```
jupyter-book run path/to/notebook_folder
```

In this case, all notebooks in this folder and its sub-folders will be run in-place.
# Convert The `Food-101` Keras Model to CoreML This notebook will convert the [Food-101](https://github.com/stratospark/food-101-keras) trained Keras model and classification labels to a CoreML model. Run each script block by pressing `CTRL-Enter` or the `Run` button (Note: _`In [ ]` will change to `In [*]` when the script is running and eventually updating to `In [somenumber]` when done; some might take a while!_). More information can be found here: - [Deep learning food classification demo with Keras](http://blog.stratospark.com/deep-learning-applied-food-classification-deep-learning-keras.html) - Udacity's free [Core ML: Machine Learning of iOS](https://www.udacity.com/course/core-ml--ud1038) course is a great introduction. ## Import dependencies ``` from keras.models import load_model import coremltools ``` ## Download `Food-101` Keras model (if needed) These will be stored in [/tree/models](/tree/models). If you don't want to do this in this notebook but rather when building the docker image, you can download them in the `Dockerfile` by adding these lines after the workspace directory is created: ``` # Download example Food101 Keras model weights and labels RUN wget -O /workspace/models/keras-food101-model.hdf5 https://s3.amazonaws.com/stratospark/food-101/model4b.10-0.68.hdf5 RUN wget -O /workspace/models/keras-food101-model-labels.txt https://github.com/stratospark/food-101-mobile/raw/43598fdc08500683bbc04f877ae069c38c8ac4c3/model_export/labels.txt ``` ``` # Define the files to download basePath='/workspace/models/' files = [ {'name': 'Food 101 Model file', 'url': 'https://s3.amazonaws.com/stratospark/food-101/model4b.10-0.68.hdf5', 'path': basePath + 'keras-food101-model.hdf5'}, {'name': 'Food 101 Labels file', 'url': 'https://github.com/stratospark/food-101-mobile/raw/43598fdc08500683bbc04f877ae069c38c8ac4c3/model_export/labels.txt', 'path': basePath + 'keras-food101-model-labels.txt'} ] # Convenience method that will download the given files (if needed) 
and show progress def downloadFiles(files): import os.path import ipywidgets as widgets from urllib.request import urlretrieve from IPython.display import display progressBar = widgets.FloatProgress(value=0, min=0, max=100.0, step=0.1, description="Progress:") def dlProgress(count, blockSize, totalSize): percentage = min(float(count * blockSize * 100 / totalSize), 100.0) progressBar.value = percentage for file in files: path=file['path'] if not os.path.isfile(path) and not os.access(path, os.R_OK): progressBar = widgets.FloatProgress(value=0, min=0, max=100.0, step=0.1, description="Progress:") print("Downloading {0}:".format(file['name'])) display(progressBar) urlretrieve(file['url'], path, reporthook=dlProgress) else: print("{0} has already been downloaded".format(file['name'])) # Download the files (if needed) downloadFiles(files) ``` ## Load The `Food-101` Keras model ``` model = load_model(basePath + 'keras-food101-model.hdf5') class_labels = basePath + 'keras-food101-model-labels.txt' ``` __List the Food-101 model summary__ ``` model.summary() ``` ## Convert Keras model to CoreML model ``` coreml_model = coremltools.converters.keras.convert(model, input_names=['image'], output_names=['confidence'], class_labels=class_labels, image_input_names='image', image_scale=2./255, red_bias=-1, green_bias=-1, blue_bias=-1) ``` ## Add metadata to the CoreML model ``` coreml_model.author = 'Jeroen Wesbeek' coreml_model.license = 'MIT' coreml_model.short_description = 'Classifies food from an image as one of 101 classes' coreml_model.input_description['image'] = 'Food image' coreml_model.output_description['confidence'] = 'Confidence of the food classification' coreml_model.output_description['classLabel'] = 'Food classification label' ``` ## Inspect the created CoreML model ``` coreml_model ``` ## Test the CoreML model predictions __Download a test image and feed it into the CoreML model to get its prediction__ _Note: this will only work on macOS 10.13 or later!_ ``` 
import requests from PIL import Image from io import BytesIO response = requests.get('https://www.budgetbytes.com/wp-content/uploads/2017/01/Bibimbap-above.jpg') bibimbap = Image.open(BytesIO(response.content)) bibimbap import sys try: coreml_model.predict({'image' : bibimbap}) except Exception as inst: print("Could not perform model predictions", inst.args) ``` ## Save the CoreML model _Note: the converted CoreML model will be stored in to [root](http://0.0.0.0:8888/tree/notebook). Please refer to [Apple's documentation](https://developer.apple.com/documentation/coreml) on how to use the CoreML model inside your App._ ``` coreml_model.save('Food101Net.mlmodel') ```
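The `image_scale=2./255` and per-channel biases of `-1` passed to the converter amount to mapping each 8-bit pixel value from [0, 255] to [-1, 1]. A quick check of that arithmetic (the helper function is ours, for illustration only, not part of coremltools):

```python
def preprocess(pixel):
    # pixel * (2/255) - 1 maps: 0 -> -1.0, 127.5 -> 0.0, 255 -> 1.0
    return pixel * (2.0 / 255.0) - 1.0

print(preprocess(0), preprocess(127.5), preprocess(255))
```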
<a href="https://colab.research.google.com/github/dantecomedia/Iris-Classification-using-KNN/blob/master/Iris_Classification_(K_Nearest_Neighbors)_(3).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)

# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os

# Walking through the Iris Classification (k-nearest neighbors) example from
# Introduction to Machine Learning with Python by Andreas Muller & Sarah Guido
from sklearn.datasets import load_iris

iris_dataset = load_iris()

# print out the description of the iris dataset
print(iris_dataset['DESCR'][:193] + "\n...")

# print out the target names of the species to predict
print("Target names: {}".format(iris_dataset['target_names']))

# print out the feature names
print("Feature names: {}".format(iris_dataset['feature_names']))

# print out the data type of iris_dataset.data
print("Type of data: {}".format(type(iris_dataset['data'])))

# print the shape of the dataset (150,4) - "The shape of the data array is the number of samples multiplied by the number of features."
print("Shape of data: {}".format(iris_dataset['data'].shape)) # print the first five rows of the dataset print("First five rows of data:\n{}".format(iris_dataset['data'][:5])) # print the datatype of the target dataset (iris_dataset.target) - the target represents the actual pre-defined species of the flower, the target varaible print("Type of target: {}".format(type(iris_dataset['target']))) # print the shape of the target dataset print("Shape of target: {}".format(iris_dataset['target'].shape)) # print the target data print("Target: \n{}".format(iris_dataset['target'])) # import train_test_split to segment data into training/test datasets from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(iris_dataset['data'], iris_dataset['target'], random_state=0) print("X_train shape: {}".format(X_train.shape)) print("y_train shape: {}".format(y_train.shape)) print("X_test shape: {}".format(X_test.shape)) print("y_test shape: {}".format(y_test.shape)) iris_dataframe = pd.DataFrame(X_train, columns=iris_dataset.feature_names) pd.plotting.scatter_matrix(iris_dataframe, c=y_train, figsize=(15,15), marker='o', hist_kwds={'bins':20}, s=60, alpha=.8) # import the k-nearest neighbors classifier from sci-kit learn from sklearn.neighbors import KNeighborsClassifier # Instantiate the KNeighborsClassifier with a n_neighbors value of 1 knn = KNeighborsClassifier(n_neighbors=1) # build the model from the training set knn.fit(X_train, y_train) # Create a new sample and use the model built above to predict the species x_new = np.array([[5, 2.9, 1, 0.2]]) print("X_new.shape: {}".format(x_new.shape)) # make a predition based on the above sample prediction = knn.predict(x_new) print("Prediction: {}".format(prediction)) print("Predicted target name: {}".format(iris_dataset['target_names'][prediction])) # run the test data set through the model to determine predictions = y_pred = knn.predict(X_test) print("Test set predictions: \n 
{}".format(y_pred)) # test the accuracy of the model by using np.mean function print("Test set score: {:.2f}".format(np.mean(y_pred==y_test))) # test the accuract of the model using the score function print("Test set score: {:2f}".format(knn.score(X_test, y_test))) ```
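The walkthrough above fixes `n_neighbors=1`, but k is the model's main knob. As a small sketch (not part of the book's walkthrough; the choice of k values here is arbitrary), you can score the same train/test split for several k and compare:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris['data'], iris['target'], random_state=0)

# Score the model on the held-out test set for a range of k values.
for k in [1, 3, 5, 7]:
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print("k={}: test accuracy {:.2f}".format(k, knn.score(X_test, y_test)))
```

On a dataset this small and well-separated, all of these k values score similarly high; on noisier data, the loop is where you would see the bias/variance trade-off.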
# Week 1

## Overview

As explained in the [*Before week 1* notebook](https://nbviewer.jupyter.org/github/SocialComplexityLab/socialgraphs2020/blob/master/lectures/How_To_Take_This_Class.ipynb), each week of this class is an IPython notebook like this one. **_In order to follow the class, you simply start reading from the top_**, following the instructions.

**Hint**: And you can ask me - or any of the friendly Teaching Assistants - for help at any point if you get stuck!

## Intro video

Below is today's informal intro video, tying everything together and making each and every one of you feel welcome and loved!

```
# sune's informal intro
from IPython.display import YouTubeVideo
YouTubeVideo("sC1DXWWUSkI", width=800, height=450)
# link: https://www.youtube.com/watch?v=sC1DXWWUSkI
```

## Today

This first lecture will go over a few different topics to get you started.

* As the zeroth item, make sure that you're on top of Python. If you feel you need a refresher, **PLEASE GO TO THE** [Python refresher Notebook](https://nbviewer.jupyter.org/github/SocialComplexityLab/socialgraphs2020/blob/master/lectures/PythonBootcamp.ipynb) and work through it before proceeding.

Ok. And now for the actual lecture:

* First, we talk a bit about APIs and how they work.
* Next, we'll dip our toes in the waters of Network Science, with a bit of lecturing and reading.
* Thirdly, and finally, we'll be getting our hands dirty getting to know the awesome Network Analysis package `NetworkX`.

## Part 1: What is an API?

As a little check that you're on top of Python, let's get started with a quick overview of APIs. And don't worry, the work you do here will be relevant later, I promise.

> **_Video lecture_**: Click below to watch it on YouTube.
> **NOTE**: This video is made for Python 2. There are a few things that won't work in Python 3. To help smooth things over until I update the video, **[here](https://github.com/SocialComplexityLab/socialgraphs2020/blob/master/files/API_check.ipynb)** is a Python 3 version of the notebook used in the video to help you out with the changes.

```
YouTubeVideo("9l5zOfh0CRo", width=800, height=450)
# link: https://www.youtube.com/watch?v=9l5zOfh0CRo
```

It's time for you to get to work. Take a look at the two texts below - just to get a sense of a more technical description of how APIs work. Again, this is a Python 2 video, so small changes may apply. This video will be updated soon. Hint: **[here](https://github.com/SocialComplexityLab/socialgraphs2020/blob/master/files/API_check.ipynb)** is a Python 3 version of the notebook used in the video that you can work from.

> _Reading_ (just skim): [Wikipedia page on APIs](https://en.wikipedia.org/wiki/Web_API)
> _Reading_ (just skim): [Wikipedia page on REST for web services](https://en.wikipedia.org/wiki/Representational_state_transfer#Applied_to_web_services)

> *Exercise*:
> * Explain in your own words: What is the difference between the html page and the wiki-source?
> * What are the various parameters you can set for a query of the wikipedia api?
> * Write your own little `notebook` to download wikipedia pages based on the video above. Download the source for your 4 favorite wikipedia pages.

# Part 2: Basic description of networks

Now let's get to some lecturing. I love networks, so I'll take some time today to tell you about them.

> **_Video Lecture_**. Start by watching the "History of Networks" below.

```
from IPython.display import YouTubeVideo
YouTubeVideo("qjM9yMarl70", width=800, height=450)
# link: https://www.youtube.com/watch?v=qjM9yMarl70
```

> _Reading_. We'll be reading the textbook _Network Science_ (NS) by Laszlo Barabasi. You can read the whole thing for free [**here**](http://barabasi.com/networksciencebook/).
>
> * Read chapter 1.

> _Exercises_: _Chapter 1_ (Don't forget that you should be answering these in an IPython notebook.)
>
> * List three different real networks and state the nodes and links for each of them.
> * Tell us of the network you are personally most interested in (a fourth one). Address the following questions:
>   * What are its nodes and links?
>   * How large is it?
>   * Can it be mapped out?
>   * Does it evolve over time?
>   * Are there processes occurring ON the network? (information spreading, for example)
>   * Why do you care about it?
> * In your view, what would be the area where network science could have the biggest impact in the next decade? Explain your answer - and base it on the text in the book.

# Part 3: The awesome `NetworkX` library

In case it wasn't clear by now, this class is about YOU analyzing networks. And it wouldn't be right to start the first lecture without playing a little bit with network analysis (there will be much more on this in the following lectures). So here goes...

`NetworkX` should already be installed as part of your _Anaconda_ Python distribution. But you don't know how to use it yet. The best way to get familiar is to work through a tutorial. That's what the next exercise is about.

> *Exercises*:
>
> * Go to the `NetworkX` project's [tutorial page](https://networkx.github.io/documentation/stable/tutorial.html). The goal of this exercise is to create your own Notebook that contains the entire tutorial. You're free to add your own (e.g. shorter) comments in place of the ones in the official tutorial - and change the code to make it your own wherever it makes sense.

There will be much more on NetworkX next time.
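To give you a taste of what the tutorial covers, here is a minimal sketch of basic `NetworkX` usage (the graph and the node names are made up for illustration):

```python
import networkx as nx

# Build a small undirected graph, node by node and edge by edge.
G = nx.Graph()
G.add_nodes_from(["Alice", "Bob", "Carol"])
G.add_edge("Alice", "Bob")
G.add_edge("Bob", "Carol")

# Basic questions you can ask about the network.
print(G.number_of_nodes())  # 3
print(G.number_of_edges())  # 2
print(dict(G.degree()))     # degree of every node
```

The tutorial builds from exactly these primitives (graph construction, node/edge attributes) up to the analysis functions we'll lean on later in the course.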
```
# Description: Plot Figure 9 (coherences between area-averaged, daily vorticity terms)
#
# Author: André Palóczy
# E-mail: paloczy@gmail.com
# Date: April/2020

import numpy as np
import matplotlib.pyplot as plt
from xarray import open_dataset
from pandas import Series
from scipy import signal
from scipy.special import erfinv


class Spectrum(object):
    """
    A class that represents a single realization of the
    one-dimensional spectrum of a given field phi.
    """
    def __init__(self, phi, dt, window=None, beta=14, demean=True,
                 detrend=True, prewhiten=False, normalize=True):
        """
        'window' must be one of 'hanning', 'hamming', 'kaiser' or 'bartlett'.
        'beta' is the shape parameter of the Kaiser window (defaults to 14),
        if the Kaiser window is used.
        """
        self.phi = phi  # field to be analyzed
        self.dt = dt    # sampling interval
        self.n = phi.size

        # test if n is even
        if (self.n % 2):
            self.neven = False
        else:
            self.neven = True

        # calculate frequencies
        self.calc_freq()

        # calculate spectrum
        self.calc_spectrum(window=window, beta=beta, demean=demean,
                           detrend=detrend, prewhiten=prewhiten,
                           normalize=normalize)

        # calculate total variance
        self.calc_var()

    def calc_freq(self):
        """
        Calculate the array of the spectral variable (frequency or
        wavenumber) in cycles per unit of L.
        """
        self.df = 1./((self.n - 1)*self.dt)
        if self.neven:
            self.f = self.df*np.arange(self.n/2 + 1)
        else:
            self.f = self.df*np.arange((self.n - 1)/2. + 1)

    def calc_spectrum(self, window=None, beta=14, demean=True, detrend=True,
                      prewhiten=False, normalize=True):
        """Compute the 1d spectrum of a field phi."""
        phi_aux = self.phi.copy()  # Copy series to store only the original as Spectrum.phi.

        if demean:
            phi_aux = phi_aux - phi_aux.mean()
        if detrend:
            phi_aux = signal.detrend(phi_aux, type='linear')
        if prewhiten:
            phi_aux = (phi_aux[1:] - phi_aux[:-1])/self.dt

        # Window the variable before calculating the FFT.
        if window:
            wdws = {'hanning': np.hanning, 'hamming': np.hamming,
                    'kaiser': np.kaiser, 'bartlett': np.bartlett}
            if window == 'kaiser':
                win = wdws[window](self.n, beta)
            else:
                win = wdws[window](self.n)

            # The normalization below satisfies Parseval's Theorem with synthetic data.
            if window == 'kaiser':
                raise NotImplementedError("Normalization not implemented for Kaiser window yet.")
            else:
                winfac = {'hanning': np.sqrt(8/3.), 'hamming': np.sqrt(2.5164),
                          'bartlett': np.sqrt(3.)}  # Thomson & Emery (2014), p. 479, table 5.5.
                win = win*winfac[window]
            phi_aux *= win

        self.phih = np.fft.rfft(phi_aux)

        # the factor of 2 comes from the symmetry of the Fourier coeffs
        if normalize:
            self.spec = 2.*(self.phih*self.phih.conj()).real / self.df / self.n**2
        else:
            self.spec = 2.*(self.phih*self.phih.conj()).real / self.df

        # the zeroth frequency should be counted only once
        self.spec[0] = self.spec[0]/2.
        if self.neven:
            self.spec[-1] = self.spec[-1]/2.

        if prewhiten:
            faux = self.f[1:]
            self.spec = self.spec/(2*np.pi*faux)**2  # Re-redden the spectrum.
            self.f = self.f[:-1]

    def calc_var(self):
        """Compute total variance from the spectrum."""
        self.var = self.df*self.spec[1:].sum()  # do not consider zeroth frequency


class xSpectrum(object):
    """
    A class that represents a single realization of the one-dimensional
    cross-spectrum of two given fields phi1 and phi2.
    """
    def __init__(self, phi1, phi2, dt, window=None, beta=14, demean=True,
                 detrend=True, normalize=True):
        """
        'window' must be one of 'hanning', 'hamming', 'kaiser' or 'bartlett'.
        'beta' is the shape parameter of the Kaiser window (defaults to 14),
        if the Kaiser window is used.
        """
        assert phi1.size == phi2.size, "The two fields have different sizes."
        self.phi1 = phi1  # First field to be analyzed.
        self.phi2 = phi2  # Second field to be analyzed.
        self.dt = dt      # sampling interval
        self.n = phi1.size

        # test if n is even
        if (self.n % 2):
            self.neven = False
        else:
            self.neven = True

        # calculate frequencies
        self.calc_freq()

        # calculate cross-spectrum
        self.calc_xspectrum(window=window, beta=beta, demean=demean,
                            detrend=detrend, normalize=normalize)

        # calculate total variance
        self.calc_var()

    def calc_freq(self):
        """
        Calculate the array of the spectral variable (frequency or
        wavenumber) in cycles per unit of L.
        """
        self.df = 1./((self.n - 1)*self.dt)
        if self.neven:
            self.f = self.df*np.arange(self.n/2 + 1)
        else:
            self.f = self.df*np.arange((self.n - 1)/2. + 1)

    def calc_xspectrum(self, window=None, beta=14, demean=True, detrend=True,
                       normalize=True):
        """Compute the 1d cross-spectrum of two fields phi1 and phi2."""
        phi_aux1 = self.phi1.copy()  # Copy series to store only the originals.
        phi_aux2 = self.phi2.copy()

        if demean:
            phi_aux1 = phi_aux1 - phi_aux1.mean()
            phi_aux2 = phi_aux2 - phi_aux2.mean()
        if detrend:
            phi_aux1 = signal.detrend(phi_aux1, type='linear')
            phi_aux2 = signal.detrend(phi_aux2, type='linear')

        # Window the variables before calculating the FFT.
        if window:
            wdws = {'hanning': np.hanning, 'hamming': np.hamming,
                    'kaiser': np.kaiser, 'bartlett': np.bartlett}
            if window == 'kaiser':
                win = wdws[window](self.n, beta)
            else:
                win = wdws[window](self.n)

            # The normalization below satisfies Parseval's Theorem with synthetic data, the ones above do not.
            if window == 'kaiser':
                raise NotImplementedError("Normalization not implemented for Kaiser window yet.")
            else:
                winfac = {'hanning': np.sqrt(8/3.), 'hamming': np.sqrt(2.5164),
                          'bartlett': np.sqrt(3.)}  # Thomson & Emery (2014), p. 479, table 5.5.
                win = win*winfac[window]
            phi_aux1 *= win
            phi_aux2 *= win

        self.phih1 = np.fft.rfft(phi_aux1)
        self.phih2 = np.fft.rfft(phi_aux2)

        # the factor of 2 comes from the symmetry of the Fourier coeffs
        if normalize:
            self.xspec = 2.*self.phih1*self.phih2.conj() / self.df / self.n**2
        else:
            self.xspec = 2.*self.phih1*self.phih2.conj() / self.df

        # the zeroth frequency should be counted only once
        self.xspec[0] = self.xspec[0]/2.
        if self.neven:
            self.xspec[-1] = self.xspec[-1]/2.

    def calc_var(self):
        """Compute total variance from the cross-spectrum."""
        self.var = self.df*self.xspec[1:].sum()  # do not consider zeroth frequency


def coherence(phi1, phi2, dt, demean=True, detrend=True, N=10, overlap=0.5,
              window='hanning', beta=14, verbose=True):
    """
    Calculates the 1D coherence between two variables 'phi1' and 'phi2'
    with the block-averaging method.

    'N' is the intended number of blocks to split the time series in, in the
    case of no overlap. For nonzero overlap, the actual number of blocks will
    be the maximum possible considering the size of 'phi'. 'overlap' sets the
    amount of overlap (in fractional length of each block). 'window' and
    'beta' are passed as kwargs to the 'xSpectrum' class and applied for each
    block individually.

    REFERENCE
    ---------
    Thomson & Emery (2014), p. 503.
    """
    phi1, phi2 = map(np.asanyarray, (phi1, phi2))
    assert phi1.size == phi2.size, "The two fields have different sizes."
    n = phi1.size
    ni = int(n/N)                     # Number of data points in each chunk.
    dn = int(round(ni - overlap*ni))  # How many indices to move forward with each chunk (depends on the % overlap).

    if demean:
        phi1 = phi1 - phi1.mean()
        phi2 = phi2 - phi2.mean()
    if detrend:
        phi1 = signal.detrend(phi1, type='linear')
        phi2 = signal.detrend(phi2, type='linear')

    kwspec = {'window': window, 'beta': beta, 'demean': True, 'detrend': True,
              'prewhiten': False, 'normalize': False}
    kwxspec = {'window': window, 'beta': beta, 'demean': True, 'detrend': True,
               'normalize': False}

    nblks = 0
    i0, i1 = 0, ni
    while i1 <= n:
        if nblks == 0:
            # normalize=False because the normalizations of the cross-spectrum
            # and the two autospectra cancel in the coherence.
            Sphi1 = Spectrum(phi1[i0:i1], dt, **kwspec)
            Sphi2 = Spectrum(phi2[i0:i1], dt, **kwspec)
            Sphi1phi2 = xSpectrum(phi1[i0:i1], phi2[i0:i1], dt, **kwxspec)
        else:
            sphi1 = Spectrum(phi1[i0:i1], dt, **kwspec)
            sphi2 = Spectrum(phi2[i0:i1], dt, **kwspec)
            sphi1phi2 = xSpectrum(phi1[i0:i1], phi2[i0:i1], dt, **kwxspec)

            # Accumulate the cross-spectrum of phi1 and phi2.
            Sphi1phi2.xspec += sphi1phi2.xspec

            # Accumulate the autospectra of phi1 and phi2.
            Sphi1.spec += sphi1.spec
            Sphi2.spec += sphi2.spec
        i0 += dn
        i1 += dn
        nblks += 1
    else:
        Sphi1.spec = Sphi1.spec/nblks  # Average the individual spectral realizations.
        Sphi1.spec = Sphi1.spec/ni**2  # Normalize the spectrum by N^2 to enforce Parseval's Theorem (to avoid losing accuracy in normalizing individual estimates).
        Sphi1.var = Sphi1.df*Sphi1.spec[1:].sum()  # Update the total variance to reflect the windowed and block-averaged spectrum.

        Sphi2.spec = Sphi2.spec/nblks
        Sphi2.spec = Sphi2.spec/ni**2
        Sphi2.var = Sphi2.df*Sphi2.spec[1:].sum()

        Sphi1phi2.xspec = Sphi1phi2.xspec/nblks
        Sphi1phi2.xspec = Sphi1phi2.xspec/ni**2
        Sphi1phi2.var = Sphi1phi2.df*Sphi1phi2.xspec[1:].sum()

    Ncap = n - i0  # Number of points left out at the end of the series.

    # Calculating coherence from the cross-spectrum of phi1 and phi2 and their
    # autospectra. Calculating the amplitude and phase lag information as the
    # coherence magnitude and a phase spectrum, respectively (e.g., Thomson &
    # Emery, 2014; Gille et al., 2001).
    Coh = Sphi1phi2.xspec/np.sqrt(Sphi1.spec*Sphi2.spec)
    Coh = (Coh*Coh.conj()).real  # SQUARED coherence magnitude, or normalized cross-amplitude spectrum, Thomson & Emery (2014), p. 503.
    Phase = np.arctan2(Sphi1phi2.xspec.imag, Sphi1phi2.xspec.real)  # Phase spectrum [radians], Thomson & Emery (2014), p. 503.

    # Add the coherence and the phase spectrum to the xSpectrum instance
    # that will be returned.
    Sphi1phi2.coherence = Coh
    Sphi1phi2.phase = Phase

    nm = n/(ni/2.)
    EDoF = {None: nm, 'hanning': (8/3.)*nm, 'hamming': 2.5164*nm,
            'kaiser': 'EDoF not implemented for Kaiser window',
            'bartlett': 3.*nm}  # Thomson & Emery (2014), p. 479, Table 5.5.

    if verbose:
        print("")
        print("Left %d data points outside estimate (%.1f %% of the complete series)." % (Ncap, 100*Ncap/n))
        print("Intended number of blocks was %d, but could fit %d blocks with %.1f %% overlap." % (N, nblks, 100*overlap))
        print("")
        print("Spectral resolution (original series/block-averaged): %.5f / %.5f [inverse time units]" % (1./(n*dt), 1./(ni*dt)))
        print("Fundamental frequency (original series/block-averaged): %.5f / %.5f [inverse time units]" % (1/(n*dt), 1/(ni*dt)))
        print("Fundamental period (original series/block-averaged): %.5f / %.5f [time units]" % (n*dt, ni*dt))
        print("")
        print("Nyquist frequency: %.5f [inverse time units]" % (1./(2*dt)))
        print("Nyquist period: %.5f [time units]" % (2*dt))
        print("")
        print("DoF: %d (assuming independent blocks)." % (2*nblks))
        if window:
            print("Equivalent DoF: %d (assuming independent blocks)." % EDoF[window])  # Equivalent DoF for windows.

    return Sphi1phi2, nblks


def coh_err(Cohxy, nblks):
    """
    USAGE
    -----
    Coh_lo, Coh_hi = coh_err(Cohxy, nblks)

    Calculates confidence intervals for the squared cross-amplitude spectrum
    (Bendat & Piersol, 2010, chapter 9). 'Cohxy' is the block-averaged squared
    cross-amplitude spectrum (squared coherence magnitude) and 'nblks' is the
    number of segments used in the block-averaging (DoF/2).

    Returns the low and high 95 % confidence limits of the squared
    cross-amplitude spectrum ('Cohxy_lo' and 'Cohxy_hi').

    TODO: Generalize confidence limits for arbitrary significance levels.
    """
    Cohxy = np.asarray(Cohxy)
    Cohxysqrt = np.sqrt(Cohxy)
    eps_Cohxy = np.sqrt(2)*(1 - Cohxy)/(Cohxysqrt*np.sqrt(nblks))  # Bendat & Piersol, equation (9.82).

    # 95 % confidence intervals of a Gaussian distribution.
    Cohxy_hi = Cohxy + 2*Cohxy*eps_Cohxy
    Cohxy_lo = Cohxy - 2*Cohxy*eps_Cohxy

    return Cohxy_lo, Cohxy_hi


def crosscorr(x, y, nblks, maxlags=0, overlap=0, onesided=False, verbose=True):
    """
    Lag-N cross-correlation averaged with Welch's method.

    Parameters
    ----------
    x, y : arrays of equal length.
    nblks : number of blocks to average the cross-correlation over.
    maxlags : int; the default (0) calculates the largest possible number of
              lags, i.e., the number of points in each chunk.
    overlap : float, fraction of overlap between consecutive chunks. Default 0.
    onesided : whether to calculate the cross-correlation only at positive
               lags (default False). Has no effect if x and y are the same
               array, in which case the one-sided autocorrelation function
               is calculated.

    Returns
    -------
    crosscorr : float array.
    """
    if x is y:
        auto = True
    else:
        auto = False
    x, y = np.array(x), np.array(y)
    nx, ny = x.size, y.size
    assert x.size == y.size, "The series must have the same length."
    nblks, maxlags = int(nblks), int(maxlags)
    ni = int(nx/nblks)                # Number of data points in each chunk.
    dn = int(round(ni - overlap*ni))  # How many indices to move forward with each chunk (depends on the % overlap).

    if maxlags == 0:
        if verbose:
            print("Maximum lag was not specified. Accommodating it to block size (%d)." % ni)
        maxlags = ni
    elif maxlags > ni:
        if verbose:
            print("Maximum lag is too large. Accommodating it to block size (%d)." % ni)
        maxlags = ni

    if onesided:
        lags = range(maxlags + 1)
    else:
        lags = range(-maxlags, maxlags + 1)

    # Array that will receive the cross-correlation of each block.
    xycorr = np.zeros(len(lags))
    n = 0
    il, ir = 0, ni
    while ir <= nx:
        xn = x[il:ir]
        yn = y[il:ir]
        # Calculate the cross-correlation for the current block up to the
        # desired maximum lag.
        xn, yn = map(Series, (xn, yn))
        xycorr += np.array([xn.corr(yn.shift(periods=lagn)) for lagn in lags])
        il += dn
        ir += dn
        n += 1

    # pandas.Series.corr(method='pearson') -> pandas.nanops.nancorr() ...
    # -> pandas.nanops.get_corr_function() -> np.corrcoef -> numpy.cov(bias=False as default).
    # So np.corrcoef() returns the UNbiased correlation coefficient by default
    # (i.e., normalized by N-k instead of N).
    xycorr /= n     # Divide by the number of blocks actually used.
    ncap = nx - il  # Number of points left out at the end of the array.

    if verbose:
        print("")
        if ncap == 0:
            print("No data points were left out.")
        else:
            print("Left last %d data points out (%.1f %% of all points)." % (ncap, 100*ncap/nx))
        print("Averaged %d blocks, each with %d lags." % (n, maxlags))
        if overlap > 0:
            print("Intended %d blocks, but could fit %d blocks, with" % (nblks, n))
            print("overlap of %.1f %%, %d points per block." % (100*overlap, dn))
        print("")

    lags = np.array(lags)
    if auto and not onesided:
        fo = np.where(lags == 0)[0][0]
        xycorr[fo+1:] = xycorr[fo+1:] + xycorr[:fo]
        lags = lags[fo:]
        xycorr = xycorr[fo:]

    fgud = ~np.isnan(xycorr)
    return lags[fgud], xycorr[fgud]


def Tdecorr(Rxx, M=None, dtau=1., verbose=False):
    """
    USAGE
    -----
    Td = Tdecorr(Rxx)

    Computes the integral scale Td (AKA decorrelation scale, independence
    scale) for a data sequence with autocorrelation function Rxx. 'M' is the
    number of lags to incorporate in the summation (defaults to all lags) and
    'dtau' is the lag time step (defaults to 1).

    The formal definition of the integral scale is the total area under the
    autocorrelation curve Rxx(tau):

                /+inf
        Td = 2 *|     Rxx(tau) dtau
                /0

    In practice, however, Td may become unrealistic if all of Rxx is summed
    (e.g., it often goes to zero for data dominated by periodic signals); a
    different approach is to instead change M in the summation and use the
    maximum value of the integral Td(t):

                   /t
        Td(t) = 2 *| Rxx(tau) dtau
                   /0

    References
    ----------
    e.g., Thomson and Emery (2014), Data analysis methods in physical
    oceanography, p. 274, equation 3.137a. Gille lecture notes on data
    analysis, available at http://www-pord.ucsd.edu/~sgille/mae127/lecture10.pdf
    """
    Rxx = np.asanyarray(Rxx)
    C0 = Rxx[0]
    N = Rxx.size  # Sequence size.

    # Number of lags 'M' to incorporate in the summation.
    # Sum over all of the sequence if M is not chosen.
    if not M:
        M = N

    # Integrate the autocorrelation function.
    Td = np.zeros(M)
    for m in range(M):
        Tdaux = 0.
        for k in range(m - 1):
            Rm = (Rxx[k] + Rxx[k+1])/2.  # Midpoint value of the autocorrelation function.
            Tdaux = Tdaux + Rm*dtau      # Riemann-summing Rxx.
        Td[m] = Tdaux

    # Normalize the integral function by the autocorrelation at zero lag
    # and double it to include the contribution of the side with
    # negative lags (C is symmetric about zero).
    Td = (2./C0)*Td

    if verbose:
        print("")
        print("Theoretical integral scale --> 2 * int 0...+inf [Rxx(tau)] dtau: %.2f." % Td[-1])
        print("")
        print("Maximum value of the cumulative sum: %.2f." % Td.max())

    return Td


def Tdecorrw(x, nblks=30, ret_median=True, verbose=True):
    """
    USAGE
    -----
    Ti = Tdecorrw(x, nblks=30, ret_median=True, verbose=True)

    'Ti' is the integral timescale calculated from the autocorrelation
    function of the variable 'x', block-averaged in 'nblks' chunks.
    """
    x = np.array(x)
    dnblkslr = round(nblks/2)
    tis = [Tdecorr(crosscorr(x, x, nblks=n, verbose=verbose)[1]).max()
           for n in range(nblks - dnblkslr, nblks + dnblkslr + 1)]
    tis = np.ma.masked_invalid(tis)

    if verbose:
        print("========================")
        print(tis)
        print("========================")
        p1, p2, p3, p4, p5 = map(np.percentile, [tis]*5, (10, 25, 50, 75, 90))
        print("--> 10 %%, 25 %%, 50 %%, 75 %%, 90 %% percentiles for Ti: %.2f, %.2f, %.2f, %.2f, %.2f." % (p1, p2, p3, p4, p5))
        print("------------------------")

    if ret_median:
        return np.median(tis)
    else:
        return tis


def rsig(ndof_eff, alpha=0.95):
    """
    USAGE
    -----
    Rsig = rsig(ndof_eff, alpha=0.95)

    Computes the minimum (absolute) threshold value 'Rsig' that the Pearson
    correlation coefficient r between two normally-distributed data sequences
    with 'ndof_eff' effective degrees of freedom has to exceed to be
    statistically significant at the 'alpha' (defaults to 0.95) confidence
    level.

    For example, if rsig(ndof_eff, alpha=0.95) = 0.2 for a given pair of
    NORMALLY-DISTRIBUTED samples with a correlation coefficient r > 0.7,
    there is a 95 % chance that the r estimated from the samples is
    significantly different from zero. In other words, there is a 5 % chance
    that two random sequences would have a correlation coefficient higher
    than 0.7.

    OBS: This assumes that the two data series have a normal distribution.

    Translated to Python from the original matlab code by Prof. Sarah Gille
    (significance.m), available at http://www-pord.ucsd.edu/~sgille/sio221c/

    References
    ----------
    Gille lecture notes on data analysis, available at
    http://www-pord.ucsd.edu/~sgille/mae127/lecture10.pdf
    """
    rcrit_z = erfinv(alpha)*np.sqrt(2./ndof_eff)
    return rcrit_z


plt.close('all')

head_data = "../../data_reproduce_figs/"
terms = ['Ibetav', 'Icurlvdiff', 'Icurlhdiff', 'Istretchp', 'Ires', 'Icurlnonl']
segments = ['Amundsen-Bellingshausen', 'WAP', 'Weddell', 'W-EA', 'E-EA', 'Ross']

# Circumpolar circulation terms.
fname = head_data + 'circulation_terms_circumpolar.nc'
ds = open_dataset(fname)
t = ds['t']

# Cross-correlations between terms and autocorrelations.
nblks_coh = 15
fnames = ['circulation_terms-Amundsen-Bellingshausen.nc',
          'circulation_terms-WAP.nc',
          'circulation_terms-Weddell.nc',
          'circulation_terms-W-EA.nc',
          'circulation_terms-E-EA.nc',
          'circulation_terms-Ross.nc',
          'circulation_terms_circumpolar.nc']

fig, axs = plt.subplots(nrows=3, ncols=2)
axs = axs.flatten()
n = 0
for fname in fnames[:-1]:
    print(fname, "*******************************")
    print("****************************************")
    ds = open_dataset(head_data + fname)
    segment = fname.split('terms')[-1].split('.')[0][1:]
    ax = axs[n]
    F = ds['Icurlvdiff'].values
    TSB = -ds['Ibetav'].values - ds['Istretchp'].values                       # +beta*V +f*w_I, on LHS.
    tTSB = -ds['Ibetav'].values - ds['Istretchp'].values + ds['Ires'].values  # +beta*V +f*w_I + dzeta/dt, on LHS.

    DoF = nblks_coh  # Conservative.
    alpha = 0.05
    coh_alpha_theoretical = 1 - alpha**(1/(DoF - 1))  # Thomson & Emery, p. 510, eq. (5.173).

    cohTSB, _ = coherence(F, TSB, 1, N=nblks_coh)
    cohtTSB, actual_nblks = coherence(F, tTSB, 1, N=nblks_coh)
    cohTSB_lo, cohTSB_hi = coh_err(cohTSB.coherence, nblks_coh)
    cohtTSB_lo, cohtTSB_hi = coh_err(cohtTSB.coherence, nblks_coh)

    ax.semilogx(cohTSB.f, cohTSB.coherence, 'b', label='TSB')
    ax.semilogx(cohtTSB.f, cohtTSB.coherence, 'r', label='tTSB')
    ax.fill_between(cohTSB.f, cohTSB_lo, cohTSB_hi, color='b', alpha=0.075)
    ax.fill_between(cohtTSB.f, cohtTSB_lo, cohtTSB_hi, color='r', alpha=0.075)
    ax.axhline(coh_alpha_theoretical, ls='--', lw=0.5, color='k')

    if n == 0:
        xt = 0.075
        ax.text(xt, 0.5, "TSB", color='b', fontsize=15, transform=ax.transAxes)
        ax.text(xt, 0.33, "tTSB", color='r', fontsize=15, transform=ax.transAxes)

    ax.set_ylim(0, 1)
    ax.set_xlim(cohTSB.f[0], cohTSB.f[-1])
    if segment == 'Amundsen-Bellingshausen':
        ax.text(0.02, 0.025, segment, transform=ax.transAxes)
    else:
        ax.text(0.02, 0.85, segment, transform=ax.transAxes)

    if n == 2:
        ax.set_ylabel('Coherence$^2$ [unitless]')
    elif n == 4:
        ax.set_xlabel('Frequency [cpd]', x=1)
    if n in [0, 1, 3, 4, 5]:
        ax.set_yticklabels([])
    if n == 5:
        fig.subplots_adjust(hspace=0, wspace=0)
        fig.savefig("fig09.png", dpi=300, bbox_inches='tight')
    n += 1

plt.show()
```
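As an independent sanity check on a hand-rolled block-averaged coherence like the one above, `scipy.signal.coherence` provides a Welch-averaged magnitude-squared coherence estimate. The sketch below (synthetic data, with arbitrary noise level and segment length) builds a red-noise series and a noisy copy of it, so the coherence should approach 1 at low frequencies where the shared signal dominates:

```python
import numpy as np
from scipy import signal

# Synthetic daily series: y is a noisy copy of x.
rng = np.random.default_rng(0)
n, dt = 3000, 1.0
x = np.cumsum(rng.standard_normal(n))  # red-noise "signal" (random walk)
y = x + 5.0*rng.standard_normal(n)     # same signal plus white noise

# Welch-averaged magnitude-squared coherence.
f, Cxy = signal.coherence(x, y, fs=1.0/dt, nperseg=256)

# Squared coherence is bounded between 0 and 1 by construction.
print("low-frequency coherence:", Cxy[1:5].mean())
```

Feeding the same synthetic pair through the `coherence` function above and comparing the two curves is a cheap way to catch normalization mistakes.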
# Numerical Stability (draft)

[Numerical stability](https://ru.wikipedia.org/wiki/Вычислительная_устойчивость) is the property of an algorithm not to amplify errors in the data: small variations of the input data should lead to only small changes in the result. There are several precise mathematical formulations of stability, some of which will be considered below.

As an example, consider the problem of solving a cubic equation ([Nick Higham. Accuracy and Stability of Numerical Algorithms](https://www.maths.manchester.ac.uk/~higham/asna/index.php), Section 26.3.3, Roots of a Cubic).

Consider the [cubic equation](https://ru.wikipedia.org/wiki/Кубическое_уравнение)
$$x^3+ax^2+bx+c=0$$
and reduce it to the depressed form
$$y^3+py+q=0,\quad p=b-\frac{a^2}3,\quad q=\frac{2a^3}{27}-\frac{ab}{3}+c,$$
by the substitution $x=y-a/3$. Vieta's substitution $y=w-p/(3w)$ turns the equation into a bicubic one:
$$ w^3-\frac{p^3}{27w^3}+q=0\quad\Leftrightarrow\quad (w^3)^2+qw^3-\frac{p^3}{27}=0. $$
Solving this equation, which is quadratic in $w^3$, we obtain
$$w^3=-\frac{q}{2}\pm\sqrt{\frac{q^2}{4}+\frac{p^3}{27}}.$$
Taking the cube root gives three distinct roots (either one of the signs in the formula above suffices). Undoing the substitutions $w\mapsto y\mapsto x$ yields the desired solutions of the equation.

We have thus obtained explicit analytic formulas for the roots of a cubic equation, completely solving the problem from a mathematical point of view. Let us check whether these formulas are just as good for numerical computation by estimating the stability of the method.

```
import numpy as np

# Define a function that solves a third-order equation.
def solve_cubic(coefs):
    """
    The argument coefs=[a,b,c] defines the equation x**3+a*x**2+b*x+c=0.
    The function returns the three roots [x0,x1,x2] of this equation.
    """
    a, b, c = coefs
    p = b - a**2/3
    q = 2*a**3/27 - a*b/3 + c
    disc = q**2/4 + p**3/27  # discriminant of the bicubic equation
    w_cubed = -q/2 + np.sqrt(disc+0j)
    # The three cube roots of w_cubed.
    rho, phi = np.abs(w_cubed), np.angle(w_cubed)
    ws = np.cbrt(rho)*np.exp(1j*(phi+np.array([0,2,-2])*np.pi)/3)
    # Helper function mapping w to x.
    def w2x(w):
        y = w - p/(3*w)
        x = y - a/3
        return x
    return w2x(ws)

# As an example, solve x^3-2x^2-x+2=0.
with np.printoptions(precision=2, suppress=True):
    print("Roots of x^3-2x^2-x+2=0:", solve_cubic([-2,-1,2]))

# Let us find out how accurate the roots produced by our function are.
def relative_error(x, x0):
    """
    Computes the relative error |x-x0|/|x0| in the l-infinity norm.
    Since the order of the roots is not fixed, the minimum error
    over all permutations is taken.
    """
    x0 = np.asarray(x0)
    x = np.asarray(x)
    permutations = [[0,1,2], [0,2,1], [1,2,0], [2,1,0], [2,0,1], [1,0,2]]
    abserr = [np.linalg.norm(x[p]-x0, ord=np.inf) for p in permutations]
    return np.min(abserr) / np.linalg.norm(x0)

# A simple test on the previous polynomial.
assert relative_error(solve_cubic([-2,-1,2]), [2,-1,1]) < 1e-15

# For the experiments it is convenient to specify the roots explicitly,
# so we need a function computing the polynomial coefficients from the roots.
def roots2coefs(roots):
    x0, x1, x2 = roots
    return [-(x0+x1+x2), x0*x1+x1*x2+x2*x0, -x0*x1*x2]

# Check on an example again.
import numpy.testing as npt
npt.assert_allclose(roots2coefs([2,-1,1]), [-2,-1,2])

# Next it is convenient to combine all parts into a single function.
def test_cubic_solve(roots):
    """Returns the error of solve_cubic on the polynomial
    defined by the list of its roots."""
    err = relative_error(solve_cubic(roots2coefs(roots)), roots)
    # print(err)
    return err

# A small check: compute the error on a random polynomial.
print("Error on a random polynomial:", test_cubic_solve(np.random.randn(3)))

# The error on a random polynomial is typically small.
# But is the error uniformly bounded?

# Take the example from Higham.
print("Error for Higham's example :",
      test_cubic_solve([-1.6026, -6.4678e-2 + 8.8798e-1j, -6.4678e-2 - 8.8798e-1j]))

# The error of 2e-4 (Higham reports 1e-2 for slightly different coefficients)
# is many orders of magnitude larger than the error in the input data.
# Something is clearly wrong with our algorithm.

# To estimate the worst-case error we do not need to hunt for examples
# in the literature: we can numerically maximize the error functional.
from scipy.optimize import minimize

def find_worst_case():
    """Finds a polynomial on which the error of solve_cubic is maximal."""
    def real_to_complex(r):
        # Converts a triple of real numbers into the roots.
        return [r[0], r[1]+r[2]*1j, r[1]-r[2]*1j]
    roots = np.random.randn(3)  # Initial set of polynomial roots.
    res = minimize(lambda r: -test_cubic_solve(real_to_complex(r)), roots,
                   method='Nelder-Mead', options={"disp": True, "fatol": 1e-16})
    return real_to_complex(res.x)

# Find the worst-case polynomial, the one with the largest error.
roots = find_worst_case()
print(f"Error for roots {roots}: {test_cubic_solve(roots)}")

# Since the function only finds a local maximum, several runs may be needed
# to obtain a large error.
# At some point we will get a division-by-zero warning in solve_cubic,
# because w=0; at that point the error becomes infinitely large.
# By rounding the roots we can obtain a point where the error is large
# but not infinite, for example:
roots = [-1.66827, (-0.715961-0.54981j), (-0.715961+0.54981j)]
print("solve_cubic error:", test_cubic_solve(roots))

# The method is clearly unstable, since it amplifies the error
# in the input data without bound.
# However, it could be that the root-finding problem itself is
# ill-conditioned, in which case no method would give an accurate answer.
# From theory it is known that finding the roots of a polynomial is
# well conditioned if the roots are well separated from each other,
# which is the case here.

# We can confirm this indirectly by checking that NumPy finds these roots
# with much higher accuracy.
roots_numpy = np.roots([1]+roots2coefs(roots))
print("NumPy.roots error:", relative_error(roots_numpy, roots))

# Since the problem is well conditioned, our particular way of computing
# the roots must be unstable.
# Thus explicit analytic formulas are not always the best way to compute.
```

## Exercise

1. How does NumPy compute the roots of a polynomial? Why is that method better?
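As a hint for the exercise: the NumPy documentation states that `np.roots` computes the eigenvalues of the companion matrix of the polynomial. A minimal sketch of that route for a monic cubic (the helper name `companion_roots` is ours, not NumPy's):

```python
import numpy as np

def companion_roots(coefs):
    """Roots of x**3 + a*x**2 + b*x + c via companion-matrix eigenvalues,
    the approach documented for np.roots."""
    a, b, c = coefs
    # The characteristic polynomial of this matrix is x**3 + a*x**2 + b*x + c,
    # so its eigenvalues are exactly the roots we are looking for.
    C = np.array([[-a, -b, -c],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    return np.linalg.eigvals(C)

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
print(np.sort(companion_roots([-6.0, 11.0, -6.0]).real))  # close to [1. 2. 3.]
```

Eigenvalue computation via the QR algorithm is backward stable, which is one reason this route behaves much better than the explicit radical formulas above.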
# Welcome to Chang Lab Bioinformatics (R version)! We're going to start with an introduction to R, just to get a handle on basics and how to deal with data. In this experiment, we're imagining that you're trying to create a clonal line with a mutation at a given locus. You've picked lots of clones, extracted DNA and amplified the target region by PCR, and are now trying to analyze your results to quickly tell which wells are WT or mutant (or heterozygous). The first section will be an introduction, and then we'll go through dealing with one well, and then go through dealing with all 96 wells. ### Part 0: Your name Replace this with your name. ## Part 1: Introduction Make sure that you are familiar with the difference between a string, integer, and double; as well as data frames and vectors; boolean operators (equals to, greater than, less than, etc.); and the basics of defining a function. We're going to use the **table** function frequently. To see how it works, type **?table**. We're going to practice with an example 'dataset' before moving on to actual (simulated) FASTQ reads. Table is a built in function, so we don't need to load any R packages right now, but typically this would be the first thing to do in a new session. ``` # mock dataset, don't change test_data = c('cat', 'dog', 'cat', 'mouse', 'mouse', 'cat', 'cat', 'dog', 'rat', 'dog', 'rabbit', 'mouse', 'cat', 'cat', 'dog', 'elephant') ``` How many elements are in test_data? (Hint: use the **length()** function) ``` # We can use the table function to print the frequencies of each item in the test_data vector table(test_data) # Now we can use the sort function to find the most common elements of test_data # (by default sort orders in increasing order, so we use the decreasing = TRUE argument) sort(table(test_data), decreasing = TRUE) ``` What if we only wanted to see the top 3 most common elements? 
(Hint: look at the **head** function using **?head**)

If you want to store the result of the sorted table, you'll need to do that explicitly by assigning it to the same or a new variable.

```
test_table <- table(test_data)
test_table_sort <- sort(test_table, decreasing = TRUE)
test_table
test_table_sort
```

#### Let's say we want to go through and calculate the percent of counts assigned to the third result. How would we do this?

First, let's make a data.frame of the table results:

```
test_df <- data.frame(test_table_sort)
test_df

# Now we can divide each value in the Freq column by the total sum using the sum function to get the percent.
# We use the $ operator to name a new column of the dataframe:
test_df$percent <- test_df$Freq / sum(test_df$Freq)
test_df
```

Second, we need to get the data for the third-most common item. This can be done by indexing the dataframe. Unlike Python, R is 1-indexed, so an index of 1 corresponds to the first row of the dataframe, etc. Dataframes are indexed with the row index first, followed by the column - so **dat[1,]** corresponds to the first row of a dataframe and **dat[,1]** the first column.

```
test_df[3,]

# If we just want the name of the 3rd most common element, we can use row and column indexing.
# Rows and columns can be indexed either by number or by name:
test_df[3, 1]
test_df[3, "test_data"]
```

Notice that the output here is a **factor**, which has different levels. Factors are different from strings because they have an order. This may be annoying when you are starting out, but factors are very useful! If you don't want something to be a factor, you can convert it back to a regular string using the **as.character** function.

```
test_df$test_data <- as.character(test_df$test_data)
test_df[3, 1]
test_df[3, "test_data"]
```

One very useful thing to be able to do is to subset dataframes using logical vectors. Let's use a logical vector to return all animals with over 10% frequency in our dataset.
```
# We can make a logical vector based on the value of the percent column:
test_df$percent > 0.10

# Now we can use this vector to subset our dataframe (should this go before or after the comma? Why?):
test_df[test_df$percent > 0.10,]

# If we just want the names of animals, we can just subset the test_data column.
# Can you think of another way to do this using row column indices?
test_df$test_data[test_df$percent > 0.10]

# If we want to return only the animals within a range, we can use an and statement.
# Let's find the names of the animals that are between 10-30% frequency:
upper <- 0.30
lower <- 0.10
test_df$test_data[test_df$percent > lower & test_df$percent < upper]
```

#### More things to try!

Try playing around with the limits (0.30, 0.10) and try using an **or** ( **|** operator) statement to return animals with either more than 30% or less than 10% frequency. Also try changing what value is returned. For example, write a statement such that if the value is between 30% and 50%, it prints the value of the percent column.

### Part 2: Writing a function to determine whether a list of 'reads' is homozygous WT, heterozygous, or mutant (on both alleles).

Here, we're going to apply the concepts above to three test datasets. Each of these datasets is going to be a table of counts. However, to make things simpler, instead of reads, we're going to use animals; and we're just going to say that 'cats' are wild-type and anything else is 'mutant'.

```
c1 <- table(c('cat','cat','cat','dog','cat','cat','cat','rat','cat','cat'))
c2 <- table(c('cat','cat','cat','dog','dog','dog','cat','dog','rat','cat'))
c3 <- table(c('dog','rat','dog','rat','dog','rat','dog','rat','dog','rat'))
c1
c2
c3
```

Just by looking at this, we can assign each of these as a particular status: c1 is WT, c2 is a het, and c3 is homozygous mutant. But let's write a function to do this for us!

#### First, we need to think of the criteria that we mentally apply when deciding which status each of c1, c2, and c3 has.
Let's just set forth the following rules for each condition:

* +/+ (WT): at least 80% of the 'reads' are the WT read
* +/- : at least 40% of the 'reads' are WT, and at least 40% of the reads are for another non-WT allele
* -/- : the WT reads are fewer than 20% of the total number of reads. Note that there are actually two possible cases here: it could be homozygous (two of the same mutant alleles) or heterozygous (two different mutant alleles).

In the first situation (+/+), we said that the WT allele needed to represent at least 80% of the reads. So it seems reasonable to say that if at least 80% of the reads are for a single allele, then we will call it homozygous mutant, and if there's two alleles with at least 40% of the reads for each allele, we'll call it heterozygous mutant.

Note that there's also a fourth situation, which is deciding that we have bad data. For example, there's just a lot of random stuff and it doesn't look like good/real data.

```
c4 <- table(c('cat','dog','rat','cat','dog','rat','cat','dog','rat','alligator'))
c4
```

#### What is our function going to do?

Our function will have two inputs: the table of counts and the wild-type reference. It will return as output one of five vectors: ("WT","WT"), ("WT", "allele2"), ("allele1","WT"), ("allele1", "allele2"), where alleles 1 and 2 are the non-WT alleles. It will also return ("bad","bad") in the situation talked about above, where the data looks bad.

#### What are the steps we are going to take?

1. Get the percent frequency for each element in the vector.
2. Look at the first most common element
    2.1 Determine if this element is WT or mutant
    2.2 If it has at least 80% of the reads, then we are dealing with a homozygous situation and <b>return</b> early (since there's no need to look at the second allele). On the other hand, if it has at least 40% of the reads, then we are dealing with a heterozygous situation.
    2.3 If it doesn't, then <b>return</b> 'bad' early (there's no need to look at the second allele if the most common one is under 40%, because the second most common one will also be under 40%)
3. Look at the second most common element
    3.1 Check if it has at least 40% of the reads: if not (meaning that the first most common read was at least 40%, but the second most common read was less than 40%) <b>return</b> 'bad'
    3.2 Determine if the element is WT or mutant
4. <b>Return</b> the status

Note that a function can only return once: once your function hits a return statement, it will not run anything else below.

#### I've laid out certain components of the function, but you're going to have to use the skills you learned above to fill in the blanks!

```
# Let's first explore the data without creating a function:
# You can use this cell to test lines of code before putting them into the genotype function.
# The first part is done for you:
c <- c1
wt_reference <- "cat"

dat <- data.frame(sort(c, decreasing = TRUE))
dat

# Now, we are going to create our new function, genotype()
genotype <- function(c, wt_reference) {
    # 1. Sort the results and store in a dataframe
    dat <- data.frame(sort(c, decreasing = TRUE))
    # 1.1. Convert to character (factors will cause problems here - you can remove this line to see what happens!)
    # Also note that the name of the first column can change based on how table was called.
    # To avoid problems we use the column index rather than the column name as we do below.
    dat[,1] <- as.character(dat[,1])
    # 1.2. Calculate the percent for each sequence
    dat$percent <- dat$Freq / sum(dat$Freq)
    # 2.1. Determine if the most common element is the wild-type allele and if it has at least 80% of the reads
    if (dat[1,1] == wt_reference & dat$percent[1] >= 0.8) {
        # 2.2. if so, return the WT vector early
        return(c("WT", "WT"))
    # 2.3. Check if the most common element has less than 40% of the reads
    } else if (dat$percent[1] < 0.4) {
        # if so, return the bad vector early
        return(c("bad", "bad"))
    }

    # 3.1. Check if the second most common element has less than 40% of the reads - you have to do this on your own!
    # if so, return the bad vector early

    # 3.2. Determine which elements are wild-type and rename them as "WT" - you have to do this on your own!

    # 4. Return the two alleles.
    return(c(dat[1,1], dat[2,1]))
}

genotype(c1, "cat")
genotype(c2, "cat")
genotype(c3, "cat")
genotype(c4, "cat")
```

#### Does everything look good? Congratulations for finishing this!! You've now learned the basics of writing a function, performing boolean operations, using if statements, and tables!

### Part 3: Applying this to our FASTQ data.

We're going to do this in two parts. First, we're going to learn to deal with a single FASTQ file. Then, we're going to deal with an entire folder of FASTQ files. We're also going to learn how to import a text file: at heart, a FASTQ file is just a text file, where each line holds a different piece of information.

Each FASTQ read comprises four lines (see https://support.illumina.com/bulletins/2016/04/fastq-files-explained.html for more information):

1. Read ID: information on machine, cluster location, etc. For our purposes, not important.
2. The actual read. Important!
3. Separator (a + sign). Not important.
4. Base quality scores. Often important, but we're going to ignore it for now and just assume that all of the reads are good enough.

So we can think of a FASTQ file as having a periodicity of 4, where the 2nd, 6th, 10th, etc. lines are the reads. Which means that when we are reading in a FASTQ file, we only want to pay attention to the 2nd, 6th, 10th, etc. reads.

#### The first thing we need to do is create a new variable, called <i>path</i>, that is the path to the folder (directory) that has our files.
You can find this in two ways: 1) in terminal, navigate to the directory with the FASTQ files (crispr_96), and type <i>pwd</i>. 2) in Finder, right click on a FASTQ file in that folder, click "get info" and in the "general" tab, look at "where" and that will be the path: it should be something like /Users/kevin/etc. ``` # creating our variable path which has the location to our files # note that this is a string, and so should be enclosed in quotes # also, make sure that it ends with a / ! This will be important in a second path <- 'crispr_96_data/' # get a list of files files <- list.files(path) # just to make our lives easier, let's sort this files <- sort(files) files ``` Check how long the list files is. It should be 96. Let's just start with the first file. ``` # Remember that R is 1-indexed! fn <- files[1] fn ``` Important note! <b>fn</b> is now a string that is the name of a single file in the directory. The full path to the <i>file</i> is: ``` # Unlike python, R does not let you add strings to concatenate. Instead we use the paste function. # paste0 is a shortcut for paste(sep = "") path_to_file <- paste0(path, fn) path_to_file ``` Above, we've added two strings together. This is why making sure that our variable <b>path</b> ended with a / was important - if it didn't, then we would be looking for a file called "crispr_96crispr_well_0.fastq.gz"; rather than the file "crispr_well_0.fastq.gz" in the "crispr_96" directory. #### Now we're going to learn to open the file. Importantly, this file is <i>gzipped</i>. For an uncompressed file, we would say: readLines(file) Since our files are gzipped, we're going to use the gzfile function to open this file, and say: readLines(gzfile(file)) Since these files are small, we will just read all lines into memory. If the files were large we would want to read in line by line or in chunks to avoid overwhelming the memory. 
#### Now let's put it all together and print the first twelve lines of the file, corresponding to the first three reads.

```
head(readLines(gzfile(path_to_file)), n = 12)
```

Now, let's modify this a little bit to just print the reads. We're going to use the seq function, which creates a vector of numbers with a given interval between them. Use **?seq** for more information. Basically, FASTQ reads have a period of 4. This means that we want a vector that starts at 2 and ends at the last read, counting up by 4.

```
# Let's save the fastq file text as a variable (What data type is the output saved as?):
fLines <- readLines(gzfile(path_to_file))

# Here's my indexing vector created by the seq function:
head(seq(2, length(fLines), 4))

# Now we can use this vector to index the fLines variable to get just the read lines:
head(fLines[seq(2, length(fLines), 4)], n = 3)
```

#### Now we've got a way to deal with the FASTQ files, which are gzipped, import each line of the file, and then print just the reads!

#### Now, let's combine everything where we read in a single file, and return a table of the number of times we see each unique read.

To start, let's just read in 10 reads to get a sense of what things look like, before we eventually read in the entire file. Pay attention to what we have changed from above to make this work.

```
# Let's overwrite our fLines variable with just the read lines - this will save memory:
fLines <- readLines(gzfile(path_to_file))
fLines <- fLines[seq(2, length(fLines), 4)]

table(fLines[1:10])
```

#### Now write a function that will do all of this for us. It will take as input the path to a file. It will return as an output a table of the read frequencies for that file.

```
process_file <- function(path_to_file){
    # add your code here
}
```

#### And let's put it together with the genotype function that we wrote above!

1. using process_file(), get a table for a file.
2. using genotype(), get the results for that file.
Note that in this case, crispr_well_0.fastq.gz is WT, meaning that the most common read in this file (which you just found) is the wt_reference. ``` # replace empty string with correct wt_reference sequence wt_reference <- '' file_table <- process_file(path_to_file) file_results <- genotype(file_table, wt_reference) file_results ``` #### When you run this with crispr_well_0, you should get the result ['WT', 'WT']. ### Part 4: Putting it all together and processing an entire folder of files. Now, we're going to process the data for all of the files in our folder. All we need to do is loop through all of the files, and then save the results. ``` # again, you'll need to change this for yourself path <- 'crispr_96_data/' # get a list of files files <- list.files(path) # just to make our lives easier, let's sort this files <- sort(files) files wt_reference = '' for (fn in files) { message(fn) path_to_file <- paste0(path, fn) file_table <- process_file(path_to_file) file_results <- genotype(file_table, wt_reference) print(file_results) } ``` #### You should have now printed the results for each file! Now, let's save the results in a new text file. I'm going to provide a template where it just writes the same result for everything, but you'll need to modify it to process the files and write the actual results. #### For the last part, outputting the results, it would be nice to know not just whether it is WT or mutant, but also some other information: * How many reads total did each well get? (as an integer - no decimal point) * What % of reads were for the first allele? (rounded to two decimal places) * What % of reads were for the second allele? 
(also rounded to two decimal places) * <i> In the case of a homozygous well (WT or mutant), only report a single allele and single percentage </i> * <i> In the case of a bad well, still report the number of reads and the percent for each of the top two alleles </i> You'll need to create a new function, genotype2(), to output not just the genotyping results (e.g., c('WT','sequenceofmutantallele')) but also the above information. As an example, this could be c(10000, 'WT', 45.55, 'sequenceofmutantallele', 43.28). At the end we will merge the data from each file into a new matrix and write the full table to a file. Don't forget to add the name of the file as a column so we know which file the result came from! Second, we want to round the percentages to two decimal places. R has a built in round() function, which you'll need to look up how to use (**?round** or look at https://www.rdocumentation.org/packages/base/versions/3.6.2/topics/Round) - it's important to know how to look things up that you don't know how to use, and learn how to read the documentation for something. I'd recommend first just trying to get the existing genotype() function working here - just output the allele results and make sure you can do that. Then, make genotype2() (and just copy in the code for genotype()) and modify it to add in each piece of information, one by one. In other words, try to do things step-by-step, adding things in one-by-one, rather than doing everything at once - this will make it easier to troubleshoot because you're changing fewer things at a time. <b>Here is what we are doing with the additional lines:</b> Initiating an empty matrix which we can add our output to. Then we use the rbind function to add the new row generated by the genotype2 function (why do we need to use the transpose **t** function here?). Finally we write the results to our output file. Feel free to play around with different things. 
What if you want to make the end file comma delimited (',') as opposed to tab delimited ('\t')?

#### Also, since we're outputting in a tab delimited text format (the two main formats are either tab separated (usually .txt or .tsv) or comma separated (.csv)), you should be able to open your resulting file in Excel and look at it there (or in any other text editor).

```
genotype2 <- function(c, wt_reference) {
    # add your code here
}

# change the start of this to match your own computer
output_file <- 'crispr_96_results.txt'

output_mat <- matrix(nrow = 0, ncol = 6)
for (fn in files) {
    wt_reference = ''
    # add code processing files here

    # replace this with the actual results
    file_result <- c(fn, c(10000, 'WT', 50.00, 'WT', 41.28))
    output_mat <- rbind(output_mat, t(matrix(file_result)))
}

write.table(output_mat, file = output_file, quote = FALSE, sep = '\t',
            row.names = FALSE, col.names = FALSE)
```

### Congratulations for making it to the end!!!

Comments: Feedback, suggestions, complaints...
# Guide for Authors

```
print('Welcome to "The Debugging Book"!')
```

This notebook compiles the most important conventions for all chapters (notebooks) of "The Debugging Book".

## Organization of this Book

### Chapters as Notebooks

Each chapter comes in its own _Jupyter notebook_. A single notebook (= a chapter) should cover the material (text and code, possibly slides) for a 90-minute lecture.

A chapter notebook should be named `Topic.ipynb`, where `Topic` is the topic. `Topic` must be usable as a Python module and should characterize the main contribution. If the main contribution of your chapter is a class `FooDebugger`, for instance, then your topic (and notebook name) should be `FooDebugger`, such that users can state

```python
from FooDebugger import FooDebugger
```

Since class and module names should start with uppercase letters, all non-notebook files and folders start with lowercase letters. This may make it easier to differentiate them.

The special notebook `index.ipynb` gets converted into the home pages `index.html` (on fuzzingbook.org) and `README.md` (on GitHub).

Notebooks are stored in the `notebooks` folder.

### DebuggingBook and FuzzingBook

This project shares some infrastructure (and even chapters) with "The Fuzzing Book". Everything in `shared/` is maintained in "The Debugging Book" and only copied over to "The Fuzzing Book". If you want to edit or change any of the files in `shared/`, do so in "The Debugging Book".

### Output Formats

The notebooks by themselves can be used by instructors and students to toy around with. They can edit code (and text) as they like and even run them as a slide show.

The notebook can be _exported_ to multiple (non-interactive) formats:

* HTML – for placing this material online.
* PDF – for printing
* Python – for coding
* Slides – for presenting

The included Makefile can generate all of these automatically (and a few more).
At this point, we mostly focus on HTML and Python, as we want to get these out quickly; but you should also occasionally ensure that your notebooks can (still) be exported into PDF. Other formats (Word, Markdown) are experimental.

## Sites

All sources for the book end up on the [Github project page](https://github.com/uds-se/debuggingbook). This holds the sources (notebooks), utilities (Makefiles), as well as an issue tracker.

The derived material for the book ends up in the `docs/` folder, from where it is eventually pushed to the [debuggingbook website](http://www.debuggingbook.org/). This site allows you to read the chapters online, can launch Jupyter notebooks using the binder service, and provides access to code and slide formats. Use `make publish` to create and update the site.

### The Book PDF

The book PDF is compiled automatically from the individual notebooks. Each notebook becomes a chapter; references are compiled in the final chapter. Use `make book` to create the book.

## Creating and Building

### Tools you will need

To work on the notebook files, you need the following:

1. Jupyter notebook. The easiest way to install this is via the [Anaconda distribution](https://www.anaconda.com/download/).
2. Once you have the Jupyter notebook installed, you can start editing and coding right away by starting `jupyter notebook` (or `jupyter lab`) in the topmost project folder.
3. If (like me) you don't like the Jupyter Notebook interface, I recommend [Jupyter Lab](https://jupyterlab.readthedocs.io/en/stable/), the designated successor to Jupyter Notebook. Invoke it as `jupyter lab`. It comes with a much more modern interface, but misses autocompletion and a couple of extensions. I am running it [as a Desktop application](http://christopherroach.com/articles/jupyterlab-desktop-app/) which gets rid of all the browser toolbars.
4. To create the entire book (with citations, references, and all), you also need the [ipypublish](https://github.com/chrisjsewell/ipypublish) package. This allows you to create the HTML files, merge multiple chapters into a single PDF or HTML file, create slides, and more.

The Makefile provides the essential tools for creation.

### Version Control

We use git in a single strand of revisions. Feel free to branch for features, but eventually merge back into the main "master" branch. Sync early; sync often. Only push if everything ("make all") builds and passes.

The Github repo thus will typically reflect work in progress. If you reach a stable milestone, you can push things on the fuzzingbook.org web site, using `make publish`.

#### nbdime

The [nbdime](https://github.com/jupyter/nbdime) package gives you tools such as `nbdiff` (and even better, `nbdiff-web`) to compare notebooks against each other; this ensures that cell _contents_ are compared rather than the binary format. `nbdime config-git --enable` integrates nbdime with git such that `git diff` runs the above tools; merging should also be notebook-specific.

#### nbstripout

Notebooks in version control _should not contain output cells,_ as these tend to change a lot. (Hey, we're talking random output generation here!) To have output cells automatically stripped during commit, install the [nbstripout](https://github.com/kynan/nbstripout) package and use

```
nbstripout --install
```

to set it up as a git filter. The `notebooks/` folder comes with a `.gitattributes` file already set up for `nbstripout`, so you should be all set.

Note that _published_ notebooks (in short, anything under the `docs/` tree) _should_ have their output cells included, such that users can download and edit notebooks with pre-rendered output. This folder contains a `.gitattributes` file that should explicitly disable `nbstripout`, but it can't hurt to check.

As an example, the following cell

1.
_should_ have its output included in the [HTML version of this guide](https://www.debuggingbook.org/beta/html/Guide_for_Authors.html); 2. _should not_ have its output included in [the git repo](https://github.com/uds-se/debuggingbook/blob/master/notebooks/Guide_for_Authors.ipynb) (`notebooks/`); 3. _should_ have its output included in [downloadable and editable notebooks](https://github.com/uds-se/debuggingbook/blob/master/docs/beta/notebooks/Guide_for_Authors.ipynb) (`docs/notebooks/` and `docs/beta/notebooks/`). ``` import random random.random() ``` ### Inkscape and GraphViz Creating derived files uses [Inkscape](https://inkscape.org/en/) and [Graphviz](https://www.graphviz.org/) – through its [Python wrapper](https://pypi.org/project/graphviz/) – to process SVG images. These tools are not automatically installed, but are available on pip, _brew_ and _apt-get_ for all major distributions. ### LaTeX Fonts By default, creating PDF uses XeLaTeX with a couple of special fonts, which you can find in the `fonts/` folder; install these fonts system-wide to make them accessible to XeLaTeX. You can also run `make LATEX=pdflatex` to use `pdflatex` and standard LaTeX fonts instead. ### Creating Derived Formats (HTML, PDF, code, ...) The [Makefile](../Makefile) provides rules for all targets. Type `make help` for instructions. The Makefile should work with GNU make and a standard Jupyter Notebook installation. To create the multi-chapter book and BibTeX citation support, you need to install the [iPyPublish](https://github.com/chrisjsewell/ipypublish) package (which includes the `nbpublish` command). ### Creating a New Chapter To create a new chapter for the book, 1. Set up a new `.ipynb` notebook file as copy of [Template.ipynb](Template.ipynb). 2. Include it in the `CHAPTERS` list in the `Makefile`. 3. Add it to the git repository. ## Teaching a Topic Each chapter should be devoted to a central concept and a small set of lessons to be learned. 
I recommend the following structure:

* Introduce the problem ("We want to parse inputs")
* Illustrate it with some code examples ("Here's some input I'd like to parse")
* Develop a first (possibly quick and dirty) solution ("A PEG parser is short and often does the job")
* Show that it works and how it works ("Here's a neat derivation tree. Look how we can use this to mutate and combine expressions!")
* Develop a second, more elaborated solution, which should then become the main contribution. ("Here's a general LR(1) parser that does not require a special grammar format. (You can skip it if you're not interested)")
* Offload non-essential extensions to later sections or to exercises. ("Implement a universal parser, using the Dragon Book")

The key idea is that readers should be able to grasp the essentials of the problem and the solution in the beginning of the chapter, and get further into details as they progress through it. Make it easy for readers to be drawn in, providing insights of value quickly. If they are interested to understand how things work, they will get deeper into the topic. If they just want to use the technique (because they may be more interested in later chapters), having them read only the first few examples should be fine for them, too.

Whatever you introduce should be motivated first, and illustrated after. Motivate the code you'll be writing, and use plenty of examples to show what the code just introduced is doing. Remember that readers should have fun interacting with your code and your examples. Show and tell again and again and again.

### Special Sections

#### Quizzes

You can have _quizzes_ as part of the notebook. These are created using the `quiz()` function.
Its arguments are

* The question
* A list of options
* The correct answer(s) - either
  * the single number of the one single correct answer (starting with 1)
  * a list of numbers of correct answers (multiple choices)

To make the answer less obvious, you can specify it as a string containing an arithmetic expression evaluating to the desired number(s). The expression will remain in the code (and possibly be shown as hint in the quiz).

```
from bookutils import quiz

# A single-choice quiz
quiz("The color of the sky is",
     ["blue", "red", "black"],
     '5 - 4')

# A multiple-choice quiz
quiz("What is this book?",
     ["Novel", "Friendly", "Useful"],
     '[5 - 4, 1 + 1, 27 / 9]')
```

Cells that contain only the `quiz()` call will not be rendered (but the quiz will).

#### Synopsis

Each chapter should have a section named "Synopsis" at the very end:

```markdown
## Synopsis

This is the text of the synopsis.
```

This section is evaluated at the very end of the notebook. It should summarize the most important functionality (classes, methods, etc.) together with examples. In the derived HTML and PDF files, it is rendered at the beginning, such that it can serve as a quick reference.

#### Excursions

There may be longer stretches of text (and code!) that are too special, too boring, or too repetitive to read. You can mark such stretches as "Excursions" by enclosing them in MarkDown cells that state:

```markdown
#### Excursion: TITLE
```

and

```markdown
#### End of Excursion
```

Stretches between these two markers get special treatment when rendering:

* In the resulting HTML output, these blocks are set up such that they are shown on demand only.
* In printed (PDF) versions, they will be replaced by a pointer to the online version.
* In the resulting slides, they will be omitted right away.

Here is an example of an excursion:

#### Excursion: Fine points on Excursion Cells

Note that the `Excursion` and `End of Excursion` cells must be separate cells; they cannot be merged with others.
#### End of Excursion

### Ignored Code

If a code cell starts with

```python
# ignore
```

then the code will not show up in rendered input. Its _output_ will, however. This is useful for cells that create drawings, for instance - the focus should be on the result, not the code. This also applies to cells that start with a call to `display()` or `quiz()`.

### Ignored Cells

You can have _any_ cell not show up at all (including its output) in any rendered input by adding the following metadata to the cell:

```json
{
  "ipub": {
    "ignore": true
  }
}
```

*This* text, for instance, does not show up in the rendered version.

### Documentation Assertions

If a code cell starts with

```python
# docassert
```

then the code will not show up in rendered input (like `# ignore`), but also not in exported code. This is useful for inserting _assertions_ that encode assumptions made in the (following) documentation. Having such an assertion fail means that the documentation no longer applies. Since the documentation is not part of exported code, and since code may behave differently in standalone Python, these assertions are not exported.

## Coding

### Set up

The first code block in each notebook should be

```
import bookutils
```

This sets things up such that notebooks can import each other's code (see below). This import statement is removed in the exported Python code, as the .py files would import each other directly.

Importing `bookutils` also sets a fixed _seed_ for random number generation. This way, whenever you execute a notebook from scratch (restarting the kernel), you get the exact same results; these results will also end up in the derived HTML and PDF files. (If you run a notebook or a cell for a second time, you will get more random results.)

### Coding Style and Consistency

Here are a few rules regarding coding style.

#### Use Python 3

We use Python 3 (specifically, Python 3.9.7) for all code.
As of 2021, there is no longer any need to include compatibility hacks for earlier Python versions.

#### Follow Python Coding Conventions

We use _standard Python coding conventions_ according to [PEP 8](https://www.python.org/dev/peps/pep-0008/). Your code must pass the `pycodestyle` style checks, which you get by invoking `make style`. A very easy way to meet this goal is to invoke `make reformat`, which reformats all code accordingly. The `code prettify` notebook extension also allows you to automatically make your code (mostly) adhere to PEP 8.

#### One Cell per Definition

Use one cell for each definition or example. During importing, this makes it easier to decide which cells to import (see below).

#### Identifiers

In the book, this is how we denote `variables`, `functions()` and `methods()`, `Classes`, `Notebooks`, `variables_and_constants`, `EXPORTED_CONSTANTS`, `files`, `folders/`, and `<grammar-elements>`.

#### Quotes

If you have the choice between quoting styles, prefer

* double quotes (`"strings"`) around strings that are used for interpolation or that are natural language messages, and
* single quotes (`'characters'`) for single characters and formal language symbols that an end user would not see.

#### Static Type Checking

Use type annotations for all function definitions.

#### Documentation

Use documentation strings for all public classes and methods.

#### Read More

Beyond simple syntactical things, here's a [very nice guide](https://docs.python-guide.org/writing/style/) to get you started writing "pythonic" code.

### Importing Code from Notebooks

To import the code of individual notebooks, you can import directly from .ipynb notebook files.
```
from DeltaDebugger import DeltaDebugger

def fun(s: str) -> None:
    assert 'a' not in s

with DeltaDebugger() as dd:
    fun("abc")
dd
```

**Important**: When importing a notebook, the module loader will **only** load cells that start with

* a function definition (`def`)
* a class definition (`class`)
* a variable definition if all uppercase (`ABC = 123`)
* `import` and `from` statements

All other cells are _ignored_ to avoid recomputation of notebooks and clutter of `print()` output. Exported Python code will import from the respective .py file instead. The exported Python code is set up such that only the above items will be imported.

If importing a module prints out something (or has other side effects), that is an error. Use `make check-imports` to check whether your modules import without output.

Import modules only as you need them, such that you can motivate them well in the text.

### Imports and Dependencies

Try to depend on as few other notebooks as possible. This will not only ease construction and reconstruction of the code, but also reduce requirements for readers, giving them more flexibility in navigating through the book.

When you import a notebook, this will show up as a dependency in the [Sitemap](00_Table_of_Contents.ipynb). If the imported module is not critical for understanding, and thus should not appear as a dependency in the sitemap, mark the import as a "minor dependency" as follows:

```
from Intro_Debugging import remove_html_markup  # minor dependency
```

### Design and Architecture

Stick to simple functions and data types. We want our readers to focus on functionality, not Python. You are encouraged to write in a "pythonic" style, making use of elegant Python features such as list comprehensions, sets, and more; however, if you do so, be sure to explain the code such that readers familiar with, say, C or Java can still understand things.
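To make this trade-off concrete, here is a small, hypothetical illustration (not from the book's code) of the same computation written first in a C/Java-style loop and then as a list comprehension:

```python
def squares_of_evens_loop(numbers):
    """C/Java style: explicit loop and accumulator."""
    result = []
    for n in numbers:
        if n % 2 == 0:
            result.append(n * n)
    return result

def squares_of_evens_pythonic(numbers):
    """Pythonic style: a list comprehension states the same in one line."""
    return [n * n for n in numbers if n % 2 == 0]

print(squares_of_evens_loop([1, 2, 3, 4]))      # [4, 16]
print(squares_of_evens_pythonic([1, 2, 3, 4]))  # [4, 16]
```

When you use the pythonic form in a chapter, a one-sentence gloss like the docstring above is usually enough to keep C and Java readers on board.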
### Incomplete Examples

When introducing examples for students to complete, use the ellipsis `...` to indicate where students should add code, as in this example:

```
def student_example() -> None:
    x = some_computation()  # type: ignore
    # Now, do something with x
    ...
```

The ellipsis is legal code in Python 3. (Actually, it is an `Ellipsis` object.)

### Introducing Classes

Defining _classes_ can be a bit tricky, since all of a class must fit into a single cell. This defeats the incremental style preferred for notebooks. By defining a class _as a subclass of itself_, though, you can avoid this problem.

Here's an example. We introduce a class `Foo`:

```
class Foo:
    def __init__(self) -> None:
        pass

    def bar(self) -> None:
        pass
```

Now we could discuss what `__init__()` and `bar()` do, or give an example of how to use them:

```
f = Foo()
f.bar()
```

We can now introduce a new `Foo` method by subclassing from `Foo` into a class which is _also_ called `Foo`:

```
class Foo(Foo):
    def baz(self) -> None:
        pass
```

This is the same as if we had subclassed `Foo` into `Foo_1`, with `Foo` then becoming an alias for `Foo_1`. The original `Foo` class is overshadowed by the new one:

```
new_f = Foo()
new_f.baz()
```

Note, though, that _existing_ objects keep their original class:

```
from ExpectError import ExpectError

with ExpectError(AttributeError):
    f.baz()  # type: ignore
```

## Helpers

There are a couple of notebooks with helpful functions, including [Timer](Timer.ipynb) and [ExpectError and ExpectTimeout](ExpectError.ipynb). Also check out the [Tracer](Tracer.ipynb) class.

### Quality Assurance

In your code, make use of plenty of assertions that allow you to catch errors quickly. These assertions also help your readers understand the code.

### Issue Tracker

The [Github project page](https://github.com/uds-se/debuggingbook) allows you to enter and track issues.

## Writing Text

Text blocks use Markdown syntax. [Here is a handy guide](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet).
### Sections

Any chapter notebook must begin with `# TITLE`; sections and subsections should then follow as `## SECTION` and `### SUBSECTION`.

Sections should start with their own block, to facilitate cross-referencing.

### Highlighting

Use

* _emphasis_ (`_emphasis_`) for highlighting,
* *emphasis* (`*emphasis*`) for highlighting terms that will go into the index,
* `backticks` for code and other verbatim elements.

### Hyphens and Dashes

Use "—" for em-dashes, "-" for hyphens, and "$-$" for minus.

### Quotes

Use standard typewriter quotes (`"quoted string"`) for quoted text. The PDF version will automatically convert these to "smart" (e.g. left and right) quotes.

### Lists and Enumerations

You can use bulleted lists:

* Item A
* Item B

and enumerations:

1. item 1
1. item 2

For description lists, use a combination of bulleted lists and highlights:

* **PDF** is great for reading offline
* **HTML** is great for reading online

### Math

LaTeX math formatting works, too. `$x = \sum_{n = 1}^{\infty}\frac{1}{n}$` gets you $x = \sum_{n = 1}^{\infty}\frac{1}{n}$.

### Inline Code

Python code normally goes into its own cells, but you can also have it in the text:

```python
s = "Python syntax highlighting"
print(s)
```

### Images

To insert images, use Markdown syntax: `![Word cloud](PICS/wordcloud.png){width=100%}` inserts a picture from the `PICS` folder.

![Word cloud](PICS/wordcloud.png){width=100%}

All pictures go to `PICS/`, both in source as well as derived formats; both are stored in git, too. (Not all of us have all the tools to recreate diagrams, etc.)

### Footnotes

Markdown supports footnotes, as in [^footnote]. These are rendered as footnotes in HTML and PDF, _but not within Jupyter_; hence, readers may find them confusing. So far, the book makes no use of footnotes, and uses parenthesized text instead.

[^footnote]: Test, [Link](https://www.fuzzingbook.org).
### Floating Elements and References

\todo[inline]{I haven't gotten this to work yet -- AZ}

To produce floating elements in LaTeX and PDF, edit the metadata of the cell which contains them. (In the Jupyter Notebook toolbar, go to View -> Cell Toolbar -> Edit Metadata and a button will appear above each cell.) This allows you to control placement and create labels.

#### Floating Figures

Edit the metadata as follows:

```json
{
  "ipub": {
    "figure": {
      "caption": "Figure caption.",
      "label": "fig:flabel",
      "placement": "H",
      "height": 0.4,
      "widefigure": false
    }
  }
}
```

- all tags are optional
- height/width correspond to the fraction of the page height/width; only one should be used (the aspect ratio will be maintained automatically)
- `placement` is optional and corresponds to using a placement argument for the figure (e.g. \begin{figure}[H]). See [Positioning_images_and_tables](https://www.sharelatex.com/learn/Positioning_images_and_tables).
- `widefigure` is optional and corresponds to expanding the figure to the page width (i.e. \begin{figure*}); placement arguments will then be ignored

#### Floating Tables

For **tables** (e.g. those output by `pandas`), enter in the cell metadata:

```json
{
  "ipub": {
    "table": {
      "caption": "Table caption.",
      "label": "tbl:tlabel",
      "placement": "H",
      "alternate": "gray!20"
    }
  }
}
```

- `caption` and `label` are optional
- `placement` is optional and corresponds to using a placement argument for the table (e.g. \begin{table}[H]). See [Positioning_images_and_tables](https://www.sharelatex.com/learn/Positioning_images_and_tables).
- `alternate` is optional and corresponds to using alternating colors for the table rows (e.g. \rowcolors{2}{gray!25}{white}). See [https://tex.stackexchange.com/a/5365/107738](https://tex.stackexchange.com/a/5365/107738).
- if tables exceed the text width, in LaTeX they will be shrunk to fit

#### Floating Equations

For **equations** (e.g.
those output by `sympy`), enter in the cell metadata:

```json
{
  "ipub": {
    "equation": {
      "environment": "equation",
      "label": "eqn:elabel"
    }
  }
}
```

- environment is optional and can be 'none' or any of those available in [amsmath](https://www.sharelatex.com/learn/Aligning_equations_with_amsmath): 'equation', 'align', 'multline', 'gather', or their \* variants. Additionally, 'breqn' or 'breqn\*' will select the experimental [breqn](https://ctan.org/pkg/breqn) environment to *smart*-wrap long equations.
- label is optional and will only be used if the equation is in an environment

#### References

To reference a floating object, use `\cref`, e.g. \cref{eq:texdemo}.

### Cross-Referencing

#### Section References

* To refer to sections in the same notebook, use the header name as the anchor; e.g. `[Code](#Code)` gives you [Code](#Code). For multi-word titles, replace spaces by hyphens (`-`), as in [Using Notebooks as Modules](#Using-Notebooks-as-Modules).
* To refer to cells (e.g. equations or figures), you can define a label as cell metadata. See [Floating Elements and References](#Floating-Elements-and-References) for details.
* To refer to other notebooks, use a Markdown cross-reference to the notebook file, e.g. [the "Debugger" chapter](Debugger.ipynb). A special script will be run to take care of these links. Reference chapters by name, not by number.

### Citations

To cite papers, cite in LaTeX style. The text

```
print(r"\cite{Purdom1972}")
```

is expanded to \cite{Purdom1972}, which in HTML and PDF should be a nice reference. The keys refer to BibTeX entries in [fuzzingbook.bib](fuzzingbook.bib).

* LaTeX/PDF output will have a "References" section appended.
* HTML output will link to the URL field from the BibTeX entry. Be sure it points to the DOI.
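As a tiny illustration of the anchor rule for section references above, a hypothetical helper (not part of the book's tool chain) that maps a section title to its same-notebook anchor might look like this:

```python
def section_anchor(title):
    """Map a section title to a same-notebook Markdown anchor,
    following the 'replace spaces by hyphens' rule from the text.
    (Jupyter's real anchor generation may handle further special cases.)"""
    return "#" + title.replace(" ", "-")

print(section_anchor("Using Notebooks as Modules"))  # #Using-Notebooks-as-Modules
```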
### Todo's

* To mark todo's, use `\todo{Thing to be done}`. \todo{Expand this}

### Tables

Tables with fixed contents can be produced using Markdown syntax:

| Tables | Are | Cool |
| ------ | ---:| ----:|
| Zebra  |   2 |   30 |
| Gnu    |  20 |  400 |

If you want to produce tables from Python data, the `PrettyTable` package (included in the book) allows you to [produce tables with LaTeX-style formatting.](http://blog.juliusschulz.de/blog/ultimate-ipython-notebook)

```
from bookutils import PrettyTable as pt
import numpy as np

data = np.array([[1, 2, 30], [2, 3, 400]])
pt.PrettyTable(data, [r"$\frac{a}{b}$", r"$b$", r"$c$"], print_latex_longtable=False)
```

### Plots and Data

It is possible to include plots in notebooks. Here is an example of plotting a function:

```
%matplotlib inline

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 3 * np.pi, 500)
plt.plot(x, np.sin(x ** 2))
plt.title('A simple chirp');
```

And here's an example of plotting data:

```
%matplotlib inline

import matplotlib.pyplot as plt

data = [25, 36, 57]
plt.plot(data)
plt.title('Increase in data');
```

Plots are available in all derived versions (HTML, PDF, etc.). Plots with `plotly` are even nicer (and interactive, even in HTML). However, at this point, we cannot export them to PDF, so `matplotlib` it is.

## Slides

You can set up the notebooks such that they can also be presented as slides. In the browser, select View -> Cell Toolbar -> Slideshow. You can then select a slide type for each cell:

* `New slide` starts a new slide with the cell (typically, every `## SECTION` in the chapter)
* `Sub-slide` starts a new sub-slide which you navigate "down" to (anything in the section)
* `Fragment` is a cell that gets revealed after a click (on the same slide)
* `Skip` is skipped during the slide show (e.g. `import` statements; navigation guides)
* `Notes` goes into presenter notes

To create slides, do `make slides`; to view them, change into the `slides/` folder and open the created HTML files.
(The `reveal.js` package has to be in the same folder as the slide to be presented.)

The ability to use slide shows is a compelling argument for teachers and instructors in our audience.

(Hint: In a slide presentation, type `s` to see presenter notes.)

## Writing Tools

When you're editing in the browser, you may find these extensions helpful:

### Jupyter Notebook

[Jupyter Notebook Extensions](https://github.com/ipython-contrib/jupyter_contrib_nbextensions) is a collection of productivity-enhancing tools (including spellcheckers). I found these extensions to be particularly useful:

* Spell Checker (while you're editing)
* Table of contents (for quick navigation)
* Code prettify (to produce "nice" syntax)
* Codefolding
* Live Markdown Preview (while you're editing)

### Jupyter Lab

Extensions for _Jupyter Lab_ are much less varied and less supported, but things are getting better. I am running

* [Spell Checker](https://github.com/ijmbarr/jupyterlab_spellchecker)
* [Table of Contents](https://github.com/jupyterlab/jupyterlab-toc)
* [JupyterLab-LSP](https://towardsdatascience.com/jupyterlab-2-0-edd4155ab897), providing code completion, signatures, style checkers, and more.

## Interaction

It is possible to include interactive elements in a notebook, as in the following example:

```python
try:
    from ipywidgets import interact, interactive, fixed, interact_manual
    x = interact(fuzzer, char_start=(32, 128), char_range=(0, 96))
except ImportError:
    pass
```

Note that such elements will be present in the notebook versions only, but not in the HTML and PDF versions, so use them sparingly (if at all). To avoid errors during production of derived files, protect against `ImportError` exceptions as in the above example.

## Read More

Here is some documentation on the tools we use:

1. [Markdown Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) - a general introduction to Markdown
1. [iPyPublish](https://github.com/chrisjsewell/ipypublish) - a rich set of tools to create documents with citations and references

## Alternative Tool Sets

We don't currently use these, but they are worth learning:

1. [Making Publication-Ready Python Notebooks](http://blog.juliusschulz.de/blog/ultimate-ipython-notebook) - another tool set on how to produce book chapters from notebooks
1. [Writing academic papers in plain text with Markdown and Jupyter notebook](https://sylvaindeville.net/2015/07/17/writing-academic-papers-in-plain-text-with-markdown-and-jupyter-notebook/) - alternative ways to generate citations
1. [A Jupyter LaTeX template](https://gist.github.com/goerz/d5019bedacf5956bcf03ca8683dc5217#file-revtex-tplx) - how to define a LaTeX template
1. [Boost Your Jupyter Notebook Productivity](https://towardsdatascience.com/jupyter-notebook-hints-1f26b08429ad) - a collection of hints for debugging and profiling Jupyter notebooks
# Analysis of the collected data

We use IPython to analyse and display the data collected during production. An expert-system regulator is implemented. The data analysed are from 13 August 2015.

Experiment parameters:

* Start time: 10:30
* End time: 11:00
* Extruded filament: 447 cm
* $T: 150ºC$
* $V_{min}$ (puller): 1.5 mm/s
* $V_{max}$ (puller): 3.4 mm/s
* The speed increments in the expert-system rules differ:
  * In cases 3 and 5 the increment is kept at +2.
  * In cases 4 and 6 the increment is reduced to -1.

```
# Import the libraries we use
import numpy as np
import pandas as pd
import seaborn as sns

# Show the version of each library used
print("Numpy v{}".format(np.__version__))
print("Pandas v{}".format(pd.__version__))
print("Seaborn v{}".format(sns.__version__))

# Open the CSV file with the sample data
datos = pd.read_csv('ensayo4.CSV')

%pylab inline

# Store in a list the file columns we will work with
columns = ['Diametro X', 'Diametro Y', 'RPM TRAC']

# Show a summary of the collected data
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
```

We plot both diameters and the puller speed on the same graph:

```
# (.ix is deprecated in recent pandas; .loc does the same label-based selection)
graf = datos.loc[:, "Diametro X"].plot(figsize=(16,10), ylim=(0.5,3))
graf.axhspan(1.65, 1.85, alpha=0.2)
graf.set_xlabel('Time (s)')
graf.set_ylabel('Diameter (mm)')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')

box = datos.loc[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
box.axhspan(1.65, 1.85, alpha=0.2)
```

With this second approach the data have been stabilized. We will try to lower that percentage further. As a fourth approach, we will modify the pulling speeds. The proposed speed range is 1.5 to 5.3, keeping the expert-system increments as in the current test.
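As a rough sketch of how the share of out-of-tolerance samples could be computed (toy values stand in for the `ensayo4.CSV` readings; the column names and the 1.65-1.85 mm band follow this notebook's conventions):

```python
import pandas as pd

def fraction_out_of_spec(df, low=1.65, high=1.85):
    """Fraction of samples where either diameter leaves the [low, high] band."""
    bad = (df['Diametro X'].lt(low) | df['Diametro X'].gt(high) |
           df['Diametro Y'].lt(low) | df['Diametro Y'].gt(high))
    return bad.mean()

# Toy sample: the second and fourth rows violate the band
toy = pd.DataFrame({'Diametro X': [1.70, 1.90, 1.75, 1.60],
                    'Diametro Y': [1.70, 1.70, 1.75, 1.70]})
print(fraction_out_of_spec(toy))  # 0.5
```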
Comparison of Diametro X against Diametro Y to check the filament ratio:

```
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
```

# Data filtering

We assume samples with $d_x < 0.9$ or $d_y < 0.9$ to be sensor errors, so we filter them out of the collected samples.

```
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.loc[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
```

## X/Y plot

```
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
```

# Ratio analysis

```
ratio = datos_filtrados['Diametro X'] / datos_filtrados['Diametro Y']
ratio.describe()

# (pd.rolling_mean/pd.rolling_std are deprecated; the .rolling() accessor replaces them)
rolling_mean = ratio.rolling(window=50).mean()
rolling_std = ratio.rolling(window=50).std()
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
```

# Quality limits

We count how many times the quality limits are crossed: $Th^+ = 1.85$ and $Th^- = 1.65$.

```
Th_u = 1.85
Th_d = 1.65

data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
                        (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
```
``` import pandas as pd df = pd.read_csv('numerai_tournament_data.csv') ids = df['id'] ids df.head() import os os.environ['KERAS_BACKEND' ] = 'tensorflow' os.environ['MKL_THREADING_LAYER'] = 'GNU' import keras as ks from keras.models import Sequential from keras.layers import Dense from keras.callbacks import TensorBoard import keras import pandas as pd import numpy as np from keras import optimizers from keras.utils import to_categorical from keras.models import Model from keras.layers import Input, Dense from keras.layers.normalization import BatchNormalization from keras.layers.core import Dropout, Activation from sklearn.preprocessing import MinMaxScaler import time TOURNAMENT_NAME = "kazutsugi" TARGET_NAME = f"target_{TOURNAMENT_NAME}" PREDICTION_NAME = f"prediction_{TOURNAMENT_NAME}" BENCHMARK = 0.002 BAND = 0.04 # Submissions are scored by spearman correlation def score(df): # method="first" breaks ties based on order in array return np.corrcoef( df[TARGET_NAME], df[PREDICTION_NAME].rank(pct=True, method="first") )[0,1] # The payout function def payout(scores): return ((scores - BENCHMARK)/BAND).clip(lower=-1, upper=1) NAME='mlp_katzugi_benchmark' def main(): print("# Loading data...") # The training data is used to train your model how to predict the targets. training_data = pd.read_csv("numerai_training_data.csv").set_index("id") # The tournament data is the data that Numerai uses to evaluate your model. 
tournament_data = pd.read_csv("numerai_tournament_data.csv").set_index("id") tournament_data.head() feature_names = [f for f in training_data.columns if f.startswith("feature")] id_names = [i for i in tournament_data.columns if i.startswith("id")] print(f"Loaded {len(feature_names)} features") # print(f"Loaded {len(id_names)} ids") print(id_names) # print(feature_names) print("Training model") batch_size = 710 dropout = 0.2 visible = Input(shape=(310,)) hidden1 = Dense(3, activation='sigmoid')(visible) hidden1 = Dropout(dropout)(hidden1) hidden2 = Dense(3, activation='sigmoid')(hidden1) hidden3 = Dense(3, activation='sigmoid')(hidden2) output = Dense(1, activation='sigmoid')(hidden3) model = Model(inputs=visible, outputs=output) model.compile(loss='binary_crossentropy',optimizer='rmsprop') model.summary() tensorboard = TensorBoard(log_dir="logs/{}".format(NAME)) model.fit(training_data[feature_names], training_data[TARGET_NAME],batch_size=batch_size,epochs=15,validation_split=0.63,callbacks=[tensorboard],shuffle=False) print("Generating predictions") training_data[PREDICTION_NAME] = model.predict(training_data[feature_names]) tournament_data[PREDICTION_NAME] = model.predict(tournament_data[feature_names]) # Check the per-era correlations on the training set train_correlations = training_data.groupby("era").apply(score) print(f"On training the correlation has mean {train_correlations.mean()} and std {train_correlations.std()}") print(f"On training the average per-era payout is {payout(train_correlations).mean()}") # Check the per-era correlations on the validation set validation_data = tournament_data[tournament_data.data_type == "validation"] validation_correlations = validation_data.groupby("era").apply(score) print(f"On validation the correlation has mean {validation_correlations.mean()} and std {validation_correlations.std()}") print(f"On validation the average per-era payout is {payout(validation_correlations).mean()}") 
tournament_data[PREDICTION_NAME].to_csv(TOURNAMENT_NAME + "_submission.csv") # results = tournament_data[PREDICTION_NAME][:, 1] # results_df = pd.DataFrame(data={'probability_kazutsugi':results}) if __name__ == '__main__': main() import os os.environ['KERAS_BACKEND' ] = 'tensorflow' os.environ['MKL_THREADING_LAYER'] = 'GNU' import keras as ks from keras.models import Sequential from keras.layers import Dense from keras.callbacks import TensorBoard import keras import pandas as pd import numpy as np from keras import optimizers from keras.utils import to_categorical from keras.models import Model from keras.layers import Input, Dense from keras.layers.normalization import BatchNormalization from keras.layers.core import Dropout, Activation from sklearn.preprocessing import MinMaxScaler import time def main(): # Set seed for reproducibility NAME = "MLP" np.random.seed(0) print("Loading data...") # Load the data from the CSV files training_data = pd.read_csv('numerai_training_data.csv', header=0) print('original train data shape: {},\t{} \n\n \t:'.format(training_data.shape[0],training_data.shape[1])) pd.DataFrame(training_data) # prediction_data = pd.read_csv('numerai_tournament_data.csv', header=0) # print('original prediction data shape: {},\t{} \n\n \t:'.format(prediction_data.shape[0],prediction_data.shape[1])) # complete_training_data = pd.concat([training_data, prediction_data]) # print('total training / valdation shape {}'.format(complete_training_data)) # # Transform the loaded CSV data into numpy arrays # features = [f for f in list(training_data) if "feature" in f] # print(features) # X = training_data[features] # mini= MinMaxScaler(feature_range=(0,1)) # X = mini.fit_transform(X) # Y = training_data["target_frank"] # Y= keras.utils.to_categorical(Y,2) # x_prediction = prediction_data[features] # x_prediction = mini.fit_transform(x_prediction) # ids = prediction_data["id"] # batch_size = 710 # dropout = 0.2 # visible = Input(shape=(50,)) # hidden1 = 
Dense(10, activation='relu')(visible) # hidden2 = Dense(20, activation='relu')(hidden1) # hidden3 = Dense(10, activation='relu')(hidden2) # output = Dense(2, activation='sigmoid')(hidden3) # model = Model(inputs=visible, outputs=output) # model.compile(loss='binary_crossentropy',optimizer='rmsprop') # model.summary() # tensorboard = TensorBoard(log_dir="logs/{}".format(NAME)) # model.fit(X,Y,batch_size=batch_size,epochs=10,validation_split=0.33,callbacks=[tensorboard]) # y_prediction = model.predict(x_prediction) # evaluate = model.evaluate(x_prediction,y_prediction) # probabilities = y_prediction[:, 1] # print("- probabilities:", probabilities[1:6]) # # We can see the probability does seem to be good at predicting the # # true target correctly. # print("- target:", prediction_data['target_frank'][1:6]) # print("- rounded probability:", [np.round(p) for p in probabilities][1:6]) # # But overall the accuracy is very low. # correct = [ # np.round(x) == y # for (x, y) in zip(probabilities, prediction_data['target_frank']) # ] # print("- accuracy: ", sum(correct) / float(prediction_data.shape[0])) # tournament_corr = np.corrcoef(prediction_data['target_frank'], # prediction_data['target_elizabeth']) # print("- frank vs elizabeth corr:", tournament_corr) # # You can see that target_elizabeth is accurate using the frank model as well. # correct = [ # np.round(x) == y # for (x, y) in zip(probabilities, prediction_data['target_elizabeth']) # ] # print("- elizabeth using frank:", # sum(correct) / float(prediction_data.shape[0])) # # Numerai measures models on logloss instead of accuracy. The lower the logloss the better. # # Numerai only pays models with logloss < 0.693 on the live portion of the tournament data.) 
# print("- validation logloss:",
#       model.evaluate(x_prediction, y_prediction))
# results = y_prediction[:, 1]
# results_df = pd.DataFrame(data={'probability_frank': results})
# joined = pd.DataFrame(ids).join(results_df)
# pd.DataFrame(joined[:5])
# print("Writing predictions to predictions.csv")
# path = 'predictions_{:},{}'.format(time.strftime("%Y-%m-%d_%Hh%Mm%Ss", time.gmtime()), NAME) + '.csv'
# print()
# print("Writing predictions to " + path.strip())
# joined.to_csv(path, float_format='%.15f', index=False)


if __name__ == '__main__':
    main()


import os
os.environ['KERAS_BACKEND'] = 'tensorflow'
os.environ['MKL_THREADING_LAYER'] = 'GNU'

import time

import keras
import numpy as np
import pandas as pd
from keras.callbacks import TensorBoard
from keras.layers import Input, Dense
from keras.layers.core import Dropout, Activation
from keras.layers.normalization import BatchNormalization
from keras.models import Model
from sklearn.preprocessing import MinMaxScaler


def main():
    NAME = "MLP"
    # Set seed for reproducibility
    np.random.seed(0)

    print("Loading data...")
    # Load the data from the CSV files
    training_data = pd.read_csv('numerai_training_data.csv', header=0)
    print('original train data shape: {},\t{}'.format(training_data.shape[0], training_data.shape[1]))
    prediction_data = pd.read_csv('numerai_tournament_data.csv', header=0)
    print('original prediction data shape: {},\t{}'.format(prediction_data.shape[0], prediction_data.shape[1]))

    complete_training_data = pd.concat([training_data, prediction_data])
    print('total training / validation shape: {}'.format(complete_training_data.shape))

    # Transform the loaded CSV data into numpy arrays
    features = [f for f in list(training_data) if "feature" in f]
    print(features)
    X = training_data[features]
    mini = MinMaxScaler(feature_range=(0, 1))
    X = mini.fit_transform(X)
    Y = training_data["target_bernie"]
    Y = keras.utils.to_categorical(Y, 2)

    x_prediction = prediction_data[features]
    # Reuse the scaler fitted on the training data; calling fit_transform here
    # would refit it on the tournament data.
    x_prediction = mini.transform(x_prediction)
    ids = prediction_data["id"]

    batch_size = 710
    dropout = 0.2

    m_in = Input(shape=(50,))
    m1 = Dense(50)(m_in)
    m1 = Activation('relu')(m1)
    m1 = BatchNormalization(momentum=0.99999, axis=-1)(m1)

    m2 = Dense(100)(m1)
    m2 = Activation('relu')(m2)
    m2 = BatchNormalization(momentum=0.999, axis=-1)(m2)

    m3 = Dense(25)(m2)
    m3 = Activation('relu')(m3)
    m3 = Dense(25)(m3)
    m3 = Dropout(dropout)(m3)
    m3 = Activation('relu')(m3)
    m3 = Dense(25)(m3)
    m3 = Activation('relu')(m3)
    m3 = BatchNormalization(momentum=0.99, axis=-1)(m3)
    m3 = Dense(100)(m3)
    m3 = Activation('relu')(m3)
    m3 = Dense(25)(m3)
    m3 = Activation('relu')(m3)

    m4 = Dense(25)(m3)
    m4 = Activation('relu')(m4)
    m4 = Dropout(dropout)(m4)
    m4 = BatchNormalization(momentum=0.9, axis=-1)(m4)

    m5 = Dense(2)(m4)
    m_out = Activation('sigmoid')(m5)

    model = Model(inputs=m_in, outputs=m_out)
    model.compile(loss='binary_crossentropy', optimizer='rmsprop')

    tensorboard = TensorBoard(log_dir="logs/{}".format(NAME))
    model.fit(X, Y, batch_size=batch_size, epochs=10,
              validation_split=0.33, callbacks=[tensorboard])

    y_prediction = model.predict(x_prediction)
    probabilities = y_prediction[:, 1]
    print("- probabilities:", probabilities[1:6])

    # We can see the probability does seem to be good at predicting the
    # true target correctly.
    print("- target:", prediction_data['target_bernie'][1:6])
    print("- rounded probability:", [np.round(p) for p in probabilities][1:6])

    # But overall the accuracy is very low.
    correct = [
        np.round(x) == y
        for (x, y) in zip(probabilities, prediction_data['target_bernie'])
    ]
    print("- accuracy: ", sum(correct) / float(prediction_data.shape[0]))

    # The targets for each of the tournaments are very correlated.
    tournament_corr = np.corrcoef(prediction_data['target_bernie'],
                                  prediction_data['target_elizabeth'])
    print("- bernie vs elizabeth corr:", tournament_corr)

    # You can see that target_elizabeth is accurate using the bernie model as well.
    correct = [
        np.round(x) == y
        for (x, y) in zip(probabilities, prediction_data['target_elizabeth'])
    ]
    print("- elizabeth using bernie:", sum(correct) / float(prediction_data.shape[0]))

    # Numerai measures models on logloss instead of accuracy. The lower the
    # logloss the better. Numerai only pays models with logloss < 0.693 on the
    # live portion of the tournament data.
    # Note: this call scores the model against its own predictions, not the
    # true targets, so it is not a real validation logloss.
    print("- validation logloss:", model.evaluate(x_prediction, y_prediction))

    results = y_prediction[:, 1]
    results_df = pd.DataFrame(data={'probability_bernie': results})
    joined = pd.DataFrame(ids).join(results_df)
    pd.DataFrame(joined[:5])

    # Save the predictions out to a CSV file
    path = 'predictions_{:}'.format(time.strftime("%Y-%m-%d_%Hh%Mm%Ss", time.gmtime())) + '.csv'
    print()
    print("Writing predictions to " + path.strip())
    joined.to_csv(path, float_format='%.15f', index=False)
    # Now you can upload these predictions on numer.ai


if __name__ == '__main__':
    main()
```
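Since the script's `model.evaluate(x_prediction, y_prediction)` call scores the model against its own predictions, it may help to see how the logloss metric Numerai actually scores on can be computed against true targets. A minimal NumPy sketch (the function name and toy inputs are illustrative, not from the notebook):

```python
import numpy as np

def binary_logloss(y_true, p, eps=1e-15):
    # Clip probabilities away from 0 and 1 to avoid log(0)
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    # Mean negative log-likelihood of the true binary labels
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# A constant 0.5 prediction scores ln(2), i.e. the 0.693 payout threshold
print(binary_logloss([0, 1, 1, 0], [0.5, 0.5, 0.5, 0.5]))
```

A constant 0.5 prediction scores exactly ln(2) ≈ 0.693, which is why that number appears as the payout cutoff above.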
```
# Import libraries
import xarray as xr
from matplotlib import pyplot as plt
import numpy as np
%matplotlib inline
from osgeo import gdal
import os
import sys

# load file
#file = r'C:/Users/steve/Desktop/GlobSnow/GlobSnow_SWE_L3B_monthly_201101_v1.2.nc'
file = r'C:/Users/vicki/Documents/GlobSnow/GlobSnow_SWE_L3B_monthly_nc/GlobSnow_SWE_L3B_monthly_201102_v1.2.nc'
f = xr.open_dataset(file)
f.latitude.values

# Path of netCDF file
netcdf_name = "C:/Users/vicki/Documents/GlobSnow/GlobSnow_SWE_L3B_monthly_nc/GlobSnow_SWE_L3B_monthly_201102_v1.2.nc"
# Specify the layer name to read
layer_name = "SWE"

# Open netcdf file.nc with gdal
ds = gdal.Open("NETCDF:{0}:{1}".format(netcdf_name, layer_name))

# Read full data from netcdf
data = ds.ReadAsArray(0, 0, ds.RasterXSize, ds.RasterYSize)
data[data < 0] = 0

# plot
plt.figure(figsize=(11, 11))
f.SWEV.plot()

# plot
plt.figure(figsize=(11, 11))
f.SWE.plot()

# troubleshooting with Steven: it seems like in this projection the edges are 0's,
# but they need to be nans
# test: assign all those corner edges -- literally not on planet earth -- 0's
# where is nan, give it a 0
f['longitude'] = f.longitude.where(~np.isnan(f.longitude), 0)
f['SWE'] = f.SWE.where(~np.isnan(f.longitude), 0)
f['SWEV'] = f.SWEV.where(~np.isnan(f.longitude), 0)
f['latitude'] = f.latitude.where(f.latitude < 90, 0)

# plot SWE
f.SWE.plot()
f.SWEV.plot()

# plot
# note: where 0s occur at lat long, they're plotted at 0,0
plt.figure(figsize=(15, 8))
f.SWE.plot(x='longitude', y='latitude')

# test to try to get rid of those values
SWE = f.SWE.values
print(SWE)

# Check projection
f.projection

# Make a string given the metadata
# There's a bunch of stuff here to double check -- basically everything in the string
# Projection, lat long, see Steven's Github for example
def getProj4string(projection_info):
    '''make a proj4 string and a pyproj map object from projection information'''
    lon0 = projection_info.longitude_of_projection_origin    # Center longitude
    lat0 = projection_info.latitude_of_projection_origin     # Center latitude (0.0)
    false_e = projection_info.false_easting
    false_n = projection_info.false_northing
    scale = projection_info.scale_factor_at_projection_origin

    # Make proj4 string
    proj_string = '"+proj=laea +ellps=WGS84 +lon_0={} +lat_0={} +k_0={} +x_0={} +y_0={}"'.format(
        lon0, lat0, scale, false_e, false_n)
    return proj_string

# make a proj4 string and a pyproj geostationary map object from this file's projection information
# reproject
proj_string = getProj4string(f.projection)
print(proj_string)

# assign proj4 string to file
# check this website: https://spectraldifferences.wordpress.com/2014/12/01/assign-a-projection-to-an-image-from-a-wkt-file/
```
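As a side note -- this is an illustrative sketch on a toy array, not the GlobSnow data -- the `da.where(cond, 0)` pattern used above (keep values where `cond` is True, write 0 elsewhere) has a direct NumPy equivalent:

```python
import numpy as np

# toy 2x2 "SWE" grid with two NaN cells
swe = np.array([[1.0, np.nan],
                [np.nan, 3.0]])

# xarray: da.where(~np.isnan(da), 0)
# NumPy:  np.where(mask, value_if_true, value_if_false)
filled = np.where(np.isnan(swe), 0.0, swe)
print(filled)  # zeros where the NaNs were
```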
# University of Applied Sciences Munich

## Kalman Filter Tutorial
---
(c) Lukas Köstler (lkskstlr@gmail.com)

```
import ipywidgets as widgets
from ipywidgets import interact_manual
from IPython.display import display

import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (6, 3)
import numpy as np
%matplotlib notebook

def normal_pdf(x, mu=0.0, sigma=1.0):
    return 1.0 / np.sqrt(2*np.pi*sigma**2) * np.exp(-0.5/sigma**2 * (x-mu)**2)
```

#### Possible sources:
+ (One of the most prominent books on robotics) https://docs.ufpr.br/~danielsantos/ProbabilisticRobotics.pdf
+ (Many nice pictures) http://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/
+ (Stanford lecture slides) https://stanford.edu/class/ee363/lectures/kf.pdf

## Kalman Filter
---
* We will develop everything with an example in mind: one throws an object into the air and at some timepoints $t_0, t_1, \dots, t_N$ measures the position $x$ and the velocity $v$ of the object. The measurements have some error. We want to find the best estimate of the positions $x_0, x_1, \dots, x_N$ for said timepoints.
* Under certain conditions the Kalman Filter (~1960) is the optimal tool for this task.

### State & Control
---
* The state at time $t_n$ is denoted by $x_n$. Example: height above ground in meters.
* The control at time $t_n$ is denoted by $u_n$. Example: vertical velocity in meters/second.

### Transition Model
---
* The *simplified* transition model gives the new state
$$ x_{n+1} = A x_n + B u_{n+1} $$
Example (simple mechanics):
$$x_{n+1} = \underbrace{1}_{A} \, x_n + \underbrace{\Delta t}_{B} \, u_{n+1}$$

### Transition Model with Noise
---
* The transition model might not be perfect. Thus we add a random (Gaussian) term:
$$ x_{n+1} = A x_n + B u_{n+1} + w_n, \,\,\, w_n \sim N(0, \sigma_w)$$
Example: because $\mu = 0$ we assume that we have no systematic error in the transition model. $\sigma_w$ is the expected error per transition, e.g. 0.1 meters due to friction etc.
**Important**: It is reasonable to expect that $u_{n+1}$ is noisy as well. This has to be accounted for! Therefore $\sigma_w$ is usually a function of $x_n, u_{n+1}$, so $\sigma_w = \sigma_{w, n}$, i.e. different for each timestep.

### Observation Model
---
* At each timepoint $t_n$ we observe/measure the state:
$$ z_{n} = C x_n$$
Example (we measure the height in meters directly):
$$z_{n} = \underbrace{1}_{C} \, x_n$$
If we measured the height in centimeters we would get:
$$z_{n} = \underbrace{100}_{C} \, x_n $$

### Observation Model with Noise
---
* The measurement (either the device or our model) might not be perfect. Thus we add a random (Gaussian) term:
$$ z_{n} = C x_n + v_n, \,\,\, v_n \sim N(0, \sigma_v)$$
Example: because $\mu = 0$ we assume that we have no systematic error in the measurement model. $\sigma_v$ is the expected error per measurement, e.g. 0.5 meters as given in the sensor's data sheet.

Again it is reasonable to expect that $\sigma_v$ is not constant. A normal distance sensor usually has some fixed noise and some which is relative to the distance measured, so $\sigma_v \approx \sigma_{v, fix} + x_n \cdot \sigma_{v, linear}$.

## The Filter
---
* **Predict** Take the old estimate $\hat{x}_n$ and the transition model to get $\hat{x}_{n+1\vert n}$, the new "guess" without using the measurement $z_{n+1}$.
* **Update/Correct** Take the prediction $\hat{x}_{n+1\vert n}$ and the measurement $z_{n+1}$ to get the final estimate $\hat{x}_{n+1}$.
### Predict
---
* Reminder: $ x_{n+1} = a x_n + b u_n + w_n, \,\,\, w_n \sim N(0, \sigma_w), \, a, b \in \mathcal{R}$
* Which gives:
\begin{align}
\text{mean:}& &E[\hat{x}_{n+1\vert n}] &= a E[\hat{x}_n] + b u_n + 0 \\
\text{variance:}& &Var(\hat{x}_{n+1 \vert n}) &= a^2 Var(\hat{x}_n) + 0 + \sigma_w^2 \\[2ex]
&& \mu_{n+1 \vert n} &= a \mu_{n} + b u_n \\
&& \sigma_{n+1 \vert n}^2 &= a^2 \sigma_{n}^2 + \sigma_w^2
\end{align}

```
%%capture
mu_n = 1.0
sigma_n = 0.5

# parameters to be animated
a = 1.0
bu_n = 0.1
sigma_w = 0.4

xx = np.linspace(-5, 5, 1000)
yyxn = normal_pdf(xx, mu_n, sigma_n)
yyaxn = normal_pdf(xx, a*mu_n, np.abs(a)*sigma_n)
yybun = normal_pdf(xx, a*mu_n + bu_n, np.abs(a)*sigma_n)
yyw = normal_pdf(xx, a*mu_n + bu_n, sigma_w)
yyxnp1 = normal_pdf(xx, a*mu_n + bu_n, np.sqrt(a**2 * sigma_n**2 + sigma_w**2))

fig01 = plt.figure()
ax01 = fig01.add_subplot(1, 1, 1)
line01_xn, = ax01.plot(xx, yyxn, '--', label="$p(x_n)$")
line01_axn, = ax01.plot(xx, yyaxn, '-', label="$p(a x_n)$", alpha=0.5)
line01_bun, = ax01.plot(xx, yybun, '-', label="$p(a x_n + b u_n)$", alpha=0.5)
#line01_w, = ax01.plot(xx, yyw, '--', label="$p(w_n)$ shifted")
line01_xnp1, = ax01.plot(xx, yyxnp1, label="$p(x_{n+1})$")
ax01.set_xlim(-5, 5)
ax01.set_ylim(0.0, 1.2*max(np.amax(yyxn), np.amax(yyxnp1), np.amax(yyaxn)))
ax01.legend()

def update01(a, bu_n, sigma_w):
    global xx, ax01, yyw, yyxnp1, yyxn, yyaxn, yybun, line01_w, line01_xnp1, line01_axn, line01_bun, mu_n, sigma_n

    mu_xnp1 = a*mu_n + bu_n
    sigma_xnp1 = np.sqrt(a**2 * sigma_n**2 + sigma_w**2)

    yyw = normal_pdf(xx, mu_xnp1, sigma_w)
    yyxnp1 = normal_pdf(xx, mu_xnp1, sigma_xnp1)
    yyaxn = normal_pdf(xx, a*mu_n, np.abs(a)*sigma_n)
    yybun = normal_pdf(xx, a*mu_n + bu_n, np.abs(a)*sigma_n)

    line01_axn.set_ydata(yyaxn)
    line01_bun.set_ydata(yybun)
    #line01_w.set_ydata(yyw)
    line01_xnp1.set_ydata(yyxnp1)
    ax01.set_ylim(0.0, 1.2*max(np.amax(yyxn), np.amax(yyxnp1), np.amax(yyaxn)))
    #ax02.set_xlabel("$\\mu_Y={:.2f}$, $\\sigma_Y={:.2}$ $\\rightarrow$ $\\mu_Z={:.2}$, $\\sigma_Z={:.2}$".format(mu_Y, sigma_Y, mu_Z, sigma_Z))
    fig01.canvas.draw()

w01_a = widgets.FloatSlider(value=1.0, min=0.0, max=2.0, step=0.1)
w01_bu_n = widgets.FloatSlider(value=0.2, min=-1.0, max=1.0, step=0.1)
w01_sigma_w = widgets.FloatSlider(value=sigma_w, min=0.01, max=1.0, step=0.01)

display(interact_manual(update01, a=w01_a, bu_n=w01_bu_n, sigma_w=w01_sigma_w))
display(fig01)
```

### Update
---
* Reminder: $ z_{n} = c x_n + v_n, \,\,\, v_n \sim N(0, \sigma_v), c \in \mathcal{R}$
* Which gives:
\begin{align}
&& \mu_{n+1 \vert n+1} &= \mu_{n+1 \vert n} + \frac{\sigma_{n+1 \vert n}^2 c}{\sigma_v^2 + c^2 \sigma_{n+1 \vert n}^2}\left(z_{n+1} - c \mu_{n+1 \vert n} \right) \\
&& \sigma_{n+1 \vert n+1}^2 &= \sigma_{n+1 \vert n}^2 - \frac{\sigma_{n+1 \vert n}^4 c^2}{\sigma_v^2 + c^2 \sigma_{n+1 \vert n}^2}
\end{align}
Note: The above formulas are only valid for the specific case discussed.

```
%%capture
def kf_1d_update(mu_np1_n, sigma_np1_n, z_np1, c, sigma_v):
    mu_np1_np1 = mu_np1_n + ((sigma_np1_n**2 * c) / (sigma_v**2 + c**2 * sigma_np1_n**2)) * (z_np1 - c*mu_np1_n)
    sigma_np1_np1 = np.sqrt(sigma_np1_n**2 - ((sigma_np1_n**4 * c**2) / (sigma_v**2 + c**2 * sigma_np1_n**2)))
    return mu_np1_np1, sigma_np1_np1

mu_np1_n = 1.0
sigma_np1_n = 1.0

# parameters to be animated
z_np1 = 1.5
c = 1.0
sigma_v = 0.8

mu_np1_np1, sigma_np1_np1 = kf_1d_update(mu_np1_n, sigma_np1_n, z_np1, c, sigma_v)

xx = np.linspace(-5, 5, 1000)
yy_np1_n = normal_pdf(xx, mu_np1_n, sigma_np1_n)
yy_np1_np1 = normal_pdf(xx, mu_np1_np1, sigma_np1_np1)
yy_only_z = normal_pdf(xx, z_np1/c, sigma_v/c)

fig02 = plt.figure()
ax02 = fig02.add_subplot(1, 1, 1)
line02_np1_n, = ax02.plot(xx, yy_np1_n, '--', label=r"$p(x_{n+1 \vert n})$")
line02_np1_np1, = ax02.plot(xx, yy_np1_np1, '-', label=r"$p(x_{n+1 \vert n+1})$")
line02_only_z, = ax02.plot(xx, yy_only_z, '-', label="only measurement")
ax02.set_xlim(-5, 5)
ax02.set_ylim(0.0, 1.2*max(np.amax(yy_np1_n), np.amax(yy_np1_np1)))
ax02.legend()
def update02(z_np1, c, sigma_v):
    global xx, ax02, yy_np1_np1, yy_np1_n, yy_only_z, mu_np1_n, sigma_np1_n

    mu_np1_np1, sigma_np1_np1 = kf_1d_update(mu_np1_n, sigma_np1_n, z_np1, c, sigma_v)
    yy_np1_np1 = normal_pdf(xx, mu_np1_np1, sigma_np1_np1)
    yy_only_z = normal_pdf(xx, z_np1/c, sigma_v/c)

    line02_np1_np1.set_ydata(yy_np1_np1)
    line02_only_z.set_ydata(yy_only_z)
    ax02.set_ylim(0.0, 1.2*max(np.amax(yy_np1_n), np.amax(yy_np1_np1)))
    fig02.canvas.draw()

w02_znp1 = widgets.FloatSlider(value=z_np1, min=-1.5, max=2.5, step=0.1)
w02_c = widgets.FloatSlider(value=c, min=0.1, max=2.0, step=0.1)
w02_sigma_v = widgets.FloatSlider(value=sigma_v, min=0.01, max=2.0, step=0.01)

display(interact_manual(update02, z_np1=w02_znp1, c=w02_c, sigma_v=w02_sigma_v))
display(fig02)
```
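Putting the two steps together, the scalar predict and update formulas above can be run as a tiny filter loop. This is a sketch with illustrative names (`predict_1d`, `update_1d`) and made-up measurements, not part of the original notebook:

```python
import numpy as np

def predict_1d(mu, sigma, a, b_u, sigma_w):
    # mu_{n+1|n} = a*mu_n + b*u_n ;  sigma_{n+1|n}^2 = a^2*sigma_n^2 + sigma_w^2
    return a * mu + b_u, np.sqrt(a**2 * sigma**2 + sigma_w**2)

def update_1d(mu, sigma, z, c, sigma_v):
    # scalar Kalman gain
    k = (sigma**2 * c) / (sigma_v**2 + c**2 * sigma**2)
    # same update formulas as above, written via the gain k
    return mu + k * (z - c * mu), np.sqrt(sigma**2 - k * c * sigma**2)

mu, sigma = 0.0, 1.0                  # prior estimate
for z in [0.9, 1.1, 1.0]:             # made-up noisy height measurements
    mu, sigma = predict_1d(mu, sigma, a=1.0, b_u=0.0, sigma_w=0.1)
    mu, sigma = update_1d(mu, sigma, z, c=1.0, sigma_v=0.5)

print(mu, sigma)  # the estimate moves toward ~1.0 while the uncertainty shrinks
```

Each update can only shrink the variance, while each predict inflates it by $\sigma_w^2$; that inflation is what keeps the filter responsive to new measurements.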
```
import tensorflow as tf
import os
```

Tensorflow distinguishes between saving/restoring the current values of all the variables in a graph and saving/restoring the actual graph structure. To restore the graph, you are free to use either Tensorflow's functions or just call your piece of code again, which built the graph in the first place. When defining the graph, you should also think about which variables/ops should be retrievable, and how, once the graph has been saved and restored.

## [MetaGraph Basic concept](https://www.tensorflow.org/api_guides/python/meta_graph#top_of_page)

### Meta graph
This is a protocol buffer which saves the complete Tensorflow graph; i.e. all variables, operations, collections etc. This file has a .meta extension.

`mysaver-9900.meta`

### Checkpoint file
This is a binary file which contains all the values of the weights, biases, gradients and all the other variables saved. This file has an extension .ckpt. However, Tensorflow has changed this from version 0.11. Now, instead of a single .ckpt file, we have two files:

`mysaver-9900.data-00000-of-00001`

`mysaver-9900.index`

Along with this, Tensorflow also has a file named checkpoint which simply keeps a record of the latest checkpoint files saved.

```
model_checkpoint_path: "mysaver-9000"
all_model_checkpoint_paths: "mysaver-7000"
all_model_checkpoint_paths: "mysaver-8000"
all_model_checkpoint_paths: "mysaver-9000"
```

### Saving a Tensorflow model as MetaGraph
First, let's define a simple graph

```
checkpoint_dir = "mysaver"
!ls -l

# first create a simple graph
graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[], name='input')
    y = tf.Variable(initial_value=0, dtype=tf.float32, name="y_variable")
    update_y = y.assign(x)
    saver = tf.train.Saver(max_to_keep=3)
    init_op = tf.global_variables_initializer()
```

Let's save it every 4000 iterations

```
sess = tf.Session(graph=graph)
sess.run(init_op)
for i in range(1, 10000):
    y_result = sess.run(update_y, feed_dict={x: i})
    if i % 4000 == 0:
        saver.save(sess, checkpoint_dir, global_step=i)

!ls -lt
```

These are the generated files:

```shell
checkpoint
mysaver-4000.data-00000-of-00001
mysaver-4000.index
mysaver-4000.meta
mysaver-8000.data-00000-of-00001
mysaver-8000.index
mysaver-8000.meta
```

```
# check the state of this session
sess.run(y)
```

## Some configuration for the saver
* specify how many checkpoint files to keep, or the frequency of saving
    * max_to_keep
    * keep_checkpoint_every_n_hours
* specify a subset of variables for saving
    * var_list
* etc. -- check the [document](https://www.tensorflow.org/api_docs/python/tf/train/Saver#top_of_page)

### Restore
We saved the network in the .meta file; we can use `tf.train.import_meta_graph` to restore it.
* `tf.train.import_meta_graph` appends the network defined in the .meta file to the current graph.
* This creates the network for you, but we still need to load the values of the parameters from the checkpoint file with `restore`.

```
# first, reset the default graph
tf.reset_default_graph()

restore_graph = tf.Graph()
with tf.Session(graph=restore_graph) as restore_sess:
    restore_saver = tf.train.import_meta_graph('mysaver-8000.meta')
    restore_saver.restore(restore_sess, tf.train.latest_checkpoint('./'))
    print(restore_sess.run("y_variable:0"))
```

Because the latest checkpoint was saved at the 8000th iteration, the value of Tensor(y_variable) is 8000.

## SavedModel
SavedModel is a language-neutral, recoverable, hermetic serialization format.
### Building a SavedModel

Class SavedModelBuilder
* The SavedModelBuilder class provides functionality to build a SavedModel protocol buffer. Specifically, this allows multiple meta graphs to be saved as part of a single language-neutral SavedModel, while sharing variables and assets.
* To build a SavedModel, the first meta graph must be saved with variables. Subsequent meta graphs will simply be saved with their graph definitions.

**This means, when you write code:**

```python
# first
SavedModelBuilder.add_meta_graph_and_variables(...)  # adds the current meta graph to the SavedModel and saves variables
# then
SavedModelBuilder.add_meta_graph(...)  # adds the current meta graph to the SavedModel
```

#### Arguments explanation
* tags: The set of tags with which to save the meta graph. The tags provide a means to identify the specific meta graph to load and restore, along with the shared set of variables and assets.
* signature_def_map: The map of signature defs to add to the meta graph def.

### Loading a SavedModel
[Module: tf.saved_model.loader](https://www.tensorflow.org/api_docs/python/tf/saved_model/loader)

```python
...
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

with tf.Session(graph=tf.Graph()) as sess:
    ...
    builder.add_meta_graph_and_variables(sess,
                                         ["foo-tag"],
                                         signature_def_map=foo_signatures,
                                         assets_collection=foo_assets)
...

with tf.Session(graph=tf.Graph()) as sess:
    ...
    builder.add_meta_graph(["bar-tag", "baz-tag"],
                           assets_collection=bar_baz_assets)
...
builder.save()

...
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["foo-tag"], export_dir)
    ...
```

We can find an [official example](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/mnist_saved_model.py) here.

### Inspect and execute a SavedModel

Tool: saved_model_cli
* If you installed TensorFlow through a pre-built TensorFlow binary, then the SavedModel CLI is already installed on your system.
* If you built TensorFlow from source code, you must run an additional build command to get saved_model_cli.

#### Overview of commands
The SavedModel CLI supports the following two commands on a MetaGraphDef in a SavedModel:
* show, which shows a computation on a MetaGraphDef in a SavedModel.
* run, which runs a computation on a MetaGraphDef.

##### show command
**Note**: the following commands are executed on the output of this [official example](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/mnist_saved_model.py).

* show the tag-sets

```shell
$ saved_model_cli show --dir .
The given SavedModel contains the following tag-sets:
serve
```

* show available `SignatureDef` keys in a `MetaGraphDef`

```shell
$ saved_model_cli show --dir . --tag_set serve
SignatureDef key: "predict_images"
SignatureDef key: "serving_default"
```

* show all inputs and outputs TensorInfo for a specific `SignatureDef`

```shell
$ saved_model_cli show --dir . --tag_set serve --signature_def predict_images
The given SavedModel SignatureDef contains the following input(s):
inputs['images'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 784)
    name: x:0
The given SavedModel SignatureDef contains the following output(s):
outputs['scores'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 10)
    name: y:0
Method name is: tensorflow/serving/predict
```

* show all available information in the SavedModel

```shell
$ saved_model_cli show --dir . --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['predict_images']:
The given SavedModel SignatureDef contains the following input(s):
inputs['images'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 784)
    name: x:0
The given SavedModel SignatureDef contains the following output(s):
outputs['scores'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 10)
    name: y:0
Method name is: tensorflow/serving/predict

signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
    dtype: DT_STRING
    shape: unknown_rank
    name: tf_example:0
The given SavedModel SignatureDef contains the following output(s):
outputs['classes'] tensor_info:
    dtype: DT_STRING
    shape: (-1, 10)
    name: index_to_string_Lookup:0
outputs['scores'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 10)
    name: TopKV2:0
Method name is: tensorflow/serving/classify
```

### A way of showing the model visually

```
from classify_image import maybe_download_and_extract, create_graph

maybe_download_and_extract()
imagenet_graphdef = create_graph()

import numpy as np
from IPython.display import clear_output, Image, display, HTML

def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
    strip_def = tf.GraphDef()
    for n0 in graph_def.node:
        n = strip_def.node.add()
        n.MergeFrom(n0)
        if n.op == 'Const':
            tensor = n.attr['value'].tensor
            size = len(tensor.tensor_content)
            if size > max_const_size:
                tensor.tensor_content = tf.compat.as_bytes("<stripped %d bytes>" % size)
    return strip_def

def rename_nodes(graph_def, rename_func):
    res_def = tf.GraphDef()
    for n0 in graph_def.node:
        n = res_def.node.add()
        n.MergeFrom(n0)
        n.name = rename_func(n.name)
        for i, s in enumerate(n.input):
            n.input[i] = rename_func(s) if s[0] != '^' else '^' + rename_func(s[1:])
    return res_def

def show_graph(graph_def, max_const_size=32):
    """Visualize TensorFlow graph."""
    if hasattr(graph_def, 'as_graph_def'):
        graph_def = graph_def.as_graph_def()
    strip_def = strip_consts(graph_def, max_const_size=max_const_size)
    code = """
        <script>
          function load() {{
            document.getElementById("{id}").pbtxt = {data};
          }}
        </script>
        <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
        <div style="height:600px">
          <tf-graph-basic id="{id}"></tf-graph-basic>
        </div>
    """.format(data=repr(str(strip_def)), id='graph' + str(np.random.rand()))

    iframe = """
        <iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
    """.format(code.replace('"', '&quot;'))
    display(HTML(iframe))

show_graph(imagenet_graphdef)
```
# Example: Parts of a Function

## Motivation

Problem statement:

![motivation circuit](partes_de_funcion_ejemplo_clase/circuito_motivacion.png)

$|V_I(jw)|^2*G_I(jw) = |V_O(jw)|^2*G_L$

$G_{I}(jw) = |H(jw)|^2$

With $G_L = 1$.

$|H(jw)|^2 = \frac{1}{81} \frac{(9-w^2)^2}{1 + w^6}$ --Butterworth poles with a transmission zero at $\omega=3$--

$G_{I}(jw) = Real(Y(jw)) = \frac{1}{81} \frac{w^4 - 18 w^2 + 81}{1 + w^6}$

## Problem statement

We have an admittance whose real part is:

$Real(Y(jw)) = \frac{1}{81} \frac{w^4 - 18 w^2 + 81}{1 + w^6}$

Find the admittance $Y(s)$.

## How is $Y(s)$ related to its real part on the $\omega$ axis?

$Y(s) = Even(Y(s)) + Odd(Y(s))$

$Y(s) = M(s) + N(s)$

$M(jw) = Real(Y(jw))$

In this example:

$M(s) = \frac{1}{81} \frac{s^4 + 18 s^2 + 81}{1-s^6}$

We express $M(s)$ in terms of $Y(s)$:

$Y(s) + Y(-s) = M(s) + N(s) + M(s) - N(s)$

$Y(s) + Y(-s) = 2 * M(s)$

$M(s) = \frac{Y(s) + Y(-s)}{2}$

We relate the singularities:

$Y(s) = \frac{A(s)}{B(s)}$

$M(s) = \frac{A(s)*B(-s) + A(-s)*B(s)}{2*B(s)B(-s)}$

## What does $M(s) = \frac{A(s)*B(-s) + A(-s)*B(s)}{2*B(s)B(-s)}$ tell us?

* The poles of $M(s)$ are either poles of $Y(s)$ or poles of $Y(-s)$.

## Computing the poles of $Y(s)$

```
import numpy as np

den = [-1, 0, 0, 0, 0, 0, 1]
roots = np.roots(den)
print(f"Poles of M(s): {roots}")
```

Keeping the left-half-plane poles, $Y(s)$ will be of the form:

$Y(s) = \frac{A(s)}{s^3 + 2*s^2 + 2*s + 1}$

## What degree can the polynomial $A(s)$ be?

$Y(s) = \frac{A(s)}{s^3 + 2s^2 + 2s + 1}$

$M(s) = \frac{1}{81} \frac{s^4 + 18 s^2 + 81}{1 - s^6}$

* The degrees of the numerator and denominator of $Y(s)$ can differ by at most 1 (i.e. the difference can be +1, -1, or 0).
* $A(s)$ must be at least of second degree!!
## Obtaining $A(s)$

$M(s) = \frac{1}{81} \frac{s^4 + 18 s^2 + 81}{1 - s^6}$

$M(s) = \frac{A(-s)*B(s) + A(s)*B(-s)}{2*B(s)*B(-s)}$

$A(-s)*B(s) + A(s)*B(-s) = (m_1(s)-n_1(s))*(m_2(s)+n_2(s)) + (m_1(s)+n_1(s))*(m_2(s)-n_2(s))$

$A(-s)*B(s) + A(s)*B(-s) = 2 * (m_1(s)*m_2(s) - n_1(s)*n_2(s))$

$m_1(s)*m_2(s) - n_1(s)*n_2(s) = s^4 + 18 s^2 + 81$

$(a_2*s^2 + a_0) * (2*s^2 + 1) - a_1 * s * (s^3 + 2*s) = (s^4 + 18 * s^2 + 81) / 81$

---

$(2 * a_2 - a_1) * s^4 + (a_2 - 2 * a_1 + 2 * a_0) * s^2 + a_0 = (s^4 + 18 * s^2 + 81) / 81$

---

This gives us:

$a_0 = 1$

$2 * a_2 - a_1 = \frac{1}{81}$

$a_2 - 2 * a_1 + 2 * a_0 = \frac{18}{81}$

Solving:

$a_0 = 1$

$a_1 = \frac{578}{486} = 1.1893$

$a_2 = \frac{146}{243} = 0.6008$

## We obtained $Y(s)$!!!

$Y(s) = \frac{1}{81} \frac{146/3 * s^2 + 578/6 * s + 81}{s^3 + 2s^2 + 2s + 1}$

```
import matplotlib.pyplot as plt

w = np.logspace(-2, 2)
g_enunciado = (w ** 4 - 18 * (w ** 2) + 81) / (1 + w ** 6) / 81
y_obtenida = (-146/3 * (w ** 2) + 1j * 578/6 * w + 81) / (-1j * (w ** 3) - 2 * (w ** 2) + 2j * w + 1) / 81
g_obtenida = y_obtenida.real

# Plots
fig, ax = plt.subplots()
ax.ticklabel_format(useOffset=False)
ax.set_ylabel('Siemens')
ax.set_xlabel('w')
ax.set_title('G(jw) given')
ax.grid(True)
ax.semilogx(w, g_enunciado)
plt.show()

fig, ax = plt.subplots()
ax.ticklabel_format(useOffset=False)
ax.set_ylabel('Siemens')
ax.set_xlabel('w')
ax.set_title('G(jw) obtained')
ax.grid(True)
ax.semilogx(w, g_obtenida)
plt.show()
```

## Possible Implementation

![possible implementation](partes_de_funcion_ejemplo_clase/posible_sintesis.png)

# Obtaining $F(s)$ from other parts of the function

* From $|F(s)|^2$
    * We already used this method in modern filter theory (e.g. Butterworth, Chebyshev).
* From $Imag(F(s))$
    * Analogous to the method just shown.
* From $\phi(jw)$
    * Useful in phase-shifter design.
* From $\tau(s)$
    * Useful when designing filters with certain delay characteristics.
## From $\phi(j\omega) = ang(F(j\omega))$

$F(j\omega) = |F(j\omega)|*e^{j*\phi(j\omega)}$

$\frac{F(j\omega)}{F(-j\omega)} = e^{2j*\phi(j\omega)}$

$\frac{F(j\omega)}{F(-j\omega)} = \frac{e^{j*\phi(j\omega)}}{e^{-j*\phi(j\omega)}}$

$\frac{F(j\omega)}{F(-j\omega)} = \frac{\cos(\phi(j\omega))+j*\sin(\phi(j\omega))}{\cos(\phi(j\omega))-j*\sin(\phi(j\omega))}$

$\frac{F(j\omega)}{F(-j\omega)} = \frac{1+j*\tan(\phi(j\omega))}{1-j*\tan(\phi(j\omega))}$

$\frac{F(s)}{F(-s)} = \frac{A(s)*B(-s)}{A(-s)*B(s)}$

* The right-half-plane poles of $\frac{F(s)}{F(-s)}$ must be zeros of $F(-s)$ --otherwise $F(s)$ would be unstable--.
* The left-half-plane poles of $\frac{F(s)}{F(-s)}$ can be either poles of $F(s)$ or zeros of $F(-s)$ --our choice--.
* The degree difference between the numerator and denominator of $F(s)$ has to be admissible. For a transfer function, $deg(num) < deg(den)$.

## From $\tau(\omega)$

$\tau(j\omega) = - \frac{d\phi(j\omega)}{d\omega}$

It can be shown that:

$\tau(w) = \sum_i{\frac{\sigma_{z\_i}}{\sigma^2_{z\_i} + (\omega-\omega_{z\_i})^2}} - \sum_k{\frac{\sigma_{p\_k}}{\sigma^2_{p\_k} + (\omega-\omega_{p\_k})^2}}$

Where $s_{z\_i} = \sigma_{z\_i} + j \omega_{z\_i}$ are the zeros of $F(s)$ and $s_{p\_k} = \sigma_{p\_k} + j \omega_{p\_k}$ are the poles.

---

Decomposing $\tau(s)$ into partial fractions, we can reassemble $F(s)$, keeping in mind the following observations:

* If the residue --the numerator-- is negative, it must correspond to a zero of $F(s)$ in the left half-plane.
* If the residue is positive, it can correspond either to a zero of $F(s)$ in the right half-plane or to a pole in the left half-plane.
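As a quick numerical sanity check (a sketch added here, not part of the original class example), one can verify that the real part of the obtained $Y(j\omega)$ reproduces the given $G(j\omega)$ at a few frequencies, including the transmission zero at $\omega = 3$:

```python
import numpy as np

def Y(s):
    # Obtained admittance: Y(s) = (1/81)*(146/3 s^2 + 578/6 s + 81)/(s^3 + 2 s^2 + 2 s + 1)
    return (146/3 * s**2 + 578/6 * s + 81) / (s**3 + 2*s**2 + 2*s + 1) / 81

def G(w):
    # Given real part on the j-omega axis
    return (w**4 - 18 * w**2 + 81) / (1 + w**6) / 81

# Real(Y(jw)) should equal G(w) for every frequency
for w in [0.5, 1.0, 2.0, 3.0]:
    assert np.isclose(Y(1j * w).real, G(w), atol=1e-12)

print("Real(Y(jw)) matches G(w); Re Y(3j) =", abs(round(Y(3j).real, 12)))
```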
## Training Amazon SageMaker models by using the Deep Graph Library with PyTorch backend

The **Amazon SageMaker Python SDK** makes it easy to train Deep Graph Library (DGL) models. In this example, you train a simple graph neural network using the [DMLC DGL API](https://github.com/dmlc/dgl.git) and the [Cora dataset](https://relational.fit.cvut.cz/dataset/CORA). The Cora dataset describes a citation network: it consists of 2,708 scientific publications classified into one of seven classes, linked by 5,429 citations. The task is to train a node classification model on the Cora dataset.

### Setup

Define a few variables that are needed later in the example.

```
import sagemaker
from sagemaker import get_execution_role
from sagemaker.session import Session

# Setup session
sess = sagemaker.Session()

# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket here.
bucket = sess.default_bucket()

# Location to put your custom code.
custom_code_upload_location = 'customcode'

# IAM execution role that gives Amazon SageMaker access to resources in your AWS account.
# You can use the Amazon SageMaker Python SDK to get the role from the notebook environment.
role = get_execution_role()
```

### The training script

The pytorch_gcn.py script provides all the code you need for training an Amazon SageMaker model.

```
!cat pytorch_gcn.py
```

### SageMaker's estimator class

The Amazon SageMaker Estimator allows you to run a single machine in Amazon SageMaker, using CPU or GPU-based instances.

When you create the estimator, pass in the filename of the training script and the name of the IAM execution role. You can also provide a few other parameters. train_instance_count and train_instance_type determine the number and type of Amazon SageMaker instances that are used for the training job.
The hyperparameters parameter is a dictionary of values that is passed to your training script as parameters, so that you can use argparse to parse them. You can see how to access these values in the pytorch_gcn.py script above.

Here, you can directly use the DL Container provided by Amazon SageMaker for training DGL models by specifying the PyTorch framework version (>= 1.3.1) and the Python version (only py3). You can also add a task_tag with value 'DGL' to help track the task.

For this example, choose one ml.p3.2xlarge instance. You can also use a CPU instance such as ml.c4.2xlarge for the CPU image.

```
from sagemaker.pytorch import PyTorch

CODE_PATH = 'pytorch_gcn.py'

account = sess.boto_session.client('sts').get_caller_identity()['Account']
region = sess.boto_session.region_name

params = {}
params['dataset'] = 'cora'
task_tags = [{'Key': 'ML Task', 'Value': 'DGL'}]

estimator = PyTorch(entry_point=CODE_PATH,
                    role=role,
                    train_instance_count=1,
                    train_instance_type='ml.p3.2xlarge',  # or 'ml.c4.2xlarge' for CPU
                    framework_version="1.3.1",
                    py_version='py3',
                    tags=task_tags,
                    hyperparameters=params,
                    sagemaker_session=sess)
```

### Running the Training Job

After you construct the Estimator object, fit it by using Amazon SageMaker. The dataset is automatically downloaded.

```
estimator.fit()
```

## Output

You can get the model training output from the Amazon SageMaker console by searching for the training task named pytorch-gcn and looking for the address of 'S3 model artifact'.
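On the training-script side, the hyperparameters dictionary above arrives as `--key value` command-line flags, which is why pytorch_gcn.py can read them with argparse. A minimal sketch of that parsing (the flag name and default below are assumptions for illustration, not taken from the actual script):

```python
import argparse

def parse_hyperparameters(argv=None):
    # SageMaker passes each entry of `hyperparameters` as a '--key value' flag
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset", type=str, default="cora",
                        help="name of the dataset to train on")
    return parser.parse_args(argv)

args = parse_hyperparameters(["--dataset", "cora"])
print(args.dataset)  # → cora
```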
```
import os

import numpy as np
import tensorflow as tf

lexical_model = '/home/disooqi/qmdis-post-processor-full/arabic_dialect_identification/lexical/model'
acoustic_model = '/home/disooqi/qmdis-post-processor-full/arabic_dialect_identification/acoustic/model'

suwan_model = os.path.join(lexical_model, 'model60400.ckpt')
model_path = os.path.join(acoustic_model, 'model910000.ckpt')

class siamese:
    # Create model
    def __init__(self, input_dim):
        self.x1 = tf.placeholder(tf.float32, [None, input_dim])
        self.x2 = tf.placeholder(tf.float32, [None, input_dim])

        with tf.variable_scope("siamese") as scope:
            self.a1, self.b1, self.o1 = self.network(self.x1)
            scope.reuse_variables()
            self.a1, self.b2, self.o2 = self.network(self.x2)

        # Create loss
        self.y_ = tf.placeholder(tf.float32, [None])
        self.loss = self.loss_with_cds()

    def network(self, x):
        weights = []
        kernel_size = 150
        stride = 18
        depth = 40
        conv1 = self.conv_layer(x, kernel_size, stride, depth, 'conv1')
        conv1r = tf.nn.relu(conv1)
        n_prev_weight = int(x.get_shape()[1])
        conv1_d = tf.reshape(conv1r, [-1, int(round(n_prev_weight/stride + 0.5) * depth)])
        fc1 = self.fc_layer(conv1_d, 1500, "fc1")
        ac1 = tf.nn.relu(fc1)
        fc2 = self.fc_layer(ac1, 600, "fc2")
        ac2 = tf.nn.relu(fc2)
        fc3 = self.fc_layer(ac2, 200, "fc3")
        return fc1, fc2, fc3

    def fc_layer(self, bottom, n_weight, name):
        # print(bottom.get_shape())
        n_prev_weight = bottom.get_shape()[1]
        W = tf.get_variable(name+'W', dtype=tf.float32, shape=[n_prev_weight, n_weight],
                            initializer=tf.contrib.layers.xavier_initializer())
        b = tf.get_variable(name+'b', dtype=tf.float32,
                            initializer=tf.random_uniform([n_weight], -0.001, 0.001, dtype=tf.float32))
        fc = tf.nn.bias_add(tf.matmul(bottom, W), b)
        return fc

    def conv_layer(self, bottom, kernel_size, stride, depth, name):
        n_prev_weight = int(bottom.get_shape()[1])
        num_channels = 1  # for 1-dimensional input
        inputlayer = tf.reshape(bottom, [-1, n_prev_weight, 1])
        initer = tf.truncated_normal_initializer(stddev=0.1)
        W = tf.get_variable(name+'W', dtype=tf.float32, shape=[kernel_size, num_channels, depth],
                            initializer=tf.contrib.layers.xavier_initializer())
        b = tf.get_variable(name+'b', dtype=tf.float32,
                            initializer=tf.constant(0.001, shape=[depth*num_channels], dtype=tf.float32))
        conv = tf.nn.bias_add(tf.nn.conv1d(inputlayer, W, stride, padding='SAME'), b)
        return conv

    def loss_with_cds(self):
        labels_t = self.y_
        cds = tf.reduce_sum(tf.multiply(self.o1, self.o2), 1)
        eucd2 = tf.reduce_mean(tf.pow(tf.subtract(labels_t, cds), 2))
        eucd = tf.sqrt(eucd2, name="eucd")
        return eucd

graph_1 = tf.Graph()
with graph_1.as_default():
    input_dim = 41657
    siamese = siamese(input_dim)
    global_step = tf.Variable(0, trainable=False)
    learning_rate = tf.train.exponential_decay(0.01, global_step, 5000, 0.99, staircase=True)
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(siamese.loss, global_step=global_step)
    saver = tf.train.Saver()

sess = tf.Session()
sess.run(tf.global_variables_initializer())
saver.restore(sess, suwan_model)

x = tf.placeholder(tf.float32, [None, None, 40])
y = tf.placeholder(tf.int32, [None])
s = tf.placeholder(tf.int32, [None, 2])
softmax_num = 5

class nn:
    # Create model
    def __init__(self, x1, y_, y_string, shapes_batch, softmax_num):
        self.ea, self.eb, self.o1, self.res1, self.conv, self.ac1, self.ac2 = self.net(x1, shapes_batch, softmax_num)
        # Create loss
        self.loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_, logits=self.o1))
        self.label = y_
        self.shape = shapes_batch
        self.true_length = x1
        self.label_string = y_string

    def net(self, x, shapes_batch, softmax_num):
        shape_list = shapes_batch[:, 0]
        featdim = 40  # channel
        weights = []
        kernel_size = 5
        stride = 1
        depth = 500
        shape_list = shape_list / stride
        conv1 = self.conv_layer(x, kernel_size, featdim, stride, depth, 'conv1', shape_list)
        conv1r = tf.nn.relu(conv1)

        featdim = depth  # channel
        weights = []
        kernel_size = 7
        stride = 2
        depth = 500
        shape_list = shape_list / stride
        conv2 = self.conv_layer(conv1r, kernel_size, featdim, stride, depth, 'conv2', shape_list)
        conv2r=
tf.nn.relu(conv2) featdim = depth #channel weights = [] kernel_size =1 stride = 1 depth = 500 shape_list = shape_list/stride conv3 = self.conv_layer(conv2r,kernel_size,featdim,stride,depth,'conv3',shape_list) conv3r= tf.nn.relu(conv3) featdim = depth #channel weights = [] kernel_size =1 stride = 1 depth = 3000 shape_list = shape_list/stride conv4 = self.conv_layer(conv3r,kernel_size,featdim,stride,depth,'conv4',shape_list) conv4r= tf.nn.relu(conv4) print ('Hi I am conv1', conv1) shape_list = tf.cast(shape_list, tf.float32) shape_list = tf.reshape(shape_list,[-1,1,1]) mean = tf.reduce_sum(conv4r,1,keep_dims=True)/shape_list res1=tf.squeeze(mean,axis=1) fc1 = self.fc_layer(res1,1500,"fc1") ac1 = tf.nn.relu(fc1) fc2 = self.fc_layer(ac1,600,"fc2") ac2 = tf.nn.relu(fc2) fc3 = self.fc_layer(ac2,softmax_num,"fc3") return fc1, fc2, fc3,res1,conv1r,ac1,ac2 def xavier_init(self,n_inputs, n_outputs, uniform=True): if uniform: init_range = np.sqrt(6.0 / (n_inputs + n_outputs)) return tf.random_uniform_initializer(-init_range, init_range) else: stddev = np.sqrt(3.0 / (n_inputs + n_outputs)) return tf.truncated_normal_initializer(stddev=stddev) def fc_layer(self, bottom, n_weight, name): print(bottom.get_shape()) assert len(bottom.get_shape()) == 2 n_prev_weight = bottom.get_shape()[1] initer = self.xavier_init(int(n_prev_weight),n_weight) W = tf.get_variable(name+'W', dtype=tf.float32, shape=[n_prev_weight, n_weight], initializer=initer) b = tf.get_variable(name+'b', dtype=tf.float32, initializer=tf.random_uniform([n_weight],-0.001,0.001, dtype=tf.float32)) fc = tf.nn.bias_add(tf.matmul(bottom, W), b) return fc def conv_layer(self, bottom, kernel_size,num_channels, stride, depth, name, shape_list): # n_prev_weight = int(bottom.get_shape()[1]) n_prev_weight = tf.shape(bottom)[1] inputlayer=bottom initer = tf.truncated_normal_initializer(stddev=0.1) W = tf.get_variable(name+'W', dtype=tf.float32, shape=[kernel_size, num_channels, depth], 
initializer=tf.contrib.layers.xavier_initializer()) b = tf.get_variable(name+'b', dtype=tf.float32, initializer=tf.constant(0.001, shape=[depth], dtype=tf.float32)) conv = ( tf.nn.bias_add( tf.nn.conv1d(inputlayer, W, stride, padding='SAME'), b)) mask = tf.sequence_mask(shape_list,tf.shape(conv)[1]) # make mask with batch x frame size mask = tf.where(mask, tf.ones_like(mask,dtype=tf.float32), tf.zeros_like(mask,dtype=tf.float32)) mask=tf.tile(mask, tf.stack([tf.shape(conv)[2],1])) #replicate make with depth size mask=tf.reshape(mask,[tf.shape(conv)[2], tf.shape(conv)[0], -1]) mask = tf.transpose(mask,[1, 2, 0]) print('Hi I am mask', mask) conv=tf.multiply(conv,mask) return conv emnet_validation = nn(x,y,y,s,softmax_num) sess2 = tf.Session() saver2 = tf.train.Saver() sess2.run(tf.global_variables_initializer()) saver2.restore(sess2, model_path) ```
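The `loss_with_cds` objective defined above is the root-mean-square error between the pair labels and the per-pair dot product of the two branch embeddings (which acts as a cosine similarity when the embeddings are unit-normalized). A standalone NumPy sketch of the same computation, outside the TF graph (the function name is ours, for illustration):

```python
import numpy as np

def loss_with_cds(o1, o2, labels):
    """RMSE between target labels and the per-pair dot-product similarity
    of the two branch embeddings, mirroring the TF `loss_with_cds` above."""
    cds = np.sum(o1 * o2, axis=1)            # one similarity score per pair
    return np.sqrt(np.mean((labels - cds) ** 2))

# toy check: identical unit vectors with label 1 give zero loss
o = np.array([[1.0, 0.0], [0.0, 1.0]])
print(loss_with_cds(o, o, np.array([1.0, 1.0])))  # -> 0.0
```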
### TorchText with our custom data

From the previous notebook we have been using the `IMDB` dataset for sentiment analysis classification. In the real world we will want to work with our own dataset. In this notebook we are going to cover that with the TorchText helper functions which we have been using all along. We:

1. Define the Fields
2. Load the Dataset
3. Create the Splits

Recall:

```
TEXT = data.Field()
LABEL = data.LabelField()

train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
train_data, valid_data = train_data.split()
```

TorchText is capable of reading 3 file formats:

1. json -> JavaScript Object Notation
2. csv -> comma-separated values
3. tsv -> tab-separated values

**`Json` is the best; we will explain why later on.**

We have files in the data folder. The `train.json` has the following format.

```json
{"name": "Jocko", "quote": "You must own everything in your world. There is no one else to blame.", "score": 1}
```

```
from torchtext.legacy import data, datasets
import torch
```

### Define the Fields.

```
tokenizer = lambda x: x.split()

QOUTE = data.Field(tokenize=tokenizer, lower=True)
LABEL = data.LabelField(dtype=torch.float32)
```

Next, we must tell TorchText which fields apply to which elements of the json object. For `json` data, we must create a dictionary where:

* the key matches the key of the json object
* the value is a tuple where:
  * the first element becomes the batch object's attribute name
  * the second element is the name of the Field

**What do we mean when we say "becomes the batch object's attribute name"?**

Recall in the previous notebooks where we accessed the `TEXT` and `LABEL` fields in the `train/evaluation` loop by using `batch.text` and `batch.label`. This is because TorchText sets up the batch object to have a `text` and a `label` attribute, each being a tensor containing either the text or the label.

**Take Home Notes**:

1.
The order of the keys in the fields dictionary does not matter, as long as its keys match the json data keys.
2. The Field name does not have to match the key in the json object, e.g. you can use `LABEL` for the "score" field.
3. When dealing with json data, not all of the keys have to be used, e.g. we did not use the "name" field.
4. If the value of a json field is a string then the Field's tokenization is applied (the default is to split the string on spaces), however if the value is a list then no tokenization is applied. Usually it is a good idea for the data to already be tokenized into a list; this saves time as you don't have to wait for TorchText to do it.
5. The values of the json fields do not all have to be the same type. Some examples can have their "quote" as a string, and some as a list. The tokenization will only be applied to the ones with their "quote" as a string.
6. If you are using a json field, every single example must have an instance of that field.

```
fields = {
    'quote': ('q', QOUTE),
    'score': ('s', LABEL)
}
```

Now, in a training loop we can iterate over the data iterator and access the quote via `batch.q`, and the score/label via `batch.s`.

We then create our datasets (`train_data` and `test_data`) with the `TabularDataset.splits` function. The `path` argument specifies the **top level folder** (in our case `data`) common to both datasets, and the `train` and `test` arguments specify the filename of each dataset, e.g. here the train dataset is located at `data/train.json`. **We can also specify a validation set if we have a file containing validation data.** We tell the function we are using json data, and pass in our fields dictionary defined previously.

```
train_data, test_data = data.TabularDataset.splits(
    path = 'data',
    train = "train.json",
    test = "test.json",
    format = "json",
    fields = fields
)
```

We can then view an example to make sure it has worked correctly.
**Notice how the field names (`q` and `s`) match up with what was defined in the fields dictionary.**

```
print(vars(train_data[0]))
```

We can now use `train_data`, `test_data` and `valid_data` (if available) to build a `vocabulary` and create `iterators`, as in the previous notebooks. We can access all attributes by using `batch.s` and `batch.q`.

### Reading CSV/TSV

`csv` and `tsv` are very similar, except `csv` has elements separated by commas and `tsv` by tabs.

Using the same example above, our tsv data will be in the form of:

```tsv
name	quote	score
Jocko	You must own everything in your world. There is no one else to blame.	1
Bruce Lee	Do not pray for an easy life, pray for the strength to endure a difficult one.	1
Potato guy	Stand tall, and rice like a potato!	0
```

That is, on each row the elements are separated by tabs and we have one example per row. The first row is usually a header (i.e. the name of each of the columns), but sometimes there is no header.

**You cannot have lists within tsv or csv data.**

The way the fields are defined is a bit different to json. We now use a list of tuples, where each element is also a tuple. The first element of these inner tuples will become the batch object's attribute name, the second element is the Field name.

Unlike the json data, the tuples have to be in the same order that they are within the tsv data. Due to this, when skipping a column of data a tuple of `None`s needs to be used. However, if you only wanted to use the name and quote columns, you could just use two tuples, as they are the first two columns.

We change our `TabularDataset` to read the correct `.csv`/`.tsv` files, and change the `format` argument accordingly. If your data has a header, which ours does, it must be skipped by passing `skip_header = True`. If not, TorchText will think the header is an example.
By default, `skip_header` will be `False`.

```
fields = [(None, None), ('q', QOUTE), ('s', LABEL)]

train_data, test_data = data.TabularDataset.splits(
    path = 'data',
    train = 'train.csv',
    test = 'test.csv',
    format = 'csv',
    fields = fields,
    skip_header = True
)
```

### If you decide to specify field names as a dictionary like before, you can do it as follows:

```python
fields = {
    'quote': ('q', QOUTE),
    'score': ('s', LABEL)
}

train_data, test_data = data.TabularDataset.splits(
    path = 'data',
    train = 'train.csv',
    test = 'test.csv',
    format = 'csv',
    fields = fields,
    skip_header = False  # should be False: the header is used to map the field names
)
```

```
print(vars(train_data[0]))
```

### Why `JSON` over `CSV/TSV`?

1. Your csv or tsv data cannot store lists. This means the data cannot already be tokenized, thus every time you run your Python script that reads this data via TorchText, it has to be tokenized. Using advanced tokenizers, such as the spaCy tokenizer, takes a non-negligible amount of time. Thus, it is better to tokenize your datasets and store them in the json lines format.
2. If tabs appear in your tsv data, or commas appear in your csv data, TorchText will think they are delimiters between columns. This will cause your data to be parsed incorrectly. Worst of all, TorchText will not alert you to this, as it cannot tell the difference between a tab/comma in a field and a tab/comma as a delimiter. As json data is essentially a dictionary, you access the data within the fields via its key, so you do not have to worry about "surprise" delimiters.

### Building the Vocabularies.

```
QOUTE.build_vocab(train_data)
LABEL.build_vocab(train_data)

print(QOUTE.vocab.itos)
print(LABEL.vocab.stoi)
print(LABEL.vocab.itos)

QOUTE.vocab.freqs.most_common(2)
```

### Iterating over a dataset using the `BucketIterator`.

* Then, we can create the iterators after defining our `batch size` and `device`.
* By default, the `train` data is `shuffled` each epoch, but the `validation/test` data is `sorted`.
However, TorchText doesn't know what to use to sort our data, and it would throw an error if we don't tell it. There are two ways to handle this: you can either tell the iterator not to sort the `validation/test` data by passing `sort = False`, or you can tell it how to sort the data by passing a `sort_key`. **A sort key is a function that returns a key on which to sort the data.** For example:

```py
lambda x: x.q
```

will sort the examples by their `q` attribute, i.e. their quote. Ideally, you want to use a sort key, as the `BucketIterator` will then be able to sort your examples and minimize the amount of padding within each batch.

We can then iterate over our iterator to get batches of data. **Note how by default TorchText has the batch dimension second.**

```
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(DEVICE)

BATCH_SIZE = 1
train_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, test_data),
    device = DEVICE,
    batch_size = BATCH_SIZE,
    sort_key = lambda x: x.q,
)
```

### Train Data.

```
for batch in train_iterator:
    print(batch, batch.q)
```

### Test Data

```
for batch in test_iterator:
    print(batch.q)
```

### That's how we can load our own dataset using `TorchText`

### Credits.

* [bentrevett](https://github.com/bentrevett/pytorch-sentiment-analysis/blob/master/A%20-%20Using%20TorchText%20with%20Your%20Own%20Datasets.ipynb)
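Returning to the "Why `JSON` over `CSV/TSV`" point above: because json fields may hold lists, you can tokenize once and store the result in json-lines form, so TorchText skips tokenization on every later run. A minimal sketch (the `train_tokenized.json` filename is just an illustration):

```python
import json

# pre-tokenize each example once and store it as one json object per line
examples = [
    {"quote": "You must own everything in your world.", "score": 1},
    {"quote": "Stand tall, and rice like a potato!", "score": 0},
]

with open("train_tokenized.json", "w") as f:
    for ex in examples:
        row = {"quote": ex["quote"].split(), "score": ex["score"]}  # list, not string
        f.write(json.dumps(row) + "\n")

# TorchText applies no tokenization to fields whose values are already lists
with open("train_tokenized.json") as f:
    first = json.loads(f.readline())
print(first["quote"][:3])  # -> ['You', 'must', 'own']
```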
# Face detection for video Images of detected faces have format `frameXfaceY.jpg`, where `X` represents the Xth frame and `Y` the Yth face in Xth frame. ``` import os import cv2 import numpy as np from matplotlib import pyplot as plt import tensorflow as tf from keras import backend as K from pathlib import PurePath, Path from moviepy.editor import VideoFileClip from umeyama import umeyama import mtcnn_detect_face ``` Create MTCNN and its forward pass functions ``` def create_mtcnn(sess, model_path): if not model_path: model_path,_ = os.path.split(os.path.realpath(__file__)) with tf.variable_scope('pnet2'): data = tf.placeholder(tf.float32, (None,None,None,3), 'input') pnet = mtcnn_detect_face.PNet({'data':data}) pnet.load(os.path.join(model_path, 'det1.npy'), sess) with tf.variable_scope('rnet2'): data = tf.placeholder(tf.float32, (None,24,24,3), 'input') rnet = mtcnn_detect_face.RNet({'data':data}) rnet.load(os.path.join(model_path, 'det2.npy'), sess) with tf.variable_scope('onet2'): data = tf.placeholder(tf.float32, (None,48,48,3), 'input') onet = mtcnn_detect_face.ONet({'data':data}) onet.load(os.path.join(model_path, 'det3.npy'), sess) return pnet, rnet, onet WEIGHTS_PATH = "./mtcnn_weights/" sess = K.get_session() with sess.as_default(): global pnet, rnet, onet pnet, rnet, onet = create_mtcnn(sess, WEIGHTS_PATH) # global pnet, rnet, onet pnet = K.function([pnet.layers['data']],[pnet.layers['conv4-2'], pnet.layers['prob1']]) rnet = K.function([rnet.layers['data']],[rnet.layers['conv5-2'], rnet.layers['prob1']]) onet = K.function([onet.layers['data']],[onet.layers['conv6-2'], onet.layers['conv6-3'], onet.layers['prob1']]) ``` Create folder where images will be saved to ``` Path(f"faces/aligned_faces").mkdir(parents=True, exist_ok=True) Path(f"faces/raw_faces").mkdir(parents=True, exist_ok=True) Path(f"faces/binary_masks_eyes").mkdir(parents=True, exist_ok=True) ``` Functions for video processing and face alignment ``` def get_src_landmarks(x0, x1, y0, y1, 
pnts): """ x0, x1, y0, y1: (smoothed) bbox coord. pnts: landmarks predicted by MTCNN """ src_landmarks = [(int(pnts[i+5][0]-x0), int(pnts[i][0]-y0)) for i in range(5)] return src_landmarks def get_tar_landmarks(img): """ img: detected face image """ ratio_landmarks = [ (0.31339227236234224, 0.3259269274198092), (0.31075140146108776, 0.7228453709528997), (0.5523683107816256, 0.5187296867370605), (0.7752419985257663, 0.37262483743520886), (0.7759613623985877, 0.6772957581740159) ] img_size = img.shape tar_landmarks = [(int(xy[0]*img_size[0]), int(xy[1]*img_size[1])) for xy in ratio_landmarks] return tar_landmarks def landmarks_match_mtcnn(src_im, src_landmarks, tar_landmarks): """ umeyama(src, dst, estimate_scale) landmarks coord. for umeyama should be (width, height) or (y, x) """ src_size = src_im.shape src_tmp = [(int(xy[1]), int(xy[0])) for xy in src_landmarks] tar_tmp = [(int(xy[1]), int(xy[0])) for xy in tar_landmarks] M = umeyama(np.array(src_tmp), np.array(tar_tmp), True)[0:2] result = cv2.warpAffine(src_im, M, (src_size[1], src_size[0]), borderMode=cv2.BORDER_REPLICATE) return result def process_mtcnn_bbox(bboxes, im_shape): """ output bbox coordinate of MTCNN is (y0, x0, y1, x1) Here we process the bbox coord. 
to a square bbox with ordering (x0, y1, x1, y0) """ for i, bbox in enumerate(bboxes): y0, x0, y1, x1 = bboxes[i,0:4] w, h = int(y1 - y0), int(x1 - x0) length = (w + h)/2 center = (int((x1+x0)/2),int((y1+y0)/2)) new_x0 = np.max([0, (center[0]-length//2)])#.astype(np.int32) new_x1 = np.min([im_shape[0], (center[0]+length//2)])#.astype(np.int32) new_y0 = np.max([0, (center[1]-length//2)])#.astype(np.int32) new_y1 = np.min([im_shape[1], (center[1]+length//2)])#.astype(np.int32) bboxes[i,0:4] = new_x0, new_y1, new_x1, new_y0 return bboxes def process_video(input_img): global frames, save_interval global pnet, rnet, onet minsize = 30 # minimum size of face detec_threshold = 0.7 threshold = [0.6, 0.7, detec_threshold] # three steps's threshold factor = 0.709 # scale factor frames += 1 if frames % save_interval == 0: faces, pnts = mtcnn_detect_face.detect_face( input_img, minsize, pnet, rnet, onet, threshold, factor) faces = process_mtcnn_bbox(faces, input_img.shape) for idx, (x0, y1, x1, y0, conf_score) in enumerate(faces): det_face_im = input_img[int(x0):int(x1),int(y0):int(y1),:] # get src/tar landmarks src_landmarks = get_src_landmarks(x0, x1, y0, y1, pnts) tar_landmarks = get_tar_landmarks(det_face_im) # align detected face aligned_det_face_im = landmarks_match_mtcnn( det_face_im, src_landmarks, tar_landmarks) fname = f"./faces/aligned_faces/frame{frames}face{str(idx)}.jpg" plt.imsave(fname, aligned_det_face_im, format="jpg") fname = f"./faces/raw_faces/frame{frames}face{str(idx)}.jpg" plt.imsave(fname, det_face_im, format="jpg") bm = np.zeros_like(aligned_det_face_im) h, w = bm.shape[:2] bm[int(src_landmarks[0][0]-h/15):int(src_landmarks[0][0]+h/15), int(src_landmarks[0][1]-w/8):int(src_landmarks[0][1]+w/8),:] = 255 bm[int(src_landmarks[1][0]-h/15):int(src_landmarks[1][0]+h/15), int(src_landmarks[1][1]-w/8):int(src_landmarks[1][1]+w/8),:] = 255 bm = landmarks_match_mtcnn(bm, src_landmarks, tar_landmarks) fname = 
f"./faces/binary_masks_eyes/frame{frames}face{str(idx)}.jpg" plt.imsave(fname, bm, format="jpg") return np.zeros((3,3,3)) ``` Start face detection Default input video filename: `INPUT_VIDEO.mp4` ``` # Start face detection for video B global frames frames = 0 # configuration save_interval = 6 # perform face detection every {save_interval} frames fn_input_video = "1.mp4" output = 'dummy.mp4' clip1 = VideoFileClip(fn_input_video) clip = clip1.fl_image(process_video)#.subclip(0,3) #NOTE: this function expects color images!! clip.write_videofile(output, audio=False) clip1.reader.close() os.rename('faces','facesA') Path(f"faces/aligned_faces").mkdir(parents=True, exist_ok=True) Path(f"faces/raw_faces").mkdir(parents=True, exist_ok=True) Path(f"faces/binary_masks_eyes").mkdir(parents=True, exist_ok=True) # Start face detection for video B frames = 0 # configuration save_interval = 6 # perform face detection every {save_interval} frames fn_input_video = "2.mp4" output = 'dummyB.mp4' clip1 = VideoFileClip(fn_input_video) clip = clip1.fl_image(process_video).subclip(0,3) #NOTE: this function expects color images!! clip.write_videofile(output, audio=False) clip1.reader.close() os.rename('faces','facesB') ``` ## Saved images will be in folder `faces/raw_faces` and `faces/aligned_faces` respectively. Binary masks will be in `faces/binary_masks_eyes`.
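The bbox handling in `process_mtcnn_bbox` above can be isolated into a small standalone helper for a single box — a sketch mirroring the same center/side-length arithmetic (the function name is ours, not part of the notebook):

```python
def to_square_bbox(y0, x0, y1, x1, im_h, im_w):
    """Convert one MTCNN-style (y0, x0, y1, x1) box into a square box with
    side (w + h) / 2 around the same center, clipped to the image bounds,
    mirroring the arithmetic of `process_mtcnn_bbox` above."""
    w, h = int(y1 - y0), int(x1 - x0)
    length = (w + h) / 2
    cx, cy = int((x1 + x0) / 2), int((y1 + y0) / 2)
    new_x0 = max(0, int(cx - length // 2))
    new_x1 = min(im_h, int(cx + length // 2))
    new_y0 = max(0, int(cy - length // 2))
    new_y1 = min(im_w, int(cy + length // 2))
    return new_x0, new_y1, new_x1, new_y0  # same ordering the notebook stores

print(to_square_bbox(10, 20, 110, 80, 480, 640))  # -> (10, 100, 90, 20)
```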
<table class="ee-notebook-buttons" align="left"> <td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Datasets/Terrain/alos_chili.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td> <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Terrain/alos_chili.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Terrain/alos_chili.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td> </table> ## Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. ``` # Installs geemap package import subprocess try: import geemap except ImportError: print('Installing geemap ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) import ee import geemap ``` ## Create an interactive map The default basemap is `Google Maps`. 
[Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function. ``` Map = geemap.Map(center=[40,-100], zoom=4) Map ``` ## Add Earth Engine Python script ``` # Add Earth Engine dataset dataset = ee.Image('CSP/ERGo/1_0/Global/ALOS_CHILI') alosChili = dataset.select('constant') alosChiliVis = { 'min': 0.0, 'max': 255.0, } Map.setCenter(-105.8636, 40.3439, 11) Map.addLayer(alosChili, alosChiliVis, 'ALOS CHILI') ``` ## Display Earth Engine data layers ``` Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ```
``` import os import json import string import numpy as np from nltk.tag import pos_tag from sklearn_crfsuite import CRF, metrics from sklearn.metrics import make_scorer,confusion_matrix from sklearn.metrics import f1_score,classification_report from sklearn.pipeline import Pipeline from pprint import pprint from keras.preprocessing.text import Tokenizer train_loc = "Data/snips/train_PlayMusic_full.json" test_loc = "Data/snips/validate_PlayMusic.json" train_file = json.load(open(train_loc, encoding= "iso-8859-2")) test_file = json.load(open(test_loc, encoding= "iso-8859-2")) train_datafile = [i["data"] for i in train_file["PlayMusic"]] test_datafile = [i["data"] for i in test_file["PlayMusic"]] def convert_data(datalist): output = [] for data in datalist: sent = [] pos = [] for phrase in data: words = phrase["text"].strip().split(" ") while "" in words: words.remove("") if "entity" in phrase.keys(): label = phrase["entity"] labels = [label+"-{}".format(i+1) for i in range(len(words))] else: labels = ["O"] * len(words) sent.extend(words) pos.extend(labels) output.append([sent, pos]) print(sent) return output train_data = convert_data(train_datafile) test_data = convert_data(test_datafile) BASE_DIR = 'Data' GLOVE_DIR = os.path.join(BASE_DIR, 'glove.6B') MAX_SEQUENCE_LENGTH = 300 MAX_NUM_WORDS = 20000 EMBEDDING_DIM = 100 VALIDATION_SPLIT = 0.3 print('Preparing embedding matrix.') # first, build index mapping words in the embeddings set # to their embedding vector embeddings_index = {} with open(os.path.join(GLOVE_DIR, 'glove.6B.100d.txt'), encoding="utf-8") as f: for line in f: values = line.split() word = values[0] coefs = np.asarray(values[1:], dtype='float32') embeddings_index[word] = coefs print('Found %s word vectors in Glove embeddings.' % len(embeddings_index)) def get_embeddings(word): embedding_vector = embeddings_index.get(word) if embedding_vector is None: # words not found in embedding index will be all-zeros. 
embedding_vector = np.zeros(shape=(EMBEDDING_DIM, )) return embedding_vector train_texts = [" ".join(i[0]) for i in train_data] test_texts = [" ".join(i[0]) for i in test_data] train_texts[0] tokenizer = Tokenizer(num_words=MAX_NUM_WORDS) tokenizer.fit_on_texts(train_texts) train_sequences = tokenizer.texts_to_sequences(train_texts) #Converting text to a vector of word indexes test_sequences = tokenizer.texts_to_sequences(test_texts) word_index = tokenizer.word_index print('Found %s unique tokens.' % len(word_index)) """ Get features for all words in the sentence Features: - word context: a window of 2 words on either side of the current word, and current word. - POS context: a window of 2 POS tags on either side of the current word, and current tag. input: sentence as a list of tokens. output: list of dictionaries. each dict represents features for that word. """ def sent2feats(sentence): feats = [] sen_tags = pos_tag(sentence) #This format is specific to this POS tagger! for i in range(0,len(sentence)): word = sentence[i] wordfeats = {} #word features: word, prev 2 words, next 2 words in the sentence. wordfeats['word'] = word if i == 0: wordfeats["prevWord"] = wordfeats["prevSecondWord"] = "<S>" elif i==1: wordfeats["prevWord"] = sentence[0] wordfeats["prevSecondWord"] = "</S>" else: wordfeats["prevWord"] = sentence[i-1] wordfeats["prevSecondWord"] = sentence[i-2] #next two words as features if i == len(sentence)-2: wordfeats["nextWord"] = sentence[i+1] wordfeats["nextNextWord"] = "</S>" elif i==len(sentence)-1: wordfeats["nextWord"] = "</S>" wordfeats["nextNextWord"] = "</S>" else: wordfeats["nextWord"] = sentence[i+1] wordfeats["nextNextWord"] = sentence[i+2] #POS tag features: current tag, previous and next 2 tags. 
wordfeats['tag'] = sen_tags[i][1] if i == 0: wordfeats["prevTag"] = wordfeats["prevSecondTag"] = "<S>" elif i == 1: wordfeats["prevTag"] = sen_tags[0][1] wordfeats["prevSecondTag"] = "</S>" else: wordfeats["prevTag"] = sen_tags[i - 1][1] wordfeats["prevSecondTag"] = sen_tags[i - 2][1] # next two words as features if i == len(sentence) - 2: wordfeats["nextTag"] = sen_tags[i + 1][1] wordfeats["nextNextTag"] = "</S>" elif i == len(sentence) - 1: wordfeats["nextTag"] = "</S>" wordfeats["nextNextTag"] = "</S>" else: wordfeats["nextTag"] = sen_tags[i + 1][1] wordfeats["nextNextTag"] = sen_tags[i + 2][1] #Adding word vectors vector = get_embeddings(word) for iv,value in enumerate(vector): wordfeats['v{}'.format(iv)]=value feats.append(wordfeats) return feats #Extract features from the conll data, after loading it. def get_feats_conll(conll_data): feats = [] labels = [] for sentence in conll_data: feats.append(sent2feats(sentence[0])) labels.append(sentence[1]) return feats, labels #Train a sequence model def train_seq(X_train,Y_train,X_dev,Y_dev): crf = CRF(algorithm='lbfgs', c1=0.1, c2=10, max_iterations=50)#, all_possible_states=True) #Just to fit on training data crf.fit(X_train, Y_train) labels = list(crf.classes_) #testing: y_pred = crf.predict(X_dev) sorted_labels = sorted(labels, key=lambda name: (name[1:], name[0])) print(metrics.flat_f1_score(Y_dev, y_pred,average='weighted', labels=labels)) print(metrics.flat_classification_report(Y_dev, y_pred, labels=sorted_labels, digits=3)) #print(metrics.sequence_accuracy_score(Y_dev, y_pred)) get_confusion_matrix(Y_dev, y_pred,labels=sorted_labels) #source for this function: https://gist.github.com/zachguo/10296432 def print_cm(cm, labels): print("\n") """pretty print for confusion matrixes""" columnwidth = max([len(x) for x in labels] + [5]) # 5 is value length empty_cell = " " * columnwidth # Print header print(" " + empty_cell, end=" ") for label in labels: print("%{0}s".format(columnwidth) % label, end=" ") print() # 
Print rows for i, label1 in enumerate(labels): print(" %{0}s".format(columnwidth) % label1, end=" ") sum = 0 for j in range(len(labels)): cell = "%{0}.0f".format(columnwidth) % cm[i, j] sum = sum + int(cell) print(cell, end=" ") print(sum) #Prints the total number of instances per cat at the end. #python-crfsuite does not have a confusion matrix function, #so writing it using sklearn's confusion matrix and print_cm from github def get_confusion_matrix(y_true,y_pred,labels): trues,preds = [], [] for yseq_true, yseq_pred in zip(y_true, y_pred): trues.extend(yseq_true) preds.extend(yseq_pred) print_cm(confusion_matrix(trues,preds,labels),labels) print("Training a Sequence classification model with CRF") feats, labels = get_feats_conll(train_data) devfeats, devlabels = get_feats_conll(test_data) train_seq(feats, labels, devfeats, devlabels) print("Done with sequence model") ```
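The word-context part of `sent2feats` above (the current word plus a window of two words on either side, with sentence-boundary markers) can be sketched as a tiny standalone function. Note this variant pads symmetrically with `<S>` on the left and `</S>` on the right, which is slightly tidier than the mixed markers in the original:

```python
def word_window_feats(sentence, i):
    """Word-context features for token i: the token itself plus a window
    of two words on either side, padded with sentence boundary markers
    (a trimmed-down sketch of the `sent2feats` word features above)."""
    pad = ["<S>", "<S>"] + sentence + ["</S>", "</S>"]
    j = i + 2  # index of token i inside the padded list
    return {
        "word": pad[j],
        "prevWord": pad[j - 1],
        "prevSecondWord": pad[j - 2],
        "nextWord": pad[j + 1],
        "nextNextWord": pad[j + 2],
    }

feats = word_window_feats(["play", "some", "jazz"], 0)
print(feats["prevWord"], feats["word"], feats["nextWord"])  # -> <S> play some
```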
# Convex Sets

### Affine Sets

A set $C$ is *affine* if the line through any two distinct points in $C$ lies in $C$, i.e., if for all $x, y \in C$ and $\theta \in \mathbb{R}$, we have $\theta x + (1 - \theta) y \in C$.

#### Example: Solution set of linear equations

$C = \{x | A x = B\}$, where $A \in \mathbf{R}^{m \times n}$, $B \in \mathbf{R}^m$

### Convex Sets

A set $C$ is *convex* if the line segment between any two points in $C$ lies in $C$, i.e., if for all $x, y \in C$ and $\theta \in \mathbb{R}$ with $0 \leq \theta \leq 1$, we have $\theta x + (1 - \theta) y \in C$.

+ Every affine set is a convex set

![convex_set](images/convex_set.png)

#### Convex combination

$\theta_1 x_1 + \theta_2 x_2 + \dots + \theta_k x_k$, where $\theta_1 + \theta_2 + \dots + \theta_k = 1$ and $\theta_i \geq 0, i = 1, \dots, k$, is called a convex combination of the points $x_1, \dots, x_k$.

+ a weighted average

#### Convex Hull

$\mathbf{conv} C = \{ \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_k x_k | x_i \in C, \theta_i \geq 0, i = 1, \dots, k, \theta_1 + \theta_2 + \dots + \theta_k = 1 \}$

+ the set of all convex combinations of points in $C$, denoted $\mathbf{conv} C$
+ the convex hull is always convex
+ $k$ is the number of points in the combination (any finite number)

#### Cones

A set $C$ is called a *cone* if for every $x \in C$ and $\theta \geq 0$, we have $\theta x \in C$.
A set $C$ is called a *convex cone* if it is both convex and a cone, which means for any $x_1, x_2 \in C$ and $\theta_1, \theta_2 \geq 0$, we have $\theta_1 x_1 + \theta_2 x_2 \in C$.

+ In general, an element of a cone is a vector in $\mathbf{R}^n$

![cone](images/cone.png)

### Some important examples

+ a line segment is convex but not affine
+ any line is affine and thus convex

#### Hyperplane

$\{ x | a^T x = b \}$, where $a \in \mathbf{R}^n$, $a \neq 0$ and $b \in \mathbf{R}$

+ solution set of a linear equation
+ affine

![hyperplane](images/hyperplane.png)

#### Halfspace

$\{ x | a^T x \leq b \}$, where $a \in \mathbf{R}^n$, $a \neq 0$ and $b \in \mathbf{R}$

+ one of the two regions into which a hyperplane divides the space
+ convex, but not affine

![halfplane](images/halfplane.png)

#### Euclidean balls

$B(x_c, r) = \{ x | \lVert x - x_c \rVert_2 \leq r \} = \{ x | (x - x_c)^T(x - x_c) \leq r^2 \}$

#### Polyhedra

A *polyhedron* is defined as the solution set of a finite number of linear equalities and inequalities:

$$
\{ x | A x \preceq B, C x = d \},
$$

where $A \in \mathbf{R}^{m \times n}$, $B \in \mathbf{R}^m$ and the symbol $\preceq$ denotes *vector inequality*: $u \preceq v$ means $u_i \leq v_i$ for $i = 1, \dots, m$

![](./images/polyhedron.png)

##### Example: nonnegative orthant

$$\mathbf{R}_+^n = \{x \in \mathbf{R}^n | x \succeq 0\}$$

#### Positive semidefinite cone

$\mathbf{S}^n$ denotes the set of symmetric matrices:

$$\mathbf{S}^n = \{ X \in \mathbf{R}^{n \times n} | X = X^T \}$$
$\mathbf{S}_+^n$ denotes the set of symmetric positive semidefinite matrices:

$$\mathbf{S}_+^n = \{ X \in \mathbf{S}^n | X \succeq 0 \}$$

+ convex: for $A, B \in \mathbf{S}_+^n$, $0 \leq \theta \leq 1$ and any $z$, $z^T (\theta A + (1 - \theta) B) z = \theta z^T A z + (1 - \theta) z^T B z \geq 0 + 0 = 0$

##### Example: $\mathbf{S}^2$

$$ X = \begin{bmatrix} x & y \\ y & z \end{bmatrix} \in \mathbf{S}_+^2 \iff x \geq 0, z \geq 0, xz \geq y^2 $$

![](./images/positive_semidefinite_cone.png)

## Operations that preserve convexity

### Intersection

If $S_1$ and $S_2$ are convex, then $S_1 \cap S_2$ is convex

#### Example: Positive Semidefinite Cone

Prove that the positive semidefinite cone is convex by using intersection. The positive semidefinite cone can be expressed as the intersection of an infinite number of halfspaces:

$$ \bigcap_{z \ne 0}\{ X \in \mathbf{S}^n | z^T X z \geq 0 \}. $$

+ $z^T X z$ is a linear function of $X$, and thus $\{ X \in \mathbf{S}^n | z^T X z \geq 0 \}$ is a halfspace
+ intersection preserves convexity even if the number of sets is infinite

### Affine function

A function $f: \mathbf{R}^n \rightarrow \mathbf{R}^m$ is called *affine* if it is the sum of a linear function and a constant, i.e., it has the form $f(x) = A x + b$ where $A \in \mathbf{R}^{m \times n}$ and $b \in \mathbf{R}^m$.

If $S \subseteq \mathbf{R}^n$ is convex and $f: \mathbf{R}^n \rightarrow \mathbf{R}^m$ is an affine function, then the image of $S$ under $f$, $f(S) = \{f(x) | x \in S\}$, is convex. In other words, applying $f$ to every element of $S$ produces a new set which is also convex.

Similarly, if $f: \mathbf{R}^k \rightarrow \mathbf{R}^n$ is an affine function, the inverse image of $S$ under $f$, $f^{-1}(S) = \{x| f(x) \in S\}$, is convex. In other words, the set of points that $f$ maps into $S$ is also convex.
+ scaling
+ translation
+ projection: if $S \subseteq \mathbf{R}^m \times \mathbf{R}^n$ is convex, then $\{ x_1 \in \mathbf{R}^m | (x_1, x_2) \in S \text{ for some } x_2 \in \mathbf{R}^n \}$ is convex
+ partial sum: if $S_1, S_2 \subseteq \mathbf{R}^{n + m}$ are convex, then $S = \{ (x, y_1 + y_2) \in \mathbf{R}^{n + m} | (x, y_1) \in S_1, (x, y_2) \in S_2 \}$ is convex. For $m = 0$, the partial sum gives the intersection of $S_1$ and $S_2$; for $n = 0$, it is set addition.

#### Example 1: Polyhedron

The polyhedron $\{x | A x \preceq b, C x = d\}$ can be expressed as the inverse image of the Cartesian product of the nonnegative orthant and the origin under the affine function $f(x) = (b - Ax, d - Cx)$:

1. $\{x | A x \preceq b, C x = d\} = \{ x | b - A x \succeq 0, d - C x = 0 \} = \{ x | f(x) \in \mathbf{R}_+^m \times \{ 0 \} \}$, where $\times$ denotes the Cartesian product
2. The inverse image of a convex set $S$ under an affine function $f$, $\{ x | f(x) \in S \}$, is convex. Since $\mathbf{R}_+^m \times \{ 0 \}$ is a convex set and $f(x) = (b - Ax, d - Cx)$ is an affine function, $\{ x | f(x) \in \mathbf{R}_+^m \times \{ 0 \} \}$ is a convex set
3. Therefore $\{x | A x \preceq b, C x = d\}$ is a convex set

#### Example 2: Solution set of a linear matrix inequality

The solution set of the linear matrix inequality $\{x \in \mathbf{R}^n | x_1 A_1 + \dots + x_n A_n \preceq B\}$, where $A_1, \dots, A_n, B \in \mathbf{S}^m$, is the inverse image of the positive semidefinite cone $\{X \in \mathbf{S}^m | X \succeq 0 \}$ under the affine function $f: \mathbf{R}^n \rightarrow \mathbf{S}^m$, $f(x) = B - (x_1 A_1 + \dots + x_n A_n)$:

$$ \{x \in \mathbf{R}^n | x_1 A_1 + \dots + x_n A_n \preceq B\} = \{x \in \mathbf{R}^n | f(x) \succeq 0 \} $$

### Perspective function

We define the *perspective function* $P: \mathbf{R}^{n+1} \rightarrow \mathbf{R}^{n}$, with $\mathbf{dom}\, P = \mathbf{R}^n \times \mathbf{R}_{++}$, as $P(z, t) = z / t$.
(Here $\mathbf{R}_{++}$ denotes the set of positive real numbers, $\mathbf{R}_{++} = \{ x \in \mathbf{R} | x \gt 0 \}$, while $\mathbf{R}_+$ denotes the nonnegative reals.)

The perspective function scales (normalizes) the vector so that its last component is one, and then drops the last component:

$[4, 2, -3, 5]^T \rightarrow [0.8, 0.4, -0.6, 1]^T \rightarrow [0.8, 0.4, -0.6]^T$

### Linear-fractional function

A linear-fractional function is formed by composing the perspective function with an affine function $g(x) = \begin{bmatrix}A \\ c^T\end{bmatrix} x + \begin{bmatrix}b\\ d\end{bmatrix}$:

\begin{equation*}
\begin{aligned}
f(x) = \frac{Ax + b}{c^T x + d}, & & \mathbf{dom}\, f = \{ x | c^T x + d \gt 0 \}
\end{aligned}
\end{equation*}

+ $A \in \mathbf{R}^{m \times n}$, $c \in \mathbf{R}^n$, $b \in \mathbf{R}^m$ and $d \in \mathbf{R}$
+ $f$: $\mathbf{R}^n \rightarrow \mathbf{R}^m$

#### Projective Interpretation

We usually represent the linear-fractional function by the matrix

$$ Q = \begin{bmatrix} A & b \\ c^T & d \end{bmatrix} \in \mathbf{R}^{(m + 1) \times (n + 1)} $$

Associate with each $z \in \mathbf{R}^n$ the ray $\mathcal{P}(z) = \{ t(z, 1) | t \gt 0 \}$ in $\mathbf{R}^{n + 1}$. Every point $y = t(z, 1) = (tz, t)$ on the ray recovers $z$ as $z = \frac{tz}{t}$, i.e., we scale (normalize) the vector so that its last component is $1$ and drop the last component. In other words, $\mathcal{P}(x)$ lifts a vector by appending a $1$ (up to positive scaling), while $\mathcal{P}^{-1}(y)$ (i.e., the perspective function) normalizes the vector and throws away the last component.
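This lifting/normalizing picture can be checked with a few lines of numpy. A sketch, assuming random stand-in matrices for $A$, $b$, $c$, $d$ (none of this is from any library referenced in these notes):

```
import numpy as np

def perspective(v):
    """P(z, t) = z / t: scale so the last component is 1, then drop it."""
    z, t = v[:-1], v[-1]
    assert t > 0, "dom P requires a positive last component"
    return z / t

# The worked example from the text: [4, 2, -3, 5] -> [0.8, 0.4, -0.6]
print(perspective(np.array([4.0, 2.0, -3.0, 5.0])))

# Linear-fractional f(x) = (Ax + b) / (c^T x + d) via the matrix Q:
rng = np.random.default_rng(1)
m, n = 3, 2
A, b = rng.normal(size=(m, n)), rng.normal(size=m)
c, d = rng.normal(size=n), 5.0  # d large, so c^T x + d > 0 near the test point

Q = np.block([[A, b[:, None]], [c[None, :], np.array([[d]])]])

x = 0.1 * rng.normal(size=n)
lifted = np.append(x, 1.0)            # a representative of the ray P(x)
f_proj = perspective(Q @ lifted)      # P^{-1}(Q P(x))
f_direct = (A @ x + b) / (c @ x + d)
print(np.allclose(f_proj, f_direct))  # True: the two formulas agree
```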
Therefore, the linear-fractional function can be expressed as:

$$ f(x) = \mathcal{P}^{-1}(Q(\mathcal{P}(x))) $$

#### Example: Inverse image under a linear-fractional function

Let $f(x) = \frac{Ax + b}{c^T x + d}$ where $c^T x + d \gt 0$, and let $C = \{ y | g^T y \leq h \}$ with $g \ne 0$. Then, we have:

\begin{equation*}
\begin{aligned}
f^{-1}(C) & = \{ x | g^T f(x) \leq h \} \\
& = \{ x | g^T \frac{Ax+b}{c^T x + d} \leq h \} \\
& = \{ x | g^T (Ax+b) \leq h (c^T x + d), (c^T x + d) \gt 0 \} \\
& = \{ x | (A^T g - h c)^T x \leq hd - g^T b, (c^T x + d) \gt 0 \},
\end{aligned}
\end{equation*}

which is a halfspace intersected with the domain of $f$. (Note that $(A^T g - h c)^T = g^T A - h c^T$, so the normal-vector form $A^T g - hc$ is exactly the same condition as $g^T A x - h c^T x \leq hd - g^T b$; the two expressions differ only in whether the coefficient is written as a column or a row.)

### Generalized inequalities

#### Proper cone

A cone $K$ is called a *proper* cone if it satisfies the following:

+ convex
+ closed: contains its boundary
+ solid: has nonempty interior
+ pointed: contains no line

*Take care*: the following cone is neither convex nor pointed:

<img src="./images/cone_eg1.png" width="25%"/>

##### Examples

+ nonnegative orthant

#### Generalized inequalities

A proper cone $K$ can be used to define generalized inequalities:
$$x \preceq_K y \iff y - x \in K$$

$$x \prec_K y \iff y - x \in \mathbf{int}\, K$$

##### Examples

+ componentwise inequality ($K = \mathbf{R}_+^n$)

$$ x \preceq_{\mathbf{R}_+^n} y \iff x_i \leq y_i, i = 1, \dots, n $$

#### Minimum and minimal elements

$\preceq_K$ is not in general a linear ordering: we can have $x \npreceq_K y$ and $y \npreceq_K x$

A point $x$ is the *minimum* element of $S$ if and only if $S \subseteq x + K$, where $x + K$ denotes all the points that are comparable to $x$ and greater than or equal to $x$ (according to $\preceq_K$)

A point $x$ is a *minimal* element of $S$ if and only if $(x - K) \cap S = \{x\}$, where $x - K$ denotes all the points that are comparable to $x$ and less than or equal to $x$ (according to $\preceq_K$); minimality means the only such point that $S$ contains is $x$ itself

![](./images/minimum_minimal.png)

### Separating and supporting hyperplanes

#### Separating Hyperplane Theorem

Suppose $C$ and $D$ are nonempty disjoint convex sets. Then there exist $a \neq 0$ and $b$ such that $a^T x \leq b$ for all $x \in C$ and $a^T x \geq b$ for all $x \in D$, i.e., there exists a hyperplane that separates them.

+ not unique

![](./images/separating_hyperplane.png)

#### Supporting hyperplanes

Suppose $C \subseteq \mathbf{R}^n$, and $x_0$ is a point in its boundary $\mathbf{bd}\, C$. If $a \neq 0$ satisfies $a^T x \leq a^T x_0$ for all $x \in C$, then the hyperplane $\{x | a^T x = a^T x_0\}$ is called a *supporting hyperplane* to $C$ at the point $x_0$.

+ not unique

![](./images/supporting_hyperplane.png)

## Dual cones and generalized inequalities

### Dual cones

Let $K$ be a cone. Then $K^* = \{ y | x^T y \geq 0 \text{ for all } x \in K \}$ is called the *dual cone* of $K$.
+ $K^*$ is convex, even when $K$ is not

*Proof.* Let $y_1, y_2 \in K^*$ and $0 \leq \theta \leq 1$. For every $x \in K$ we have $x^T y_1 \geq 0$ and $x^T y_2 \geq 0$, hence $\theta x^T y_1 \geq 0$ and $(1-\theta) x^T y_2 \geq 0$, so $x^T(\theta y_1 + (1 - \theta)y_2) = \theta x^T y_1 + (1-\theta) x^T y_2 \geq 0$, which means $\theta y_1 + (1 - \theta)y_2 \in K^*$. Alternatively, we can view $K^*$ as the intersection of the family of halfspaces $\{y | x^T y \geq 0\}$, one for each $x \in K$, and intersection preserves convexity.

<img src="./images/dual_cone.png" width="25%">

+ $K^*$ is closed
+ If $K$ is a proper cone, then so is its dual $K^*$

#### Example 1: Nonnegative orthant is self-dual

The nonnegative orthant is defined as $\mathbf{R}_+^n = \{x \in \mathbf{R}^n | x \succeq 0 \}$. If $y \succeq 0$, then $x^T y \geq 0$ for all $x \succeq 0$, so $y \in (\mathbf{R}_+^n)^*$. Conversely, if $y \in (\mathbf{R}_+^n)^*$, take $x$ to be the standard basis vector $e_j$; then $x^T y = y_j \geq 0$ for every $j$, so $y \succeq 0$. Therefore $(\mathbf{R}_+^n)^* = \mathbf{R}_+^n$.

#### Example 2: Positive semidefinite cone is self-dual

+ $Y \notin \mathbf{S}_+^n \implies Y \notin (\mathbf{S}_+^n)^*$ (equivalently, $Y \in (\mathbf{S}_+^n)^* \implies Y \in \mathbf{S}_+^n$)

*Proof.* $Y \notin \mathbf{S}_+^n$ means there exists $v \in \mathbf{R}^n$ with $v^T Y v = \mathbf{tr}(v v^T Y) \lt 0$. Let $X = v v^T$; then $\mathbf{tr}(X Y) \lt 0$. Since $X = v v^T$ is positive semidefinite by the definition of a positive semidefinite matrix, $X \in \mathbf{S}_+^n$, and therefore $Y \notin (\mathbf{S}_+^n)^*$.

+ $Y \in \mathbf{S}_+^n \implies Y \in (\mathbf{S}_+^n)^*$

*Proof.* Let $X \in \mathbf{S}_+^n$ be arbitrary. $X$ is symmetric (hence normal), so it has the spectral decomposition $X = \sum_i^n{\lambda_i q_i q_i^T}$, with $\lambda_i \geq 0$ because $X$ is positive semidefinite. Then $\mathbf{tr}(YX) = \mathbf{tr}\left(Y\sum_i^n{\lambda_i q_i q_i^T}\right) = \sum_i^n{\lambda_i q_i^T Y q_i}$. Since $Y \in \mathbf{S}_+^n$, each $q_i^T Y q_i \geq 0$, so $\mathbf{tr}(YX) \geq 0$ for every $X \in \mathbf{S}_+^n$, i.e., $Y \in (\mathbf{S}_+^n)^*$.
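The trace inner product at the heart of this proof can be spot-checked numerically. A sketch, using random Gram matrices as stand-ins for arbitrary PSD matrices:

```
import numpy as np

rng = np.random.default_rng(2)

def random_psd(n):
    G = rng.normal(size=(n, n))
    return G @ G.T  # Gram matrices are positive semidefinite

# <X, Y> = tr(XY) >= 0 for every pair of PSD matrices.
for _ in range(100):
    X, Y = random_psd(4), random_psd(4)
    assert np.trace(X @ Y) >= -1e-10
print("all trace inner products nonnegative")
```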
# Continuously parameterized gates

This tutorial demonstrates how gate labels can be given "arguments". Let's get started with some usual imports:

```
import numpy as np
import pygsti
from pygsti.baseobjs import Label
from pygsti.circuits import Circuit
from pygsti.modelmembers import operations as op
```

**Arguments** are just tags that get associated with a gate label, and can include continuous parameters such as an angle of rotation. Arguments are held distinct from the "state space labels" (usually equivalent to "qubit labels") associated with a gate, which typically specify the *target* qubits for a gate, and thereby determine where the gate is displayed when drawing a circuit (on which qubit lines).

Here are some ways you can create labels containing arguments. A common theme is that arguments are indicated by a preceding semicolon (;):

```
#Different ways of creating a gate label that contains two arguments
l = Label('Ga', args=(1.4, 1.2))
l2 = Label(('Ga', ';1.4', ';1.2'))  #Note: in this case the arguments are *strings*, not floats
l3 = Label(('Ga', ';', 1.4, ';', 1.2))
```

You can use the more compact preceded-with-semicolon notation when constructing `Circuit`s from tuples or strings:

```
# standard 1Q circuit, just for reference
c = Circuit( ('Gx','Gy') )
print(c)

# 1Q circuit with explicit qubit label
c = Circuit( [('Gx',0),('Gy',0)] )
print(c)

# adding arguments
c = Circuit( [('Gx',0,';1.4'),('Gy',';1.2',0)] )
print(c)

#Or like this:
c = Circuit("Gx;1.4:0*Gy;1.2:0")
print(c)
```

Now that we know how to make circuits containing labels with arguments, let's cover how you connect these labels with gate operations. A gate label without any arguments corresponds to an "operator" object in pyGSTi; a label with arguments typically corresponds to an object *factory*. A factory, as its name implies, creates operator objects "on demand" using a supplied set of arguments, which are taken from the label in a circuit.
The main function in an `OpFactory` object is `create_object`, which accepts a tuple of arguments as `args` and is expected to return a gate object.

Here's an example of a simple factory that expects a single argument (see the assert statements), and so would correspond to a continuously-parameterized gate with a single continuous parameter. In this case, our factory generates an X-rotation gate whose rotation angle is given by the one and only argument. We return this as a `StaticArbitraryOp` because we're not worrying about how the gate is parameterized for now (parameters are the things that GST twiddles with, and are distinct from arguments, which are fixed by the circuit).

```
class XRotationOpFactory(op.OpFactory):
    def __init__(self):
        op.OpFactory.__init__(self, state_space=1, evotype="densitymx")

    def create_object(self, args=None, sslbls=None):
        assert(sslbls is None)  # don't worry about sslbls for now -- these are for factories that can create gates placed at arbitrary circuit locations
        assert(len(args) == 1)
        theta = float(args[0])/2.0  #note we convert to float b/c the args can be strings depending on how the circuit is specified

        b = 2*np.cos(theta)*np.sin(theta)
        c = np.cos(theta)**2 - np.sin(theta)**2
        superop = np.array([[1, 0, 0,  0],
                            [0, 1, 0,  0],
                            [0, 0, c, -b],
                            [0, 0, b,  c]], 'd')
        return op.StaticArbitraryOp(superop)
```

Next, we build a model that contains an instance of `XRotationOpFactory` that will be invoked when a circuit contains a `"Ga"` gate.
So far, only *implicit* models are allowed to contain factories, so we'll create a `LocalNoiseModel` (see the [implicit model tutorial](../ImplicitModel.ipynb)) for a single qubit with the standard X and Y gates, and then add our factory:

```
pspec = pygsti.processors.QubitProcessorSpec(1, ['Gx', 'Gy'])
mdl = pygsti.models.create_crosstalk_free_model(pspec)

Ga_factory = XRotationOpFactory()
mdl.factories['layers'][('Ga',0)] = Ga_factory
```

The resulting model is capable of computing outcome probabilities for circuits containing `Gx`, `Gy`, *or* `Ga;<ANGLE>` on any of the qubits, where ANGLE is a floating point angle in radians that will get passed to the `create_object` function of our `XRotationOpFactory` instance. Let's try this out (note that we need to specify the qubit label, 0, because local noise models create gates using multi-qubit conventions):

```
c1 = pygsti.circuits.Circuit('Gx:0*Ga;3.1:0*Gx:0')
print(c1)
mdl.probabilities(c1)
```

The above is readily extensible to systems with more qubits. The only nontrivial addition is that our factory, which creates 1-qubit gates, must be "embedded" within a larger collection of qubits to result in an n-qubit-gate factory. This step is easily accomplished using the builtin `EmbeddedOpFactory` object, which takes a tuple of all the qubits, e.g. `(0,1)`, and a tuple of the subset of qubits therein to embed into, e.g. `(0,)`. This is illustrated below for the 2-qubit case, along with a demonstration of how a more complex 2-qubit circuit can be simulated:

```
pspec2 = pygsti.processors.QubitProcessorSpec(2, ('Gx','Gy','Gcnot'), geometry='line')
mdl2 = pygsti.models.create_crosstalk_free_model(pspec2)

Ga_factory = XRotationOpFactory()
mdl2.factories['layers'][('Ga',0)] = op.EmbeddedOpFactory((0,1),(0,),Ga_factory)
mdl2.factories['layers'][('Ga',1)] = op.EmbeddedOpFactory((0,1),(1,),Ga_factory)

c2 = pygsti.circuits.Circuit("[Gx:0Ga;1.2:1][Ga;1.4:0][Gcnot:0:1][Gy:0Ga;0.3:1]" )
print(c2)
mdl2.probabilities(c2)
```
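As a sanity check on the superoperator built by `create_object` above, the same matrix can be constructed with plain numpy (no pyGSTi needed) and verified to behave like a rotation: composing two X-rotations adds their angles, and the superoperator is orthogonal. The helper name `x_rotation_superop` below is just for this sketch, not part of pyGSTi:

```
import numpy as np

def x_rotation_superop(angle):
    # Same construction as XRotationOpFactory.create_object, numpy only.
    theta = angle / 2.0
    b = 2 * np.cos(theta) * np.sin(theta)
    c = np.cos(theta) ** 2 - np.sin(theta) ** 2
    return np.array([[1, 0, 0,  0],
                     [0, 1, 0,  0],
                     [0, 0, c, -b],
                     [0, 0, b,  c]], 'd')

a1, a2 = 0.7, 1.1
# Rotations compose additively...
assert np.allclose(x_rotation_superop(a1) @ x_rotation_superop(a2),
                   x_rotation_superop(a1 + a2))
# ...and the superoperator is orthogonal (it rotates the Bloch Y-Z plane).
S = x_rotation_superop(a1)
assert np.allclose(S @ S.T, np.eye(4))
print("superoperator checks passed")
```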
*Note: You are currently reading this using Google Colaboratory which is a cloud-hosted version of Jupyter Notebook. This is a document containing both text cells for documentation and runnable code cells. If you are unfamiliar with Jupyter Notebook, watch this 3-minute introduction before starting this challenge: https://www.youtube.com/watch?v=inN8seMm7UI*

---

In this challenge, you will predict healthcare costs using a regression algorithm.

You are given a dataset that contains information about different people including their healthcare costs. Use the data to predict healthcare costs based on new data.

The first two cells of this notebook import libraries and the data.

Make sure to convert categorical data to numbers. Use 80% of the data as the `train_dataset` and 20% of the data as the `test_dataset`.

`pop` off the "expenses" column from these datasets to create new datasets called `train_labels` and `test_labels`. Use these labels when training your model.

Create a model and train it with the `train_dataset`. Run the final cell in this notebook to check your model. The final cell will use the unseen `test_dataset` to check how well the model generalizes.

To pass the challenge, `model.evaluate` must return a Mean Absolute Error of under 3500. This means it predicts health care costs correctly within $3500.

The final cell will also predict expenses using the `test_dataset` and graph the results.

```
# Import libraries. You may or may not use all of these.
!pip install -q git+https://github.com/tensorflow/docs
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
  pass
import tensorflow as tf

from tensorflow import keras
from tensorflow.keras import layers

import tensorflow_docs as tfdocs
import tensorflow_docs.plots
import tensorflow_docs.modeling

from sklearn.model_selection import train_test_split
from tensorflow.keras.layers.experimental import preprocessing

# Import data
!wget https://cdn.freecodecamp.org/project-data/health-costs/insurance.csv
dataset = pd.read_csv('insurance.csv')
dataset.tail()

# One-hot encode the categorical columns, then drop the originals
sex_dummy = pd.get_dummies(dataset['sex'])
smoker_dummy = pd.get_dummies(dataset['smoker'])
region_dummy = pd.get_dummies(dataset['region'])
dataset = pd.concat([dataset, sex_dummy, smoker_dummy, region_dummy], axis=1)
dataset = dataset.drop(['sex', 'smoker', 'region'], axis=1)
dataset.tail()

# Separate the target ("expenses") from the features
X = dataset
y = X.pop('expenses')
X.head(), y.head()

# 80/20 train/test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Normalize the numeric inputs
normalizer = preprocessing.Normalization()
normalizer.adapt(np.array(X_train))

model = tf.keras.Sequential([
    normalizer,
    layers.Dense(units=1)
])

# Track mae and mse so the final test cell can unpack all three values
model.compile(
    optimizer=tf.optimizers.Adam(learning_rate=0.1),
    loss='mean_absolute_error',
    metrics=['mae', 'mse'])

history = model.fit(
    X_train, y_train,
    epochs=150,
    # suppress logging
    verbose=0,
    # Calculate validation results on 20% of the training data
    validation_split = 0.2)

loss = model.evaluate(X_test, y_test, verbose=2)

# RUN THIS CELL TO TEST YOUR MODEL. DO NOT MODIFY CONTENTS.
# Test model by checking how well the model generalizes using the test set.
loss, mae, mse = model.evaluate(X_test, y_test, verbose=2)

print("Testing set Mean Abs Error: {:5.2f} expenses".format(mae))

if mae < 3500:
  print("You passed the challenge. Great job!")
else:
  print("The Mean Abs Error must be less than 3500. Keep trying.")

# Plot predictions.
test_predictions = model.predict(X_test).flatten()

a = plt.axes(aspect='equal')
plt.scatter(y_test, test_predictions)
plt.xlabel('True values (expenses)')
plt.ylabel('Predictions (expenses)')
lims = [0, 50000]
plt.xlim(lims)
plt.ylim(lims)
_ = plt.plot(lims,lims)
```
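The categorical-encoding and `pop` steps used above can be illustrated on a tiny made-up frame (the rows and values here are invented for the example, not taken from insurance.csv):

```
import pandas as pd

# Two invented rows mimicking the dataset's shape.
toy = pd.DataFrame({'age': [19, 61], 'sex': ['female', 'male'],
                    'smoker': ['yes', 'no'], 'expenses': [1725.55, 13228.85]})

# get_dummies turns each category into its own 0/1 column.
toy = pd.concat([toy, pd.get_dummies(toy['sex']), pd.get_dummies(toy['smoker'])], axis=1)
toy = toy.drop(['sex', 'smoker'], axis=1)

labels = toy.pop('expenses')  # pop removes the column and returns it
print(list(toy.columns))      # ['age', 'female', 'male', 'no', 'yes']
print(labels.tolist())        # [1725.55, 13228.85]
```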
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

pd.options.display.max_columns = 107
```

# Getting started

```
df = pd.read_csv('../listings.csv')
print(df.shape)
df.head(2)
```

## Cut out the garbage

```
# first: scrap the URLs and ID numbers
url_id_columns = ['id', 'listing_url', 'scrape_id', 'thumbnail_url', 'medium_url',
                  'picture_url', 'xl_picture_url', 'host_id', 'host_url',
                  'host_thumbnail_url', 'host_picture_url']
df = df.drop(columns=url_id_columns)
print(df.shape)
df.head(2)

# create a separate dataframe of text features for possible later analysis
# remove them from the main dataframe
text_columns = ['name', 'summary', 'space', 'description', 'neighborhood_overview',
                'notes', 'transit', 'access', 'interaction', 'house_rules',
                'host_name', 'host_about']
text_df = pd.DataFrame(data=df[text_columns])
df = df.drop(columns=text_columns)
text_df.head()
print(df.shape)
df.head()

# drop no-variance columns -- all values the same
empty_columns = ['experiences_offered', 'host_acceptance_rate',
                 'neighbourhood_group_cleansed', 'state', 'market',
                 'country_code', 'country', 'jurisdiction_names']
df = df.drop(columns=empty_columns)
print(df.shape)
df.head()
```

## Clean the data

```
# 'relevant' here refers to actual use in the model -- expanding over time
relevant_cols = ['bedrooms', 'bathrooms', 'neighbourhood_cleansed']
df[relevant_cols].isnull().sum()

# chosen cleaning method: replace nulls with medians from that column
for feature in relevant_cols:
    df[feature] = df[feature].fillna(value=df[feature].median())
df[relevant_cols].isnull().sum()

# target value, price, is stored as strings and has some 0 values
free_rentals = list(df[df['price'] == "$0.00"].index)
df = df.drop(index=free_rentals)
print(df.shape)

# convert to floats
df['price'] = df['price'].apply(lambda p: float(p.strip('$').replace(",",'')))
df['price'].describe()
```

## Small Data Visualizations

```
df['bathrooms'].hist();
df['bedrooms'].hist();
df['neighbourhood_cleansed'].hist();
df.plot.scatter(x='bedrooms', y='price');
```

# Feature Engineering

## Extracting amenities

```
# amenities are currently saved as a string
# step one is to break it into a list of standard format
def am_to_list(amenities):
    li = amenities.split(",")
    for i in range(len(li)):
        li[i] = li[i].replace('"', '')
        li[i] = li[i].replace("'", '')
        li[i] = li[i].strip("{")
        li[i] = li[i].strip("}")
    return li

# create improved amenities column
df['am_list'] = df['amenities'].apply(am_to_list)
df.head()

# for reference, create a set of all possible amenities
superlist = []
for li in df['am_list']:
    superlist.extend(li)
all_ams = set(superlist)
all_ams

# that's a lot, but we can just analyze the ones that seem potentially important
# new columns for whether each potential feature occurs in the list of amenities
potential_features = ['Air conditioning', 'Central air conditioning', 'Balcony',
                      'Dishwasher', 'Free parking on premises', 'Full kitchen',
                      'Kitchen', 'Kitchenette', 'Garden or backyard',
                      'Gated community', 'Internet', 'Wifi', 'Office',
                      'Smart Technology', 'Suitable for events']
for feature in potential_features:
    df[feature] = df['am_list'].apply(lambda li: feature in li)
df.head()

# do these features divide the data adequately?
for feature in potential_features:
    print(df[feature].value_counts(dropna=False))
# lots to throw out

from scipy.stats import ttest_ind
df[df['Internet'] == True].head()
```

## One hot encoding for room type

```
df['room_type'].value_counts(dropna=False)

df['entire'] = df['room_type'] == 'Entire home/apt'
df['private'] = df['room_type'] == 'Private room'
df['shared'] = df['room_type'] == 'Shared room'
df['hotel'] = df['room_type'] == 'Hotel room'
df['hotel'].value_counts()
```

## One hot encoding for neighborhood

```
df['neighbourhood'].value_counts(dropna=False)

# just for the exercise, let's only look at the top n neighborhoods
cutoff = 10
top_hoods = df['neighbourhood'].value_counts(dropna=True).index[:cutoff]
for hood in top_hoods:
    df[hood] = df['neighbourhood'] == hood
df.head()

df['review_scores_rating'].describe()
```

# Modeling

```
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

# first, a basic kitchen sink model to check feature importance
features = ['bedrooms', 'bathrooms', 'neighbourhood_cleansed',
            'Free parking on premises', 'Dishwasher', 'Internet',
            'Garden or backyard', 'Suitable for events', 'Kitchen',
            'Air conditioning', 'latitude', 'longitude',
            'entire', 'private', 'shared', 'hotel']
features.extend(top_hoods)

X = df[features]
y = df['price']
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape, X_test.shape, y_train.shape, y_test.shape

forest = RandomForestRegressor(n_estimators=100)
forest.fit(X_train, y_train)

for i in range(len(forest.feature_importances_)):
    print(f'{features[i]}: {forest.feature_importances_[i]:.5f}')
forest.score(X_test, y_test)

# slimmed-down model including only the features that reliably scored >.01
good_feats = ['bedrooms', 'bathrooms', 'neighbourhood_cleansed',
              'Free parking on premises', 'Dishwasher', 'Internet',
              'Garden or backyard', 'Suitable for events', 'Kitchen',
              'latitude', 'longitude', 'entire']

X = df[good_feats]
y = df['price']
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape, X_test.shape, y_train.shape, y_test.shape

forest = RandomForestRegressor(n_estimators=100)
forest.fit(X_train, y_train)

# use good_feats here so the importances are labeled correctly
for i in range(len(forest.feature_importances_)):
    print(f'{good_feats[i]}: {forest.feature_importances_[i]:.5f}')
forest.score(X_test, y_test)

top_hoods
```
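The top-n one-hot trick used for the neighborhood column can be shown standalone on a made-up series (the neighborhood names below are invented for the example):

```
import pandas as pd

s = pd.Series(['Downtown', 'Shadyside', 'Downtown', 'Oakland',
               'Shadyside', 'Downtown'])

# Keep only the 2 most common categories, like top_hoods above.
top = s.value_counts().index[:2]
flags = pd.DataFrame({hood: s == hood for hood in top})

print(list(flags.columns))      # ['Downtown', 'Shadyside']
print(int(flags['Downtown'].sum()))  # 3
```

Rare categories fall through as all-False rows, which keeps the feature matrix narrow without dropping any listings.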
## ChemEPy cookbook

Welcome to the chemEPy cookbook, an interactive Jupyter notebook environment designed to teach a python toolchain for chemical engineering. As of now the modules we are going to work on are the IAPWS module for the properties of water/steam, thermo, and the chemEPy module currently under development in this github repo. The way we are currently building this is closely modeled on the scipy module. The current version is built on top of a variety of modules that are common in python scientific computing; to see the dependencies please go to requirements.txt. In order to cut down on load times and keep the module lightweight, the modules that do not require reading in tables will load on initialization, but those that do require tables will need to be loaded separately, the same way you load the optimization or linalg packages in scipy.

```
import chemEPy
from chemEPy import eos
from chemEPy import equations

#ignore this cell. I am using it to reload the package after I rebuild it when I modify it
from importlib import reload
reload(chemEPy)
reload(chemEPy.eos)
reload(chemEPy.equations)
```

As of now there are two available equations of state, ideal gas and van der Waals. These functions are typical of the design thus far: they are supposed to be generalizable and intuitive, but do not rely on any computer algebra. As of now there is no use of sympy in the module, and for the foreseeable future we would like to keep it this way. This means that you the user have one important job: make sure your units line up correctly. The lack of computer algebra greatly simplifies this process, and in most cases this means that the only return from a function will be a float or collection of floats. In exchange for careful attention to units we are going to try to make this module easy to use and as flexible as possible. Let us begin by looking at some of the info functions.
```
eos.idealGasInfo()
eos.vdwInfo()
```

These equations are built using the python pattern kwargs, which means that you are going to be able to put your arguments in any order that you like, but remember: the units are on you. Let us examine how the syntax for these functions works

```
eos.idealGas(P = 1, R = 0.08205, n = 1, T = 273)

import numpy as np
from matplotlib import pyplot as plt
import math

Parrow = np.linspace(0.1,1.1,101)
volData1 = eos.idealGas(P = Parrow, R = 0.08205, n = 1, T = 273)
nData1 = eos.idealGas(P = Parrow, R = 0.08205, T = 273, V = 22.4)

Tarrow = np.linspace(100,400,301)
volData2 = eos.idealGas(P = 1, R = 0.08205, n = 1, T = Tarrow)
nData2 = eos.idealGas(P = 1, R = 0.08205, T = Tarrow, V = 22.4)

fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2, figsize = (15,7.5))
ax1.plot(Parrow, volData1)
ax1.set(ylabel = 'Volume(L)')
ax2.plot(Tarrow, volData2, 'tab:green')
ax3.plot(Parrow, nData1, 'tab:red')
ax3.set(xlabel = 'Pressure(atm)', ylabel = '# of moles')
ax4.plot(Tarrow, nData2, 'tab:orange')
ax4.set(xlabel = 'Temperature(K)')
fig.suptitle('Ideal gas plots')
```

Let us dig into our example above a little bit. As you can see, there are several intuitive things about the way the ideal gas law works. You begin by stating your arguments in the function explicitly; this means you do not need to worry about the order you put them in. It also means that the function is going to figure out which of your arguments is missing and then return the correct one. You can also feed the function vectors in the form of a numpy array, which is what we did to build these graphs. Now we will move on to the van der Waals eos. If you scroll up you can see that this equation of state does specify units, because the correction terms a and b have units. First let us see which materials are available to this function.

```
eos.vdwNames()
```

Quite a nice variety!
As of the time of writing this, the goal will be to have materials use a common capitalization format, but when in doubt see if there is a helper function to make your life easier. In the info function we can also see that the gas constant R is given in the correct units.

```
temp1 = eos.vdw(name = 'Naphthalene', V = 20, P = 5, n = 1, R = 0.08314)
temp2 = eos.idealGas(V = 20, P = 5, n = 1, R = 0.08314)
v1 = eos.vdw(name = 'Naphthalene', T = 1200, P = 5, n = 1, R = 0.08314)
v2 = eos.idealGas(T = 1200, P = 5, n = 1, R = 0.08314)

print('temperature with Van der Waals eos is:', temp1, 'K')
print('volume with Van der Waals eos is:', v1, 'L \n')
print('temperature with ideal gas eos is:', temp2, 'K')
print('volume with ideal gas eos is:', v2, 'L')
```

As of now the Van der Waals solver for V and n uses a solver for a non-linear system, Newton's method, with an initial guess supplied by the ideal gas equation. A convergence study is planned for this, but it means that convergence is not guaranteed. Now we will look at a sub-module that loaded on initialization, fluidNumbers, which provides functions for a variety of numbers in fluid dynamics, most of them dimensionless parameters.

```
chemEPy.fluidNumbers.reynoldsInfo()
chemEPy.fluidNumbers.rayleighInfo()
```

Again these functions are designed to parse the information you give them and then determine whether it is a valid set of arguments, so oftentimes there are multiple combinations of arguments you can feed these.
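Returning to the van der Waals solver mentioned above, the Newton iteration with an ideal-gas initial guess can be sketched in plain python. This is an illustration only: the helper name `vdw_volume` and the tolerance choices here are hypothetical, not part of chemEPy's API.

```
# Hypothetical constants a (L^2*atm/mol^2) and b (L/mol); this sketch just
# illustrates the solver strategy described above, not chemEPy's implementation.
def vdw_volume(P, T, a, b, R=0.08205, tol=1e-10, max_iter=50):
    """Solve (P + a/V^2)(V - b) = R T for the molar volume V by Newton's method."""
    f = lambda V: (P + a / V**2) * (V - b) - R * T
    fprime = lambda V: P - a / V**2 + 2 * a * b / V**3
    V = R * T / P  # ideal-gas initial guess, as described above
    for _ in range(max_iter):
        step = f(V) / fprime(V)
        V -= step
        if abs(step) < tol:
            break
    return V

# With a = b = 0 the corrections vanish and we recover the ideal-gas volume.
print(abs(vdw_volume(1.0, 273.15, 0.0, 0.0) - 0.08205 * 273.15) < 1e-8)  # True
```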
```
print(chemEPy.fluidNumbers.reynolds(rho = 1, u = 2, L = 0.1, mu = 1e-2))
print(chemEPy.fluidNumbers.reynolds(u = 2, L = 0.1, nu = 1e-2))
print(chemEPy.fluidNumbers.rayleigh(g = 9.81, beta = 1/273, Ts = 295, Tinf = 273, L = 1, nu = 1.5e-5, alpha = \
chemEPy.fluidNumbers.thermalDiffusivity(k = 0.025, rho = 1.2, cp = 1000)))
```

Available functions for the fluidNumbers module are currently: archimedes, biot, graetz, grashoff, nusselt, peclet, prandtl, rayleigh, reynolds, and thermalDiffusivity.

Now we will explore the nusseltCor submodule, which is designed to work through the Nusselt number correlations for convective heat transfer

```
chemEPy.nusseltCor.nuInfo()
```

Wow, that's a lot of possible arguments! But each line guides you through what you will need to gather before you proceed. This submodule combined with fluidNumbers makes for a powerful quick workflow that can speed you through the process. Let's take a look at an example where we find the convective heat transfer for free convection from a cylinder.

```
ra = chemEPy.fluidNumbers.rayleigh(g = 9.81, beta = 1/273, Ts = 323, Tinf = 273, L = 0.1, nu = 1.5e-5, alpha = \
chemEPy.fluidNumbers.thermalDiffusivity(k = 0.025, rho = 1.2, cp = 1000))
#recall that L is the characteristic length, which in this case is the diameter of the cylinder

pr = 0.71 #physical constant lookup
area = math.pi*0.1*2 #this cylinder has a diameter of 0.1 and length 2
ts = 323
tinf = 273

h = chemEPy.nusseltCor.nu(forced = False, shape = 'cylinder', Ra = ra, Pr = pr) * 0.025/0.1
q = h*area*(ts-tinf)
print('The total convective heat transfer is:', q, 'watts')
```

Now we will look at two modules which are designed to help with physical properties: first iapws, which is particularly useful for the properties of water/steam and has some additional features such as heavy water and ammonia; second thermo, which is useful for a broader variety of materials but has a different design philosophy and uses a significant amount of computer algebra.
Both packages are on PyPI and have good documentation, which can be found at https://pypi.org/project/iapws/ and https://pypi.org/project/thermo/

```
import iapws
import thermo
from iapws import IAPWS97 as ia
water = ia(T = 170+273.15, x = 0.5)  # saturated water at 170 C with quality = 0.5
print(water.Liquid.cp, water.Vapor.cp)  # heat capacities
print(water.Liquid.v, water.Vapor.v, water.v)  # specific volumes
print(water.Liquid.f, water.Vapor.f)  # fugacity should be equal for VLE
```

The iapws module is designed more along the lines of the fluidNumbers submodule we looked at above. It does not take positional arguments; instead it lets the user specify a combination of keyword arguments, which it then checks to make sure the system is appropriately specified. In the two-phase region you can specify one free physical parameter plus the quality of the water/steam, and in the one-phase region you can specify two parameters.

```
water = ia(T = 170+273.15, P = 1)  # pressure is in MPa so this is slightly less than 10 atm
water.v, water.rho, water.mu
```

There are other submodules in the iapws package that you can explore, and there are additional parameters included in the IAPWS97 data; for a full list please see: https://iapws.readthedocs.io/en/latest/iapws.iapws97.html#iapws.iapws97.IAPWS97

Now we will look at some of the functionality in the thermo module. Thermo is a large and impressive module with dozens of submodules, some of which overlap with the functionality of chemEPy. If you are interested in these other submodules you should look into them further, but they are different from chemEPy: the functions in thermo are primarily written with positional arguments, so they will not try to parse out a missing argument and solve for it. This means that some of the functions are more specific and less flexible. That said, thermo has a fantastic library called chemical that can speed up physical property calculations.
For detailed information on all the functionality please see: https://thermo.readthedocs.io/thermo.chemical.html

```
from thermo.chemical import Chemical
ip = Chemical('isopentane')  # all chemicals are loaded by default to 298.15 K and 101325 Pa
print(ip.Tm, ip.Tb, ip.rho, ip.Cp, ip.mu)  # melting, boiling, density, cp, and dynamic viscosity at current state
ip.calculate(T = 373.15, P = 1e5)  # change temperature and pressure
print(ip.phase, ip.alpha)  # for pure components we can see the phase, and find thermal diffusivity
ip.VaporPressure.solve_prop(1e5)  # solve for a dependent property
ip.VolumeLiquid.plot_isotherm(T = 250, Pmin = 1e5, Pmax = 1e7)
```

Now we will come back to the chemEPy module and look at the radiation and conduction submodules.

```
chemEPy.radiation.qInfo()
```

The function q returns the total heat exchanged between two black or grey bodies. You can set the units to imperial if you desire, and there are several optional arguments. The viewFactor argument is used to compute the composite view factor if both bodies are grey. Let us look at an example where we find the energy exchanged between two grey bodies of unequal areas with a known view factor.

```
chemEPy.radiation.q(body1 = 'grey', body2 = 'grey', area = 0.5, area2 = 0.3, t1 = 300, t2 = 500, epsilon1 = 0.9,\
    epsilon2 = 0.8, viewFactor = 0.8)
```

This result is negative because the function expresses the energy going from body 1 to body 2. In the future, functionality for computing different view factors will be added to the radiation submodule.

```
equations.antoine(name = 'Water', P = 1)
```
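The two-grey-body exchange computed above can be sketched from the standard series-resistance radiation network (surface resistance of each body plus the geometric resistance between them). This is a minimal sketch of the textbook formula, not necessarily chemEPy's internal implementation:

```python
# Net radiative exchange between two grey surfaces via the standard
# three-resistance network formula.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def grey_body_q(t1, t2, area1, area2, epsilon1, epsilon2, view_factor):
    """Net heat transfer (W) from body 1 to body 2."""
    r1 = (1 - epsilon1) / (epsilon1 * area1)   # surface resistance, body 1
    r12 = 1 / (area1 * view_factor)            # geometric (view factor) resistance
    r2 = (1 - epsilon2) / (epsilon2 * area2)   # surface resistance, body 2
    return SIGMA * (t1**4 - t2**4) / (r1 + r12 + r2)

q = grey_body_q(300, 500, 0.5, 0.3, 0.9, 0.8, 0.8)
print(q)  # negative: body 2 is hotter, so net energy flows from body 2 to body 1
```

The sign convention matches the discussion above: a negative result means body 1 is a net receiver of energy.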
# Implementing KMeans from scratch with a geographic example

```
import numpy as np
from geopy import distance
from sklearn.metrics import pairwise_distances
import matplotlib.pyplot as plt

data_file = './data/IT.txt'
with open(data_file, 'r') as infile:
    all_lines = infile.readlines()
lines = np.random.choice(all_lines[1:], size=500, replace=False)
len(lines)

cities, latitude, longitude = [], [], []
for line in lines:
    row = line.split(',')
    cities.append(row[2])
    latitude.append(float(row[-3]))
    longitude.append(float(row[-2]))
geo = np.array([latitude, longitude]).T

fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(geo[:,1], geo[:,0], alpha=0.2, c=assignement)
ax.scatter(C[:,1], C[:,0], alpha=0.8, s=400, c=range(k))
plt.tight_layout()
plt.show()

from collections import defaultdict
town_index = defaultdict(list)
for city_index, cluster_id in enumerate(assignement):
    town_index[cluster_id].append(city_index)
town_names = [cities[i] for i in town_index[0]]
town_names[:10]
```

## Problem

> how to place k shipping offices so as to minimize the distances from the Italian municipalities

### Requirements

- Representation of the points to be clustered $\rightarrow$ $M^{n \times 2}$
- Parameter $k$
- Randomly generate $k$ points in the space $M^{n \times 2}$ $\rightarrow$ randomly generate a matrix $C^{k \times 2}$
- A function that computes the distance between every point and every center $c \in C$
    - Euclidean distance
    - Geodesic distance
- Assign each point to the nearest center
- A function that computes $$RSS = \sum\limits_{k \in K} \sum\limits_{p \in k} (p - \eta_k)^2$$
- A loop that repeats the previous steps until the exit condition is met
- Exit condition

```
k = 4
C = np.zeros((k, 2))
x = np.random.choice(geo[:,0], 4)
y = np.random.choice(geo[:,1], 4)
C[:,0] = x
C[:,1] = y

euclidean = lambda x, y: np.linalg.norm(x - y)
geodist = lambda x, y: distance.distance(x, y).km

history = []
for iterations in range(1000):
    storage = pairwise_distances(geo, C, metric=geodist)
    assignement = np.argmin(storage, axis=1)
    RSS = 0
    for i in range(k):
        indexes = [j for j, x in enumerate(assignement) if x == i]
        cluster_geo = geo[indexes,:]
        C[i] = cluster_geo.mean(axis=0)
        RSS += sum([geodist(x, C[i])**2 for x in cluster_geo])
    history.append(RSS)
    if len(history) > 2 and (history[-2] - history[-1]) < 100:
        break

fig, ax = plt.subplots(figsize=(8, 8))
ax.plot(history)
plt.tight_layout()
plt.show()
```

### From-scratch version

```
history = []
for iterations in range(1000):
    storage = np.zeros((geo.shape[0], C.shape[0]))
    for i, town in enumerate(geo):
        storage[i] = np.zeros(k)
        for j, center in enumerate(C):
            delta = geodist(town, center)
            storage[i,j] = delta
    assignement = np.argmin(storage, axis=1)
    RSS = 0
    for i in range(k):
        indexes = [j for j, x in enumerate(assignement) if x == i]
        cluster_geo = geo[indexes,:]
        C[i] = cluster_geo.mean(axis=0)
        RSS += sum([geodist(x, C[i])**2 for x in cluster_geo])
    history.append(RSS)
    if len(history) > 2 and (history[-2] - history[-1]) < 100:
        break
```

### Sklearn version

```
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=4)
y_hat = kmeans.fit_predict(geo)

lines = np.random.choice(all_lines[1:], size=len(all_lines[1:]), replace=False)
cities, latitude, longitude = [], [], []
for line in lines:
    row = line.split(',')
    cities.append(row[2])
    latitude.append(float(row[-3]))
    longitude.append(float(row[-2]))
geo = np.array([latitude, longitude]).T

fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(geo[:,1], geo[:,0], alpha=0.4, c=y_hat)
plt.tight_layout()
plt.show()
```
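The nested distance loop in the from-scratch version can be vectorized with NumPy broadcasting when plain Euclidean distance is sufficient (geodesic distance needs per-pair calls). A minimal sketch on toy stand-ins for the `geo` and `C` arrays:

```python
import numpy as np

# Toy stand-ins for the points (geo) and centers (C) used above.
points = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
centers = np.array([[0.0, 0.0], [10.0, 10.0]])

# Broadcasting (n,1,2) - (1,k,2) -> (n,k,2); the norm over the last axis
# gives the full n x k distance matrix in one expression.
storage = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
assignment = np.argmin(storage, axis=1)
print(assignment)  # -> [0 0 1]
```

This replaces the whole double loop with one array expression and is usually much faster for large point sets.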
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/FeatureCollection/join.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/join.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/join.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>

## Install Earth Engine API and geemap

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.

The following script checks whether the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.

**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend, enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)).
Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionality for capturing user input (e.g., mouse-clicking and moving).

```
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except:
    import geemap as emap

# Authenticates and initializes Earth Engine
import ee

try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```

## Create an interactive map

The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP')  # Add Google Map
Map
```

## Add Earth Engine Python script

```
# Add Earth Engine dataset
Map.setCenter(-122.45, 37.75, 13)

bart = ee.FeatureCollection('GOOGLE/EE/DEMOS/bart-locations')
parks = ee.FeatureCollection('GOOGLE/EE/DEMOS/sf-parks')
buffered_bart = bart.map(lambda f: f.buffer(2000))

join_filter = ee.Filter.withinDistance(2000, '.geo', None, '.geo')
close_parks = ee.Join.simple().apply(parks, bart, join_filter)

Map.addLayer(buffered_bart, {'color': 'b0b0b0'}, "BART Stations")
Map.addLayer(close_parks, {'color': '008000'}, "Parks")
```

## Display Earth Engine data layers

```
Map.addLayerControl()  # This line is not needed for ipyleaflet-based Map.
Map
```
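The `withinDistance` join above keeps every park whose geometry lies within 2000 m of some BART station. The same idea in plain Python, on hypothetical projected point coordinates in metres, purely to illustrate the logic (this is not the Earth Engine API):

```python
import math

# Hypothetical station and park coordinates in a projected CRS (metres).
stations = [(0.0, 0.0), (5000.0, 0.0)]
parks = {"near_a": (300.0, 400.0), "near_b": (5000.0, 1500.0), "far": (20000.0, 0.0)}

def within_distance(point, targets, max_dist):
    """True if point lies within max_dist of any target point."""
    return any(math.hypot(point[0] - tx, point[1] - ty) <= max_dist
               for tx, ty in targets)

# Keep only the parks that pass the distance filter, as the join does.
close_parks = [name for name, pt in parks.items()
               if within_distance(pt, stations, 2000.0)]
print(close_parks)  # -> ['near_a', 'near_b']
```

Earth Engine evaluates the same predicate server-side over full geometries rather than points.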
<img src="https://s8.hostingkartinok.com/uploads/images/2018/08/308b49fcfbc619d629fe4604bceb67ac.jpg" width=500, height=450>

<h3 style="text-align: center;"><b>Phystech School of Applied Mathematics and Informatics (PSAMI), MIPT</b></h3>

---

<h2 style="text-align: center;"><b>Object detection with YOLOv3</b></h2>

<img src="https://i.ytimg.com/vi/s8Ui_kV9dhw/maxresdefault.jpg" width=600 height=450>

<h4 style="text-align: center;"><b>Prepared by: Ilya Zakharkin (DIHT MIPT, NeurusLab). For all questions, Telegram: <a>@ilyazakharkin</a></b></h4>

In the seminar we ran SSD and Mask-RCNN from the TensorFlow Object Detection API. The lecture covered the YOLOv3 algorithm in detail, so now let's try this detector in practice.

<h2 style="text-align: center;"><b>YOLOv3</b></h2>

**The idea behind detectors:** use a strong convolutional network, pretrained on classification, to extract features from the image, then use convolutional layers to regress the box coordinates and classify the objects inside them.

Recall that the YOLOv3 architecture looks like this:

<img src="https://camo.githubusercontent.com/5c561504c1b01ee565764785efe5572156d4cd61/68747470733a2f2f692e696d6775722e636f6d2f546f45626c6a5a2e706e67">

In words:

1. The image is fed to the input
2. It is resized to 300x300x3
3. It passes through a backbone network that extracts features -- *Darknet53*
4. Several convolutional layers with 1x1 and 3x3 convolutions follow
5. Then comes a yolo layer: a 1x1x(1 + 4 + NUM_CLASSES) convolution
6. Next, the feature map is upsampled (doubled in width and height) and concatenated with the feature maps from before the upsampling (to improve quality)
7. Steps 4-6 are repeated two more times, to improve detection of small objects

During training, additionally:

8. The final feature map is fed to the loss in a special way to compute the error
9. Gradients propagate as in ordinary backpropagation, and the weights are updated

The layers use LeakyReLU activations.
Linear activations (that is, no nonlinearity) are used right before the YOLO layers.

You can see what the whole architecture looks like in code in this file: https://github.com/akozd/tensorflow_yolo_v3/blob/master/models/yolo_v3.py

Original paper on arxiv.org: https://arxiv.org/abs/1804.02767

***Note:*** You may ask: "Why YOLOv3, when there are many other good detectors?" Because at the moment YOLOv3 has the best speed/quality trade-off among widely used neural network detectors. In that sense it is state of the art.

<h2 style="text-align: center;"><b>Assignment (10 points)</b></h2>

***It is assumed that you are familiar with TensorFlow and convolutional neural networks***

It is better to run this notebook locally with TensorFlow installed: `pip install tensorflow` (the CPU version; it will not take too long, since the assignment involves only prediction, no training). If you are working in Google Colab, be prepared to change into subfolders frequently (`os.chdir(*path*)`), as in the seminar.

<img src="http://blog.yavuzz.com/image.axd?picture=/resimler/sayit.jpg">

Writing your own neural network detector from scratch is quite a difficult task, so for now we will simply use the code of someone who managed it: https://github.com/akozd/tensorflow_yolo_v3

Recall that you can download an entire repository from GitHub with `git clone *repository address*`. For example, the repository needed for this assignment is downloaded like this:

`git clone https://github.com/akozd/tensorflow_yolo_v3`

### Stage 1 (2 points): getting acquainted with the repository

Read the README of this repository: https://github.com/akozd/tensorflow_yolo_v3

***Question on `README.md` (1 point)***: what does the repository author suggest doing to improve the quality of box predictions when training on your own data?

```
<Answer>
...
```

Read the file `train.py`.

***Question on `train.py` (1 point)***: what is the `train.py` script argument called `--test_model_overfit` responsible for?

```
<Detailed answer>
...
```

### Stage 2 (3 points): reading the repository code

Now you need to read the author's code and understand what happens in it. This repository was not chosen at random: all of the code is well documented and works correctly. Your task is to understand how the files relate to each other, which files are used for training, which for prediction, and which are not used at all.

A good strategy: starting from README.md, work out how `detect.py` operates, i.e. what it takes as input, what it produces as output, and which other files it uses.

<img src="https://thefreshtoast.com/wp-content/uploads/2017/02/bbc-new-meme-hood-documentary.jpg" width=500 height=300>

***Task (3 points)***: describe the structure of the repository in detail, explaining what each file is for. The more detail you give about what happens inside each file (you can write it literally as "..lines 15-20 predict the boxes from the image.."), the more points you get.

```
<Detailed description of the repository structure>
...
```

### Stage 3 (5 points): installing the dependencies, downloading the weights (`.ckpt`), and running `detect.py` on your own images

Let's warm up and run the code from the repository on your own images (any images, though preferably containing objects from [this list](https://github.com/nightrome/cocostuff/blob/master/labels.md), since the detector was originally trained on the COCO dataset).

<img src="http://static.hdw.eweb4.com/media/wallpapers_dl/1/89/882736-adventure-time-with-finn-and-jake.jpg" width=400 height=300>

First make sure that you (or Colab) have all the required dependencies (the 5 links in the Dependencies section of the README). Then download the weights of the YOLOv3 model trained on COCO into the `model_weights` folder, either with the `.sh` script or from the link given in the README.
Points for this task are awarded as follows:

* (1 point) predictions obtained on any image of yours (this confirms that everything runs and that you managed to download and set up the repository locally/in Colab)
* (1 point) an image found where the network produces false positives
* (1 point) an image found where the network has missed detections (false negatives)
* (1 point) an image found where the network successfully detected all objects even though they strongly overlap
* (1 point) the previous item, but the other way around -- the network copes poorly

```
<Your attempts here>
<and here>
...
```

### * Optional stage 4 (10 points): training the detector on your own dataset

<img src="https://i.ytimg.com/vi/Zdf7Afgfq8Q/maxresdefault.jpg" width=500 height=300>

In this task you may, if you wish, train your own detector. To make the task easier, here are examples of small datasets you can train and test on (**10 points are given for one of the two options; doing both does not earn double points**):

***1). Playing cards dataset: https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10***

The repository consists of a tutorial on training a detector with the TF Object Detection API. You can either take the dataset from the `/images` folder of that repository and train the current YOLOv3 with `train.py` (be prepared to spend some effort converting the annotations into the required format), or follow that tutorial and train any model from the TF Object Detection API on this dataset. The main thing is to demonstrate your detector working on test examples with cards.

```
...
<You can do it!>
...
```

**2). Dataset of images with the Snitch from Harry Potter; link to an article with a detailed description of the task: https://apptractor.ru/develop/syigraem-v-kviddich-s-tensorflow-object-detection-api.html**

As the result, you need to show test images in which the Snitch is correctly detected.

```
...
<I solemnly swear that I am up to no good>
...
```

There are also **two more approaches that should work** if what is described in the homework notebook does not:

a). **Darkflow** -- a repository with different versions of YOLO; the Readme explains how to train: https://github.com/thtrieu/darkflow

b). **Darknet** -- a C framework with the original YOLOv3 (by Joseph Redmon). You can train the detector by following the instructions on his site: https://pjreddie.com/darknet/yolo/

```
...
```

<h3 style="text-align: center;"><b>Useful links</b></h3>

1. https://github.com/qqwweee/keras-yolo3
2. https://github.com/ayooshkathuria/pytorch-yolo-v3
3. https://github.com/eriklindernoren/PyTorch-YOLOv3
4. https://github.com/maiminh1996/YOLOv3-tensorflow
5. https://github.com/ultralytics/yolov3
6. https://www.analyticsvidhya.com/blog/2018/12/practical-guide-object-detection-yolo-framewor-python/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+AnalyticsVidhya+%28Analytics+Vidhya%29
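As a quick sanity check on the 1x1x(1 + 4 + NUM_CLASSES) YOLO-layer convolution mentioned in the architecture summary above, the number of output channels per detection scale can be computed directly. A sketch, using the fact that YOLOv3 predicts 3 anchor boxes per cell at each scale:

```python
def yolo_head_channels(num_classes, anchors_per_scale=3):
    """Output channels of a YOLOv3 detection layer:
    per anchor: 4 box offsets + 1 objectness score + num_classes class scores."""
    return anchors_per_scale * (4 + 1 + num_classes)

# For the 80-class COCO dataset this gives the familiar 255-channel head.
print(yolo_head_channels(80))  # -> 255
```

Seeing 255 channels in a model summary is therefore a quick way to confirm a COCO-trained YOLOv3 head.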
# Using wind and rainfall as a discriminator for damage

This notebook explores the potential of using rainfall and wind speed data as a discriminator for damage to buildings. The "forecast" is the Bureau of Meteorology Atmospheric high-resolution Regional Reanalysis for Australia (BARRA), and we use several measures of wind speed and rainfall:

* Surface wind gust (PSWG)
* Surface "mean" wind (PSMW)
* 900 hPa wind speed (PGWS)
* "Neighbourhood" surface wind gust (NSWG)
* Maximum instantaneous rainfall rate (PIRR)
* Maximum 1-hour rainfall rate (P1RR)
* Maximum 6-hour rainfall rate (P6RR)
* Event total rainfall accumulation (PTEA)
* "Neighbourhood" 1-hour rainfall rate (N1RR)

These fields are all extracted from 10-minute time step fields (provided by David Wilke). The "neighbourhood" fields apply a spatial filter that assigns to each point the highest value within a prescribed radius of that point. In this case, the radius has been set at approximately 0.36$^{\circ}$ (40 km). The codes following each line above are the variable names used throughout this notebook and the field names in the input shape file.
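The "neighbourhood" filtering described above can be sketched with `scipy.ndimage.maximum_filter`, here using a square footprint as a rough stand-in for the circular ~40 km radius (the grid values below are made up purely for illustration):

```python
import numpy as np
from scipy.ndimage import maximum_filter

# A made-up 4x4 wind gust grid with one strong gust at (1, 1).
gust = np.array([[10., 10., 10., 10.],
                 [10., 50., 10., 10.],
                 [10., 10., 10., 10.],
                 [10., 10., 10., 10.]])

# size=3 assigns each cell the maximum over its 3x3 neighbourhood,
# analogous to taking the highest value within a fixed radius of each point.
neighbourhood_gust = maximum_filter(gust, size=3)
print(neighbourhood_gust[0, 0], neighbourhood_gust[3, 3])  # -> 50.0 10.0
```

A circular footprint (via the `footprint` argument) would match the stated radius more closely; the square footprint keeps the sketch short.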
First load the required modules for the calculation.

```
%matplotlib inline

import os
from os.path import join as pjoin
from itertools import product
import numpy as np
import pandas as pd
import geopandas as gpd
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context("poster")
from matplotlib import colors

from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
```

Set the data paths and read the data into a GeoDataFrame.

```
data_path = "X:/georisk/HaRIA_B_Wind/projects/impact_forecasting/data/impact/dungog"
data_path = "C:/WorkSpace/data/impact"
filename = "damage_hazard.shp"
filepath = pjoin(data_path, filename)
gdf = gpd.read_file(filepath)
DMG_ORDER=['No Damage - 0%','Minor Impact - 1-25%', 'Major Impact - 26-50%', 'Severe Impact - 51-75%','Destroyed - 76-100%']
```

Drop any records where the hazard variables are null. This removes any records outside the extent of the BARRA-SY grid, but I do not think there are any in this case.

```
gdf.dropna(axis=0, inplace=True, subset=['PIRR', 'P1RR', 'N1RR', 'P6RR', 'PTEA', 'PSWG', 'PSMW', 'NSWG', 'PGWS'])
```

In the first instance, we simply want to model the probability of buildings being damaged or undamaged. To do this, we add a numeric field marking those buildings classified as "Major Impact", "Severe Impact" or "Destroyed".

```
damaged = np.zeros(len(gdf))
damaged[gdf['EICU_Degda'].isin(['Destroyed - 76-100%', 'Severe Impact - 51-75%', 'Major Impact - 26-50%', ])] = 1
gdf['Damaged'] = damaged
```

Define the lists of variables representing rainfall and wind speed.
```
rainfall = ['PIRR', 'P1RR', 'N1RR', 'P6RR', 'PTEA']
wind = ['PSWG', 'PSMW', 'NSWG', 'PGWS']
```

Now we cycle through the combinations of rainfall and wind variables and, for each pair, fit a quadratic discriminant analysis with the pair as predictor variables and 'Damaged' as the predictand. This produces a 5 x 4 plot, each panel representing the QDA for a unique combination of rainfall and wind variables. In each panel, the wind variable is on the horizontal axis and the rainfall variable on the vertical axis. The shading represents the probability of being in the "Damaged" category.

```
fig, axes = plt.subplots(5, 4, figsize=(24, 20), facecolor='white')
ax = axes.flatten()
xx = np.linspace(0, 100, 1000)
yy = np.linspace(0, 500, 1000)
xp, yp = np.meshgrid(xx, yy)
for i, (r, w) in enumerate(product(rainfall, wind)):
    X = np.array([gdf[w].values, gdf[r].values]).T
    y = gdf['Damaged'].values
    clf = QDA()
    clf.fit(X, y)
    Z = clf.predict_proba(np.c_[xp.ravel(), yp.ravel()])
    Z = Z[:, 1].reshape(xp.shape)
    cm = ax[i].pcolormesh(xp, yp, Z, cmap='viridis', norm=colors.Normalize(0., 1))
    cs = ax[i].contour(xp, yp, Z, [0.05, 0.1, 0.25, 0.5, 0.75], linewidths=2., colors='k')
    ax[i].clabel(cs, fmt='%.2f')
    #ax[i].colorbar(cm, label="Probability of damage")
    if i > 15:
        ax[i].set_xlabel(w)
    if i % 4 == 0:
        ax[i].set_ylabel(r)
    ax[i].set_xlim((np.floor(gdf[w].values.min()/10)*10, np.ceil(gdf[w].values.max()/10)*10))
    ax[i].set_ylim((gdf[r].values.min(), gdf[r].values.max()*1.1,))

cbar_ax = fig.add_axes([1.05, 0.15, 0.05, 0.7])
fig.colorbar(cm, cax=cbar_ax, label="Probability of damage")
fig.tight_layout()
fig.subplots_adjust(wspace=0.3, hspace=0.3, right=0.95)
```

Each panel above describes the probability of being "Damaged", given the incident wind speed measure (horizontal axis) and rainfall measure (vertical axis). Note that the bounds on each axis vary for each variable. We now consider one of these panels for more detailed investigation.
The `PGWS` and `PIRR` variables give what we would intuitively expect to see from a predictor of damage: increasing probability for increasing values of the two predictor variables. This next section focuses on determining a discriminant function that we might be able to implement in an impact-forecasting workflow.

We take a threshold of 50% probability of damage as our reference point. The figure below plots the contour that we're looking for. Note that it produces two contours, and we will take the longer of the two to derive the discriminant function.

```
fig, axes = plt.subplots(1, 1, figsize=(24, 20), facecolor='white')
ax = axes
xx = np.linspace(0, 100, 1000)
yy = np.linspace(0, 500, 1000)
xp, yp = np.meshgrid(xx, yy)
xvar = 'PGWS'
yvar = 'PIRR'
X = np.array([gdf[xvar].values, gdf[yvar].values]).T
y = gdf['Damaged'].values
clf = QDA()
clf.fit(X, y)
Z = clf.predict_proba(np.c_[xp.ravel(), yp.ravel()])
Z = Z[:, 1].reshape(xp.shape)
cm = ax.pcolormesh(xp, yp, Z, cmap='viridis', norm=colors.Normalize(0., 1))
cs = ax.contour(xp, yp, Z, [0.5], linewidths=2., colors='k')
cbar_ax = fig.add_axes([1.05, 0.15, 0.05, 0.7])
fig.colorbar(cm, cax=cbar_ax, label="Probability of damage")

def get_contour_verts(cn):
    contours = []
    # for each contour line
    for cc in cn.collections:
        paths = []
        # for each separate section of the contour line
        for pp in cc.get_paths():
            xy = []
            # for each segment of that section
            for vv in pp.iter_segments():
                xy.append(vv[0])
            paths.append(np.vstack(xy))
        contours.append(paths)
    return contours

cn = get_contour_verts(cs)[0][1][:-1]
```

The `cn` variable holds the vertices of the 0.5-probability contour in the above plot. I have selected the second contour (after first inspecting the full set of returned vertices), and the last vertex is dropped, as it is repeated (which would cause problems with the curve fitting in the next step).
```
def func(x, a, b, c):
    value = (a/(x + b)) + c
    return value

popt, pcov = curve_fit(func, cn[:,0], cn[:,1], maxfev=100000)
print(popt)
print(pcov)

fig, axes = plt.subplots(1, 1, figsize=(12, 10), facecolor='white')
plt.plot(cn[:,0], func(cn[:,0], *popt))
plt.xlabel(xvar)
plt.ylabel(yvar)

fig, axes = plt.subplots(1, 1, figsize=(12, 10), facecolor='white')
xx2 = np.linspace(0, 50, 1000)
yy2 = np.linspace(0, 500, 1000)
xp2, yp2 = np.meshgrid(xx2, yy2)
Z2 = np.c_[yp2.ravel()] - func(np.c_[xp2.ravel()], *popt)
Z2 = Z2.reshape(xp2.shape)
plt.contour(xp2, yp2, Z2, [0], linewidths=2., colors='k')
plt.contourf(xp2, yp2, Z2, levels=10, cmap='viridis',)
plt.colorbar()
plt.xlabel(xvar)
plt.ylabel(yvar)
plt.tight_layout()
plt.savefig(pjoin(data_path, 'predictor.png'))

def dmgthresh(pswg, pirr):
    v = (popt[0]/(pswg + popt[1])) + popt[2] - pirr
    return v

dmg = dmgthresh(gdf[xvar].values, gdf[yvar].values)
sns.distplot(dmg)
gdf['FcastDMG'] = dmg

outputfile = "damage_forecast.shp"
gdf.to_file(pjoin(data_path, outputfile))

g = sns.pairplot(gdf[['PIRR', 'P1RR', 'N1RR', 'P6RR', 'PTEA', 'PSWG', 'PSMW', 'NSWG', 'PGWS']], markers="+", plot_kws={'alpha':0.5})

gdf['EICU_Degda'].head()
```
# Facial Keypoints Detection

https://www.kaggle.com/c/facial-keypoints-detection

```
%matplotlib inline
%matplotlib notebook

import matplotlib.pyplot as plt
import numpy as np
from pandas.io.parsers import read_csv
from random import randrange

import torch
import torch.nn.functional as F
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader
from torchvision import models
from torchvision import transforms

import sys
sys.path.append('../')
from pycoach.coach.coach import Coach
from pycoach.datasets.facialkeypoints import *

# check whether a GPU is available
use_gpu = torch.cuda.is_available()
print("Using GPU:", use_gpu)
```

## Create a DataLoader for plotting examples

```
train_file = 'data/training.csv'

facial_dataset = FacialKeypointsDataset(train_file, train=True,
                                        transform=transforms.Compose([
                                            Normalize(),
                                            Channel(),
                                            #ToTensor(),
                                        ]))
```

## Plotting some examples

```
fig = plt.figure(figsize=(13,3))
for i in range(5):
    img = randrange(0, len(facial_dataset))
    sample = facial_dataset[img]
    ax = plt.subplot(1, 5, i + 1)
    plt.tight_layout()
    ax.set_title('Sample #{}'.format(img))
    ax.axis('off')
    plot_image(sample['image'], sample['keypoints'])
plt.show()

del facial_dataset
```

## Create DataLoaders for training and validation

```
facial_dataset_train = FacialKeypointsDataset(train_file, train=True,
                                              transform=transforms.Compose([
                                                  Normalize(),
                                                  Channel(),
                                                  ToTensor(),
                                              ]))

facial_dataset_validate = FacialKeypointsDataset(train_file, train=False,
                                                 transform=transforms.Compose([
                                                     Normalize(),
                                                     Channel(),
                                                     ToTensor(),
                                                 ]))

train_loader = torch.utils.data.DataLoader(facial_dataset_train, batch_size=64, shuffle=True, num_workers=1)
validate_loader = torch.utils.data.DataLoader(facial_dataset_validate, batch_size=64, shuffle=False, num_workers=1)
```

## Building the model

```
class MyDense(torch.nn.Module):
    def __init__(self):
        super(MyDense, self).__init__()
        self.dense1 = torch.nn.Linear(512 * 3 * 3, 1024)
        self.drop1 = torch.nn.Dropout(0.5)
        self.dense2 = torch.nn.Linear(1024, 512)
        self.drop2 = torch.nn.Dropout(0.5)
        self.dense3 = torch.nn.Linear(512, 30)

    def forward(self, x):
        x = x.view(-1, 512 * 3 * 3)
        x = F.relu(self.dense1(x))
        x = self.drop1(x)
        x = self.dense2(x)
        x = self.drop2(x)
        x = self.dense3(x)
        return x

class Net(torch.nn.Module):
    def __init__(self, model_vgg):
        super(Net, self).__init__()
        self.vgg = model_vgg
        self.dense = MyDense()

    def forward(self, x):
        x = self.vgg(x)
        x = self.dense(x)
        return x

model_vgg = models.vgg11(pretrained=True)
for param in model_vgg.parameters():
    param.requires_grad = False
model_vgg.classifier = MyDense()

if use_gpu:
    model_vgg.cuda()

print(model_vgg)
```

## Extracting features

```
def feature_extractor(loader, model):
    if use_gpu:
        features = torch.Tensor().cuda()
        feature_labels = torch.Tensor().cuda()
    else:
        features = torch.Tensor()
        feature_labels = torch.Tensor()

    for batches, data in enumerate(loader):
        if isinstance(data, dict):
            inputs, labels = data.values()
        else:
            inputs, labels = data
        if use_gpu:
            inputs, labels = Variable(inputs.cuda()), \
                             Variable(labels.cuda())
        else:
            inputs, labels = Variable(inputs), Variable(labels)
        features = torch.cat((features, model(inputs).data))
        feature_labels = torch.cat((feature_labels, labels.data))
    return (features, feature_labels)

features_train, labels_train = feature_extractor(train_loader, model_vgg.features)
features_validate, labels_validate = feature_extractor(validate_loader, model_vgg.features)

features_train_dataset = torch.utils.data.TensorDataset(features_train.cpu(), labels_train.cpu())
features_validate_dataset = torch.utils.data.TensorDataset(features_validate.cpu(), labels_validate.cpu())
```

## DataLoaders for the features

```
features_train_loader = torch.utils.data.DataLoader(features_train_dataset, batch_size=64, shuffle=True, num_workers=1)
features_validate_loader = torch.utils.data.DataLoader(features_validate_dataset, batch_size=64, shuffle=False, num_workers=1)
```

## Compiling and training

```
from pycoach.coach.callbacks import Plotter

loaders = {'train': features_train_loader, 'validate': features_validate_loader}
optimizer = torch.optim.Adam(model_vgg.classifier.parameters())
loss_fn = torch.nn.MSELoss()
epochs = 50

# callbacks
plotter = Plotter()

coach = Coach(model_vgg.classifier, loaders, optimizer, loss_fn)
coach.train(epochs, verbose=0, callbacks=[plotter])
```

## Saving the model

```
coach.save('./net.pytorch')
```

## Evaluating the network on the validation set

```
coach.load('./net.pytorch')
coach.model = model_vgg
y_predicted = coach.predict(validate_loader)
if use_gpu:
    y_predicted = y_predicted.cpu()
```

## Visualizing some predictions

```
fig = plt.figure(figsize=(13,3))
for i in range(5):
    img = randrange(0, len(facial_dataset_validate))
    sample = facial_dataset_validate[img]
    ax = plt.subplot(1, 5, i + 1)
    plt.tight_layout()
    ax.set_title('Sample #{}'.format(img))
    ax.axis('off')
    plot_image(sample['image'].numpy().transpose(1,2,0),
               sample['keypoints'].numpy(),
               y_predicted[img].numpy())
plt.show()
```
``` import numba import numpy as np import pandas as pd from matplotlib import pyplot as plt from mol2vec import features from rdkit import Chem from rdkit.Chem.Draw import IPythonConsole from rdkit.Chem import AllChem from gensim.models import word2vec from sklearn.linear_model import LinearRegression from sklearn.decomposition import PCA from sklearn.neighbors import KNeighborsRegressor from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.gaussian_process.kernels import RBF ``` # Linear regression In this notebook, we're going to perform some rudimentary checks to make sure our `mol2vec` model is working in a way we expect it to by using the predicted vectors/features to predict the abundances of cyanopolyynes in TMC-1. The idea behind this is that the cyanopolyynes extend mainly along one dimension/feature, which is the number of carbons. The first thing we will do is to use principal components analysis to extract the most important features, which in this case should correlate with the number of carbon atoms, and then use the lower dimensionality features to predict the column densities of the molecules. This is also a good opportunity to test out the models we'll be considering: linear regression, $k$-nearest neighbors, and a Gaussian process with a simple radial basis kernel. 
``` model = word2vec.Word2Vec.load("../../models/mol2vec_model.pkl") @numba.jit(fastmath=True) def cosine_similarity(A, B): return np.dot(A, B) / (np.linalg.norm(A) * np.linalg.norm(B)) @numba.jit(fastmath=True) def pairwise_similarity(vectors): n = len(vectors) matrix = np.zeros((n, n), dtype=np.float32) for i in range(n): for j in range(n): matrix[i,j] = cosine_similarity(vectors[i], vectors[j]) return matrix def smi_to_vector(smi: str, model): try: mol = Chem.MolFromSmiles(smi) except RuntimeError: mol = Chem.MolFromSmiles(smi, sanitize=False) mol.UpdatePropertyCache(strict=False) Chem.GetSymmSSSR(mol) # generate a sentence from rdkit molecule sentence = features.mol2alt_sentence(mol, radius=1) # generate vector embedding from sentence and model vector = features.sentences2vec([sentence], model) return vector tmc1_df = pd.read_pickle("../../data/interim/tmc1_table_vecs.pkl") ``` ## See how PCA looks Principal component analysis is a way to reduce the dimensionality/number of features: we have 300 in `mol2vec`, and we're not sure which ones correspond to the "number of carbons" dimension. The way PCA works is by determining projections of the feature space where the explained variance is maximized: in other words, _which axes/dimensions explain the most variation in the data_. For our select polyyne chains, the biggest difference between them all is the length of the carbon chain. ``` molecules = ["HCN", "HC3N", "HC5N", "HC7N", "HC9N"] cyanopolyynes = tmc1_df.loc[tmc1_df["Formula"].isin(molecules)] cyanopolyynes # take 3 components: the number of components has to be fewer than the number of samples pca_model = PCA(n_components=3) cp_array = np.vstack(cyanopolyynes["Vecs"].to_list()) # the 2D array is number of samples x number of features cp_array.shape # fit the PCA model result = pca_model.fit(cp_array) # this corresponds to the percentage of explained variance with the component # number. 
PCA works to find the minimum number of components that explains the # most variation in your data result.explained_variance_ratio_ # once we've fit the PCA model, we can transform the 300 dimensional vectors into # the PCA projection reduced = result.transform(cp_array) # the result is number of samples x number of dimensions reduced.shape # if we plot out the first dimension, which we think should # correspond to the number of carbons. This can be a little abstract to think about fig, ax = plt.subplots() # HCN through HC9N contain 1, 3, 5, 7, and 9 carbon atoms, one per row of `reduced` n_carbon = np.arange(1, 11, 2) ax.scatter(n_carbon, reduced[:,0]) # annotate the molecule names for index, (_, row) in enumerate(cyanopolyynes.iterrows()): ax.text( n_carbon[index], reduced[index,0] + 1.5, row["Molecule"], horizontalalignment="center" ) fig.tight_layout() ax.set(ylabel="PCA1", xlabel="Number of carbons", ylim=[-35., 35.]) for spine in ["top", "right"]: ax.spines[spine].set_visible(False) ``` This checks out! The first dimension indeed correlates with the number of carbons in the chain. This is just a sanity check to make sure the features our `mol2vec` model is producing actually make sense to us. ## Linear regression with carbon chain abundances To test how well this works, we can see whether the cyanopolyyne chain abundances decrease linearly. ``` model = LinearRegression() # This is just to help conceptualize what we're doing X = reduced Y = cyanopolyynes["Column density (cm^-2)"] # fit the linear model fit_result = model.fit(X, Y) ``` ### Plot the linear result ``` pred_Y = fit_result.predict(X) fig, ax = plt.subplots() ax.scatter(X[:,0], pred_Y) ``` ## k-nearest neighbors ``` n_neighbors = 3 knn_model = KNeighborsRegressor(n_neighbors=n_neighbors) knn_fit = knn_model.fit(X, Y) knn_pred = knn_fit.predict(X) ``` ## Gaussian process ``` # The RBF kernel has a "length_scale" parameter that defines how uncertain we think # our data is. This is a tunable parameter. 
kernel = RBF(length_scale=0.3) gp_model = GaussianProcessRegressor(kernel) gp_fit = gp_model.fit(X, Y) gp_pred = gp_fit.predict(X) ``` ## Plot up all the results on the same graph
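This copy ends at the heading above without the plotting cell, so here is a minimal sketch of such a combined plot. The arrays below are illustrative placeholders standing in for the notebook's `Y`, `pred_Y`, `knn_pred`, and `gp_pred`; they are not real TMC-1 column densities:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this also runs headless
import matplotlib.pyplot as plt

# Illustrative placeholders for the observed and fitted column densities
n_carbon = np.arange(1, 11, 2)
observed = np.array([1e14, 2e13, 6e13, 2e13, 2e12])
predictions = {
    "Linear": observed * 1.2,
    "kNN": observed * 0.8,
    "GP": observed * 1.05,
}

fig, ax = plt.subplots()
ax.scatter(n_carbon, observed, color="k", zorder=3, label="Observed")
for name, pred in predictions.items():
    ax.plot(n_carbon, pred, marker="o", label=name)
# log scale because the abundances span orders of magnitude
ax.set(xlabel="Number of carbons", ylabel="Column density (cm$^{-2}$)", yscale="log")
ax.legend()
fig.tight_layout()
fig.savefig("model_comparison.png")
```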
# Set-up ## Import Datalogue libraries Note, you'll need to have downloaded and installed the Datalogue SDK before this step will work. Right now, to do so you will need to get access through Artifactory. ``` # Import Datalogue libraries from datalogue import * from datalogue.version import __version__ from datalogue.models.ontology import * from datalogue.models.datastore_collection import * from datalogue.models.datastore import * from datalogue.models.datastore import GCSDatastoreDef from datalogue.models.credentials import * from datalogue.models.stream import * from datalogue.models.transformations import * from datalogue.models.transformations.structure import * from datalogue.dtl import Dtl, DtlCredentials from datalogue.models.training import DataRef # Import Datalogue Bag of Tricks from DTLBagOTricks import DTL as DTLHelper # Import other useful libraries from datetime import datetime, timedelta from os import environ import pandas from IPython.display import Image # Checks the version of the SDK is correct # The expected version is 0.28.3 # If the SDK is not installed, run `! pip install datalogue` and restart the Jupyter Notebook kernel # If the wrong versions is installed, run `! pip install datalogue --upgrade` and restart the Jupyter Notebook kernel __version__ # Set host, username and password variables datalogue_host = "https://internal.dtl.systems" # for connecting to internal (note) # datalogue_host = "https://internal.dtl.systems" # for connecting to internal (note) # datalogue_host = "http://10.2.161.119:3000" # for connecting to Eric's DGX #email = environ.get("DTL_EMAIL") email = "chrisr@datalogue.io" #password = environ.get("DTL_PASSWORD") password = "StreudelSauce1!" 
# Log in to Datalogue BOT = DTLHelper(datalogue_host, email, password) dtl = BOT.dtl # Expected output Datalogue v0.28.3 # "Logged in '[host location]' with '[username]' account)" # Deploy the model before the tidy up to give it some time to be ready: from datalogue.models.training import * import uuid OntologyId = '395a2b81-86c5-4f17-9d5f-6f17d4ae84f0' trainingId = dtl.training.get_trainings(uuid.UUID(str(OntologyId)))[0].id print(OntologyId) print(trainingId) dtl.training.deploy(trainingId, OntologyId) # First, let's clean up the assets this workbook creates from previous runs # Warning! this will clean all your datastores and data collections and credentials #BOT.server_summary() # Clear Datastores and Datastore Collections for store in dtl.datastore.list(): # print(store.name, ',', store.name[:5]) if (store.name == 'dtl-demo npl'): targetDS = store.id if (store.name[:5] == 'demo-'): dtl.datastore.delete(store.id) for store in dtl.datastore_collection.list(): # print(store.name) if (store.name[:5] == 'demo-'): dtl.datastore_collection.delete(store.id) # Clear data pipelines for StreamCollection in dtl.stream_collection.list(): print(StreamCollection, '\n') if (StreamCollection.name[:5] == 'demo-'): dtl.stream_collection.delete(StreamCollection.id) if (StreamCollection.name == 'Unnamed Pipeline'): dtl.stream_collection.delete(StreamCollection.id) ## Clear ontologies for Ontology in dtl.ontology.list(): # dtl.ontology.delete(ontology.id) if (Ontology.name[:12] == 'NPL Ontology'): OntologyID = Ontology.id print(targetDS) # truncate the postgres table before writing data import psycopg2 #from config import Config def connect(): """ Connect to the PostgreSQL database server """ conn = None try: # read connection parameters #params = config() # connect to the PostgreSQL server print('Connecting to the PostgreSQL database...') #conn = psycopg2.connect(**params) conn = psycopg2.connect(host="34.73.161.131",database="demo", user="postgres", 
password="devout-north-solitude") # create a cursor cur = conn.cursor() # execute a statement print('PostgreSQL database version:') cur.execute('SELECT version()') # display the PostgreSQL database server version db_version = cur.fetchone() print(db_version) # truncate the table cur.execute('TRUNCATE TABLE npl') # close the communication with the PostgreSQL cur.close() except (Exception, psycopg2.DatabaseError) as error: print(error) finally: if conn is not None: conn.commit() conn.close() print('Database connection closed.') if __name__ == '__main__': connect() ``` ## 2. Read Source Files from S3 bucket ``` from boto.s3.connection import S3Connection conn = S3Connection('AKIAIXM6CXHGHC62R7GA','Gcb34qctsvPoQJGGDrXzmwMbyaCZOg6zY1RFOVQO') bucket = conn.get_bucket('datalogue-demo') keys = ["store_name", "URL"] npl_data = [] for key in bucket.list(): if 'Loan' in key.name: url="https://datalogue-demo.s3.amazonaws.com/" + key.name values = ["demo-"+key.name, url] npl_data.append(dict(zip(keys, values))) print(type(npl_data)) print(type(npl_data[0])) print("\nCSV Customers Sources to connect to:\n" "-------------------------") for data_store in npl_data: print("➜ " + data_store["store_name"]) print("\n") ``` ## 3. Create datastore connections for each file in S3 bucket ``` current_stores = [] for data_store in npl_data: data_store["datastore_object"] = dtl.datastore.create( Datastore( data_store["store_name"], HttpDatastoreDef(data_store["URL"], FileFormat.Csv), ) ) print(data_store) current_stores.append(data_store["datastore_object"]) print(type(current_stores)) print(type(current_stores[0])) ``` ### 3b. Create datastore for RDBMS target ``` # host: 34.74.11.127 (use jdbc:postgresql://34.74.11.127:5432/demo for creating target store) # user: postgres # pw: L8am0pO5zjJrFm2O # bug in SDK for v<1.0; to be updated here but created in GUI for now ``` ## 4. 
Collecting data stores into a collection This is just used for organization, and uses the command `dtl.datastore_collection.create`. ``` print(type(npl_data)) print(type(npl_data[0])) print(npl_data[0]) for test in npl_data: # print(type(test)) print(test["datastore_object"].id) npl_collection = DatastoreCollection( name ="demo-NPL Collection", storeIds = [Datastore["datastore_object"].id for Datastore in npl_data], description = "NPL tape data of various formats" ) npl_collection2 = dtl.datastore_collection.create(npl_collection) ``` ## 5. Creating a stream ``` my_output_store = dtl.datastore.get(targetDS) print(type(my_output_store)) print(my_output_store) ``` #### Sample pipeline ``` # Define the target output schema transformation using 'structure' std_schema = Structure([ ClassNodeDescription( path = ["LoanID_"], tag = "Loan ID", pick_strategy = PickStrategy.HighScore, data_type = DataType.String ), ClassNodeDescription( path = ["Unpaid_Principal_"], tag = "Unpaid Principal Balance", pick_strategy = PickStrategy.HighScore, data_type = DataType.String ), ClassNodeDescription( path = ["Orig_Val_"], tag = "Origination Value", pick_strategy = PickStrategy.HighScore, data_type = DataType.String ), ClassNodeDescription( path = ["Int_Type_"], tag = "Interest Type", pick_strategy = PickStrategy.HighScore, data_type = DataType.String ), ClassNodeDescription( path = ["Mat_Date_"], tag = "Maturity Date", pick_strategy = PickStrategy.HighScore, data_type = DataType.String ), ClassNodeDescription( path = ["Currency_"], tag = "Currency", pick_strategy = PickStrategy.HighScore, data_type = DataType.String ), ClassNodeDescription( path = ["Orig_Country_"], tag = "Country of Origination", pick_strategy = PickStrategy.HighScore, data_type = DataType.String ), ClassNodeDescription( path = ["Days_Past_Due_"], tag = "Days Past Due", pick_strategy = PickStrategy.HighScore, data_type = DataType.String ) ] ) from datalogue.models.training import * import uuid OntologyID = 
'395a2b81-86c5-4f17-9d5f-6f17d4ae84f0' modelUuid = dtl.training.get_trainings(uuid.UUID(str(OntologyID)))[0].id print(modelUuid) # Define classify transformation #from datalogue.models.transformations import ReplaceLabel #tx_definition = Definition( # (List[Transformation], pipelines: List['Definition'], target_datastore ) # [ # Classify(training_id = modelUuid, use_context=True, include_classes=False, include_scores=False), # std_schema # ], # List of transformations # [], # pipelines list # my_output_store, # target_datastore # ) x = dtl.training.get_trainings('395a2b81-86c5-4f17-9d5f-6f17d4ae84f0') print(type(x)) print(x[0].id) # Define classify transformation from datalogue.models.transformations.classify import Classifier, MLMethod tx_definition = Definition( # (List[Transformation], pipelines: List['Definition'], target_datastore ) [ Classify(Classifier([MLMethod('f1175acb-f2e8-4175-a67f-3b67d57a1623')])), std_schema ], # List of transformations [], # pipelines list my_output_store, # target_datastore ) print(type(tx_definition)) print(tx_definition) # Define n stream(s), where n is number of datastore connections created from S3 bucket scan n = len(current_stores) i = 1 list_of_streams = [] for i in range(n): stream = Stream(current_stores[i], [tx_definition]) i += 1 list_of_streams.append(stream) print(type(list_of_streams)) print(type(list_of_streams[0])) # Put the streams in a collection stream_collection = dtl.stream_collection.create( list_of_streams, "demo-NPL pipeline" ) # Run the Collection dtl.stream_collection.run(stream_collection.id) ```
SOP054 - Uninstall azdata command line interface ================================================ Steps ----- ### Common functions Define helper functions used in this notebook. ``` # Define `run` function for transient fault handling, hyperlinked suggestions, and scrolling updates on Windows import sys import os import re import platform import shlex import shutil import datetime from subprocess import Popen, PIPE from IPython.display import Markdown def run(cmd, return_output=False, no_output=False, error_hints=[], retry_hints=[], retry_count=0): """ Run shell command, stream stdout, print stderr and optionally return output """ max_retries = 5 install_hint = None output = "" retry = False # shlex.split is required on bash and for Windows paths with spaces # cmd_actual = shlex.split(cmd) # When running python, use the python in the ADS sandbox ({sys.executable}) # if cmd.startswith("python "): cmd_actual[0] = cmd_actual[0].replace("python", sys.executable) # On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail # with: # # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128) # # Setting it to a default value of "en_US.UTF-8" enables pip install to complete # if platform.system() == "Darwin" and "LC_ALL" not in os.environ: os.environ["LC_ALL"] = "en_US.UTF-8" python_retry_hints, python_error_hints, install_hint = python_hints() retry_hints += python_retry_hints error_hints += python_error_hints if (cmd.startswith("kubectl ")): kubectl_retry_hints, kubectl_error_hints, install_hint = kubectl_hints() retry_hints += kubectl_retry_hints error_hints += kubectl_error_hints if (cmd.startswith("azdata ")): azdata_retry_hints, azdata_error_hints, install_hint = azdata_hints() retry_hints += azdata_retry_hints error_hints += azdata_error_hints # Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this # seems 
to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound) # # NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split. # which_binary = shutil.which(cmd_actual[0]) if which_binary == None: if install_hint is not None: display(Markdown(f'SUGGEST: Use {install_hint} to resolve this issue.')) raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") else: cmd_actual[0] = which_binary start_time = datetime.datetime.now().replace(microsecond=0) print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)") print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})") print(f" cwd: {os.getcwd()}") # Command-line tools such as CURL and AZDATA HDFS commands output # scrolling progress bars, which causes Jupyter to hang forever, to # workaround this, use no_output=True # try: if no_output: p = Popen(cmd_actual) else: p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1) with p.stdout: for line in iter(p.stdout.readline, b''): line = line.decode() if return_output: output = output + line else: if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file regex = re.compile(' "(.*)"\: "(.*)"') match = regex.match(line) if match: if match.group(1).find("HTML") != -1: display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"')) else: display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"')) else: print(line, end='') p.wait() except FileNotFoundError as e: if install_hint is not None: display(Markdown(f'SUGGEST: Use {install_hint} to resolve this issue.')) raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e if not no_output: for line in iter(p.stderr.readline, b''): line_decoded = line.decode() # azdata emits a single empty line to stderr when doing an hdfs cp, don't # print this empty "ERR:" as it confuses. 
# if line_decoded == "": continue print(f"ERR: {line_decoded}", end='') for error_hint in error_hints: if line_decoded.find(error_hint[0]) != -1: display(Markdown(f'SUGGEST: Use [{error_hint[2]}]({error_hint[1]}) to resolve this issue.')) for retry_hint in retry_hints: if line_decoded.find(retry_hint) != -1: if retry_count < max_retries: print(f"RETRY: {retry_count} (due to: {retry_hint})") retry_count = retry_count + 1 output = run(cmd, return_output=return_output, error_hints=error_hints, retry_hints=retry_hints, retry_count=retry_count) if return_output: return output else: return elapsed = datetime.datetime.now().replace(microsecond=0) - start_time if p.returncode != 0: raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n') print(f'\nSUCCESS: {elapsed}s elapsed\n') if return_output: return output def python_hints(): retry_hints = [] error_hints = [ ["""Library not loaded: /usr/local/opt/unixodbc""", """../common/sop008-distcp-backup-to-adl-gen2.ipynb""", """SOP008 - Backup HDFS files to Azure Data Lake Store Gen2 with distcp"""], ["""WARNING: You are using pip version""", """../install/sop040-upgrade-pip.ipynb""", """SOP040 - Upgrade pip in ADS Python sandbox"""] ] return retry_hints, error_hints, None print('Common functions defined successfully.') ``` ### Uninstall azdata CLI ``` import sys run(f'python -m pip uninstall -r https://aka.ms/azdata -y') ``` ### Pip list Verify there are no azdata modules in the list ``` run(f'python -m pip list') print('Notebook execution complete.') ```
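The `run()` helper above mixes output streaming, hint matching, and retries in one function; the transient-fault retry idea at its core can be distilled into a short standalone sketch (the function name and retry markers below are our own, not part of the SOP):

```python
import shlex
import subprocess
import sys

def run_with_retry(cmd, retries=2, retry_markers=("connection reset", "timed out")):
    """Re-run a shell command when stderr contains a known transient marker."""
    for attempt in range(retries + 1):
        p = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
        if p.returncode == 0:
            return p.stdout
        transient = any(m in p.stderr.lower() for m in retry_markers)
        if not transient or attempt == retries:
            raise SystemExit(f"command failed with exit code {p.returncode}")

# Use the current interpreter, mirroring the notebook's {sys.executable} trick
out = run_with_retry(f'{sys.executable} -c "print(6 * 7)"')
print(out.strip())  # → 42
```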
# Multi-processing example We’ll start with code that is clear, simple, and executed top-down. It’s easy to develop and incrementally testable: ``` import requests from multiprocessing.pool import ThreadPool as Pool sites = [ 'https://github.com/veit/jupyter-tutorial/', 'https://jupyter-tutorial.readthedocs.io/en/latest/', 'https://github.com/veit/pyviz-tutorial/', 'https://pyviz-tutorial.readthedocs.io/de/latest/', 'https://cusy.io/en', ] def sitesize(url): with requests.get(url) as u: return url, len(u.content) pool = Pool(10) for result in pool.imap_unordered(sitesize, sites): print(result) ``` > **Note 1:** A good development strategy is to use [map](https://docs.python.org/3/library/functions.html#map) to test your code in a single process and thread before moving to multi-processing. > **Note 2:** In order to better assess when `ThreadPool` and when process `Pool` should be used, here are some rules of thumb: > > * For CPU-heavy jobs, `multiprocessing.pool.Pool` should be used. Usually we start here with twice the number of CPU cores for the pool size, but at least 4. > > * For I/O-heavy jobs, `multiprocessing.pool.ThreadPool` should be used. Usually we start here with five times the number of CPU cores for the pool size. > > * If we use Python 3 and do not need an interface identical to `pool`, we use [concurrent.futures.Executor](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Executor) instead of `multiprocessing.pool.ThreadPool`; it has a simpler interface and was designed for threads from the start. Since it returns instances of `concurrent.futures.Future`, it is compatible with many other libraries, including `asyncio`. > > * For CPU- and I/O-heavy jobs, we prefer `multiprocessing.Pool` because it provides better process isolation. 
``` import requests from multiprocessing.pool import ThreadPool as Pool sites = [ 'https://github.com/veit/jupyter-tutorial/', 'https://jupyter-tutorial.readthedocs.io/en/latest/', 'https://github.com/veit/pyviz-tutorial/', 'https://pyviz-tutorial.readthedocs.io/de/latest/', 'https://cusy.io/en', ] def sitesize(url): with requests.get(url) as u: return url, len(u.content) for result in map(sitesize, sites): print(result) ``` ## What can be parallelised? ### Amdahl’s law > The increase in speed is mainly limited by the sequential part of the problem, since its execution time cannot be reduced by parallelisation. In addition, parallelisation creates additional costs, such as for communication and synchronisation of the processes. In our example, the following tasks can only be processed serially: * UDP DNS request request for the URL * UDP DNS response * Socket from the OS * TCP-Connection * Sending the HTTP request for the root resource * Waiting for the TCP response * Counting characters on the site ``` import requests from multiprocessing.pool import ThreadPool as Pool sites = [ 'https://github.com/veit/jupyter-tutorial/', 'https://jupyter-tutorial.readthedocs.io/en/latest/', 'https://github.com/veit/pyviz-tutorial/', 'https://pyviz-tutorial.readthedocs.io/de/latest/', 'https://cusy.io/en', ] def sitesize(url): with requests.get(url, stream=True) as u: return url, len(u.content) pool = Pool(4) for result in pool.imap_unordered(sitesize, sites): print(result) ``` > **Note:** [imap_unordered](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.imap_unordered) is used to improve responsiveness. However, this is only possible because the function returns the argument and result as a tuple. ## Tips * Don’t make too many trips back and forth If you get too many iterable results, this is a good indicator of too many trips, such as in ```python >>> def sitesize(url, start): ... req = urllib.request.Request() ... 
req.add_header('Range:%d-%d' % (start, start+1000)) ... u = urllib.request.urlopen(url, req) ... block = u.read() ... return url, len(block) ``` * Make relevant progress on every trip Once a worker receives a task, it should make significant progress on it rather than getting bogged down in tiny intermediate steps. The following example illustrates intermediate steps that are too small: ```python >>> def sitesize(url, results): ... with requests.get(url, stream=True) as u: ... while True: ... line = u.iter_lines() ... results.put((url, len(line))) ``` * Don't send or receive too much data The following example unnecessarily increases the amount of data: ```python >>> def sitesize(url): ... with requests.get(url) as u: ... return url, u.content ```
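Note 2 above recommends `concurrent.futures` when a `Pool`-identical interface is not needed; here is a minimal offline sketch of the same `sitesize` pattern (the `fake_pages` dict is our stand-in for real HTTP requests, so the example runs without network access):

```python
from concurrent.futures import ThreadPoolExecutor

# Offline stand-in for requests.get(): maps a URL to a fake page size
fake_pages = {
    'https://jupyter-tutorial.readthedocs.io/en/latest/': 5678,
    'https://cusy.io/en': 1234,
}

def sitesize(url):
    return url, fake_pages[url]

# Unlike Pool.imap_unordered, Executor.map preserves the input order
with ThreadPoolExecutor(max_workers=5) as executor:
    results = list(executor.map(sitesize, fake_pages))

for result in results:
    print(result)
```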
## 1. Introduction <p><img src="https://assets.datacamp.com/production/project_1197/img/google_play_store.png" alt="Google Play logo"></p> <p>Mobile apps are everywhere. They are easy to create and can be very lucrative from a business standpoint. Specifically, Android is expanding as an operating system and has captured more than 74% of the total market<sup><a href="https://www.statista.com/statistics/272698/global-market-share-held-by-mobile-operating-systems-since-2009">[1]</a></sup>. </p> <p>The Google Play Store apps data has enormous potential to facilitate data-driven decisions and insights for businesses. In this notebook, we will analyze the Android app market by comparing ~10k apps in Google Play across different categories. We will also use the user reviews to draw a qualitative comparison between the apps.</p> <p>The dataset you will use here was scraped from Google Play Store in September 2018 and was published on <a href="https://www.kaggle.com/lava18/google-play-store-apps">Kaggle</a>. Here are the details: <br> <br></p> <div style="background-color: #efebe4; color: #05192d; text-align:left; vertical-align: middle; padding: 15px 25px 15px 25px; line-height: 1.6;"> <div style="font-size:20px"><b>datasets/apps.csv</b></div> This file contains all the details of the apps on Google Play. There are 9 features that describe a given app. <ul> <li><b>App:</b> Name of the app</li> <li><b>Category:</b> Category of the app. 
Some examples are: ART_AND_DESIGN, FINANCE, COMICS, BEAUTY etc.</li> <li><b>Rating:</b> The current average rating (out of 5) of the app on Google Play</li> <li><b>Reviews:</b> Number of user reviews given on the app</li> <li><b>Size:</b> Size of the app in MB (megabytes)</li> <li><b>Installs:</b> Number of times the app was downloaded from Google Play</li> <li><b>Type:</b> Whether the app is paid or free</li> <li><b>Price:</b> Price of the app in US$</li> <li><b>Last Updated:</b> Date on which the app was last updated on Google Play </li> </ul> </div> <div style="background-color: #efebe4; color: #05192d; text-align:left; vertical-align: middle; padding: 15px 25px 15px 25px; line-height: 1.6;"> <div style="font-size:20px"><b>datasets/user_reviews.csv</b></div> This file contains a random sample of 100 <i><a href="https://www.androidpolice.com/2019/01/21/google-play-stores-redesigned-ratings-and-reviews-section-lets-you-easily-filter-by-star-rating/">most helpful first</a></i> user reviews for each app. The text in each review has been pre-processed and passed through a sentiment analyzer. <ul> <li><b>App:</b> Name of the app on which the user review was provided. Matches the `App` column of the `apps.csv` file</li> <li><b>Review:</b> The pre-processed user review text</li> <li><b>Sentiment Category:</b> Sentiment category of the user review - Positive, Negative or Neutral</li> <li><b>Sentiment Score:</b> Sentiment score of the user review. It lies between [0,1]. 
A higher score denotes a more positive sentiment.</li> </ul> </div> <p>From here on, it will be your task to explore and manipulate the data until you are able to answer the three questions described in the instructions panel.<br></p> ## Task 1: ### Import necessary libraries ``` import pandas as pd import numpy as np ``` ### Importing and cleaning the dataset ``` # The path of the file is 'datasets/apps.csv' apps = pd.read_csv('datasets/apps.csv') # Clean the Installs column clean_chr = [',' , '+'] for chr in clean_chr: apps['Installs'] = apps['Installs'].apply(lambda x: x.replace(chr, '')) # Change the data type for Installs apps['Installs'] = apps['Installs'].astype(np.int64) # Check the data type for Installs apps.head() ``` ## Task 2: ### Get the mean price and rating for each app category ``` # Get the average rating and average price for each app category app_category_info = apps.groupby('Category').mean()[['Rating', 'Price']] app_category_info.reset_index(inplace=True) ``` ### Get the number of apps in each category and create the final DataFrame ``` # Get the number of apps in each category num_of_apps = pd.DataFrame(apps['Category'].value_counts()) app_category_info = pd.merge(left=app_category_info, right=num_of_apps, left_on='Category', right_on=num_of_apps.index, how='inner') app_category_info = app_category_info[['Category', 'Rating', 'Price', 'Category_y']] # Rename the columns app_category_info.rename(columns={'Rating':'Average rating', 'Price':'Average price', 'Category_y': 'Number of apps'}, inplace=True) app_category_info.head() ``` ## Task 3: ### Combining the app dataset and the sentiment dataset ``` # Read the sentiment dataset and combine it with the app dataset fin_apps = apps[(apps['Category'] == 'FINANCE') & (apps['Type'] == 'Free')] sen_df = pd.read_csv('datasets/user_reviews.csv') merge_df = pd.merge(left=fin_apps, right=sen_df, on='App', how='inner') ``` ### Calculate the mean value of 
app and get the result ``` merge_df = merge_df.groupby('App').mean() merge_df = merge_df[['Sentiment Score']] # First step is to select the top 10 apps top_10 = merge_df.sort_values('Sentiment Score', ascending=False).head(10) # Then sort the apps alphabetically top_10_user_feedback = top_10.sort_values('App', ascending=True) top_10_user_feedback ```
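As a side note on Task 1, the character-by-character loop over `clean_chr` can be replaced by one vectorised `str.replace` with a regular expression. A sketch on a tiny stand-in frame (the real notebook reads `datasets/apps.csv`):

```python
import numpy as np
import pandas as pd

# Tiny stand-in for datasets/apps.csv
apps = pd.DataFrame({'App': ['A', 'B'], 'Installs': ['1,000+', '500+']})

# Strip ',' and '+' in a single pass, then cast to integers
apps['Installs'] = (apps['Installs']
                    .str.replace('[,+]', '', regex=True)
                    .astype(np.int64))

print(apps['Installs'].tolist())  # → [1000, 500]
```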
``` #Dataset comes from here: #https://github.com/Mashimo/datascience/raw/master/datasets/train_catvnoncat.h5 # This code along with explanation is here: # https://towardsdatascience.com/coding-neural-network-forward-propagation-and-backpropagtion-ccf8cf369f76 import h5py # Store huge amounts of numerical data, and easily manipulate that data from NumPy. For example, you can slice into multi-terabyte datasets stored on disk, as if they were real NumPy arrays. # https://www.h5py.org/ import matplotlib.pyplot as plt import numpy as np import seaborn as sns def initialize_parameters(layers_dims): np.random.seed(1) parameters = {} L = len(layers_dims) for l in range(1, L): parameters["W" + str(l)] = np.random.randn( layers_dims[l], layers_dims[l - 1]) * 0.01 parameters["b" + str(l)] = np.zeros((layers_dims[l], 1)) assert parameters["W" + str(l)].shape == ( layers_dims[l], layers_dims[l - 1]) assert parameters["b" + str(l)].shape == (layers_dims[l], 1) return parameters def sigmoid(Z): A = 1 / (1 + np.exp(-Z)) return A, Z def tanh(Z): A = np.tanh(Z) return A, Z def relu(Z): A = np.maximum(0, Z) return A, Z def leaky_relu(Z): A = np.maximum(0.1 * Z, Z) return A, Z # Plot the 4 activation functions z = np.linspace(-10, 10, 100) # Computes post-activation outputs A_sigmoid, z = sigmoid(z) A_tanh, z = tanh(z) A_relu, z = relu(z) A_leaky_relu, z = leaky_relu(z) # Plot sigmoid plt.figure(figsize=(12, 8)) plt.subplot(2, 2, 1) plt.plot(z, A_sigmoid, label="Function") plt.plot(z, A_sigmoid * (1 - A_sigmoid), label = "Derivative") plt.legend(loc="upper left") plt.xlabel("z") plt.ylabel(r"$\frac{1}{1 + e^{-z}}$") plt.title("Sigmoid Function", fontsize=16) # Plot tanh plt.subplot(2, 2, 2) plt.plot(z, A_tanh, 'b', label = "Function") plt.plot(z, 1 - np.square(A_tanh), 'r',label="Derivative") plt.legend(loc="upper left") plt.xlabel("z") plt.ylabel(r"$\frac{e^z - e^{-z}}{e^z + e^{-z}}$") plt.title("Hyperbolic Tangent Function", fontsize=16) # plot relu plt.subplot(2, 2, 3) plt.plot(z, A_relu, 'g') 
plt.xlabel("z") plt.ylabel(r"$max\{0, z\}$") plt.title("ReLU Function", fontsize=16) # plot leaky relu plt.subplot(2, 2, 4) plt.plot(z, A_leaky_relu, 'y') plt.xlabel("z") plt.ylabel(r"$max\{0.1z, z\}$") plt.title("Leaky ReLU Function", fontsize=16) plt.tight_layout(); A_sigmoid, z # Define helper functions that will be used in L-model forward prop def linear_forward(A_prev, W, b): Z = np.dot(W, A_prev) + b cache = (A_prev, W, b) return Z, cache def linear_activation_forward(A_prev, W, b, activation_fn): assert activation_fn == "sigmoid" or activation_fn == "tanh" or \ activation_fn == "relu" if activation_fn == "sigmoid": Z, linear_cache = linear_forward(A_prev, W, b) A, activation_cache = sigmoid(Z) elif activation_fn == "tanh": Z, linear_cache = linear_forward(A_prev, W, b) A, activation_cache = tanh(Z) elif activation_fn == "relu": Z, linear_cache = linear_forward(A_prev, W, b) A, activation_cache = relu(Z) assert A.shape == (W.shape[0], A_prev.shape[1]) cache = (linear_cache, activation_cache) return A, cache def L_model_forward(X, parameters, hidden_layers_activation_fn="relu"): A = X caches = [] L = len(parameters) // 2 for l in range(1, L): A_prev = A A, cache = linear_activation_forward( A_prev, parameters["W" + str(l)], parameters["b" + str(l)], activation_fn=hidden_layers_activation_fn) caches.append(cache) AL, cache = linear_activation_forward( A, parameters["W" + str(L)], parameters["b" + str(L)], activation_fn="sigmoid") caches.append(cache) assert AL.shape == (1, X.shape[1]) return AL, caches # Compute cross-entropy cost def compute_cost(AL, y): m = y.shape[1] cost = - (1 / m) * np.sum( np.multiply(y, np.log(AL)) + np.multiply(1 - y, np.log(1 - AL))) return cost def sigmoid_gradient(dA, Z): A, Z = sigmoid(Z) dZ = dA * A * (1 - A) return dZ def tanh_gradient(dA, Z): A, Z = tanh(Z) dZ = dA * (1 - np.square(A)) return dZ def relu_gradient(dA, Z): A, Z = relu(Z) dZ = np.multiply(dA, np.int64(A > 0)) return dZ # define helper functions that will be used in 
L-model back-prop
def linear_backward(dZ, cache):
    A_prev, W, b = cache
    m = A_prev.shape[1]

    dW = (1 / m) * np.dot(dZ, A_prev.T)
    db = (1 / m) * np.sum(dZ, axis=1, keepdims=True)
    dA_prev = np.dot(W.T, dZ)

    assert dA_prev.shape == A_prev.shape
    assert dW.shape == W.shape
    assert db.shape == b.shape

    return dA_prev, dW, db


def linear_activation_backward(dA, cache, activation_fn):
    linear_cache, activation_cache = cache

    if activation_fn == "sigmoid":
        dZ = sigmoid_gradient(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)

    elif activation_fn == "tanh":
        dZ = tanh_gradient(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)

    elif activation_fn == "relu":
        dZ = relu_gradient(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)

    return dA_prev, dW, db


def L_model_backward(AL, y, caches, hidden_layers_activation_fn="relu"):
    y = y.reshape(AL.shape)
    L = len(caches)
    grads = {}

    dAL = np.divide(AL - y, np.multiply(AL, 1 - AL))

    grads["dA" + str(L - 1)], grads["dW" + str(L)], grads[
        "db" + str(L)] = linear_activation_backward(
            dAL, caches[L - 1], "sigmoid")

    for l in range(L - 1, 0, -1):
        current_cache = caches[l - 1]
        grads["dA" + str(l - 1)], grads["dW" + str(l)], grads[
            "db" + str(l)] = linear_activation_backward(
                grads["dA" + str(l)], current_cache,
                hidden_layers_activation_fn)

    return grads


def update_parameters(parameters, grads, learning_rate):
    L = len(parameters) // 2

    for l in range(1, L + 1):
        parameters["W" + str(l)] = parameters[
            "W" + str(l)] - learning_rate * grads["dW" + str(l)]
        parameters["b" + str(l)] = parameters[
            "b" + str(l)] - learning_rate * grads["db" + str(l)]

    return parameters


# Import training dataset
train_dataset = h5py.File("../data/train_catvnoncat.h5")
X_train = np.array(train_dataset["train_set_x"])
y_train = np.array(train_dataset["train_set_y"])

test_dataset = h5py.File("../data/test_catvnoncat.h5")
X_test = np.array(test_dataset["test_set_x"])
y_test = np.array(test_dataset["test_set_y"])

# Define
the multi-layer model using all the helper functions we wrote before
def L_layer_model(
        X, y, layers_dims, learning_rate=0.01, num_iterations=3000,
        print_cost=True, hidden_layers_activation_fn="relu"):
    np.random.seed(1)

    # initialize parameters
    parameters = initialize_parameters(layers_dims)

    # initialize cost list
    cost_list = []

    # iterate over num_iterations
    for i in range(num_iterations):
        # iterate over L-layers to get the final output and the cache
        AL, caches = L_model_forward(
            X, parameters, hidden_layers_activation_fn)

        # compute cost to plot it
        cost = compute_cost(AL, y)

        # iterate over L-layers backward to get gradients
        grads = L_model_backward(AL, y, caches, hidden_layers_activation_fn)

        # update parameters
        parameters = update_parameters(parameters, grads, learning_rate)

        # append each 100th cost to the cost list
        if (i + 1) % 100 == 0 and print_cost:
            print(f"The cost after {i + 1} iterations is: {cost:.4f}")

        if i % 100 == 0:
            cost_list.append(cost)

    # plot the cost curve
    plt.figure(figsize=(10, 6))
    plt.plot(cost_list)
    plt.xlabel("Iterations (per hundreds)")
    plt.ylabel("Loss")
    plt.title(f"Loss curve for the learning rate = {learning_rate}")

    return parameters


def accuracy(X, parameters, y, activation_fn="relu"):
    probs, caches = L_model_forward(X, parameters, activation_fn)
    labels = (probs >= 0.5) * 1
    accuracy = np.mean(labels == y) * 100

    return f"The accuracy rate is: {accuracy:.2f}%."
```
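Before trusting the back-prop helpers above, it is worth sanity-checking an activation gradient against a finite difference. A self-contained sketch for the sigmoid case (a local `sigmoid` is defined here that returns only the activation, unlike the notebook helper, which also returns a cache):

```python
import numpy as np

# Local stand-in for the notebook's sigmoid (which also returns a cache)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_gradient(dA, Z):
    A = sigmoid(Z)
    return dA * A * (1 - A)

# Central finite difference: d/dZ sigmoid(Z) ~ (f(Z+h) - f(Z-h)) / (2h)
Z = np.linspace(-3, 3, 7)
h = 1e-6
numeric = (sigmoid(Z + h) - sigmoid(Z - h)) / (2 * h)
analytic = sigmoid_gradient(np.ones_like(Z), Z)
print(np.max(np.abs(numeric - analytic)))  # tiny, far below 1e-6
```

The same check works for the tanh and ReLU gradients (away from the ReLU kink at zero).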
github_jupyter
# The Usage of DarKnight

#### First of all, let's import all the modules we need (see below)
```
import sys
sys.path.append('../darknight')
import functions
import numpy as np
import pandas as pd
import darkchem
```

#### Then we have to load the model and the path vectors that drive the reaction predictions

The model's role is to transform a reactant SMILES string into its corresponding vector in the latent space created by the darkchem package. The path vectors connect a reactant vector to its corresponding product vector in that latent space.

```
#load model
model = darkchem.utils.load_model('../models/N7b_[M+H]')

#load path vectors
PathB = np.load('../models/Benzene_PathVec.npy')
PathN = np.load('../models/Nring_PathVec.npy')
PathT = np.load('../models/CT2C2_PathVec.npy')
```

#### Predict Chemical Reduction Reactions of Aromatic Hydrocarbons (containing one benzene ring)

Predict a single reaction
```
smi = 'O=C(O)Cc1ccccc1'
functions.output_single_prod(smi,model,PathB)
```
Predict multiple reactions at one time
```
data = pd.read_csv('../database/example_data/Aromatic_Hydrocarbon_test_md.csv')
functions.output_multiple_prod(data,model,PathB)
```
From the results shown above, only one chemical reaction is predicted incorrectly, because that reactant differs slightly from the other nine reactants (look at its branches). In the latent space, chemicals of the same type cluster together, while compounds of different types are spread apart, so it is not surprising that this one prediction misses.
#### Predict Chemical Reduction Reactions of Pyridine Derivatives (one carbon atom substituted by a nitrogen atom)

Predict a single reaction
```
smi = 'OCCCc1cccnc1'
functions.output_single_prod(smi,model,PathN)
```
Predict multiple reactions at one time
```
data = pd.read_csv('../database/example_data/N_ring_test.csv')
functions.output_multiple_prod(data,model,PathN)
```
For this reaction type, 3 of the 10 predictions are wrong. As mentioned before, some of the reactants may not belong to the same chemical type; another likely cause is the limited training set used to calculate the path vector.

#### Predict Chemical Reduction Reactions of Triple Bonds on a Branched Chain

Predict a single reaction
```
smi = 'C#CCCCCCCCCCC'
functions.output_single_prod(smi,model,PathT)
```
Predict multiple reactions at one time
```
data = pd.read_csv('../database/example_data/CT2C2_test.csv')
functions.output_multiple_prod(data,model,PathT)
```
We tested three reactants of this reaction type, and all of them are predicted perfectly.
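The arithmetic behind a path vector — products sitting at a roughly constant displacement from their reactants in latent space — can be illustrated with plain NumPy. Everything below is synthetic (random stand-ins for the latent vectors; darkchem's encoder is not invoked):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                               # latent dimensionality (hypothetical)
true_shift = rng.normal(size=d)

# hypothetical latent vectors for 10 known reactant/product pairs
reactants = rng.normal(size=(10, d))
products = reactants + true_shift   # products sit one fixed step away

# path vector = average displacement from reactant to product
path_vec = np.mean(products - reactants, axis=0)

# predict the product of a new reactant by adding the path vector
new_reactant = rng.normal(size=d)
predicted_product = new_reactant + path_vec
print(np.allclose(path_vec, true_shift))
```

In practice the displacement is only approximately constant, which is one reason predictions degrade for reactants far from the cluster used to fit the path vector.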
github_jupyter
<center>
<h1> Driving while Distracted, Drinking, Speeding... Killing? </h1>
<h2> Bad Drivers of 'Murica </h2>
</center>

<b>Vashti Marin</b> <br>
Marquette University <br>
December 2018 <br>
<br>
<b>DataSet Name: </b>FiveThirtyEight Bad Drivers Dataset <br>
<b>Collected from: </b>Kaggle.com at https://www.kaggle.com/fivethirtyeight/fivethirtyeight-bad-drivers-dataset <br>
<b>Objective:</b> Analyze factors of Car Fatalities <br>
<b>Data Set Description:</b> Number of fatalities per billion miles is collected for each state. Percentage of drivers that were involved in fatal accidents while being alcohol impaired, speeding and not distracted are included by state. Percentage of drivers that were not involved in a previous accident, car insurance premiums (in dollars) and loss incurred by insurance companies per insured driver (in dollars) are also included by state.

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
sns.set()
%matplotlib inline

df = pd.read_csv("bad-drivers.csv")
df.head(10)
```
<img src ="wrangle.jpg" width = "400">
```
#Wrangling and cleaning data: drop rows with missing values
df = df.dropna()

#check for null entries
df.isnull().values.any()

#check for duplicates
df.duplicated()
df.shape

#shorten column names
df.columns=["State","Accidents","Speeding","Drinking","NotDistracted","NoPrevious","Premiums","Loss"]
df.head()

#add columns with more convenient data
newvalues= 100- df['NotDistracted']
df['Distracted'] = newvalues

newvalues1= 100- df['NoPrevious']
df['Previous'] = newvalues1
df.head()

#drop unnecessary columns
df= df.drop(columns = ['NotDistracted', 'NoPrevious'])

#Let's see what our data looks like now that it's clean
df.head()
```
<img src ="looksgood.png" width = "400">
```
#Now we can start our analysis. Let's describe the data as it is.
df.describe() #Now let's look for any possible correlations df.corr() #Let's split our data into our train and test sets train_set, test_set = train_test_split(df, test_size=0.3, random_state=42) df_copy = train_set.copy() df_copy.describe() df_copy.corr() sns.palplot(sns.cubehelix_palette(8)) df_copy.plot.scatter(x="Accidents", y="Drinking") df_copy.plot.scatter(x="Accidents", y="Premiums") sns.regplot(x="Accidents", y="Drinking", data=df_copy) test_set_full = test_set.copy() test_set = test_set.drop(["Accidents"], axis=1) import statsmodels.formula.api as smf formula = 'Accidents~ %s'%(" + ".join(df_copy.columns.values[2:])) formula lin_reg = smf.ols(formula, data=df_copy).fit() lin_reg.summary() lin_reg.params lin_reg.conf_int() Y = df_copy["Premiums"] X = df_copy[["Accidents","Loss"]] model = sm.OLS(Y,X).fit() print(model.summary()) test_set_full.describe() test_set_full.corr() formula2 = 'Accidents~ %s'%(" + ".join(test_set_full.columns.values[2:])) formula2 lin_reg2 = smf.ols(formula2, data=test_set_full).fit() lin_reg2.summary() lin_reg2.params lin_reg2.conf_int() ``` Right about now is where we start feeling like... <img src ="giphy.gif" width = "400"><img src ="giphy2.gif" width = "400"> So why not keep going! 
```
err_series = lin_reg.params - lin_reg.conf_int()[0]
err_series

err_series2 = lin_reg2.params - lin_reg2.conf_int()[0]
err_series2

coef_df = pd.DataFrame({'coef': lin_reg.params.values[1:],
                        'err': err_series.values[1:],
                        'varname': err_series.index.values[1:]
                       })
coef_df

coef_df2 = pd.DataFrame({'coef': lin_reg2.params.values[1:],
                         'err': err_series2.values[1:],
                         'varname': err_series2.index.values[1:]
                        })
coef_df2

#basic plot
sns.set_style("whitegrid")

fig, ax = plt.subplots(figsize=(8, 5))

coef_df2.plot(x='varname', y='coef', kind='bar',
              ax=ax, color='orange',
              yerr='err', legend=False, grid=False)

ax.set_ylabel('')
ax.set_xlabel('')
# np.arange rather than pd.np.arange (pd.np is removed in recent pandas)
ax.scatter(x=np.arange(coef_df2.shape[0]),
           marker='s', s=120,
           y=coef_df2['coef'], color='orange')
ax.axhline(y=0, linestyle='-', color='orange', linewidth=2)
ax.xaxis.set_ticks_position('none')
_ = ax.set_xticklabels(['Speeding', 'Drinking', 'Premiums', 'Loss', 'Distracted', 'Previous'],
                       rotation=0, fontsize=14)

Y = test_set_full["Premiums"]
X = test_set_full[["Accidents","Loss"]]
model = sm.OLS(Y,X).fit()
print(model.summary())

Y = test_set_full["Accidents"]
X = test_set_full[["Speeding","Drinking","Distracted"]]
model = sm.OLS(Y,X).fit()
print(model.summary())

sns.set_style("ticks")
sns.pairplot(test_set_full, kind="reg")
plt.show()

X = test_set_full["Premiums"]
Y = test_set_full["Loss"]
m1 = sm.OLS(Y,X).fit()
fig, ax = plt.subplots(figsize=(15,10))
fig = sm.graphics.plot_fit(m1, 0, ax=ax)
sns.despine(left=True, bottom=True)

sns.set_context("paper")
X = df_copy["Speeding"]
Y = df_copy["Accidents"]
m1 = sm.OLS(Y,X).fit()
fig, ax = plt.subplots(figsize=(15,10))
fig = sm.graphics.plot_fit(m1, 0, ax=ax)

X = df_copy["Loss"]
Y = df_copy["Premiums"]
m1 = sm.OLS(Y,X).fit()
fig, ax = plt.subplots(figsize=(15,10))
fig = sm.graphics.plot_fit(m1, 0, ax=ax)

sns.set_context("poster")
comp= df_copy.drop(columns = ["Accidents", "Loss", "Premiums", "State"])
f, ax = plt.subplots()
sns.violinplot(data=comp)
sns.despine(offset=10, trim=True);
```
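The notebook imports `LinearRegression` and `r2_score` but they go unused above; a natural next step is to score the fitted model on the held-out split. A minimal, self-contained sketch on synthetic data (the three features are stand-ins for the Speeding/Drinking/Distracted columns):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))   # stand-ins for Speeding/Drinking/Distracted
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
model = LinearRegression().fit(X_tr, y_tr)

# R^2 on data the model never saw — the honest measure of fit
r2 = r2_score(y_te, model.predict(X_te))
print(round(r2, 3))
```

On the real 51-row dataset the held-out R² will be far noisier than this synthetic example, which is exactly why evaluating on the test split matters.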
github_jupyter
```
import pandas as pd

with open("rockyousubset.txt", "r") as file:
    passes = file.readlines()
passes

data = [x.split("\n")[0] for x in passes]

trigram_nested = {}

# average password length
sum([len(x) for x in data]) / len(data)

#this is a nested trie such that each character contains a dictionary of the next character in the trigram.
#it is just as efficient to build but can technically be more efficient to access.
for word in data:
    if len(word) < 3:
        continue
    # the last trigram starts at index len(word) - 3, so range over len(word) - 2
    for i in range(len(word) - 2):
        if word[i] not in trigram_nested:
            trigram_nested[word[i]] = {}
        if word[i+1] not in trigram_nested[word[i]]:
            trigram_nested[word[i]][word[i+1]] = {}
        if word[i+2] not in trigram_nested[word[i]][word[i+1]]:
            trigram_nested[word[i]][word[i+1]][word[i+2]] = 1
        else:
            trigram_nested[word[i]][word[i+1]][word[i+2]] += 1

print(trigram_nested.keys())

#flattened trigram for readability and analysis.
trigram_flattened = {}
for word in data:
    if len(word) < 3:
        continue
    for i in range(len(word) - 2):
        if word[i:i+3] not in trigram_flattened:
            trigram_flattened[word[i:i+3]] = 1
        else:
            trigram_flattened[word[i:i+3]] += 1

#sorting it by descending frequency
trigram_flattened = dict(sorted(trigram_flattened.items(), key=lambda item: -item[1]))
len(trigram_flattened.keys())

#this will count the occurences of a particular letter in a particular spot of the trigram.
#for example if you want to find the most common 1'st letter, you can run count_occurences(0) (1), or (2)
#letter_pos_start= index of the letter you want to count the occurences of (0 for first letter, 1 for second etc)
#letter_pos_end= index of the letter you want to count the occurences of, ENDING exclusive (0 for first letter, 1 for second etc)
#a call to count_occurences(0,1, trigram) will find the most common first letter.
#a call to count_occurences(0,2, trigram) will find the most common first TWO letters.
def count_occurences(letter_pos_start, letter_pos_end, trigram_flat): freq_table={} for trigram in trigram_flat: letter = trigram[letter_pos_start:letter_pos_end] if letter in freq_table.keys(): freq_table[letter] += 1 else: freq_table[letter] = 1 return freq_table most_common_first_two_letters = count_occurences(0,2, passes) most_common_first_two_letters = dict(sorted(most_common_first_two_letters.items(), key=lambda item: -item[1])) most_common_first_two_letters def generate_pass(length, first_two_letters, trigram_nested): cur_two_letters = first_two_letters password_generated = first_two_letters while(length != 0): length -= 1 next_most_letter_dict = trigram_nested[cur_two_letters[0]][cur_two_letters[1]] next_most_letter_dict = list(sorted(next_most_letter_dict.items(), key=lambda item: -item[1])) password_generated += str(next_most_letter_dict[0][0]) cur_two_letters = password_generated[-2:] return password_generated generate_pass(9,"ma", trigram_nested) ```
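`generate_pass` always takes the single most frequent continuation, so it is deterministic and can lock into short repeating cycles. An alternative (a sketch, not part of the original notebook) is to sample the next character in proportion to its trigram count, shown here with a tiny hand-built trie in the same nested format:

```python
import random

def generate_pass_sampled(length, first_two_letters, trigram_nested, seed=None):
    rng = random.Random(seed)
    password = first_two_letters
    while len(password) < length:
        a, b = password[-2], password[-1]
        choices = trigram_nested.get(a, {}).get(b)
        if not choices:          # no observed continuation: stop early
            break
        letters = list(choices.keys())
        weights = list(choices.values())
        # weighted random choice instead of always taking the argmax
        password += rng.choices(letters, weights=weights, k=1)[0]
    return password

# tiny hand-built trie in the notebook's nested format
trie = {"m": {"a": {"r": 3, "n": 1}},
        "a": {"r": {"i": 2}},
        "r": {"i": {"a": 1}}}
pw = generate_pass_sampled(6, "ma", trie, seed=0)
print(pw)
```

Because each run can follow a different path through the trie, repeated calls with different seeds produce a variety of candidate passwords instead of one fixed string.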
github_jupyter
# Alternating Direction Method of Multipliers

Although gradient descent is a reliable algorithm that is guaranteed to converge, it is still slow. If we want to process larger sets of data (e.g. 3D imaging), have a live feed of DiffuserCam, or just want to process images more quickly, we need to tailor the algorithm more closely to the optical system involved. While this introduces more tuning parameters ("knobs" to turn), speed of reconstruction can be drastically improved.

Here we present (without proof) the result of using <i>alternating direction method of multipliers (ADMM)</i> to reconstruct the image. For background on ADMM, please refer to sections 2 and 3 of: http://stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf. To understand this document, background knowledge from Chapters 5 (Duality) and 9 (Unconstrained minimization) from https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf may be necessary. For a detailed derivation of the update steps as they apply to our system, refer to the companion documents to this notebook (specific sections will be referenced).

#### Generic setup code (load psf, etc.)
```
%matplotlib inline
import numpy as np
import numpy.fft as fft
from PIL import Image
import matplotlib.pyplot as plt
from IPython import display
```

The code takes in two grayscale images: a point spread function (PSF) $\texttt{psfname}$ and a sensor measurement $\texttt{imgname}$.
The images can be downsampled by a factor $f$, which is a power of $\frac{1}{2}$ ``` """Stuff that is normally in the config file""" psfname = "./psf_sample.tif" imgname = "./rawdata_hand_sample.tif" # Downsampling factor (used to shrink images) f = 0.25 # Hyper-parameters in the ADMM implementation (like step size in GD) mu1 = 1e-6 mu2 = 1e-5 mu3 = 4e-5 tau = 0.0001 # Number of iterations iters = 5 def loadData(show_im=True): psf = Image.open(psfname) psf = np.array(psf, dtype='float32') data = Image.open(imgname) data = np.array(data, dtype='float32') """In the picamera, there is a non-trivial background (even in the dark) that must be subtracted""" bg = np.mean(psf[5:15,5:15]) psf -= bg data -= bg """Resize to a more manageable size to do reconstruction on. Because resizing is downsampling, it is subject to aliasing (artifacts produced by the periodic nature of sampling). Demosaicing is an attempt to account for/reduce the aliasing caused. In this application, we do the simplest possible demosaicing algorithm: smoothing/blurring the image with a box filter""" def resize(img, factor): num = int(-np.log2(factor)) for i in range(num): img = 0.25*(img[::2,::2,...]+img[1::2,::2,...]+img[::2,1::2,...]+img[1::2,1::2,...]) return img psf = resize(psf, f) data = resize(data, f) """Now we normalize the images so they have the same total power. Technically not a necessary step, but the optimal hyperparameters are a function of the total power in the PSF (among other things), so it makes sense to standardize it""" psf /= np.linalg.norm(psf.ravel()) data /= np.linalg.norm(data.ravel()) if show_im: fig1 = plt.figure() plt.imshow(psf, cmap='gray') plt.title('PSF') # display.display(fig1) fig2 = plt.figure() plt.imshow(data, cmap='gray') plt.title('Raw data') # display.display(fig2) return psf, data """The "uncropped" size of the image. 
As with the gradient descent, we pad the images so that convolution is linear instead of circular""" psf, data = loadData(True) sensor_size = np.array(psf.shape) full_size = 2*sensor_size ``` $\newcommand\sensorco{\mathbf{x}} % vector containing (x,y) sensor coordinates$ $\newcommand\objectco{\bm \xi} % vector containing (x,y) object coordinates$ $\newcommand\measurementvec{\mathbf{b}}$ $\newcommand\measurementmtx{\mathbf{M}}$ $\newcommand\full{\mathbf{A}}$ $\newcommand\imagevec{\mathbf{v}}$ $\newcommand{\crop}{\mathbf{C}}$ $\newcommand{\rhcomment}[1]{{\bf{{\blue{{RH --- #1}}}}}}$ $\newcommand{\argmin}{\text{argmin }}$ Recall the objective function: \begin{equation} \begin{aligned} \hat{\imagevec} = \underset{w\geq 0,u,x}{\argmin}&\frac{1}{2}\|\measurementvec - \crop x\|_2^2 + \tau \|u\|_1\\ \mbox{s.t. } &x = \measurementmtx \imagevec, w = \imagevec\text{,} \end{aligned} \end{equation} For reference, the corresponding ADMM update steps derived in the companion document are: \begin{align*} u_{k+1} &\leftarrow \mathcal{T}_{\frac{\tau}{\mu_2}} \left(\Psi \imagevec_k + \frac{\eta_k}{\mu_2}\right) \\ x_{k+1} &\leftarrow \left(\crop^H \crop + \mu_1 I\right)^{-1} \left(\xi_k + \mu_1 \mathbf{M}\imagevec_k + \crop^H \measurementvec\right) \\ w_{k+1} &\leftarrow \max(\rho_k/\mu_3 +\imagevec_k, 0) \\ \imagevec_{k+1} &\leftarrow (\mu_1 \mathbf{M}^H \mathbf{M} + \mu_2 \Psi^H \Psi + \mu_3 I)^{-1} r_k, \\ \xi_{k+1} & \leftarrow \xi_k + \mu_1(\measurementmtx \imagevec_k - x_{k+1}) \\ \eta_{k+1} & \leftarrow \eta_k + \mu_2(\Psi \imagevec_{k+1} - u_{k+1}) \\ \rho_{k+1} & \leftarrow \rho_k + \mu_3(\imagevec_{k+1} - w_{k+1}) \end{align*} where $$r_k = (\mu_3 w_{k+1} - \rho_k) + \Psi^H(\mu_2 u_{k+1} - \eta_k) + \mathbf{M}^H (\mu_1 x_{k+1} - \xi_k).$$ ## We now demonstrate how each of these updates is implemented. 
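The update implementations that follow all take a precomputed `H_fft`, the DFT of the PSF zero-padded to `full_size`. A self-contained sketch of that precomputation, with a small synthetic PSF standing in for the loaded `psf` (in the notebook the zero-padding is done by the `CT` operator):

```python
import numpy as np
import numpy.fft as fft

# synthetic stand-ins for the notebook's sensor_size / full_size / psf
sensor = (4, 4)
full = (8, 8)
psf_small = np.random.rand(*sensor)

# zero-pad the PSF from sensor size up to full size (centered)
pad = [((f - s) // 2, (f - s) // 2) for f, s in zip(full, sensor)]
psf_padded = np.pad(psf_small, pad, 'constant')

# precompute once; every M / MT call reuses this array
H_fft = fft.fft2(fft.ifftshift(psf_padded))
print(H_fft.shape)
```

The `ifftshift` matches the convention used in `M` and `MT` below, which shift the image so the convolution is centered correctly.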
### The $u$-update (total variation update) \begin{equation} u_{k+1} \leftarrow \mathcal{T}_{\frac{\tau}{\mu_2}} \left(\Psi \imagevec_k + \eta_k/\mu_2\right) \end{equation} ``` def U_update(eta, image_est, tau): return SoftThresh(Psi(image_est) + eta/mu2, tau/mu2) ``` #### Implementing $\mathcal{T}$ $\mathcal{T}$ is a <i>soft-thresholding</i> operator: we zero out all values within $\tau/\mu_2$ of $0$, and all other components of the vector are decreased in magnitude by $\tau/\mu_2$. Note that $\tau$ and $\mu_2$ are regularization "hyperparameters" -- constants we can tune to increase or decrease the effect of different terms in the objective function. For example, increasing $\tau$ increases the contribution of $\|u\|_1$ to the objective function, so the solution has lower total variation and more "defined" edges. A clean way of writing soft-thresholding component-wise is: $$\left[\mathcal{T}_{\tau} (x)\right]_i = \mathrm{sgn}(x_i)\max(0,|x_i| - \tau)$$ ``` def SoftThresh(x, tau): # numpy automatically applies functions to each element of the array return np.sign(x)*np.maximum(0, np.abs(x) - tau) ``` #### Implementing $\Psi$ $\Psi \imagevec_k$ is the gradient of the image estimate (whose norm is the "total variation" of the image). Because images are made of discrete pixels, we approximate this gradient by a finite-difference. Here we choose to use the 2D forward-difference with a circular boundary condition, defined by the following relation. See Section 2.2 of the Appendix for an explanation of the boundary condition: $$\Psi v_{ij} = \begin{bmatrix}v_{i+1,j} - v_{i,j} \\ v_{i,j+1}-v_{i,j} \end{bmatrix}$$ The choice of finite difference does not have a large effect on the final result, as long as the corresponding adjoint (see the $\imagevec$-update) is calculated correctly. 
We implement the pixel-wise definition above in parallel using numpy operations: $\texttt{np.roll}$ shifts the image circularly, so if we subtract $\texttt{v}$ from $\texttt{np.roll(v,1,axis=0)}$ we perform all the row-wise finite differences in a single matrix subtraction. We can do the same for the column differences. $\Psi$ maps a pixel to a length-2 vector, where the first element is the row difference and the second is the column difference. $\texttt{np.stack}$ allows us to do this for every pixel in the image using the matrices calculated with $\texttt{roll}$. For an $n \times m$ image $v$, $\Psi(v)$ returns a $n\times m \times 2$ stack of two images (the horizontal and vertical differences). ``` def Psi(v): return np.stack((np.roll(v,1,axis=0) - v, np.roll(v, 1, axis=1) - v), axis=2) ``` ### The $x$-update (uncropped image update) $$x_{k+1} \leftarrow \left(\crop^H \crop + \mu_1 I\right)^{-1} \left(\xi_k + \mu_1 \measurementmtx\imagevec_k + \crop^H \measurementvec\right)$$ ``` def X_update(xi, image_est, H_fft, sensor_reading, X_divmat): return X_divmat * (xi + mu1*M(image_est, H_fft) + CT(sensor_reading)) ``` #### Implementing $\measurementmtx$ First consider the right-hand part of the update, which doesn't require any inversion. $\xi_k$ is one of the dual variables, so we will cover how it is calculated later. $\mu_1$ is a hyperparameter like above. $\measurementmtx$ is the convolution operator: $$\measurementmtx \imagevec_k \iff h \ast v_k,$$ where $h$ is the point spread function of the diffuser. We implement this convolution in the same way that we did in the gradient descent algorithm: $$h \ast v = \mathcal{F}^{-1} \left\{(\mathcal{F}(h))\cdot (\mathcal{F}(v))\right\} = \ \texttt{crop}\left[\ \texttt{DFT}^{-1}\left\{\ \texttt{DFT} [\ \texttt{pad}[h]\ ]\cdot\texttt{DFT}[\ \texttt{pad}[v]\ ]\ \right\} \right]$$ However, there are 2 details that change the implementation slightly. 
First, due to our variable splitting, we've actually absorbed the "sensor crop" (which crops the image from $\texttt{full_size}$ to $\texttt{sensor_size}$) into the $\crop$ operator. So, we don't need to crop down the convolution at this stage; it will be taken care of separately. Second, we can precalculate the padded DFT of $h$ (denoted $\texttt{H}\_\texttt{fft}$ in code) and our input $v_k$ is already padded (it has the dimensions of $\texttt{full_size}$). So, there is no need to pad them again. Thus our final form for $\measurementmtx$ is: $$\texttt{DFT}^{-1} \left[ \texttt{DFT}[v_k] \cdot \texttt{H}\_\texttt{fft} \right]$$ ``` def M(vk, H_fft): return np.real(fft.fftshift(fft.ifft2(fft.fft2(fft.ifftshift(vk))*H_fft))) ``` #### Implementing $\crop^H$ and $\crop$ The next term is $\crop^H \measurementvec$. First, as defined in the "Guide to modeling and algorithms", $\crop$ is the operator that accounts for the sensor cropping (it crops from size $\texttt{full_size}$ to size $\texttt{sensor_size}$). The adjoint $\crop^H$ zero-pads an image of size $\texttt{sensor_size}$ to an image of size $\texttt{full_size}$. We have chosen $\texttt{full_size}$ in such a way that it provides enough padding to make circular and linear convolutions look the same <i>after being cropped back down to $\texttt{sensor_size}$</i>. We implement $\crop^H$ by padding the sensor reading $\measurementvec$ "up" to $\texttt{full_size}$: ``` def C(M): # Image stored as matrix (row-column rather than x-y) top = (full_size[0] - sensor_size[0])//2 bottom = (full_size[0] + sensor_size[0])//2 left = (full_size[1] - sensor_size[1])//2 right = (full_size[1] + sensor_size[1])//2 return M[top:bottom,left:right] def CT(b): v_pad = (full_size[0] - sensor_size[0])//2 h_pad = (full_size[1] - sensor_size[1])//2 return np.pad(b, ((v_pad, v_pad), (h_pad, h_pad)), 'constant',constant_values=(0,0)) ``` #### Implementing $\left(\crop^H \crop + \mu_1 I\right)^{-1}$ Lastly, the inverse term. 
While we have largely avoided dealing with the actual matrices used to represent operators such as cropping, convolution, and padding, here we need to consider what they look like, because inverting the operators efficiently will implicitly use properties of the matrix representation. Let's go through the process with a linear function (or operator) $O$ that operates on images. Let $\mathbf{O}$ be the matrix corresponding to that operator. Recall from the algorithms guide (https://waller-lab.github.io/DiffuserCam/tutorial/algorithm_guide.pdf) that if $O$ operates on $m\times n$ images, then $\mathbf{O}$ is an $mn \times mn$-dimensional matrix, operating on vectorized images of length $mn$. Now consider the image $y = O(x)$. In matrix vector form, we can write this as $\mathbf{y} = \mathbf{O}\mathbf{x}$. This equation tells us explicitly that every pixel $\mathbf{y}_j$ is a linear combination of the pixels in $\mathbf{x}$: $\mathbf{y}_j = \sum_i\mathbf{O}_{ij} x_i$. So, if $\mathbf{O}$ is diagonal, then we find $\mathbf{y}_j = \mathbf{O}_{jj} x_j$. In other words, a diagonal operator $O$ <i>scales the $j$th pixel by factor</i> $\mathbf{O}_{jj}$! Importantly, the inverse of a diagonal matrix is also diagonal, with all the entries inverted. So, the inverse operator $O^{-1}$ also scales each pixel individually, now by factor $\dfrac{1}{\mathbf{O}_{jj}}$ (remember that $j$ is an index we give to the pixel based on the vectorized image). The main idea is that we can multiply each pixel by the appropriate weight without regard to other pixels in the image, which can easily be done in parallel. Now we apply the process for $\mathbf{O} = (\crop^H \crop + \mu_1 I)$: $\crop^H \crop$ is a crop down to an image of size $\texttt{sensor_size}$ and then a zero-pad back up to $\texttt{full_size}$. In other words, it zeros out any pixels outside of the cropped region, and leaves everything inside the cropped region untouched. 
We know this corresponds to a diagonal matrix because it is pointwise multiplication of each pixel by either a $0$ or a $1$. So, $\mathbf{O}$ corresponds to a pointwise multiplication of every pixel by either $0 + \mu_1$ or $1 + \mu_1$. The inverse of this operator is thus pointwise multiplication of every pixel by either $\dfrac{1}{\mu_1}$ or $\dfrac{1}{1 + \mu_1}$. Later, we will see an example of inversion where the operator is diagonalizable in a different, non-standard basis (not pixel-wise) -- the basic idea will remain the same. Since $\mathbf{O}^{-1}$ doesn't change with each iteration, we <i>precompute</i> the entries. Whenever we need to calculate its action on an image, we use numpy element-wise division with the stored entries. We create the multiplication mask of $\mu_1$'s and $1+\mu_1$'s by first creating the submask of ones (with size $\texttt{sensor_size}$) using $\texttt{np.ones}$. We can pad to the right size using the zero-pad function $\texttt{CT}$: ``` def precompute_X_divmat(): """Only call this function once! Store it in a variable and only use that variable during every update step""" return 1./(CT(np.ones(sensor_size)) + mu1) ``` ### The $w$-update (non-negativity update) $$w_{k+1} \leftarrow \max(\rho_k/\mu_3 +\imagevec_k, 0)$$ Updates for $\rho_k$ and $\imagevec_k$ will be calculated later, so we can assume we have the variables while implementing $w_{k+1}$. ``` def W_update(rho, image_est): return np.maximum(rho/mu3 + image_est, 0) ``` ### The $\imagevec$-update (Image estimate update!) $$\imagevec_{k+1} \leftarrow (\mu_1 \measurementmtx^H \measurementmtx + \mu_2 \Psi^H \Psi + \mu_3 I)^{-1} r_k,$$ where $$r_k = (\mu_3 w_{k+1} - \rho_k) + \Psi^H(\mu_2 u_{k+1} - \eta_k) + \measurementmtx^H (\mu_1 x_{k+1} - \xi_k)$$ While long, most of these terms are expressions involving other variables that have already been calculated. 
There are three things that require new calculations: $\Psi^H$ (adjoint of finite difference), $\measurementmtx^H$ (adjoint of convolution), and the large inverse. ``` def r_calc(w, rho, u, eta, x, xi, H_fft): return (mu3*w - rho)+PsiT(mu2*u - eta) + MT(mu1*x - xi, H_fft) def V_update(w, rho, u, eta, x, xi, H_fft, R_divmat): freq_space_result = R_divmat*fft.fft2( fft.ifftshift(r_calc(w, rho, u, eta, x, xi, H_fft)) ) return np.real(fft.fftshift(fft.ifft2(freq_space_result))) ``` #### Implementing $\Psi^H$ We calculate the adjoint of the finite difference in the appendix of the companion document. Because $\Psi$ maps a single image to two images stacked on top of each other, $\Psi^H$ must map a stack of two images to a single image. Let each "pixel" in the stack be $u_{ij} = \begin{bmatrix} u_{ij}^x & u_{ij}^y \end{bmatrix}^H$. Then, $$\Psi^H u_{ij} = (u_{i-1,j}^x - u_{i,j}^x) + (u_{i,j-1}^y - u_{i,j}^y)$$ In words, the adjoint of the 2D forward difference (i.e. of the form $v_{j+1}- v_j$) is the sum of two backward differences (i.e. of the form $u_{i-1} - u_i$). Again, since all the boundary conditions are circular, we can use $\texttt{np.roll}$ to help us parallelize the pixel-wise differences. $\texttt{PsiT}$ takes in an $n\times m \times 2$ matrix and outputs a $n\times m$ matrix. ``` def PsiT(U): diff1 = np.roll(U[...,0],-1,axis=0) - U[...,0] diff2 = np.roll(U[...,1],-1,axis=1) - U[...,1] return diff1 + diff2 ``` #### Implementing $\measurementmtx^H$ This is the adjoint of the convolution operator, which we calculated in the implementation of gradient descent as: $$\measurementmtx^H x = \mathcal{F}^{-1}\left\{ \mathcal{F}(\mathbf{h})^* \cdot \mathcal{F}\left(\text{pad}[x]\right)\right\}$$ Again, our input is already padded and we can pre-calculate the DFT of $h$. 
So, we get $$\measurementmtx^H x = \texttt{DFT}^{-1} \left[ \texttt{DFT}[x] \cdot \texttt{H}\_\texttt{fft}^* \right]$$ ``` def MT(x, H_fft): x_zeroed = fft.ifftshift(x) return np.real(fft.fftshift(fft.ifft2(fft.fft2(x_zeroed) * np.conj(H_fft)))) ``` #### Implementing $(\mu_1 \measurementmtx^H \measurementmtx + \mu_2 \Psi^H \Psi + \mu_3 I)^{-1}$ If we try to calculate the matrix entries for acting $\mu_1 \measurementmtx^H \measurementmtx$ on a vectorized image like before, we find that the product is not diagonal, because convolutions can depend on many different pixels at once by definition. In general, large matrix inverses are the main bottleneck in an algorithm; we need to be careful to make sure that all inverses involved in our algorithm are easily calculated via some method that doesn't require instantiating the matrix. In the previous case, we were lucky because <i>both</i> $\crop^H \crop$ and $\mu_1 I$ are diagonalizable <i>in the same standard basis</i>, so multiplying by its inverse is the same as dividing by the summed entries. Our only option here is to find a different basis where all the terms are diagonalizable. In fact, the matrix corresponding to $\measurementmtx^H \measurementmtx$ is not diagonal in the standard basis precisely because it is diagonal in a different basis! As we showed in the Gradient Descent notebook: $$\begin{align} \measurementmtx^H \measurementmtx \imagevec &= \mathbf{F}^{-1} \mathrm{diag}(\mathbf{Fh})^* \ \mathrm{diag}(\mathbf{Fh}) \ \mathbf{F} \imagevec \\ &= \mathcal{F}^{-1} \left\{ \mid\mathcal{F}h \mid^2 \cdot (\mathcal{F}v) \right\}, \end{align}$$ we find that it performs pixel-wise multiplication in the <i>Fourier</i> (frequency) space. In other words, $\measurementmtx^H \measurementmtx$ is diagonal if we take a Fourier transform first. 
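This claim is easy to verify numerically with a synthetic kernel and image (a self-contained sketch using plain FFT conventions, without the notebook's `fftshift` bookkeeping):

```python
import numpy as np
import numpy.fft as fft

# Check that M^H M v equals F^{-1}{ |F h|^2 . F v } for circular convolution
shape = (16, 16)
rng = np.random.default_rng(1)
h = rng.random(shape)   # synthetic PSF
v = rng.random(shape)   # synthetic image

H = fft.fft2(h)
Mv = np.real(fft.ifft2(H * fft.fft2(v)))              # M v
MHMv = np.real(fft.ifft2(np.conj(H) * fft.fft2(Mv)))  # M^H (M v)

# acting directly with the diagonal |F h|^2 in Fourier space
direct = np.real(fft.ifft2(np.abs(H)**2 * fft.fft2(v)))
print(np.allclose(MHMv, direct))
```

The two results agree to machine precision, confirming that $\measurementmtx^H \measurementmtx$ is pointwise multiplication by $\mid\mathcal{F}h\mid^2$ in the frequency domain.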
If we wanted to efficiently calculate the inverse $(\mu_1 \measurementmtx^H \measurementmtx)^{-1}$, we would do a DFT to switch to frequency space, do division by the values of $\mid\mathcal{F}h\mid^2$ in that space, and then inverse DFT to get back to standard (pixel) basis. However, the inverse we wish to calculate has other terms as well, so we can only use this method if $\Psi^H \Psi$ and $I$ are also diagonalizable by a Fourier transform. Since $\mathcal{F}^{-1} I \mathcal{F} = I$, the identity is diagonalizable. We knew $\measurementmtx^H \measurementmtx$ was diagonalizable by a Fourier Transform because it was a convolution. So, to show $\Psi^H \Psi$ is also diagonalizable by a Fourier transform we must write $\Psi^H \Psi \imagevec = \Psi^H \left(\Psi \imagevec \right)$ as a convolution between $\imagevec$ and a fixed kernel. To that end, in 2D the forward difference acted on the $i,j$th pixel looks like: $$\begin{align} \Psi v_{ij} &= \begin{bmatrix}v_{i+1,j} - v_{i,j} \\ v_{i,j+1}-v_{i,j}\end{bmatrix}\\ &= \begin{bmatrix} \begin{bmatrix}0 & -1 & 1\end{bmatrix} \cdot \begin{bmatrix}v_{i-1,j} & v_{i,j}& v_{i+1,j}\end{bmatrix}\\ \begin{bmatrix}0 \\ -1 \\ 1 \end{bmatrix} \cdot \begin{bmatrix}v_{i,j-1} \\ v_{i,j} \\ v_{i,j+1}\end{bmatrix} \end{bmatrix} \\ \end{align}$$ The above formula holds for every pixel in the image (assuming the same circular boundary conditions that are used to derive $\Psi^H$ itself). This formula is also the pointwise definition of cross-correlation with the <i>non-vectorized</i> image $v$: $$ \Psi \imagevec = \begin{bmatrix} \begin{bmatrix}0 & 0 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & 0\end{bmatrix} \star v \\ \begin{bmatrix} 0 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 1 & 0 \end{bmatrix} \star v \end{bmatrix}, \\ $$ where we have written both the row-wise and column-wise pixel differences in terms of a cross-correlation, just with two different "kernels." 
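The cross-correlation identity above is easy to check with a naive implementation. The sketch below is illustrative only: `cross_correlate_circ` is a hypothetical helper, not defined in the notebook, and mapping the notebook's $i,j$ labels onto NumPy's axis order is a convention choice. Correlating with the two kernels reproduces the roll-based circular forward differences:

```python
import numpy as np

def cross_correlate_circ(k, v):
    # naive circular cross-correlation with the kernel centered on each pixel:
    # (k ⋆ v)[i, j] = sum_{m,n} k[m, n] * v[i + m - c0, j + n - c1]
    c0, c1 = k.shape[0] // 2, k.shape[1] // 2
    out = np.zeros_like(v, dtype=float)
    for m in range(k.shape[0]):
        for n in range(k.shape[1]):
            out += k[m, n] * np.roll(np.roll(v, -(m - c0), axis=0), -(n - c1), axis=1)
    return out

rng = np.random.default_rng(1)
v = rng.standard_normal((6, 6))
k_row = np.array([[0, 0, 0], [0, -1, 1], [0, 0, 0]], dtype=float)
k_col = np.array([[0, 0, 0], [0, -1, 0], [0, 1, 0]], dtype=float)

# correlating with the two kernels reproduces the forward differences
assert np.allclose(cross_correlate_circ(k_row, v), np.roll(v, -1, axis=1) - v)
assert np.allclose(cross_correlate_circ(k_col, v), np.roll(v, -1, axis=0) - v)
```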
Let's call the row (top) kernel $k_{RF}$ and the column (bottom) kernel $k_{CF}$, where the "$F$" stands for forward difference. Using exactly the same process with $\Psi^H$ (and the stack of two images $u$ defined as above): $$ \Psi^H u = -k_{RB} \star u^x - k_{CB} \star u^y, $$ where the "B" stands for backwards difference. So, \begin{align} \Psi^H \Psi \imagevec &= \Psi^H \begin{bmatrix}k_{RF} \star v \\ k_{CF} \star v\end{bmatrix} \\ &= -k_{RB} \star \left(k_{RF} \star v\right) - k_{CB} \star \left( k_{CF} \star v \right) \end{align} At this point, we cannot simplify the operations because cross-correlations are not associative -- we need to reformulate the expression in terms of _convolutions_. So, we use the property that cross-correlation with a kernel $k$ is equivalent to convolution with the _flipped_ kernel $k'$ where we flip horizontally _and_ vertically. For example, if $k = \begin{bmatrix} 0 & -1 & 1\end{bmatrix}$, then $k' = \begin{bmatrix} 1 & -1 & 0 \end{bmatrix}$. If $k = \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix}$ then $k' = \begin{bmatrix} 1 \\ -1 \\ 0\end{bmatrix}$. Flipping the kernels and using associativity of convolution, we find: \begin{align} \Psi^H \Psi \imagevec &= -k_{RB} \star \left(k_{RF} \star v\right) - k_{CB} \star \left( k_{CF} \star v \right) \\ &= -k_{RB}' \ast \left(k_{RF}' \ast v\right) - k_{CB}' \ast \left( k_{CF}' \ast v\right) \\ &= -\left[(k_{RB}' \ast k_{RF}') + (k_{CB}' \ast k_{CF}')\right] \ast v \end{align} Finally, we again use the convolution theorem: for any $k, v$, $ k \ast v = \mathcal{F}^{-1} \{ \mathcal{F}(k) \cdot \mathcal{F}(v) \} $. Thus, setting $k = -(k_{RB}' \ast k_{RF}') - (k_{CB}' \ast k_{CF}')$, we have that $$ \\ \Psi^H \Psi \imagevec = \mathcal{F}^{-1} \{ \mathcal{F}(k) \cdot \mathcal{F}(v) \}, \\ $$ which corresponds to pixel-wise multiplication by $\mathcal{F}(k)$ in the Fourier space. 
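The conclusion that $\Psi^H \Psi$ acts by pixel-wise multiplication with $\mathcal{F}(k)$ can be verified end-to-end. The sketch below is illustrative only: it uses a self-consistent forward/adjoint difference pair, named `psi_fwd`/`psi_adj` to avoid clashing with the notebook's `Psi`/`PsiT`; the roll direction is one possible convention, and $\Psi^H \Psi$ comes out the same either way. The 5-point kernel is stored with its origin (the 4) at index `[0, 0]`:

```python
import numpy as np
import numpy.fft as fft

def psi_fwd(v):
    # 2D forward difference with circular boundaries, stacked along a new axis
    return np.stack((np.roll(v, -1, axis=0) - v, np.roll(v, -1, axis=1) - v), axis=2)

def psi_adj(u):
    # adjoint: sum of circular backward differences of the two slices
    return (np.roll(u[..., 0], 1, axis=0) - u[..., 0]) + \
           (np.roll(u[..., 1], 1, axis=1) - u[..., 1])

rng = np.random.default_rng(2)
v = rng.standard_normal((8, 8))

# 5-point kernel k with the origin in the corner (already shifted)
k = np.zeros((8, 8))
k[0, 0] = 4
k[0, 1] = k[1, 0] = k[0, -1] = k[-1, 0] = -1

lhs = psi_adj(psi_fwd(v))                               # direct Psi^H Psi
rhs = np.real(fft.ifft2(fft.fft2(k) * fft.fft2(v)))     # multiply by F(k) in frequency space
assert np.allclose(lhs, rhs)
```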
To summarize, in matrix form, the inverse term can be written as $$ \begin{align} \\ (\mu_1 \measurementmtx^H \measurementmtx + \mu_2 \Psi^H \Psi + \mu_3 I)^{-1} &= (\mu_1 \mathbf{F}^{-1} D_M \mathbf{F} + \mu_2 \mathbf{F}^{-1} D_{\Psi} \mathbf{F} + \mu_3 \mathbf{F}^{-1} \mathbf{F})^{-1} \\ &= \mathbf{F}^{-1} (\mu_1 D_M + \mu_2 D_{\Psi} + \mu_3)^{-1} \mathbf{F}, \\ \end{align} $$ where $D_M$ and $D_{\Psi}$ are the diagonal operators corresponding to $\mathbf{M}^H \mathbf{M}$ and $\Psi^H \Psi$ implemented using pixel-wise multiplication in the Fourier space: \begin{cases} D_M = \mathrm{diag}(|\mathcal{F}(h)|^2) \\ D_{\Psi} = \mathrm{diag}(\mathcal{F}(\texttt{pad}(k))), \end{cases} where the $\texttt{pad}$ is used just to make the kernel the same size as the image. What is that kernel? $$ \begin{align} \\ k &= -[(k_{RB}' \ast k_{RF}') + (k_{CB}' \ast k_{CF}')] \\ \\ &= -\left(\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix} \ast \begin{bmatrix} 0 & 0 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \right) - \left(\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 0 \end{bmatrix} \ast \begin{bmatrix} 0 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \right) \\ \\ &= -\begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -2 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & -2 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \\ \\ &= \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 \\ 0 & -1 & 4 & -1 & 0 \\ 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \end{align} $$ Following the convention established in the gradient descent notebook, since this is a Fourier space mask, we store it already shifted so that the origin (4) is in the corner: ``` def precompute_PsiTPsi(): PsiTPsi = np.zeros(full_size) PsiTPsi[0,0] = 4 PsiTPsi[0,1] = PsiTPsi[1,0] = PsiTPsi[0,-1] = PsiTPsi[-1,0] = -1 PsiTPsi = fft.fft2(PsiTPsi) return PsiTPsi ``` So, to 
calculate the inverse operator, we do a DFT to convert to Fourier space, divide the $i,j$th pixel by the $i,j$th entry in $\left(\mu_1 |\mathcal{F}(h)|^2 + \mu_2 \mathcal{F}(k)^* + \mu_3\right)$, and inverse DFT to convert back to real space. As usual, the pixel-wise division can be done in parallel. ``` def precompute_R_divmat(H_fft, PsiTPsi): """Only call this function once! Store it in a variable and only use that variable during every update step""" MTM_component = mu1*(np.abs(np.conj(H_fft)*H_fft)) PsiTPsi_component = mu2*np.abs(PsiTPsi) id_component = mu3 """This matrix is a mask in frequency space. So we will only use it on images that have already been transformed via an fft""" return 1./(MTM_component + PsiTPsi_component + id_component) ``` ### Dual updates: \begin{align*} \xi_{k+1} & \leftarrow \xi_k + \mu_1(\measurementmtx \imagevec_k - x_{k+1}) \\ \eta_{k+1} & \leftarrow \eta_k + \mu_2(\Psi \imagevec_{k+1} - u_{k+1}) \\ \rho_{k+1} & \leftarrow \rho_k + \mu_3(\imagevec_{k+1} - w_{k+1}) \end{align*} ``` def xi_update(xi, V, H_fft, X): return xi + mu1*(M(V,H_fft) - X) def eta_update(eta, V, U): return eta + mu2*(Psi(V) - U) def rho_update(rho, V, W): return rho + mu3*(V - W) ``` ## Putting it all together First, we initialize all the primal and dual variables. Note that the dimensions matter -- we use $\texttt{full_size}$ as the "uncropped" image size. The primal variables $x, w, \imagevec$ all take on this dimension. $u$ is the result of a gradient, which (as is shown below) must be the size of two stacked images. The dual variable dimensions can be derived by applying their update rule to the primal variables. 
``` def init_Matrices(H_fft): X = np.zeros(full_size) U = np.zeros((full_size[0], full_size[1], 2)) V = np.zeros(full_size) W = np.zeros(full_size) xi = np.zeros_like(M(V,H_fft)) eta = np.zeros_like(Psi(V)) rho = np.zeros_like(W) return X,U,V,W,xi,eta,rho def precompute_H_fft(psf): return fft.fft2(fft.ifftshift(CT(psf))) ``` Again, the ADMM steps for reference: \begin{align*} u_{k+1} &\leftarrow \mathcal{T}_{\frac{\tau}{\mu_2}} \left(\Psi \imagevec_k + \frac{\eta_k}{\mu_2}\right) \\ x_{k+1} &\leftarrow \left(\crop^H \crop + \mu_1 I\right)^{-1} \left(\xi_k + \mu_1 \mathbf{M}\imagevec_k + \crop^H \measurementvec\right) \\ w_{k+1} &\leftarrow \max(\rho_k/\mu_3 +\imagevec_k, 0) \\ \imagevec_{k+1} &\leftarrow (\mu_1 \mathbf{M}^H \mathbf{M} + \mu_2 \Psi^H \Psi + \mu_3 I)^{-1} r_k, \\ \xi_{k+1} & \leftarrow \xi_k + \mu_1(\measurementmtx \imagevec_k - x_{k+1}) \\ \eta_{k+1} & \leftarrow \eta_k + \mu_2(\Psi \imagevec_{k+1} - u_{k+1}) \\ \rho_{k+1} & \leftarrow \rho_k + \mu_3(\imagevec_{k+1} - w_{k+1}) \end{align*} ``` def ADMMStep(X,U,V,W,xi,eta,rho, precomputed): H_fft, data, X_divmat, R_divmat = precomputed U = U_update(eta, V, tau) X = X_update(xi, V, H_fft, data, X_divmat) V = V_update(W, rho, U, eta, X, xi, H_fft, R_divmat) W = W_update(rho, V) xi = xi_update(xi, V, H_fft, X) eta = eta_update(eta, V, U) rho = rho_update(rho, V, W) return X,U,V,W,xi,eta,rho def runADMM(psf, data): H_fft = precompute_H_fft(psf) X,U,V,W,xi,eta,rho = init_Matrices(H_fft) X_divmat = precompute_X_divmat() PsiTPsi = precompute_PsiTPsi() R_divmat = precompute_R_divmat(H_fft, PsiTPsi) for i in range(iters): X,U,V,W,xi,eta,rho = ADMMStep(X,U,V,W,xi,eta,rho, [H_fft, data, X_divmat, R_divmat]) if i % 1 == 0: image = C(V) image[image<0] = 0 f = plt.figure(1) plt.imshow(image, cmap='gray') plt.title('Reconstruction after iteration {}'.format(i)) display.display(f) display.clear_output(wait=True) return image #psf, data = loadData(True) final_im = runADMM(psf, data) plt.imshow(final_im, cmap='gray') 
plt.title('Final reconstructed image after {} iterations'.format(iters))
plt.show()
```
<a href="https://colab.research.google.com/github/wileyw/DeepLearningDemos/blob/master/sound/simple_audio_working_vggish_clean_freeze_vggish_weights.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ##### Copyright 2020 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Simple audio recognition: Recognizing keywords <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/audio/simple_audio"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/audio/simple_audio.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/audio/simple_audio.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/audio/simple_audio.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This tutorial will show you how to build a basic speech recognition network that recognizes ten different words. 
It's important to know that real speech and audio recognition systems are much more complex, but like MNIST for images, this tutorial should give you a basic understanding of the techniques involved. The original tutorial produces a model that tries to classify a one-second audio clip as "down", "go", "left", "no", "right", "stop", "up", or "yes"; this modified notebook reuses the same pipeline to train a two-class detector that distinguishes "cough" from "unknown" background audio in a custom dataset.

```
!ls
from google.colab import files
import os
if not os.path.exists('custom_dataset.zip'):
  files.upload()
  !unzip custom_dataset.zip
!ls
!git clone https://github.com/google-coral/project-keyword-spotter.git
!ls project-keyword-spotter/
!cp project-keyword-spotter/mel_features.py .
!ls
import mel_features
```

## Setup

Import necessary modules and dependencies.

```
import os
import pathlib

import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf

from tensorflow.keras.layers.experimental import preprocessing
from tensorflow.keras import layers
from tensorflow.keras import models
from IPython import display

# Set seed for experiment reproducibility
seed = 42
tf.random.set_seed(seed)
np.random.seed(seed)
```

## Import the Speech Commands dataset

You'll write a script to download a portion of the [Speech Commands dataset](https://www.tensorflow.org/datasets/catalog/speech_commands). The original dataset consists of over 105,000 WAV audio files of people saying thirty different words. This data was collected by Google and released under a CC BY license, and you can help improve it by [contributing five minutes of your own voice](https://aiyprojects.withgoogle.com/open_speech_recording).

You'll be using a portion of the dataset to save time with data loading. Extract the `mini_speech_commands.zip` and load it in using the `tf.data` API.
```
data_dir = pathlib.Path('data/mini_speech_commands')
if not data_dir.exists():
  tf.keras.utils.get_file(
      'mini_speech_commands.zip',
      origin="http://storage.googleapis.com/download.tensorflow.org/data/mini_speech_commands.zip",
      extract=True,
      cache_dir='.', cache_subdir='data')
```

Check basic statistics about the dataset.

```
!ls data/mini_speech_commands
!mv data/mini_speech_commands data/mini_speech_commands.bak
!mkdir data/mini_speech_commands
!#cp -r data/mini_speech_commands.bak/left data/mini_speech_commands/left
!#cp -r data/mini_speech_commands.bak/stop data/mini_speech_commands/stop
!mkdir data/mini_speech_commands/unknown
!#cp data/mini_speech_commands.bak/up/*.wav data/mini_speech_commands/unknown
!#cp data/mini_speech_commands.bak/go/*.wav data/mini_speech_commands/unknown
!#cp data/mini_speech_commands.bak/stop/*.wav data/mini_speech_commands/unknown
!#cp data/mini_speech_commands.bak/no/*.wav data/mini_speech_commands/unknown
!#cp data/mini_speech_commands.bak/yes/*.wav data/mini_speech_commands/unknown
!#cp data/mini_speech_commands.bak/down/*.wav data/mini_speech_commands/unknown
!cp custom_dataset/background/*.wav data/mini_speech_commands/unknown
!mkdir data/mini_speech_commands/cough
!cp custom_dataset/cough/*.wav data/mini_speech_commands/cough
!ls data/mini_speech_commands/unknown

commands = np.array(tf.io.gfile.listdir(str(data_dir)))
commands = commands[commands != 'README.md']
print('Commands:', commands)
```

Extract the audio files into a list and shuffle it.

```
filenames = tf.io.gfile.glob(str(data_dir) + '/*/*')
filenames = tf.random.shuffle(filenames)
num_samples = len(filenames)
print('Number of total examples:', num_samples)
print('Number of examples per label:',
      len(tf.io.gfile.listdir(str(data_dir/commands[0]))))
print('Example file tensor:', filenames[0])
```

Split the files into training, validation, and test sets. (The original tutorial uses an 80:10:10 ratio; this modified notebook holds out only the last 20 shuffled files: 10 for validation and 10 for testing.)
``` train_files = filenames[:-20] val_files = filenames[-20: -10] test_files = filenames[-10:] print('Training set size', len(train_files)) print('Validation set size', len(val_files)) print('Test set size', len(test_files)) ``` ## Reading audio files and their labels The audio file will initially be read as a binary file, which you'll want to convert into a numerical tensor. To load an audio file, you will use [`tf.audio.decode_wav`](https://www.tensorflow.org/api_docs/python/tf/audio/decode_wav), which returns the WAV-encoded audio as a Tensor and the sample rate. A WAV file contains time series data with a set number of samples per second. Each sample represents the amplitude of the audio signal at that specific time. In a 16-bit system, like the files in `mini_speech_commands`, the values range from -32768 to 32767. The sample rate for this dataset is 16kHz. Note that `tf.audio.decode_wav` will normalize the values to the range [-1.0, 1.0]. ``` def decode_audio(audio_binary): audio, _ = tf.audio.decode_wav(audio_binary) return tf.squeeze(audio, axis=-1) ``` The label for each WAV file is its parent directory. ``` def get_label(file_path): parts = tf.strings.split(file_path, os.path.sep) # Note: You'll use indexing here instead of tuple unpacking to enable this # to work in a TensorFlow graph. return parts[-2] ``` Let's define a method that will take in the filename of the WAV file and output a tuple containing the audio and labels for supervised training. ``` def get_waveform_and_label(file_path): label = get_label(file_path) audio_binary = tf.io.read_file(file_path) waveform = decode_audio(audio_binary) return waveform, label ``` You will now apply `process_path` to build your training set to extract the audio-label pairs and check the results. You'll build the validation and test sets using a similar procedure later on. 
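As an aside, the 16-bit normalization performed by `tf.audio.decode_wav` can be mimicked without TensorFlow. The sketch below is illustrative only: it uses the standard-library `wave` module rather than `tf.audio.decode_wav`, and the filename is made up. It writes a tiny 16-bit mono WAV, reads it back, and shows that dividing the raw `int16` samples by $2^{15}$ lands them in $[-1.0, 1.0]$:

```python
import wave
import numpy as np

# Write a tiny 16-bit mono WAV with the stdlib `wave` module (hypothetical filename).
samples = np.array([0, 16384, -16384, 32767, -32768], dtype=np.int16)
with wave.open('demo_tone.wav', 'wb') as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(2)       # 2 bytes = 16-bit samples
    w.setframerate(16000)   # 16 kHz, matching the dataset
    w.writeframes(samples.tobytes())

# Read it back and normalize the way tf.audio.decode_wav does: divide by 2**15.
with wave.open('demo_tone.wav', 'rb') as w:
    raw = w.readframes(w.getnframes())
decoded = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 2**15

print(decoded)  # all values lie in [-1.0, 1.0)
```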
``` AUTOTUNE = tf.data.AUTOTUNE files_ds = tf.data.Dataset.from_tensor_slices(train_files) waveform_ds = files_ds.map(get_waveform_and_label, num_parallel_calls=AUTOTUNE) ``` Let's examine a few audio waveforms with their corresponding labels. ``` rows = 3 cols = 3 n = rows*cols fig, axes = plt.subplots(rows, cols, figsize=(10, 12)) for i, (audio, label) in enumerate(waveform_ds.take(n)): r = i // cols c = i % cols ax = axes[r][c] ax.plot(audio.numpy()) ax.set_yticks(np.arange(-1.2, 1.2, 0.2)) label = label.numpy().decode('utf-8') ax.set_title(label) plt.show() ``` ## Spectrogram You'll convert the waveform into a spectrogram, which shows frequency changes over time and can be represented as a 2D image. This can be done by applying the short-time Fourier transform (STFT) to convert the audio into the time-frequency domain. A Fourier transform ([`tf.signal.fft`](https://www.tensorflow.org/api_docs/python/tf/signal/fft)) converts a signal to its component frequencies, but loses all time information. The STFT ([`tf.signal.stft`](https://www.tensorflow.org/api_docs/python/tf/signal/stft)) splits the signal into windows of time and runs a Fourier transform on each window, preserving some time information, and returning a 2D tensor that you can run standard convolutions on. STFT produces an array of complex numbers representing magnitude and phase. However, you'll only need the magnitude for this tutorial, which can be derived by applying `tf.abs` on the output of `tf.signal.stft`. Choose `frame_length` and `frame_step` parameters such that the generated spectrogram "image" is almost square. For more information on STFT parameters choice, you can refer to [this video](https://www.coursera.org/lecture/audio-signal-processing/stft-2-tjEQe) on audio signal processing. You also want the waveforms to have the same length, so that when you convert it to a spectrogram image, the results will have similar dimensions. 
This can be done by simply zero padding the audio clips that are shorter than one second. ``` def get_spectrogram(waveform): # Padding for files with less than 16000 samples zero_padding = tf.zeros([16000] - tf.shape(waveform), dtype=tf.float32) # Concatenate audio with padding so that all audio clips will be of the # same length waveform = tf.cast(waveform, tf.float32) equal_length = tf.concat([waveform, zero_padding], 0) spectrogram = tf.signal.stft( equal_length, frame_length=255, frame_step=128) spectrogram = tf.abs(spectrogram) return spectrogram import numpy as np class Uint8LogMelFeatureExtractor(object): """Provide uint8 log mel spectrogram slices from an AudioRecorder object. This class provides one public method, get_next_spectrogram(), which gets a specified number of spectral slices from an AudioRecorder. """ def __init__(self, num_frames_hop=48): self.spectrogram_window_length_seconds = 0.025 self.spectrogram_hop_length_seconds = 0.010 self.num_mel_bins = 64 #32 self.frame_length_spectra = 96 #98 if self.frame_length_spectra % num_frames_hop: raise ValueError('Invalid num_frames_hop value (%d), ' 'must devide %d' % (num_frames_hop, self.frame_length_spectra)) self.frame_hop_spectra = num_frames_hop self._norm_factor = 3 self._clear_buffers() def _clear_buffers(self): self._audio_buffer = np.array([], dtype=np.int16).reshape(0, 1) self._spectrogram = np.zeros((self.frame_length_spectra, self.num_mel_bins), dtype=np.float32) def _spectrogram_underlap_samples(self, audio_sample_rate_hz): return int((self.spectrogram_window_length_seconds - self.spectrogram_hop_length_seconds) * audio_sample_rate_hz) def _frame_duration_seconds(self, num_spectra): return (self.spectrogram_window_length_seconds + (num_spectra - 1) * self.spectrogram_hop_length_seconds) def compute_spectrogram_and_normalize(self, audio_samples, audio_sample_rate_hz): spectrogram = self._compute_spectrogram(audio_samples, audio_sample_rate_hz) spectrogram -= np.mean(spectrogram, axis=0) if 
self._norm_factor: spectrogram /= self._norm_factor * np.std(spectrogram, axis=0) spectrogram += 1 spectrogram *= 127.5 return np.maximum(0, np.minimum(255, spectrogram)).astype(np.float32) def _compute_spectrogram(self, audio_samples, audio_sample_rate_hz): """Compute log-mel spectrogram and scale it to uint8.""" samples = audio_samples.flatten() / float(2**15) spectrogram = 30 * ( mel_features.log_mel_spectrogram( samples, audio_sample_rate_hz, log_offset=0.001, window_length_secs=self.spectrogram_window_length_seconds, hop_length_secs=self.spectrogram_hop_length_seconds, num_mel_bins=self.num_mel_bins, lower_edge_hertz=60, upper_edge_hertz=3800) - np.log(1e-3)) return spectrogram def _get_next_spectra(self, recorder, num_spectra): """Returns the next spectrogram. Compute num_spectra spectrogram samples from an AudioRecorder. Blocks until num_spectra spectrogram slices are available. Args: recorder: an AudioRecorder object from which to get raw audio samples. num_spectra: the number of spectrogram slices to return. Returns: num_spectra spectrogram slices computed from the samples. """ required_audio_duration_seconds = self._frame_duration_seconds(num_spectra) logger.info("required_audio_duration_seconds %f", required_audio_duration_seconds) required_num_samples = int( np.ceil(required_audio_duration_seconds * recorder.audio_sample_rate_hz)) logger.info("required_num_samples %d, %s", required_num_samples, str(self._audio_buffer.shape)) audio_samples = np.concatenate( (self._audio_buffer, recorder.get_audio(required_num_samples - len(self._audio_buffer))[0])) self._audio_buffer = audio_samples[ required_num_samples - self._spectrogram_underlap_samples(recorder.audio_sample_rate_hz):] spectrogram = self._compute_spectrogram( audio_samples[:required_num_samples], recorder.audio_sample_rate_hz) assert len(spectrogram) == num_spectra return spectrogram def get_next_spectrogram(self, recorder): """Get the most recent spectrogram frame. 
Blocks until the frame is available. Args: recorder: an AudioRecorder instance which provides the audio samples. Returns: The next spectrogram frame as a uint8 numpy array. """ assert recorder.is_active logger.info("self._spectrogram shape %s", str(self._spectrogram.shape)) self._spectrogram[:-self.frame_hop_spectra] = ( self._spectrogram[self.frame_hop_spectra:]) self._spectrogram[-self.frame_hop_spectra:] = ( self._get_next_spectra(recorder, self.frame_hop_spectra)) # Return a copy of the internal state that's safe to persist and won't # change the next time we call this function. logger.info("self._spectrogram shape %s", str(self._spectrogram.shape)) spectrogram = self._spectrogram.copy() spectrogram -= np.mean(spectrogram, axis=0) if self._norm_factor: spectrogram /= self._norm_factor * np.std(spectrogram, axis=0) spectrogram += 1 spectrogram *= 127.5 return np.maximum(0, np.minimum(255, spectrogram)).astype(np.uint8) feature_extractor = Uint8LogMelFeatureExtractor() def get_spectrogram2(waveform): """ # Padding for files with less than 16000 samples zero_padding = tf.zeros([16000] - tf.shape(waveform), dtype=tf.float32) # Concatenate audio with padding so that all audio clips will be of the # same length waveform = tf.cast(waveform, tf.float32) equal_length = tf.concat([waveform, zero_padding], 0) spectrogram = tf.signal.stft( equal_length, frame_length=255, frame_step=128) spectrogram = tf.abs(spectrogram) return spectrogram """ waveform = waveform.numpy() #print(waveform.shape) #print(type(waveform)) spectrogram = feature_extractor.compute_spectrogram_and_normalize(waveform[:15680], 16000) return spectrogram for waveform, label in waveform_ds.take(1): label2 = label.numpy().decode('utf-8') spectrogram2 = get_spectrogram2(waveform) print('Label:', label2) print('Waveform shape:', waveform.shape) print('Spectrogram shape:', spectrogram2.shape) print('Spectrogram type:', spectrogram2.dtype) ``` Next, you will explore the data. 
Compare the waveform, the spectrogram and the actual audio of one example from the dataset. ``` for waveform, label in waveform_ds.take(1): label = label.numpy().decode('utf-8') print(waveform.shape) spectrogram = get_spectrogram(waveform) print('Label:', label) print('Waveform shape:', waveform.shape) print('Spectrogram shape:', spectrogram.shape) print('Audio playback') print('Spectrogram type:', spectrogram.dtype) display.display(display.Audio(waveform, rate=16000)) def plot_spectrogram(spectrogram, ax): # Convert to frequencies to log scale and transpose so that the time is # represented in the x-axis (columns). log_spec = np.log(spectrogram.T) height = log_spec.shape[0] X = np.arange(16000, step=height + 1) Y = range(height) ax.pcolormesh(X, Y, log_spec) fig, axes = plt.subplots(2, figsize=(12, 8)) timescale = np.arange(waveform.shape[0]) axes[0].plot(timescale, waveform.numpy()) axes[0].set_title('Waveform') axes[0].set_xlim([0, 16000]) plot_spectrogram(spectrogram.numpy(), axes[1]) axes[1].set_title('Spectrogram') plt.show() ``` Now transform the waveform dataset to have spectrogram images and their corresponding labels as integer IDs. ``` def get_spectrogram_and_label_id(audio, label): spectrogram = get_spectrogram(audio) spectrogram = tf.expand_dims(spectrogram, -1) label_id = tf.argmax(label == commands) return spectrogram, label_id spectrogram_ds = waveform_ds.map( get_spectrogram_and_label_id, num_parallel_calls=AUTOTUNE) ``` Examine the spectrogram "images" for different samples of the dataset. ``` rows = 3 cols = 3 n = rows*cols fig, axes = plt.subplots(rows, cols, figsize=(10, 10)) for i, (spectrogram, label_id) in enumerate(spectrogram_ds.take(n)): r = i // cols c = i % cols ax = axes[r][c] plot_spectrogram(np.squeeze(spectrogram.numpy()), ax) ax.set_title(commands[label_id.numpy()]) ax.axis('off') plt.show() ``` ## Build and train the model Now you can build and train your model. 
But before you do that, you'll need to repeat the training set preprocessing on the validation and test sets. ``` def preprocess_dataset(files): files_ds = tf.data.Dataset.from_tensor_slices(files) output_ds = files_ds.map(get_waveform_and_label, num_parallel_calls=AUTOTUNE) output_ds = output_ds.map( get_spectrogram_and_label_id, num_parallel_calls=AUTOTUNE) return output_ds train_ds = spectrogram_ds val_ds = preprocess_dataset(val_files) test_ds = preprocess_dataset(test_files) def only_load_dataset(files): files_ds = tf.data.Dataset.from_tensor_slices(files) output_ds = files_ds.map(get_waveform_and_label, num_parallel_calls=AUTOTUNE) return output_ds train_waveform_data = only_load_dataset(train_files) val_waveform_data = only_load_dataset(val_files) test_waveform_data = only_load_dataset(test_files) ``` Batch the training and validation sets for model training. ``` batch_size = 64 train_ds = train_ds.batch(batch_size) val_ds = val_ds.batch(batch_size) ``` Add dataset [`cache()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache) and [`prefetch()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#prefetch) operations to reduce read latency while training the model. ``` train_ds = train_ds.cache().prefetch(AUTOTUNE) val_ds = val_ds.cache().prefetch(AUTOTUNE) ``` For the model, you'll use a simple convolutional neural network (CNN), since you have transformed the audio files into spectrogram images. The model also has the following additional preprocessing layers: - A [`Resizing`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing) layer to downsample the input to enable the model to train faster. - A [`Normalization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) layer to normalize each pixel in the image based on its mean and standard deviation. 
For the `Normalization` layer, its `adapt` method would first need to be called on the training data in order to compute aggregate statistics (i.e. mean and standard deviation). ``` #for spectrogram, _ in spectrogram_ds.take(1): # input_shape = spectrogram.shape for data_item, label in train_waveform_data.take(10): spectrogram = feature_extractor.compute_spectrogram_and_normalize(data_item.numpy()[:15680], 16000) print(spectrogram.shape) if spectrogram.shape[0] != 96: continue input_shape = (spectrogram.shape[0], spectrogram.shape[1], 1) print('Input shape:', input_shape) num_labels = len(commands) norm_layer = preprocessing.Normalization() norm_layer.adapt(spectrogram_ds.map(lambda x, _: x)) #preprocessing.Resizing(32, 32), model = models.Sequential([ layers.Input(shape=input_shape), norm_layer, layers.Conv2D(32, 3, activation='relu'), layers.Conv2D(64, 3, activation='relu'), layers.MaxPooling2D(), layers.Dropout(0.25), layers.Flatten(), layers.Dense(128, activation='relu'), layers.Dropout(0.5), layers.Dense(num_labels), ]) model.summary() # https://github.com/antoinemrcr/vggish2Keras from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, Flatten from tensorflow.keras.models import Model def get_vggish_keras(): NUM_FRAMES = 96 # Frames in input mel-spectrogram patch NUM_BANDS = 64 # Frequency bands in input mel-spectrogram patch EMBEDDING_SIZE = 128 # Size of embedding layer input_shape = (NUM_FRAMES,NUM_BANDS,1) img_input = Input( shape=input_shape) # Block 1 x = Conv2D(64, (3, 3), activation='relu', padding='same', name='conv1')(img_input) x = MaxPooling2D((2, 2), strides=(2, 2), name='pool1')(x) # Block 2 x = Conv2D(128, (3, 3), activation='relu', padding='same', name='conv2')(x) x = MaxPooling2D((2, 2), strides=(2, 2), name='pool2')(x) # Block 3 x = Conv2D(256, (3, 3), activation='relu', padding='same', name='conv3_1')(x) x = Conv2D(256, (3, 3), activation='relu', padding='same', name='conv3_2')(x) x = MaxPooling2D((2, 2), strides=(2, 2), 
name='pool3')(x) # Block 4 x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv4_1')(x) x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv4_2')(x) x = MaxPooling2D((2, 2), strides=(2, 2), name='pool4')(x) # Block fc x = Flatten(name='flatten')(x) x = Dense(4096, activation='relu', name='fc1_1')(x) x = Dense(4096, activation='relu', name='fc1_2')(x) x = Dense(EMBEDDING_SIZE, activation='relu', name='fc2')(x) model = Model(img_input, x, name='vggish') return model model_vggish = get_vggish_keras() model_vggish.summary() !ls !du -sh vggish_weights.ckpt # The file should be around 275M checkpoint_path = 'vggish_weights.ckpt' if os.path.exists(checkpoint_path): print('Loading VGGish Checkpoint Path') model_vggish.load_weights(checkpoint_path) else: print('{} not detected, weights not loaded'.format(checkpoint_path)) new_model = tf.keras.Sequential() model_vggish.trainable = False new_model.add(model_vggish) new_model.add(layers.Dense(2, name='last')) new_model.summary() model = new_model model.compile( optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'], ) new_train_data = [] new_train_labels = [] new_val_data = [] new_val_labels = [] new_test_data = [] new_test_labels = [] for data_item, label in train_waveform_data: spectrogram = feature_extractor.compute_spectrogram_and_normalize(data_item.numpy()[:15680], 16000) label = label.numpy().decode('utf-8') label_id = tf.argmax(label == commands) # NOTE: Spectrogram shape is not always the same if spectrogram.shape[0] != 96: continue new_train_data.append(spectrogram) new_train_labels.append(label_id) for data_item, label in val_waveform_data: spectrogram = feature_extractor.compute_spectrogram_and_normalize(data_item.numpy()[:15680], 16000) label = label.numpy().decode('utf-8') label_id = tf.argmax(label == commands) if spectrogram.shape[0] != 96: continue new_val_data.append(spectrogram) 
new_val_labels.append(label_id) for data_item, label in test_waveform_data: spectrogram = feature_extractor.compute_spectrogram_and_normalize(data_item.numpy()[:15680], 16000) label = label.numpy().decode('utf-8') label_id = tf.argmax(label == commands) if spectrogram.shape[0] != 96: continue new_test_data.append(spectrogram) new_test_labels.append(label_id) new_train_data = np.array(new_train_data).astype('float32') new_val_data = np.array(new_val_data).astype('float32') new_test_data = np.array(new_test_data).astype('float32') new_train_labels = np.array(new_train_labels) new_val_labels = np.array(new_val_labels) new_test_labels = np.array(new_test_labels) # (1, 98, 32, 1) new_train_data = np.expand_dims(new_train_data, axis=3) new_val_data = np.expand_dims(new_val_data, axis=3) new_test_data = np.expand_dims(new_test_data, axis=3) print('--------') print(new_train_data.shape) print(new_val_data.shape) print(new_test_data.shape) print(new_train_labels.shape) print(new_val_labels.shape) print(new_test_labels.shape) print('--------') EPOCHS = 30 #history = model.fit( # train_ds, # validation_data=val_ds, # epochs=EPOCHS, # callbacks=tf.keras.callbacks.EarlyStopping(verbose=1, patience=2), #) history = model.fit( new_train_data, new_train_labels, validation_data=(new_val_data, new_val_labels), epochs=EPOCHS, #callbacks=tf.keras.callbacks.EarlyStopping(verbose=1, patience=2), ) ``` Let's check the training and validation loss curves to see how your model has improved during training. ``` metrics = history.history plt.plot(history.epoch, metrics['loss'], metrics['val_loss']) plt.legend(['loss', 'val_loss']) plt.show() ``` ## Evaluate test set performance Let's run the model on the test set and check performance. 
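The accuracy figure computed in the next cell is simply the fraction of argmax predictions that match the true labels. As a self-contained toy illustration of that arithmetic (made-up logits, no TensorFlow required):

```python
# Toy version of the accuracy computation: the predicted class is the index of
# the largest logit in each row, and accuracy is the fraction of matches.
def argmax(row):
    return max(range(len(row)), key=row.__getitem__)

logits = [[2.1, 0.3], [0.2, 1.7], [1.0, 0.9]]  # hypothetical model outputs
labels = [0, 1, 1]

preds = [argmax(row) for row in logits]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(f'Test set accuracy: {accuracy:.0%}')  # 2 of 3 correct -> 67%
```

With real data, `np.argmax(model.predict(test_audio), axis=1)` plays the role of the toy `argmax` loop.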
``` #test_audio = [] #test_labels = [] #for audio, label in test_ds: # test_audio.append(audio.numpy()) # test_labels.append(label.numpy()) #test_audio = np.array(test_audio) #test_labels = np.array(test_labels) test_audio = new_test_data test_labels = new_test_labels y_pred = np.argmax(model.predict(test_audio), axis=1) y_true = test_labels test_acc = sum(y_pred == y_true) / len(y_true) print(f'Test set accuracy: {test_acc:.0%}') ``` ### Display a confusion matrix A confusion matrix is helpful to see how well the model did on each of the commands in the test set. ``` confusion_mtx = tf.math.confusion_matrix(y_true, y_pred) plt.figure(figsize=(10, 8)) sns.heatmap(confusion_mtx, xticklabels=commands, yticklabels=commands, annot=True, fmt='g') plt.xlabel('Prediction') plt.ylabel('Label') plt.show() ``` ## Run inference on an audio file Finally, verify the model's prediction output using an input audio file of someone saying "no." How well does your model perform? ``` !ls data/mini_speech_commands/cough #sample_file = data_dir/'no/01bb6a2a_nohash_0.wav' #sample_file = data_dir/'left/b46e8153_nohash_0.wav' #sample_file = data_dir/'no/ac7840d8_nohash_1.wav' #sample_file = data_dir/'no/5588c7e6_nohash_0.wav' #sample_file = data_dir/'up/52e228e9_nohash_0.wav' sample_file = data_dir/'cough/pos-0422-096-cough-m-31-8.wav' #sample_ds = preprocess_dataset([str(sample_file)]) X = only_load_dataset([str(sample_file)]) for waveform, label in X.take(1): label = label.numpy().decode('utf-8') print(waveform, label) spectrogram = feature_extractor.compute_spectrogram_and_normalize(waveform.numpy()[:15680], 16000) # NOTE: Dimensions need to be expanded spectrogram = np.expand_dims(spectrogram, axis=-1) spectrogram = np.expand_dims(spectrogram, axis=0) print(spectrogram.shape) prediction = model(spectrogram) print(prediction.shape) plt.bar(commands, tf.nn.softmax(prediction[0])) plt.title(f'Predictions for "{label}"') plt.show() #for spectrogram, label in sample_ds.batch(1): # 
prediction = model(spectrogram) # plt.bar(commands, tf.nn.softmax(prediction[0])) # plt.title(f'Predictions for "{commands[label[0]]}"') # plt.show() print(model) converter = tf.lite.TFLiteConverter.from_keras_model(model) tflite_model = converter.convert() # Save the model. with open('model.tflite', 'wb') as f: f.write(tflite_model) ! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - ! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list ! sudo apt-get update ! sudo apt-get install edgetpu-compiler # Define representative dataset print(new_test_data.shape) def representative_dataset(): yield [new_test_data] # Add quantization in order to run on the EdgeTPU converter2 = tf.lite.TFLiteConverter.from_keras_model(model) converter2.optimizations = [tf.lite.Optimize.DEFAULT] converter2.representative_dataset = representative_dataset tflite_quant_model = converter2.convert() with open('model_quantized.tflite', 'wb') as f: f.write(tflite_quant_model) !edgetpu_compiler model_quantized.tflite !ls -l !ls -l # https://www.tensorflow.org/lite/guide/inference interpreter = tf.lite.Interpreter(model_path="model.tflite") interpreter.allocate_tensors() # Get input and output tensors. input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() print(input_details) print(output_details) # Test the model on random input data. input_shape = input_details[0]['shape'] input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32) interpreter.set_tensor(input_details[0]['index'], input_data) interpreter.invoke() # The function `get_tensor()` returns a copy of the tensor data. # Use `tensor()` in order to get a pointer to the tensor. 
output_data = interpreter.get_tensor(output_details[0]['index']) print(output_data) #sample_file = data_dir/'no/01bb6a2a_nohash_0.wav' #sample_file = data_dir/'left/b46e8153_nohash_0.wav' sample_file = data_dir/'cough/pos-0422-096-cough-m-31-8.wav' #sample_ds = preprocess_dataset([str(sample_file)]) #waveform, label = get_waveform_and_label(sample_file) #spectrogram = feature_extractor._compute_spectrogram(waveform, 16000) X = only_load_dataset([str(sample_file)]) for waveform, label in X.take(1): label = label.numpy().decode('utf-8') spectrogram = feature_extractor.compute_spectrogram_and_normalize(waveform.numpy()[:15680], 16000) spectrogram = np.expand_dims(spectrogram, axis=-1) spectrogram = np.expand_dims(spectrogram, axis=0) print('Original--------------------') print(spectrogram.shape) prediction = model(spectrogram) print(prediction) print('TFLITE--------------------') # NOTE: dtype needs to be np.float32 input_data = np.array(spectrogram, dtype=np.float32) print(input_data.shape) interpreter.set_tensor(input_details[0]['index'], input_data) interpreter.invoke() prediction2 = interpreter.get_tensor(output_details[0]['index']) print(prediction2) print(np.argmax(np.array(prediction).flatten())) print(np.argmax(np.array(prediction2).flatten())) # NOTE: Remember to add softmax after the prediction plt.bar(commands, tf.nn.softmax(prediction[0])) plt.title(f'Predictions for "{label}"') plt.show() plt.imshow(np.squeeze(spectrogram).T) plt.show() ``` You can see that your model very clearly recognized the audio command as "no." ``` from google.colab import files files.download('model.tflite') from google.colab import files files.download('model_quantized_edgetpu.tflite') ``` ## Next steps This tutorial showed how you could do simple audio classification using a convolutional neural network with TensorFlow and Python. 
* To learn how to use transfer learning for audio classification, check out the [Sound classification with YAMNet](https://www.tensorflow.org/hub/tutorials/yamnet) tutorial. * To build your own interactive web app for audio classification, consider taking the [TensorFlow.js - Audio recognition using transfer learning codelab](https://codelabs.developers.google.com/codelabs/tensorflowjs-audio-codelab/index.html#0). * TensorFlow also has additional support for [audio data preparation and augmentation](https://www.tensorflow.org/io/tutorials/audio) to help with your own audio-based projects.
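One detail worth calling out from the inference cells above: the model's final `Dense` layer emits raw logits (hence `from_logits=True` in the loss), so probabilities are obtained by applying softmax afterwards. A framework-free sketch of that step:

```python
import math

# Softmax maps raw logits to probabilities that sum to 1; subtracting the
# max logit first is the standard trick for numerical stability.
def softmax(logits):
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 0.5])
print(probs)  # roughly [0.82, 0.18]
```

In the notebook, `tf.nn.softmax(prediction[0])` performs exactly this conversion before plotting.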
``` import os from functools import reduce import ee from ismn.interface import ISMN_Interface import pandas as pd # Authenticate the library using your Google account ee.Authenticate() # Initializes the Google Earth Engine library ee.Initialize() ``` #### Function definitions ``` ## Cloud masking steps for Sentinel-2 images ## # Define cloud mask parameters CLOUD_FILTER = 80 # maximum percent of cloud cover in the image CLD_PRB_THRESH = 50 # probability of clouds at pixel NIR_DRK_THRESH = 0.15 # NIR reflectance; values below threshold are considered clouds CLD_PRJ_DIST = 1 # maximum distance to search for cloud shadows from cloud edges BUFFER = 50 # dilation distance from edge of cloud objects ## Creates an image collection from specified parameters of cloud probability def get_s2_sr_cld_col(start_date, end_date): # Import and filter S2 SR. s2_sr_col = (ee.ImageCollection('COPERNICUS/S2_SR') .filterDate(start_date, end_date) .filter(ee.Filter.lte('CLOUDY_PIXEL_PERCENTAGE', CLOUD_FILTER))) # Import and filter s2cloudless. s2_cloudless_col = (ee.ImageCollection('COPERNICUS/S2_CLOUD_PROBABILITY') .filterDate(start_date, end_date)) # Join the filtered s2cloudless collection to the SR collection by the 'system:index' property. return ee.ImageCollection(ee.Join.saveFirst('s2cloudless').apply(**{ 'primary': s2_sr_col, 'secondary': s2_cloudless_col, 'condition': ee.Filter.equals(**{ 'leftField': 'system:index', 'rightField': 'system:index' }) })) ## Cloud component ## Adds the s2cloudless probability layers and derives a cloud mask def add_cloud_bands(img): # Get s2cloudless image, subset the probability band. cld_prb = ee.Image(img.get('s2cloudless')).select('probability') # Condition s2cloudless by the probability threshold value. is_cloud = cld_prb.gt(CLD_PRB_THRESH).rename('clouds') # Add the cloud probability layer and cloud mask as image bands. 
return img.addBands(ee.Image([cld_prb, is_cloud])) ## Cloud shadow component ## Adds dark pixels, cloud projection, and identified shadows as bands to the image collection def add_shadow_bands(img): # Identify water pixels from the SCL band. not_water = img.select('SCL').neq(6) # Identify dark NIR pixels that are not water (potential cloud shadow pixels). SR_BAND_SCALE = 1e4 dark_pixels = img.select('B8').lt(NIR_DRK_THRESH*SR_BAND_SCALE).multiply(not_water).rename('dark_pixels') # Determine the direction to project cloud shadow from clouds (assumes UTM projection). shadow_azimuth = ee.Number(90).subtract(ee.Number(img.get('MEAN_SOLAR_AZIMUTH_ANGLE'))); # Project shadows from clouds for the distance specified by the CLD_PRJ_DIST input. cld_proj = (img.select('clouds').directionalDistanceTransform(shadow_azimuth, CLD_PRJ_DIST*10) .reproject(**{'crs': img.select(0).projection(), 'scale': 100}) .select('distance') .mask() .rename('cloud_transform')) # Identify the intersection of dark pixels with cloud shadow projection. shadows = cld_proj.multiply(dark_pixels).rename('shadows') # Add dark pixels, cloud projection, and identified shadows as image bands. return img.addBands(ee.Image([dark_pixels, cld_proj, shadows])) ## Assembles the cloud-shadow mask to produce a final masking of the images def add_cld_shdw_mask(img): # Add cloud component bands. img_cloud = add_cloud_bands(img) # Add cloud shadow component bands. img_cloud_shadow = add_shadow_bands(img_cloud) # Combine cloud and shadow mask, set cloud and shadow as value 1, else 0. is_cld_shdw = img_cloud_shadow.select('clouds').add(img_cloud_shadow.select('shadows')).gt(0) # Remove small cloud-shadow patches and dilate remaining pixels by BUFFER input. # 20 m scale is for speed, and assumes clouds don't require 10 m precision. 
is_cld_shdw = (is_cld_shdw.focal_min(2).focal_max(BUFFER*2/20) .reproject(**{'crs': img.select([0]).projection(), 'scale': 20}) .rename('cloudmask')) # Add the final cloud-shadow mask to the image. return img.addBands(is_cld_shdw) ## Converts an EarthEngine image collection into a Pandas array for a given point ## with the specified bands. ## The buffer attribute should be equal to half the spatial resolution of the final product. ## The bands should be given as a list, even for single bands. def ee_to_df(ee_arr, lon, lat, buffer, int_limit, bands, start_date, end_date): # Converts columns to numeric values def to_numeric(dataframe, band): dataframe[band] = pd.to_numeric(dataframe[band], errors='coerce') return dataframe # Transform the client-side data to a dataframe poi = ee.Geometry.Point(lon, lat) arr = ee_arr.select(bands).getRegion(poi, buffer).getInfo() df = pd.DataFrame(arr) headers = df.iloc[0] df = pd.DataFrame(df.values[1:], columns=headers) # Applies the to_numeric function and fills NaN rows with interpolated values for band in bands: df = to_numeric(df, band) if int_limit > 0: df[band].interpolate(method='linear', limit=int_limit, limit_direction='both', inplace=True) df.drop_duplicates(keep='first') # remove duplicates # Creates an index date column and drops unnecessary date, time, and coordinate columns df['Date'] = pd.to_datetime(df['time'], unit='ms') df['Date'] = df['Date'].dt.date df.set_index('Date', inplace=True) df.drop(['id', 'time', 'longitude', 'latitude'], axis=1, inplace=True) # Drop duplicate entries from the index and reindex the dataset to daily timesteps df = df[~df.index.duplicated()] df = df.reindex(pd.date_range(start=start_date, end=end_date, freq='D')) df.index.name = 'Date' return df ## Creates a new column that counts the number of days between precipitation events def dry_days(df): dry_streak = [] counter = 0 for day, value in enumerate(df['total_precipitation']): if value > 0: dry_streak.append(0) counter = 0 else: counter 
+= 1 dry_streak.append(counter) df['Dry_days'] = dry_streak df['Dry_days'] = df['Dry_days'].astype('float') return df ## Calculates and adds the vegetation index profiles from the S2 data. ## The indices are interpolated linearly 30 days in both directions. ## NDWI is based on https://doi.org/10.1016/S0034-4257(96)00067-3 def add_vegetation_index(df, name, band_1, band_2): df[name] = (df[band_1] - df[band_2]) / (df[band_1] + df[band_2]) df[name].interpolate(method='linear', limit=30, limit_direction='both', inplace=True) return df ## Compiles a dataframes of all remotely sensed EE variables at a given point def get_rs_data(lon, lat, point_id): # MODIS lst = ee_to_df(land_t, lon, lat, 5, 5, ['LST_Day_1km'], start_date, end_date) lst = lst * 0.02 - 273.15 # scaling factor and Kelvin to Celcius conversion # Sentinel-1 sigma = ee_to_df(s1, lon, lat, 5, 0, ['VV', 'VH', 'angle'], start_date, end_date) # Sentinel-2 s2_vi = ee_to_df(s2, lon, lat, 5, 0, ['B4', 'B8', 'B8A', 'B11'], start_date, end_date) s2_vi = add_vegetation_index(s2_vi, 'NDVI', 'B4', 'B8') s2_vi = add_vegetation_index(s2_vi, 'NDWI', 'B8A', 'B11') s2_vi.drop(['B4', 'B8', 'B8A', 'B11'], axis=1, inplace=True) # ERA-5 era5_weather = ee_to_df(era5, lon, lat, 5, 0, ['mean_2m_air_temperature', 'total_precipitation'], start_date, end_date) era5_weather = dry_days(era5_weather) era5_weather['mean_2m_air_temperature'] = era5_weather['mean_2m_air_temperature'] - 273.15 # Merge dataframes dfs = [lst, sigma, s2_vi, era5_weather] df = reduce(lambda left,right: pd.merge(left, right, on=['Date'], how='outer'), dfs) df['ID'] = point_id return df ## Creates a dataframe from ISMN data ## Using the ISMN reader https://github.com/TUW-GEO/ismn def get_sm_data(network, depth_start, depth_end, snow_depth=False): # Retrieves the name of each station in the network stations = {} for n, station in enumerate(sm_data[network]): stations[n] = station.metadata['station'][1] # Creates a new dataframe with the station IDs and coordinates 
grid = sm_data.collection.grid gpis, lon, lat = grid.get_grid_points() df_coords = pd.DataFrame(index=pd.Index(gpis, name='ID'), data={'longitude': lon, 'latitude': lat}) df_coords = df_coords.rename(index=stations) # Writes soil moisture at 0-5cm depth to a dataframe sm_df = pd.DataFrame() for network, station, sensor in sm_data.collection.iter_sensors( variable='soil_moisture', depth=[depth_start, depth_end]): data = sensor.read_data() sensor_df = pd.DataFrame(data) sensor_df = sensor_df[sensor_df['soil_moisture_flag'] == 'G'] # filters out bad readings sensor_df = sensor_df.resample('D').mean() # resamples to daily values sensor_df['Date'] = sensor_df.index.date sensor_df['ID'] = station.metadata['station'][1] sm_df = sm_df.append(sensor_df) # Writes snow depth to a dataframe if specified and filters out days with snow cover if snow_depth: snow_df = pd.DataFrame() for network, station, sensor in sm_data.collection.iter_sensors( variable='snow_depth', depth=[0., 0.]): data = sensor.read_data() sensor_df = pd.DataFrame(data) sensor_df.loc[sensor_df['snow_depth']<=0, 'snow'] = 'yes' sensor_df = sensor_df.resample('D').count() # counts the entries with snow cover sensor_df['Date'] = sensor_df.index.date sensor_df['ID'] = station.metadata['station'][1] sensor_df = sensor_df[['ID', 'Date', 'snow']] snow_df = snow_df.append(sensor_df) try: df = pd.merge(sm_df, snow_df, how='left', left_on=['Date', 'ID'], right_on=['Date', 'ID']) df = df[df['snow']==0] # any day with snow cover entries is discarded except KeyError: print('No snow depth variable found, set snow_depth=False and run again') else: df = sm_df df.rename({'Date_x' : 'Date'}, axis=1, inplace=True) df.set_index('Date', inplace=True) df = df[['ID', 'soil_moisture']] print('ISMN data successfully translated to dataframe') return df, df_coords ## Queries the Earth Engine data for each soil moisture probe (lat, lon coordinates) in the ## dataset and merges the EE data with the sm data def 
merge_sm_rs_data(soil_moisture_df, coordinate_df): df = pd.DataFrame() print('Extracting data from Earth Engine...') i_start, i_end = 1, len(coordinate_df) for station, coords in coordinate_df.iterrows(): print('Station {} of {}'.format(i_start, i_end)) df_sm_temp = soil_moisture_df[soil_moisture_df['ID'] == station] df_rs_temp = (get_rs_data(coords['longitude'], coords['latitude'], station)) df_rs_temp = df_rs_temp.reindex(pd.date_range(start=start_date, end=end_date, freq='D')) df_temp = pd.merge(df_rs_temp, df_sm_temp, how='left', left_index=True, right_index=True) df_temp.rename({'ID_x' : 'ID'}, axis=1, inplace=True) df_temp.drop('ID_y', axis=1, inplace=True) df = df.append(df_temp) i_start += 1 df.index.name = 'Date' return df ``` #### Data specifications ``` #!# Important to verify that all datasets contain data between the two periods #!# #!# The current timeframe between start and end are the lower and upper limits, #!# #!# defined by the start of Sentinel-2 SR data and end of ERA-5 start_date, end_date = '2017-03-28', '2020-07-09' # Sentinel-1 GRD C-band SAR data # https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S1_GRD # Values are expressed in decibel on a logarithmic scale s1 = (ee.ImageCollection("COPERNICUS/S1_GRD") .filterDate(ee.Date(start_date), ee.Date(end_date)) .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV')) .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH')) .filter(ee.Filter.eq('instrumentMode', 'IW'))) # Sentinel-2 Level-2A surface reflectance # https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SR s2_sr_cld_col_eval = get_s2_sr_cld_col(start_date, end_date) s2 = s2_sr_cld_col_eval.map(add_cld_shdw_mask) # MODIS daily land surface temperature # https://developers.google.com/earth-engine/datasets/catalog/MODIS_006_MOD11A1 # Temperature is given in Kelvin with a scaling factor of 0.02 land_t = (ee.ImageCollection("MODIS/006/MOD11A1") 
.filterDate(start_date, end_date) .select('LST_Day_1km')) # ERA-5 daily aggregate re-analysis data # https://developers.google.com/earth-engine/datasets/catalog/ECMWF_ERA5_DAILY # Maximum temperature (air) uses Kelvin # Precipitation is in meters era5 = (ee.ImageCollection('ECMWF/ERA5/DAILY') .filterDate(start_date, end_date) .select(['mean_2m_air_temperature', 'total_precipitation'])) # Soil moisture data retrieved from ISMN # Retrieved from https://ismn.geo.tuwien.ac.at/en/ root = r'C:\Users\Nick\Documents\_Master_Thesis\Soil_Moisture\ISMN' sm_data = ISMN_Interface(os.path.join(root, 'REMEDHUS.zip'), parallel=True) ``` #### Dataframe compiler ``` ## Compile the final merged dataframe of RS and SM data # Specify the probe network name # For multiple networks, these must be iterated through either through a script or by saving # a .csv for each network and then later merged network = 'SNOTEL' # Creates a soil moisture dataframe from the ISMN data # Specify the depth intervals (in meters) of the moisture measurements, e.g. 0.0, 0.05 for soil # moisture between 0-5cm # Set snow_depth=True to filter out days with snow cover if the data contains a snow depth variable df_sm, df_coords = get_sm_data(network, 0.0, 0.05, snow_depth=True) # Merge the soil moisture and remote sensing data for each station df = merge_sm_rs_data(df_sm, df_coords) # Export the dataframe to a local disk # Specify the out_dir with your local path out_dir = r'C:\Users\Nick\Documents\_Master_Thesis\Soil_Moisture\ISMN' df.to_csv(os.path.join(out_dir, 'SM_Data_{}.csv'.format(network))) print('Done!') ```
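The `dry_days` helper defined in the function-definitions section is, at its core, a run-length counter over the daily precipitation series. Its logic can be illustrated standalone (toy values, no pandas):

```python
# Stdlib-only sketch of the dry_days() counter: each rain-free day extends the
# current dry streak; any day with precipitation resets the streak to zero.
def dry_streaks(precipitation):
    streaks, counter = [], 0
    for value in precipitation:
        if value > 0:
            streaks.append(0)
            counter = 0
        else:
            counter += 1
            streaks.append(counter)
    return streaks

print(dry_streaks([0.0, 0.0, 1.2, 0.0, 0.0, 0.0]))  # [1, 2, 0, 1, 2, 3]
```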
# Datetime

The `datetime` package is part of the Python Standard Library. This notebook lists some of the commonly used functions of the `datetime` package.

## [`datetime`](https://docs.python.org/3/library/datetime.html#module-datetime) for working with dates and times

```
import datetime as dt

local_now = dt.datetime.now()
print('local now: {}'.format(local_now))

utc_now = dt.datetime.utcnow()
print('utc now: {}'.format(utc_now))

# You can access any value separately:
print('{} {} {} {} {} {}'.format(local_now.year, local_now.month,
                                 local_now.day, local_now.hour,
                                 local_now.minute, local_now.second))

print('date: {}'.format(local_now.date()))
print('time: {}'.format(local_now.time()))
```

### `strftime()`

For string formatting the `datetime`

```
formatted1 = local_now.strftime('%Y/%m/%d-%H:%M:%S')
print(formatted1)

formatted2 = local_now.strftime('date: %Y-%m-%d time:%H:%M:%S')
print(formatted2)
```

### `strptime()`

For converting a datetime string into a `datetime` object

```
my_dt = dt.datetime.strptime('2000-01-01 10:00:00', '%Y-%m-%d %H:%M:%S')
print('my_dt: {}'.format(my_dt))
```

### [`timedelta`](https://docs.python.org/3/library/datetime.html#timedelta-objects)

For working with time difference.

```
tomorrow = local_now + dt.timedelta(days=1)
print('tomorrow this time: {}'.format(tomorrow))

delta = tomorrow - local_now
print('tomorrow - now = {}'.format(delta))
print('days: {}, seconds: {}'.format(delta.days, delta.seconds))
print('total seconds: {}'.format(delta.total_seconds()))
```

### Working with timezones

Let's first make sure [`pytz`](http://pytz.sourceforge.net/) is installed.
```
import sys
!{sys.executable} -m pip install pytz

import datetime as dt
import pytz

naive_utc_now = dt.datetime.utcnow()
print('naive utc now: {}, tzinfo: {}'.format(naive_utc_now, naive_utc_now.tzinfo))

# Localizing naive datetimes
UTC_TZ = pytz.timezone('UTC')
utc_now = UTC_TZ.localize(naive_utc_now)
print('utc now: {}, tzinfo: {}'.format(utc_now, utc_now.tzinfo))

# Converting localized datetimes to different timezone
PARIS_TZ = pytz.timezone('Europe/Paris')
paris_now = PARIS_TZ.normalize(utc_now)
print('Paris: {}, tzinfo: {}'.format(paris_now, paris_now.tzinfo))

NEW_YORK_TZ = pytz.timezone('America/New_York')
ny_now = NEW_YORK_TZ.normalize(utc_now)
print('New York: {}, tzinfo: {}'.format(ny_now, ny_now.tzinfo))
```

**NOTE**: If your project uses datetimes heavily, you may want to take a look at external libraries, such as [Pendulum](https://pendulum.eustace.io/docs/) and [Maya](https://github.com/kennethreitz/maya), which make working with datetimes easier for certain use cases.

## Acknowledgement

This notebook is an adapted version of the notebook from this git repository:

- https://github.com/BIDS/2016-01-14-berkeley/tree/gh-pages/python
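One closing aside on the timezone section: for a *fixed* UTC offset, the standard library alone is sufficient; `pytz` (or, on Python 3.9+, `zoneinfo`) is only needed when you want real DST-aware zone rules. A minimal sketch:

```python
from datetime import datetime, timezone, timedelta

# A fixed +01:00 offset built purely from the stdlib. Note this is NOT
# DST-aware: Europe/Paris is +02:00 in summer, so a real timezone database
# (pytz or zoneinfo) is still needed for wall-clock correctness.
CET = timezone(timedelta(hours=1), name='CET')

utc_now = datetime.now(timezone.utc)
cet_now = utc_now.astimezone(CET)
print('CET: {}, tzinfo: {}'.format(cet_now, cet_now.tzinfo))
```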
# Run Generic Automated EAS tests

This is a starting-point notebook for running tests from the generic EAS suite in `tests/eas/generic.py`. The test classes that are imported here provide helper methods to aid analysis of the cause of failure. You can use Python's `help` built-in to find those methods (or you can just read the docstrings in the code).

These tests estimate the energy efficiency of task placements without directly examining the behaviour of cpufreq or cpuidle. Several test classes are provided, the only difference between them being the workload that is used.

### Setup

```
%load_ext autoreload
%autoreload 2
%matplotlib inline

import logging
from conf import LisaLogging
LisaLogging.setup()#level=logging.WARNING)

import pandas as pd

from perf_analysis import PerfAnalysis

import trappy
from trappy import ILinePlot
from trappy.stats.grammar import Parser
```

## Run test workload

If you simply want to run all the tests and get pass/fail results, use this command in the LISA shell: `lisa-test tests/eas/generic.py`. This notebook is intended as a starting point for analysing what scheduler behaviour was judged to be faulty.

Target configuration is taken from `$LISA_HOME/target.config` - you'll need to edit that file to provide connection details for the target you want to test.

```
from tests.eas.generic import TwoBigTasks, TwoBigThreeSmall, RampUp, RampDown, EnergyModelWakeMigration, OneSmallTask
```

By default we'll run the EnergyModelWakeMigration test, which runs a workload alternating between high and low-intensity. All the other test classes shown above have the same interface, but run different workloads.
To run the tests on different workloads, change this line below:

```
t = EnergyModelWakeMigration(methodName="test_task_placement")
print t.__doc__
t.setUpClass()
experiment = t.executor.experiments[0]
```

## Examine trace

`get_power_df` and `get_expected_power_df` look at the ftrace results from the workload execution and judge the energy efficiency of the system, considering *only task placement* (assuming perfect load-tracking/prediction, cpuidle, and cpufreq systems). The energy estimation doesn't take every single wakeup and idle period into account, but simply estimates an average power usage over the time that each task spent attached to each CPU during each phase of the rt-app workload.

These return DataFrames estimating the energy usage of the system under each task placement. `estimated_power` will represent this estimation for the scheduling pattern that we actually observed, while `expected_power` will represent our estimation of how much power an **optimal** scheduling pattern would use. Check the docstrings for these functions (and other functions in the test class) for more detail.

```
# print t.get_power_df.__doc__
estimated_power = t.get_power_df(experiment)

# print t.get_expected_power_df.__doc__
expected_power = t.get_expected_power_df(experiment)
```

## Plot Schedule

```
trace = t.get_trace(experiment)
trappy.plotter.plot_trace(trace.ftrace)
```

## Plot ideal and estimated power usage

This plot shows how the power estimation for the observed scheduling pattern varies from the estimated power for an ideal schedule. Where the plotted value for the observed power is higher than the plotted ideal power, the system was wasting power (e.g. a low-intensity task was unnecessarily placed on a high-power CPU). Where the observed value is *lower* than the ideal value, this means the system was *too* efficient (e.g. a high-intensity task was placed on a low-power CPU that could not accommodate its compute requirements).
```
df = pd.concat([
    expected_power.sum(axis=1), estimated_power.sum(axis=1)],
    axis=1, keys=['ideal_power', 'observed_power']).fillna(method='ffill')
ILinePlot(df, column=df.columns.tolist(), drawstyle='steps-post').view()
```

## Plot CPU frequency

```
trace.analysis.frequency.plotClusterFrequencies()
```

## Assertions

These are the assertions used to generate pass/fail results. They aren't very useful in this interactive context - it's much more interesting to examine plots like the one above and see whether the behaviour was desirable or not. These are intended for automated regression testing. Nonetheless, let's see what the results would be for this run.

`test_slack` checks the "slack" reported by the rt-app workload. If this slack was negative, this means the workload didn't receive enough CPU capacity. In a real system this would represent a lack of interactive performance.

```
try:
    t.test_slack()
except AssertionError as e:
    print "test_slack failed:"
    print e
else:
    print "test_slack passed"
```

`test_task_placement` checks that the task placement was energy efficient, taking advantage of lower-power CPUs whenever possible.

```
try:
    t.test_task_placement()
except AssertionError as e:
    print "test_task_placement failed:"
    print e
else:
    print "test_task_placement passed"
```
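The "negative slack" notion that `test_slack` relies on can be sketched without any LISA machinery (toy numbers; this illustrates the concept, not LISA's actual implementation):

```python
# Slack is how much time remained before an rt-app activation's deadline;
# a negative value means the activation finished late. The toy metric below
# reports what fraction of activations missed their deadline.
def negative_slack_pct(slack_samples_us):
    negative = sum(1 for s in slack_samples_us if s < 0)
    return 100.0 * negative / len(slack_samples_us)

samples = [120, 80, -5, 200, -30, 90]  # made-up slack values in microseconds
pct = negative_slack_pct(samples)
print('{:.1f}% of activations had negative slack'.format(pct))
```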
# Amazon Augmented AI (Amazon A2I) integration with Amazon Comprehend [Example]

Visit https://github.com/aws-samples/amazon-a2i-sample-jupyter-notebooks for all A2I Sample Notebooks

1. [Introduction](#Introduction)
2. [Prerequisites](#Prerequisites)
    1. [Workteam](#Workteam)
    2. [Permissions](#Notebook-Permission)
    3. [Client Setup](#Client-Setup)
3. [Create Control Plane Resources](#Create-Control-Plane-Resources)
    1. [Create Human Task UI](#Create-Human-Task-UI)
    2. [Create Flow Definition](#Create-Flow-Definition)
4. [Starting Human Loops](#Scenario-1-:-When-Activation-Conditions-are-met-,-and-HumanLoop-is-created)
    1. [Wait For Workers to Complete Task](#Wait-For-Workers-to-Complete-Task)
    2. [Check Status of Human Loop](#Check-Status-of-Human-Loop)
    3. [View Task Results](#View-Task-Results)

## Introduction

Amazon Augmented AI (Amazon A2I) makes it easy to build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers. You can create your own workflows for ML models built on Amazon SageMaker or any other tools. Using Amazon A2I, you can allow human reviewers to step in when a model is unable to make a high-confidence prediction or to audit its predictions on an ongoing basis. Learn more here: https://aws.amazon.com/augmented-ai/

In this tutorial, we will show how you can use **Amazon A2I with AWS Comprehend's Detect Sentiment API.** For more in-depth instructions, visit https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-getting-started.html

To incorporate Amazon A2I into your human review workflows, you need three resources:

* A **worker task template** to create a worker UI. The worker UI displays your input data, such as documents or images, and instructions to workers. It also provides interactive tools that the worker uses to complete your tasks.
For more information, see https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-instructions-overview.html * A **human review workflow**, also referred to as a flow definition. You use the flow definition to configure your human workforce and provide information about how to accomplish the human review task. You can create a flow definition in the Amazon Augmented AI console or with Amazon A2I APIs. To learn more about both of these options, see https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html * A **human loop** to start your human review workflow. When you use one of the built-in task types, the corresponding AWS service creates and starts a human loop on your behalf when the conditions specified in your flow definition are met or for each object if no conditions were specified. When a human loop is triggered, human review tasks are sent to the workers as specified in the flow definition. When using a custom task type, as this tutorial will show, you start a human loop using the Amazon Augmented AI Runtime API. When you call StartHumanLoop in your custom application, a task is sent to human reviewers. ### Install Latest SDK ``` # First, let's get the latest installations of our dependencies !pip install --upgrade pip !pip install boto3 --upgrade !pip install -U botocore ``` ## Setup We need to set up the following data: * `region` - Region to call A2I * `bucket` - A S3 bucket accessible by the given role * Used to store the sample images & output results * Must be within the same region A2I is called from * `role` - The IAM role used as part of StartHumanLoop. 
By default, this notebook will use the execution role * `workteam` - Group of people to send the work to ``` # Region REGION = '<REGION>' ``` #### Setup Bucket and Paths ``` import boto3 import botocore BUCKET = '<YOUR_BUCKET>' OUTPUT_PATH = f's3://{BUCKET}/a2i-results' ``` ### Role and Permissions The AWS IAM Role used to execute the notebook needs to have the following permissions: * ComprehendFullAccess * SagemakerFullAccess * S3 Read/Write Access to the BUCKET listed above * AmazonSageMakerMechanicalTurkAccess (if using MechanicalTurk as your Workforce) ``` from sagemaker import get_execution_role # Setting Role to the default SageMaker Execution Role ROLE = get_execution_role() display(ROLE) ``` ### Workteam or Workforce A workforce is the group of workers that you have selected to label your dataset. You can choose either the Amazon Mechanical Turk workforce, a vendor-managed workforce, or you can create your own private workforce for human reviews. Whichever workforce type you choose, Amazon Augmented AI takes care of sending tasks to workers. When you use a private workforce, you also create work teams, a group of workers from your workforce that are assigned to Amazon Augmented AI human review tasks. You can have multiple work teams and can assign one or more work teams to each job. To create your Workteam, visit the instructions here: https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management.html After you have created your workteam, replace YOUR_WORKTEAM_ARN below ``` WORKTEAM_ARN= "<YOUR_WORKTEAM>" ``` Visit: https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-permissions-security.html to add the necessary permissions to your role ## Client Setup Here we are going to setup the rest of our clients. 
```
import io
import json
import uuid
import time

import boto3
import botocore

# Amazon SageMaker client
sagemaker = boto3.client('sagemaker', REGION)

# Amazon Comprehend client
comprehend = boto3.client('comprehend', REGION)

# Amazon Augmented AI (A2I) client
a2i = boto3.client('sagemaker-a2i-runtime')

# Amazon S3 client
s3 = boto3.client('s3', REGION)
```
### Comprehend helper method
```
# Will help us parse Detect Sentiment API responses
def capsToCamel(all_caps_string):
    if all_caps_string == 'POSITIVE':
        return 'Positive'
    elif all_caps_string == 'NEGATIVE':
        return 'Negative'
    elif all_caps_string == 'NEUTRAL':
        return 'Neutral'
    elif all_caps_string == 'MIXED':
        return 'Mixed'
```
## Create Control Plane Resources

### Create Human Task UI

Create a human task UI resource, giving a UI template in Liquid HTML. This template will be rendered to the human workers whenever a human loop is required.

Below we've provided a simple demo template that is compatible with AWS Comprehend's Detect Sentiment API input and response.

For over 70 pre-built UIs, check: https://github.com/aws-samples/amazon-a2i-sample-task-uis
```
template = r"""
<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>

<crowd-form>
    <crowd-classifier
      name="sentiment"
      categories="['Positive', 'Negative', 'Neutral', 'Mixed']"
      initial-value="{{ task.input.initialValue }}"
      header="What sentiment does this text convey?"
    >
      <classification-target>
        {{ task.input.taskObject }}
      </classification-target>

      <full-instructions header="Sentiment Analysis Instructions">
        <p><strong>Positive</strong> sentiments include: joy, excitement, delight</p>
        <p><strong>Negative</strong> sentiments include: anger, sarcasm, anxiety</p>
        <p><strong>Neutral</strong>: neither positive nor negative, such as stating a fact</p>
        <p><strong>Mixed</strong>: when the sentiment is mixed</p>
      </full-instructions>

      <short-instructions>
        Choose the primary sentiment that is expressed by the text.
      </short-instructions>
    </crowd-classifier>
</crowd-form>
"""

def create_task_ui():
    '''
    Creates a Human Task UI resource.

    Returns:
    struct: HumanTaskUiArn
    '''
    response = sagemaker.create_human_task_ui(
        HumanTaskUiName=taskUIName,
        UiTemplate={'Content': template})
    return response

# Task UI name - this value is unique per account and region. You can also provide your own value here.
taskUIName = 'ui-comprehend-' + str(uuid.uuid4())

# Create task UI
humanTaskUiResponse = create_task_ui()
humanTaskUiArn = humanTaskUiResponse['HumanTaskUiArn']
print(humanTaskUiArn)
```
### Creating the Flow Definition

In this section, we're going to create a flow definition. Flow Definitions allow us to specify:

* The workforce that your tasks will be sent to.
* The instructions that your workforce will receive. This is called a worker task template.
* The configuration of your worker tasks, including the number of workers that receive a task and time limits to complete tasks.
* Where your output data will be stored.

This demo is going to use the API, but you can optionally create this workflow definition in the console as well.

For more details and instructions, see: https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html.
```
# Flow definition name - this value is unique per account and region. You can also provide your own value here.
flowDefinitionName = 'fd-comprehend-demo-' + str(uuid.uuid4()) create_workflow_definition_response = sagemaker.create_flow_definition( FlowDefinitionName= flowDefinitionName, RoleArn= ROLE, HumanLoopConfig= { "WorkteamArn": WORKTEAM_ARN, "HumanTaskUiArn": humanTaskUiArn, "TaskCount": 1, "TaskDescription": "Identify the sentiment of the provided text", "TaskTitle": "Detect Sentiment of Text" }, OutputConfig={ "S3OutputPath" : OUTPUT_PATH } ) flowDefinitionArn = create_workflow_definition_response['FlowDefinitionArn'] # let's save this ARN for future use # Describe flow definition - status should be active for x in range(60): describeFlowDefinitionResponse = sagemaker.describe_flow_definition(FlowDefinitionName=flowDefinitionName) print(describeFlowDefinitionResponse['FlowDefinitionStatus']) if (describeFlowDefinitionResponse['FlowDefinitionStatus'] == 'Active'): print("Flow Definition is active") break time.sleep(2) ``` ## Human Loops ### Detect Sentiment with AWS Comprehend Now that we have setup our Flow Definition, we are ready to call AWS Comprehend and start our human loops. In this tutorial, we are interested in starting a HumanLoop only if the SentimentScore returned by AWS Comprehend is less than 99%. So, with a bit of logic, we can check the response for each call to Detect Sentiment, and if the SentimentScore is less than 99%, we will kick off a HumanLoop to engage our workforce for a human review. 
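The decision logic described above can be sketched in isolation before running it against the live API. The dictionary below mirrors the shape of a Comprehend `detect_sentiment` response (a `Sentiment` label plus a `SentimentScore` dict keyed by `Positive`/`Negative`/`Neutral`/`Mixed`); `needs_human_review` is a hypothetical helper written for this illustration, not part of the notebook's code.

```python
# Minimal sketch of the review-threshold check, using mocked
# Comprehend DetectSentiment responses instead of live API calls.
SENTIMENT_SCORE_THRESHOLD = 0.99

def needs_human_review(response, threshold=SENTIMENT_SCORE_THRESHOLD):
    """Return True when the top sentiment's confidence is below the threshold."""
    sentiment = response['Sentiment']  # e.g. 'POSITIVE'
    score = response['SentimentScore'][sentiment.title()]
    return score < threshold

# A confident prediction: no human loop needed
confident = {'Sentiment': 'POSITIVE',
             'SentimentScore': {'Positive': 0.998, 'Negative': 0.001,
                                'Neutral': 0.001, 'Mixed': 0.0}}
# An uncertain prediction: route it to human reviewers
uncertain = {'Sentiment': 'NEUTRAL',
             'SentimentScore': {'Positive': 0.30, 'Negative': 0.10,
                                'Neutral': 0.55, 'Mixed': 0.05}}

print(needs_human_review(confident))  # False
print(needs_human_review(uncertain))  # True
```

The cells that follow apply exactly this comparison, using `capsToCamel` to map Comprehend's upper-case label onto the `SentimentScore` keys.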
#### Sample Data
```
sample_detect_sentiment_blurbs = ['I enjoy this product', 'I am unhappy with this product', 'It is okay', 'sometimes it works']

human_loops_started = []

SENTIMENT_SCORE_THRESHOLD = .99

for blurb in sample_detect_sentiment_blurbs:
    # Call AWS Comprehend's Detect Sentiment API
    response = comprehend.detect_sentiment(Text=blurb, LanguageCode='en')
    sentiment = response['Sentiment']
    print(f'Processing blurb: \"{blurb}\"')

    # Our condition for when we want to engage a human for review
    if (response['SentimentScore'][capsToCamel(sentiment)] < SENTIMENT_SCORE_THRESHOLD):
        humanLoopName = str(uuid.uuid4())
        inputContent = {
            "initialValue": sentiment.title(),
            "taskObject": blurb
        }
        start_loop_response = a2i.start_human_loop(
            HumanLoopName=humanLoopName,
            FlowDefinitionArn=flowDefinitionArn,
            HumanLoopInput={
                "InputContent": json.dumps(inputContent)
            }
        )
        human_loops_started.append(humanLoopName)
        print(f'SentimentScore of {response["SentimentScore"][capsToCamel(sentiment)]} is less than the threshold of {SENTIMENT_SCORE_THRESHOLD}')
        print(f'Starting human loop with name: {humanLoopName} \n')
    else:
        print(f'SentimentScore of {response["SentimentScore"][capsToCamel(sentiment)]} is above the threshold of {SENTIMENT_SCORE_THRESHOLD}')
        print('No human loop created. \n')
```
### Check Status of Human Loop
```
completed_human_loops = []
for human_loop_name in human_loops_started:
    resp = a2i.describe_human_loop(HumanLoopName=human_loop_name)
    print(f'HumanLoop Name: {human_loop_name}')
    print(f'HumanLoop Status: {resp["HumanLoopStatus"]}')
    print(f'HumanLoop Output Destination: {resp["HumanLoopOutput"]}')
    print('\n')
    if resp["HumanLoopStatus"] == "Completed":
        completed_human_loops.append(resp)
```
### Wait For Workers to Complete Task
```
workteamName = WORKTEAM_ARN[WORKTEAM_ARN.rfind('/') + 1:]
print("Navigate to the private worker portal and do the tasks. Make sure you've invited yourself to your workteam!")
print('https://' + sagemaker.describe_workteam(WorkteamName=workteamName)['Workteam']['SubDomain'])
```
### Check Status of Human Loop Again
```
completed_human_loops = []
for human_loop_name in human_loops_started:
    resp = a2i.describe_human_loop(HumanLoopName=human_loop_name)
    print(f'HumanLoop Name: {human_loop_name}')
    print(f'HumanLoop Status: {resp["HumanLoopStatus"]}')
    print(f'HumanLoop Output Destination: {resp["HumanLoopOutput"]}')
    print('\n')
    if resp["HumanLoopStatus"] == "Completed":
        completed_human_loops.append(resp)
```
### View Task Results

Once work is completed, Amazon A2I stores results in your S3 bucket and sends an Amazon CloudWatch event. Your results should be available in the S3 OUTPUT_PATH when all work is completed.
```
import re
import pprint

pp = pprint.PrettyPrinter(indent=4)

for resp in completed_human_loops:
    splitted_string = re.split('s3://' + BUCKET + '/', resp['HumanLoopOutput']['OutputS3Uri'])
    output_bucket_key = splitted_string[1]

    response = s3.get_object(Bucket=BUCKET, Key=output_bucket_key)
    content = response["Body"].read()
    json_output = json.loads(content)
    pp.pprint(json_output)
    print('\n')
```
### The End!
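As a closing pointer, extracting the reviewers' labels from one of those output records might look like the sketch below. The nested `humanAnswers`/`answerContent` layout follows A2I's documented output format for a `crowd-classifier` named `sentiment`, but treat the exact structure as an assumption and inspect your own `json_output` first; `human_labels` and the sample `record` are hypothetical.

```python
# Hypothetical A2I output record (shape assumed from the documented
# humanAnswers/answerContent layout -- verify against your own results)
record = {
    "inputContent": {"initialValue": "Neutral", "taskObject": "sometimes it works"},
    "humanAnswers": [
        {"answerContent": {"sentiment": {"label": "Positive"}}}
    ],
}

def human_labels(record):
    """Collect every reviewer's chosen sentiment label from one output record."""
    return [ans["answerContent"]["sentiment"]["label"]
            for ans in record["humanAnswers"]]

print(human_labels(record))  # ['Positive']
```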
## 1) Raw Data Filtering and Tokenization

Given raw social media data, tokenize and filter the data to prepare it for annotation by Amazon Mechanical Turk workers, or direct prediction from the model. For processing data that has been annotated, see "Annotated Data Filtering and Tokenization."
```
import pandas as pd
import numpy as np
import os
import requests
import json

from custom_tokenizer import *

%load_ext autoreload
%autoreload 2
```
### Load data
```
# create dataframe of all possible data
filepath = '../data/third_data'
df = pd.read_json(filepath+'.json')

# special-case for third_data.json
df = df.rename(columns={"text": "question", "pid": "post_id"})

# ensure no index overlap
index_offset = len(pd.read_json('../data/original_data.json'))
index_offset += len(pd.read_json('../data/second_data.json'))
df.index += index_offset
df['source_file'] = filepath
df['index'] = df.index
df

with open("../data/answers_vqa.txt") as f:
    valid_ans = set()
    for row in f:
        valid_ans.add(str.strip(row))
```
### Tokenize
```
df['r_tokenization'] = df.response.apply(lambda x: response_tokenize(x))
df['q_tokenization'] = df.question.apply(lambda x: question_tokenize(x))
```
### Remove emojis and non-ASCII characters
```
import unicodedata
import emoji

emoji_regex = emoji.get_emoji_regexp()

def filter_unicode(x):
    filtered_tokens = []
    for token in x:
        if token == '':
            continue
        # skip anything that isn't a letter
        if len(token) == 1 and unicodedata.category(token)[0] != 'L':
            continue
        else:
            filtered_tokens.append(token)
    return filtered_tokens

df['r_tokenization'] = df.r_tokenization.apply(lambda x: [emoji_regex.sub(r'', token) for token in x])
df['r_tokenization'] = df.r_tokenization.apply(lambda x: filter_unicode(x))
df['response_filtered'] = df.r_tokenization.apply(lambda x: " ".join(x))
df['response_invalid'] = df.response_filtered.apply(lambda x: not x.isascii())

response_invalid = df[df.response_invalid == True]
print("Now dropping {} rows where unicode characters were still present...".format(len(response_invalid)))
print("Examples: ", "; ".join(response_invalid.head(5).response_filtered.values))
df = df.drop(response_invalid.index)
```
### Remove certain questions known to cause confusion
```
bad_questions = df[df.q_tokenization.str[0] == "where"]
print("Now dropping {} rows of bad questions...".format(len(bad_questions)))
print("Examples: ", " ".join(bad_questions.head(5).question.values))
df = df.drop(bad_questions.index)
```
### Restrict to responses that could contain VQA 2.0 vocab only
```
def convert_yes_no(response):
    if response is None:
        return
    for idx, token in enumerate(response):
        if token in ['yep', 'yup', 'yeah', 'yess', 'yesss']:
            response[idx] = 'yes'
        elif token in ['nope']:
            response[idx] = 'no'
    return response

df.r_tokenization = df.r_tokenization.apply(lambda x: convert_yes_no(x))

def vocab_in_response(response):
    for token in response:
        if token in valid_ans:
            return True
    return False

df['in_vocab'] = df.r_tokenization.apply(lambda x: vocab_in_response(x))
out_of_vocab = df[df.in_vocab == False]
print("Now dropping {} rows of responses that don't have any in-vocab tokens...".format(len(out_of_vocab)))
print("Examples: ", "; ".join(out_of_vocab.head(25).response_filtered.values))
df = df.drop(out_of_vocab.index)
```
### Preview and Save Dataframe
```
df
df.to_csv(filepath+'_filtered.csv', index_label='index')
```
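The yes/no normalization and vocabulary filter used above can be exercised standalone on a toy example. The helpers below are re-implementations written for this illustration, and `valid_ans` here is a tiny hypothetical stand-in for the real `answers_vqa.txt` vocabulary.

```python
# Standalone sketch of the normalization + vocabulary filter
valid_ans = {'yes', 'no', 'red', '2'}  # toy stand-in for answers_vqa.txt

def convert_yes_no(response):
    """Map informal yes/no variants onto their canonical VQA tokens."""
    yes_forms = {'yep', 'yup', 'yeah', 'yess', 'yesss'}
    return ['yes' if t in yes_forms else 'no' if t == 'nope' else t
            for t in response]

def vocab_in_response(response):
    """Keep a row only if at least one token is in the answer vocabulary."""
    return any(t in valid_ans for t in response)

tokens = ['yup', 'it', 'works']
normalized = convert_yes_no(tokens)
print(normalized)                               # ['yes', 'it', 'works']
print(vocab_in_response(normalized))            # True  -> row is kept
print(vocab_in_response(['amazing', 'photo']))  # False -> row is dropped
```

Note that without the normalization step, `['yup', 'it', 'works']` would contain no in-vocabulary token and the row would be dropped.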
```
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)

# Toggle cell visibility

from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
    if (code_show){
        $('div.input').hide()
    } else {
        $('div.input').show()
    }
    code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)

# Hide the code completely

# from IPython.display import HTML
# tag = HTML('''<style>
# div.input {
#     display:none;
# }
# </style>''')
# display(tag)
```
## Internal stability - example 1

### How to use this interactive example?

For the given <u>stable</u> system, try to obtain a divergent response by changing only the initial conditions.

$$
\dot{x} = \underbrace{\begin{bmatrix}0&1\\-0.8&-0.5\end{bmatrix}}_{A}x
$$

Answer the following two questions:
- Is it possible to obtain a divergent response for the given system?
- Is it possible to obtain a divergent response for any stable system?
```
%matplotlib inline
import control as control
import numpy
import sympy as sym
from IPython.display import display, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt

#matrixWidget is a matrix-looking widget built with a VBox of HBox(es) that returns a numPy array as its value
class matrixWidget(widgets.VBox): def updateM(self,change): for irow in range(0,self.n): for icol in range(0,self.m): self.M_[irow,icol] = self.children[irow].children[icol].value #print(self.M_[irow,icol]) self.value = self.M_ def dummychangecallback(self,change): pass def __init__(self,n,m): self.n = n self.m = m self.M_ = numpy.matrix(numpy.zeros((self.n,self.m))) self.value = self.M_ widgets.VBox.__init__(self, children = [ widgets.HBox(children = [widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)] ) for j in range(n) ]) #fill in widgets and tell interact to call updateM each time a children changes value for irow in range(0,self.n): for icol in range(0,self.m): self.children[irow].children[icol].value = self.M_[irow,icol] self.children[irow].children[icol].observe(self.updateM, names='value') #value = Unicode('example@example.com', help="The email value.").tag(sync=True) self.observe(self.updateM, names='value', type= 'All') def setM(self, newM): #disable callbacks, change values, and reenable self.unobserve(self.updateM, names='value', type= 'All') for irow in range(0,self.n): for icol in range(0,self.m): self.children[irow].children[icol].unobserve(self.updateM, names='value') self.M_ = newM self.value = self.M_ for irow in range(0,self.n): for icol in range(0,self.m): self.children[irow].children[icol].value = self.M_[irow,icol] for irow in range(0,self.n): for icol in range(0,self.m): self.children[irow].children[icol].observe(self.updateM, names='value') self.observe(self.updateM, names='value', type= 'All') #self.children[irow].children[icol].observe(self.updateM, names='value') #overlaod class for state space systems that DO NOT remove "useless" states (what "professor" of automatic control would do this?) 
class sss(control.StateSpace):
    def __init__(self,*args):
        #call base class init constructor
        control.StateSpace.__init__(self,*args)
    #disable function below in base class
    def _remove_useless_states(self):
        pass


# Preparatory cell

A = numpy.matrix([[0.,1.],[-4.0/5.0,-5.0/10.0]])
X0 = numpy.matrix([[0.0],[0.0]])
Aw = matrixWidget(2,2)
Aw.setM(A)
X0w = matrixWidget(2,1)
X0w.setM(X0)


# Misc

#create dummy widget
DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))

#create button widget
START = widgets.Button(
    description='Test',
    disabled=False,
    button_style='', # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Test',
    icon='check'
)

def on_start_button_clicked(b):
    #This is a workaround to have interactive_output call the callback:
    #    force the value of the dummy widget to change
    if DW.value > 0:
        DW.value = -1
    else:
        DW.value = 1
    pass
START.on_click(on_start_button_clicked)


# Main cell

def main_callback(A, X0, DW):
    sols = numpy.linalg.eig(A)
    sys = sss(A,[[1],[0]],[0,1],0)
    pole = control.pole(sys)
    if numpy.real(pole[0]) != 0:
        p1r = abs(numpy.real(pole[0]))
    else:
        p1r = 1
    if numpy.real(pole[1]) != 0:
        p2r = abs(numpy.real(pole[1]))
    else:
        p2r = 1
    if numpy.imag(pole[0]) != 0:
        p1i = abs(numpy.imag(pole[0]))
    else:
        p1i = 1
    if numpy.imag(pole[1]) != 0:
        p2i = abs(numpy.imag(pole[1]))
    else:
        p2i = 1
    print('The eigenvalues of the matrix A are',round(sols[0][0],4),'and',round(sols[0][1],4))
    #T = numpy.linspace(0, 60, 1000)
    T, yout, xout = control.initial_response(sys,X0=X0,return_x=True)

    fig = plt.figure("Free response", figsize=(16,5))
    ax = fig.add_subplot(121)
    plt.plot(T,xout[0])
    plt.grid()
    ax.set_xlabel('time [s]')
    ax.set_ylabel(r'$x_1$')

    ax1 = fig.add_subplot(122)
    plt.plot(T,xout[1])
    plt.grid()
    ax1.set_xlabel('time [s]')
    ax1.set_ylabel(r'$x_2$')

alltogether = widgets.HBox([widgets.VBox([widgets.Label('$A$:',border=3), Aw]),
                            widgets.Label(' ',border=3),
                            widgets.VBox([widgets.Label('$X_0$:',border=3), X0w]),
                            START])
out = widgets.interactive_output(main_callback, {'A':Aw,
                                                 'X0':X0w, 'DW':DW})
out.layout.height = '350px'
display(out, alltogether)


#create dummy widget 2
DW2 = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))
DW2.value = -1

#create button widget
START2 = widgets.Button(
    description='Show the correct answers',
    disabled=False,
    button_style='', # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Press to show the correct answers',
    icon='check',
    layout=widgets.Layout(width='200px', height='auto')
)

def on_start_button_clicked2(b):
    #This is a workaround to have interactive_output call the callback:
    #    force the value of the dummy widget to change
    if DW2.value > 0:
        DW2.value = -1
    else:
        DW2.value = 1
    pass
START2.on_click(on_start_button_clicked2)

def main_callback2(DW2):
    if DW2 > 0:
        display(Markdown(r'''>Answer: The free response of the system depends only on the eigenvalues of the matrix $A$ and is a linear combination of their modes. Since the system is stable, it has only convergent modes - the response of the system therefore cannot be divergent, regardless of the chosen initial conditions.'''))
    else:
        display(Markdown(''))

#create a graphic structure to hold all widgets
alltogether2 = widgets.VBox([START2])
out2 = widgets.interactive_output(main_callback2,{'DW2':DW2})
#out.layout.height = '300px'
display(out2,alltogether2)
```
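The answer to the questions above can also be checked numerically: a linear system $\dot{x}=Ax$ is asymptotically stable exactly when every eigenvalue of $A$ has a negative real part, so no choice of initial conditions can make the free response diverge. A quick check for the matrix used in this example:

```python
import numpy as np

# System matrix from the example above
A = np.array([[0.0, 1.0],
              [-0.8, -0.5]])

eigvals = np.linalg.eigvals(A)
print(eigvals)  # a complex-conjugate pair with real part -0.25

# Stability <=> all eigenvalues lie strictly in the left half-plane
is_stable = np.all(eigvals.real < 0)
print(is_stable)  # True
```

Since both eigenvalues sit strictly in the left half-plane, every mode decays and the free response converges for all initial conditions.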
``` import numpy as np import matplotlib.pyplot as plt import scipy as sp from scipy.integrate import odeint import networkx as nx import scipy.stats as sp_s import pybel as pb import time import csv import json import torch import pyro pyro.set_rng_seed(101) ``` # Create Causal Graph node ``` # create generic discrete probability function class cg_node(): def __init__(self,n_inputs,name): self.n_inputs = n_inputs self.name = name if n_inputs == 0: self.label = 'exogenous' else: self.label = 'endogenous' return def p_init(self,input_data,var_data): self.n_data = len(input_data) self.input_data = input_data self.var_data = var_data if self.n_inputs == 0: p_ave = np.zeros(3) n_count = self.n_data for i in range(0,3): p_ave[i] = np.sum(var_data == i-1)/n_count elif self.n_inputs == 1: n_count = np.zeros(3) p_ave = np.zeros((3,3)) for i in range(0,3): n_count[i] = np.sum(input_data == i-1) for j in range(0,3): p_ave[j,i] = np.sum((input_data[:,0] == i-1)*(var_data == j-1))/n_count[i] elif self.n_inputs == 2: n_count = np.zeros((3,3)) p_ave = np.zeros((3,3,3)) for i in range(0,3): for j in range(0,3): n_count[i,j] = np.sum((input_data[:,0] == i-1)*(input_data[:,1] == j-1)) for k in range(0,3): p_ave[k,i,j] = np.sum( (input_data[:,0] == i-1)*(input_data[:,1] == j-1)*(var_data == k-1))/n_count[i,j] elif self.n_inputs == 3: n_count = np.zeros((3,3,3)) p_ave = np.zeros((3,3,3,3)) for i in range(0,3): for j in range(0,3): for k in range(0,3): n_count[i,j,k] = np.sum( (input_data[:,0] == i-1)*(input_data[:,1] == j-1)*(input_data[:,2] == k-1)) for m in range(0,3): p_ave[m,i,j,k] = np.sum((input_data[:,0] == i-1)*(input_data[:,1] == j-1) *(input_data[:,2] == k-1)*(var_data == m-1))/n_count[i,j,k] elif self.n_inputs == 4: n_count = np.zeros((3,3,3,3)) p_ave = np.zeros((3,3,3,3,3)) for i in range(0,3): for j in range(0,3): for k in range(0,3): for m in range(0,3): n_count[i,j,k,m] = np.sum((input_data[:,0] == i-1)*(input_data[:,1] == j-1) *(input_data[:,2] == 
k-1)*(input_data[:,3] == m-1)) for q in range(0,3): p_ave[q,i,j,k,m] = np.sum((input_data[:,0] == i-1)*(input_data[:,1] == j-1) *(input_data[:,2] == k-1)*(input_data[:,3] == m-1) *(var_data == q-1))/n_count[i,j,k,m] else: print('error -- too many inputs') return self.n_count = torch.tensor(n_count/self.n_data) self.prob_dist = torch.tensor(p_ave) return def sample(self,data_in=[]): if self.n_inputs == 0: samp_out = pyro.sample('',pyro.distributions.Multinomial(probs = self.prob_dist)).bool() elif self.n_inputs == 1: p_temp = torch.squeeze(self.prob_dist[:,data_in[0]]) samp_out = pyro.sample('',pyro.distributions.Multinomial(probs = p_temp)).bool() elif self.n_inputs == 2: p_temp = torch.squeeze(self.prob_dist[:,data_in[0],data_in[1]]) samp_out = pyro.sample('',pyro.distributions.Multinomial(probs = p_temp)).bool() elif self.n_inputs == 3: p_temp = torch.squeeze(self.prob_dist[:,data_in[0],data_in[1],data_in[2]]) samp_out = pyro.sample('',pyro.distributions.Multinomial(probs = p_temp)).bool() else: print('error -- too many inputs') samp_out = [] return samp_out ``` ## Causal graph class ``` class cg_graph(): def __init__(self,str_list=[],bel_graph=[]): edge_list = [] entity_list = [] if str_list: for item in str_list: sub_ind = item.find('=') sub_temp = item[:sub_ind-1] obj_temp = item[sub_ind+3:] rel_temp = item[sub_ind:sub_ind+2] if sub_temp not in entity_list: entity_list.append(sub_temp) if obj_temp not in entity_list: entity_list.append(obj_temp) # ignore hasVariant, partOf relations if rel_temp.find('crease') > 0: edge_list.append([sub_temp,obj_temp,rel_temp]) # check for duplicate edges #nodes_temp = [sub_temp,obj_temp] #list_temp = [[item[0],item[1]] for item in edge_list] #if nodes_temp in list_temp: #ind_temp = list_temp.index(nodes_temp) #edge_list[ind_temp][2] += ',' + rel_temp #else: #edge_list.append([sub_temp,obj_temp,rel_temp]) elif bel_graph: for item in bel_graph.edges: edge_temp = bel_graph.get_edge_data(item[0],item[1],item[2]) sub_temp = 
str(item[0]).replace('"','') obj_temp = str(item[1]).replace('"','') rel_temp = edge_temp['relation'] if sub_temp not in entity_list: entity_list.append(sub_temp) if obj_temp not in entity_list: entity_list.append(obj_temp) # ignore hasVariant, partOf relations if rel_temp.find('crease') > 0: edge_list.append([sub_temp,obj_temp,rel_temp]) # check for duplicate edges #nodes_temp = [sub_temp,obj_temp] #list_temp = [[item[0],item[1]] for item in edge_list] #if nodes_temp in list_temp: #ind_temp = list_temp.index(nodes_temp) #edge_list[ind_temp][2] += ',' + rel_temp #else: #edge_list.append([sub_temp,obj_temp,rel_temp]) n_nodes = len(entity_list) self.n_nodes = n_nodes adj_mat = np.zeros((n_nodes,n_nodes),dtype=int) for item in edge_list: out_ind = entity_list.index(item[0]) in_ind = entity_list.index(item[1]) adj_mat[out_ind,in_ind] = 1 self.edge_list = edge_list self.entity_list = entity_list self.adj_mat = adj_mat self.graph = nx.DiGraph(adj_mat) # check to make sure that it's a DAG if nx.algorithms.dag.is_directed_acyclic_graph(self.graph): print('The causal graph is a acyclic') else: print('The causal graph has cycles -- this is a problem') node_dict = {} for i in range(0,n_nodes): node_dict[entity_list[i]] = cg_node(np.sum(adj_mat[:,i]),entity_list[i]) self.node_dict = node_dict self.cond_list = [] self.sample_dict = {} self.parent_ind_list = [] self.child_ind_list = [] self.parent_name_dict = {} self.child_name_dict = {} self.parent_ind_list = [np.where(self.adj_mat[:,i] > 0)[0] for i in range(0,n_nodes)] self.child_ind_list = [np.where(self.adj_mat[i,:] > 0)[0] for i in range(0,n_nodes)] for i in range(0,n_nodes): self.parent_name_dict[entity_list[i]] = [entity_list[item] for item in self.parent_ind_list[i]] self.child_name_dict[entity_list[i]] = [entity_list[item] for item in self.child_ind_list[i]] # create rank-3 delta tensor tensor_temp = torch.zeros((3,3,3)).double() for i in range(0,3): tensor_temp[i,i,i] = 1 self.tensor_temp = tensor_temp return def 
prob_init(self,data_in): # initialize all of the nodes exog_list = [] prob_dict = {} for name in self.node_dict: i = self.entity_list.index(name) data_in_temp = data_in[:,self.parent_ind_list[i]] data_out_temp = data_in[:,i] self.node_dict[name].p_init(data_in_temp,data_out_temp) if self.node_dict[name].n_inputs == 0: exog_list.append(name) prob_dict[name] = self.node_dict[name].prob_dist self.exog_list = exog_list self.prob_dict = prob_dict return def sample_vars(self,names,flag=0): # do a multi-variable sample # sample only those variables w/o sample data if np.any([item in self.sample_dict for item in names]): sample_list = [item for item in names if item not in self.sample_dict] if sample_list: self.sample_vars(sample_list,flag+1) # sample exogenous variables elif np.any([item in self.exog_list for item in names]): in_exog = [item for item in names if item in self.exog_list] not_in_exog = [item for item in names if item not in self.exog_list] for item in in_exog: self.sample_dict[item] = self.node_dict[item].sample() if not_in_exog: self.sample_vars(not_in_exog,flag+1) # if you have samples from all of the parents, sample names # otherwise, sample the parents elif names: sample_list = [] sample_list2 = [] for item in names: parent_list = self.parent_name_dict[item] if np.all([item2 in self.sample_dict for item2 in parent_list]): self.sample_dict[item] = self.node_dict[item].sample( [self.sample_dict[item2] for item2 in parent_list]) else: sample_list = sample_list + [item2 for item2 in parent_list if item2 not in sample_list] sample_list2 = sample_list2 + [item] if sample_list: self.sample_vars(sample_list,flag+1) if sample_list2: self.sample_vars(sample_list2,flag+1) # if you're back at the root node, return the samples # otherwise, don't return anything - the values are stored in self.sample_dict if flag == 0: tensor_sample = torch.Tensor([-1,0,1]).int() output =[tensor_sample[self.sample_dict[item]] for item in names] self.sample_dict = {} return output 
else: return def gen_path_nodes(self,sources,destinations): source_inds = [self.entity_list.index(item) for item in sources] dest_inds = [self.entity_list.index(item) for item in destinations] nodes = [] for i in source_inds: for j in dest_inds: for path in nx.all_simple_paths(self.graph, source=i, target=j): for ind in path: if self.entity_list[ind] not in nodes: nodes.append(self.entity_list[ind]) return nodes def joint_dist_add(self,add_node,nodes_temp,prob_temp): # create a new joint distribution with add_node now included # deliberately let out 'a' - for the variable being added str_temp = 'bcdefghijklmnopqrstuvwxyz' # find parent indices par_inds = [nodes_temp.index(item2) for item2 in self.parent_name_dict[add_node]] n_inds = len(par_inds) if n_inds == 1: str1 = 'ay,' + str_temp[par_inds[0]] + 'yz,' str2 = str_temp[:len(nodes_temp)].replace(str_temp[par_inds[0]],'z') str_sum = str1 + str2 prob_out = torch.einsum(str_sum,self.prob_dict[add_node],self.tensor_temp,prob_temp) elif n_inds == 2: str1 = 'awy,' + str_temp[par_inds[0]] + 'wx,' + str_temp[par_inds[1]] + 'yz,' str2 = str_temp[:len(nodes_temp)].replace( str_temp[par_inds[0]],'x').replace(str_temp[par_inds[1]],'z') str_sum = str1 + str2 prob_out = torch.einsum(str_sum,self.prob_dict[add_node],self.tensor_temp,self.tensor_temp,prob_temp) elif n_inds == 3: str1 = ('auwy,' + str_temp[par_inds[0]] + 'uv,' + str_temp[par_inds[1]] + 'wx,' + str_temp[par_inds[2]] + 'yz,') str2 = str_temp[:len(nodes_temp)].replace( str_temp[par_inds[0]],'u').replace(str_temp[par_inds[1]],'x').replace( str_temp[par_inds[2]],'z') str_sum = str1 + str2 prob_out = torch.einsum(str_sum, self.prob_dict[add_node],self.tensor_temp,self.tensor_temp,self.tensor_temp,prob_temp) else: print('too many parents') prob_out = prob_temp return prob_out def calc_prob(self,names): # calculate the joint probability over a list of named nodes # find all paths from exogenous nodes path_nodes = self.gen_path_nodes(self.exog_list,names) for item in 
names: if item in self.exog_list and item not in path_nodes: path_nodes.append(item) print(path_nodes) # get joint exogenous probability distribution nodes_temp = [] for item in self.exog_list: if item in path_nodes: nodes_temp.append(item) #print(nodes_temp) #print(self.exog_list) prob_temp = self.prob_dict[nodes_temp[0]] for item in nodes_temp[1:]: prob_temp = torch.einsum('...i,j',prob_temp,self.prob_dict[item]) # identify all of the children of nodes_temp in path_nodes child_nodes = [] for item in nodes_temp: for item2 in self.child_name_dict[item]: if item2 in path_nodes and item2 not in child_nodes: child_nodes.append(item2) flag = 0 # iterate through node children until target nodes are reached while flag == 0: #print(nodes_temp) # determine which nodes to add # all children of nodes_temp in path_nodes, not in nodes_temp, and that have all their parents in # nodes_temp add_nodes = [] for item in nodes_temp: for item2 in self.child_name_dict[item]: if (item2 in path_nodes and item2 not in nodes_temp and np.all([item3 in nodes_temp for item3 in self.parent_name_dict[item2]]) and item2 not in add_nodes): add_nodes.append(item2) #print(add_nodes) # add nodes to the joint distribution for item in add_nodes: prob_temp = self.joint_dist_add(item,nodes_temp,prob_temp) # add the new node to nodes_temp nodes_temp = [item] + nodes_temp # determine which nodes to subtract # all nodes in nodes_temp not in names and that have all their children in nodes_temp sub_nodes = [] for item in nodes_temp: child_path_list = [] for item2 in self.child_name_dict[item]: if item2 in path_nodes: child_path_list.append(item2) if item not in names and np.all([item2 in nodes_temp for item2 in child_path_list]): sub_nodes.append(item) #print(sub_nodes) # sum over the sub_nodes probabilities if sub_nodes: remove_indices = [nodes_temp.index(item) for item in sub_nodes] prob_temp = torch.sum(prob_temp,dim=remove_indices) # remove summed nodes from nodes_temp for item in sub_nodes: 
                    nodes_temp.remove(item)

            if sorted(nodes_temp) == sorted(names):
                flag = 1

        permute_inds = [nodes_temp.index(item) for item in names]
        prob_temp = prob_temp.permute(permute_inds)
        #print()
        return prob_temp

    def calc_cond_prob(self,cond_prob,uncond_prob):
        # check to make sure the lists don't overlap
        if np.any([item in uncond_prob for item in cond_prob]):
            print('error -- overlapping lists')
            return
        n_cond = len(cond_prob)
        n_uncond = len(uncond_prob)
        p_joint = self.calc_prob(cond_prob + uncond_prob)
        p_uncond = self.calc_prob(uncond_prob)
        if n_uncond == 1:
            p_cond = torch.einsum('...i,ijk,j',p_joint,self.tensor_temp,1/p_uncond)
        elif n_uncond == 2:
            p_cond = torch.einsum('...il,ijk,lmn,jm',p_joint,self.tensor_temp,self.tensor_temp,1/p_uncond)
        elif n_uncond == 3:
            p_cond = torch.einsum('...ilp,ijk,lmn,pqr,jmq',
                p_joint,self.tensor_temp,self.tensor_temp,self.tensor_temp,1/p_uncond)
        else:
            print('too many conditioned variables')
            p_cond = p_joint
        return p_cond

    def calc_do(self,names,do_vars,do_vals):
        # calculate the final probability distribution of the variable in question given do variables
        # add do_vars to list of exogenous variables
        self.exog_list += do_vars
        # sever links from do_vars to parents
        child_temp = self.child_name_dict
        for item in self.child_name_dict:
            for item2 in do_vars:
                if item2 in self.child_name_dict[item]:
                    self.child_name_dict[item].remove(item2)
        names_prob_dict = {}
        for item in do_vars:
            names_prob_dict[item] = self.prob_dict[item]
        # specify distributions for those do_vars
        for item in do_vars:
            p_temp = np.zeros(3)
            p_temp[do_vals[item]+1] = 1
            self.prob_dict[item] = torch.Tensor(p_temp)
        prob_out = self.calc_prob(names)
        # restore original list of exogenous variables
        for item in do_vars:
            self.exog_list.remove(item)
        # restore original probability distributions
        for item in do_vars:
            self.prob_dict[item] = names_prob_dict[item]
        # restore original child dictionary
        self.child_name_dict = child_temp
        return prob_out

    def calc_do_cond(self,do_vars,do_vals,cond_prob,uncond_prob):
        # check to make sure the lists don't overlap
        if (np.any([item in uncond_prob for item in cond_prob]) or
                np.any([item in do_vars for item in cond_prob]) or
                np.any([item in do_vars for item in uncond_prob])):
            print('error -- overlapping lists')
            return
        n_cond = len(cond_prob)
        n_uncond = len(uncond_prob)  # fix: was len(cond_prob)
        p_joint = self.calc_do(cond_prob + uncond_prob,do_vars,do_vals)
        p_uncond = self.calc_do(uncond_prob,do_vars,do_vals)
        if n_uncond == 1:
            p_cond = torch.einsum('...i,ijk,j',p_joint,self.tensor_temp,1/p_uncond)
        elif n_uncond == 2:
            p_cond = torch.einsum('...il,ijk,lmn,jm',p_joint,self.tensor_temp,self.tensor_temp,1/p_uncond)
        elif n_uncond == 3:
            p_cond = torch.einsum('...ilp,ijk,lmn,pqr,jmq',
                p_joint,self.tensor_temp,self.tensor_temp,self.tensor_temp,1/p_uncond)
        else:
            print('too many conditioned variables')
            p_cond = p_joint
        return p_cond

    def calc_counterfact(self):
        # not yet implemented
        return

    def calc_cde(self,names,do_vars,do_vals,ctrl_vars,ctrl_vals):
        tot_vars = do_vars + ctrl_vars
        tot_vals = {}
        tot_ctrl_vals = {}
        for item in do_vars:
            tot_vals[item] = do_vals[item]
            tot_ctrl_vals[item] = do_vals[item]  # fix: was `tot_ctrl_vals = ctrl_vals[item]`, which clobbered the dict
        for item in ctrl_vars:
            tot_vals[item] = 0
            tot_ctrl_vals[item] = ctrl_vals[item]
        return self.calc_do(names,tot_vars,tot_vals) - self.calc_do(names,tot_vars,tot_ctrl_vals)

    def calc_te(self,names,do_vars,do_vals):
        do_vals_0 = {}
        for item in do_vals:
            do_vals_0[item] = 0
        # fix: the second calc_do call was missing the `names` argument
        return self.calc_do(names,do_vars,do_vals) - self.calc_do(names,do_vars,do_vals_0)

    def calc_nde(self,names,do_vars,do_vals):
        # identify parents of names
        parent_list = []
        for item in names:
            for item2 in self.parent_name_dict[item]:
                if item2 not in parent_list:
                    parent_list.append(item2)
        par_do_vals = []
        for item in do_vars:
            if item in parent_list:
                par_do_vals.append(item)
        if not par_do_vals:
            print('no direct effect')
            prob_out = 0
        else:
            # calculate probability of non-do_var parents given do_vars = 0
            non_do_parents = [item for item in parent_list if item not in do_vars]
            do_vals_0 = {}
            for item in do_vars:
                do_vals_0[item] = 0
            non_par_do_vars = []
            for item in do_vars:
                if item not in parent_list:
                    non_par_do_vars.append(item)
            nodes_temp = non_do_parents
            prob_temp = self.calc_do(non_do_parents,do_vars,do_vals_0)
            # do outer product to get overall distribution
            for item in non_par_do_vars:
                p_temp = np.zeros(3)
                p_temp[do_vals[item]+1] = 1  # fix: +1 maps the values -1,0,1 to indices 0,1,2, as elsewhere
                p_add = torch.Tensor(p_temp)
                prob_temp = torch.einsum('...i,j',prob_temp,p_add)
                nodes_temp.append(item)
            n_sum = len(nodes_temp)
            n_names = len(names)
            for item in names:
                prob_temp = self.joint_dist_add(item,nodes_temp,prob_temp)
            # calculate overall joint probability distribution
            sum_axes = range(n_names,n_names+n_sum)
            prob_do_vals = np.sum(prob_temp,axis=tuple(sum_axes))
            prob_out = prob_do_vals - self.calc_do(names,do_vars,do_vals_0)
        return prob_out

    def calc_nie(self,names,do_vars,do_vals):  # fix: the signature was missing
        # identify parents of names
        parent_list = []
        for item in names:
            for item2 in self.parent_name_dict[item]:
                if item2 not in parent_list:
                    parent_list.append(item2)
        par_do_vals = []
        for item in parent_list:
            if item in do_vars:
                par_do_vals.append(item)
        if np.all([item in do_vars for item in parent_list]):
            print('no indirect effect')
            prob_out = 0
        else:
            # calculate probability of non-do_var parents given do_vars = do_vals
            non_do_parents = [item for item in parent_list if item not in do_vars]
            do_vals_0 = {}
            for item in do_vars:
                do_vals_0[item] = 0
            non_par_do_vars = []
            for item in do_vars:
                if item not in parent_list:
                    non_par_do_vars.append(item)
            nodes_temp = non_do_parents
            prob_temp = self.calc_do(non_do_parents,do_vars,do_vals)
            # do outer product to get overall distribution
            for item in non_par_do_vars:
                p_temp = np.zeros(3)
                p_temp[1] = 1
                p_add = torch.Tensor(p_temp)
                prob_temp = torch.einsum('...i,j',prob_temp,p_add)
                nodes_temp.append(item)
            n_sum = len(nodes_temp)
            n_names = len(names)
            for item in names:
                prob_temp = self.joint_dist_add(item,nodes_temp,prob_temp)
            # calculate overall joint probability distribution
            sum_axes = range(n_names,n_names+n_sum)
            prob_do_vals = np.sum(prob_temp,axis=tuple(sum_axes))
            prob_out = prob_do_vals - self.calc_do(names,do_vars,do_vals_0)
        return prob_out

    def cond_mut_info(self,target,test,cond,data_in):
        cond_temp = list(cond)  # fix: copy so the caller's list is not mutated
        if not cond:
            # find parents of target
            for item in target:
                for item2 in self.parent_name_dict[item]:
                    if item2 not in cond_temp:
                        cond_temp.append(item2)
        target_inds = [self.entity_list.index(item) for item in target]
        test_inds = [self.entity_list.index(item) for item in test]
        cond_inds = [self.entity_list.index(item) for item in cond_temp]
        n_total = len(data_in)
        null_joint = data_in[:,target_inds + cond_inds]
        null_cond = data_in[:,cond_inds]
        hypth_joint = data_in[:,target_inds + test_inds + cond_inds]
        hypth_cond = data_in[:,test_inds + cond_inds]
        null_entropy = 0
        null_list = []
        hypth_entropy = 0
        hypth_list = []
        for i in range(0,n_total):
            if np.all([np.any(null_joint[i,:] != item) for item in null_list]):
                num_sum = np.sum([np.all(null_joint[i,:] == item) for item in null_joint])
                denom_sum = np.sum([np.all(null_cond[i,:] == item) for item in null_cond])
                null_entropy = null_entropy - num_sum*np.log(num_sum/denom_sum)
                null_list.append(null_joint[i,:])
            if np.all([np.any(hypth_joint[i,:] != item) for item in hypth_list]):
                num_sum = np.sum([np.all(hypth_joint[i,:] == item) for item in hypth_joint])
                denom_sum = np.sum([np.all(hypth_cond[i,:] == item) for item in hypth_cond])
                hypth_entropy = hypth_entropy - num_sum*np.log(num_sum/denom_sum)
                hypth_list.append(hypth_joint[i,:])
        return (null_entropy - hypth_entropy)/n_total

    def g_test(self,name,data_in):
        # do the G-test on a single variable of interest
        p_name = self.calc_prob(name)*len(data_in)
        name_ind = self.entity_list.index(name[0])
        name_data = data_in[:,name_ind]
        p_data = torch.Tensor([np.sum(name_data == -1),np.sum(name_data == 0),np.sum(name_data == 1)])
        print(p_name)
        print(p_data)
        g_val = 2*torch.sum(p_data*torch.log(p_data/p_name))
        dof = 2  # fix: a goodness-of-fit test over 3 categories has 2 degrees of freedom, not len(data)-1
        p_val = 1-sp.stats.chi2.cdf(g_val.item(), dof)
        return g_val,p_val

    def g_test_emp(self,name,data_in):
        # do the G-test on a single variable of interest
        #p_name = self.calc_prob(name)*len(data_in)
        # generate an empirical distribution for variable name
        model_data = np.zeros(len(data_in))
        for i in range(0,len(data_in)):
            model_data[i] = self.sample_vars(name)[0].item()
        p_model = torch.Tensor([np.sum(model_data == -1),np.sum(model_data == 0),np.sum(model_data == 1)])
        print(p_model)
        name_ind = self.entity_list.index(name[0])
        name_data = data_in[:,name_ind]
        p_data = torch.Tensor([np.sum(name_data == -1),np.sum(name_data == 0),np.sum(name_data == 1)])
        print(p_data)
        g_val = 2*torch.sum(p_data*torch.log(p_data/p_model))
        dof = 2  # fix: a goodness-of-fit test over 3 categories has 2 degrees of freedom, not len(data)-1
        p_val = 1-sp.stats.chi2.cdf(g_val.item(), dof)
        return g_val,p_val

    def write_to_cf(self,filename,spacing):
        # write the causal graph to a text file to import into causal fusion
        pos_dict = nx.drawing.layout.planar_layout(self.graph)
        write_dict = {}
        write_dict['name'] = 'causal_graph'
        # write nodes
        write_dict['nodes'] = []
        for i in range(0,len(self.entity_list)):
            name = self.entity_list[i]
            write_dict['nodes'].append({})
            write_dict['nodes'][-1]['id'] = 'node' + str(i)
            write_dict['nodes'][-1]['name'] = name
            write_dict['nodes'][-1]['label'] = name
            write_dict['nodes'][-1]['type'] = 'basic'
            write_dict['nodes'][-1]['metadata'] = {}
            write_dict['nodes'][-1]['metadata']['x'] = spacing*pos_dict[i][0]
            write_dict['nodes'][-1]['metadata']['y'] = spacing*pos_dict[i][1]
            write_dict['nodes'][-1]['metadata']['label'] = ''
            write_dict['nodes'][-1]['metadata']['shape'] = 'ellipse'
            write_dict['nodes'][-1]['metadata']['fontSize'] = 14
            write_dict['nodes'][-1]['metadata']['sizeLabelMode'] = 5
            write_dict['nodes'][-1]['metadata']['font'] = {}
            write_dict['nodes'][-1]['metadata']['font']['size'] = 14
            write_dict['nodes'][-1]['metadata']['size'] = 14
            write_dict['nodes'][-1]['metadata']['labelNodeId'] = 'node' + str(i) + 'ID'
            write_dict['nodes'][-1]['metadata']['labelNodeOffset'] = {}
            write_dict['nodes'][-1]['metadata']['labelNodeOffset']['x'] = 0
            write_dict['nodes'][-1]['metadata']['labelNodeOffset']['y'] = 0
            write_dict['nodes'][-1]['metadata']['labelOffset'] = {}
            write_dict['nodes'][-1]['metadata']['labelOffset']['x'] = 0
            write_dict['nodes'][-1]['metadata']['labelOffset']['y'] = 0
            write_dict['nodes'][-1]['metadata']['shadow'] = {}
            write_dict['nodes'][-1]['metadata']['shadow']['color'] = '#00000080'
            write_dict['nodes'][-1]['metadata']['shadow']['size'] = 0
            write_dict['nodes'][-1]['metadata']['shadow']['x'] = 0
            write_dict['nodes'][-1]['metadata']['shadow']['y'] = 0
        # write edges
        write_dict['edges'] = []
        for i in range(0,len(self.edge_list)):
            item = self.edge_list[i]
            from_node = self.entity_list.index(item[0])
            to_node = self.entity_list.index(item[1])
            write_dict['edges'].append({})
            write_dict['edges'][-1]['id'] = 'node' + str(from_node) + '->node' + str(to_node)
            write_dict['edges'][-1]['from'] = item[0]
            write_dict['edges'][-1]['to'] = item[1]
            write_dict['edges'][-1]['type'] = 'directed'
            write_dict['edges'][-1]['metadata'] = {}
            write_dict['edges'][-1]['metadata']['isLabelDraggable'] = True
            write_dict['edges'][-1]['metadata']['label'] = ''
        write_dict['task'] = {}
        write_dict['metadata'] = {}
        write_dict['project_id'] = '123456789'
        write_dict['_fileType'] = 'graph'
        with open(filename + '.json', 'w') as json_file:
            json.dump(write_dict, json_file)
```

# Test out cg_graph

```
bel_temp = pb.from_bel_script('sag_bel_graph.txt')
graph_test = cg_graph(bel_graph=bel_temp)
print(dir(graph_test))
print()
for item in graph_test.edge_list:
    print(item)
print()
graph_test.write_to_cf('sag_graph',300)

str_list = ['temp =| cloudy','cloudy => rainy','temp => icream','rainy =| icream']
graph_test = cg_graph(str_list=str_list)
for item in graph_test.edge_list:
    print(item)
print()
for item in graph_test.entity_list:
    print(item)

for item in data:
    print(item)
    print(data[item])
    print()
print(data['metadata'])
print()
for item in data['metadata']:
    print(item)
    print(data['metadata'][item])
    print(type(item))
    print()

for item in graph_test.node_dict:
    print([graph_test.node_dict[item].name,graph_test.node_dict[item].n_inputs])

graph_test.prob_init(tot_data)
print(graph_test.exog_list)
for item in graph_test.node_dict:
    print(item)
    print(graph_test.node_dict[item].n_count)
    print(graph_test.node_dict[item].prob_dist)
    print()

graph_test.sample_vars(['icream','rainy'])

x = graph_test.calc_prob(['rainy','icream'])
y = graph_test.calc_prob(['icream'])
y2 = graph_test.calc_prob(['rainy'])
print(x)
print()
print(y)
print(torch.sum(x,dim=0))
print()
print(y2)
print(torch.sum(x,dim=1))
# this is somehow reversed!!

z = graph_test.calc_cond_prob(['rainy'],['icream'])
print()
print(z)
print()
print(torch.matmul(z,y))
print(torch.sum(z,dim=1))
print(torch.sum(z,dim=0))

do_dict = {}
do_dict['rainy'] = torch.Tensor([1]).int()
for item in do_dict:
    print(do_dict[item])
    print(graph_test.prob_dict[item])
print(graph_test.exog_list)
a1 = graph_test.calc_do(['icream'],['rainy'],do_dict)
print(a1)

graph_test.calc_do_cond(['rainy'],do_dict,['temp'],['icream'])

for item in graph_test.prob_dict:
    print(item)
    print(graph_test.prob_dict[item])
    print()
print(graph_test.cond_mut_info(['rainy'],['temp'],['cloudy'],tot_data))
print(graph_test.gen_path_nodes(graph_test.exog_list,['temp']))
print(graph_test.prob_dict['icream'])

a = graph_test.g_test(['icream'],tot_data)
print(a)
print(graph_test.prob_dict['cloudy'][:,graph_test.node_dict['temp'].sample()])
print(graph_test.prob_dict['cloudy'])
print(graph_test.calc_prob(['cloudy']))
print(torch.matmul(graph_test.prob_dict['cloudy'],graph_test.prob_dict['temp']))
print(torch.matmul(graph_test.prob_dict['temp'],graph_test.prob_dict['cloudy']))
print()
a = graph_test.g_test_emp(['icream'],tot_data)
print(a)

x = pyro.sample('',pyro.distributions.Multinomial(probs = torch.Tensor([0.3,0.2,0.5]))).bool()
#samp_out = pyro.sample('',pyro.distributions.Multinomial(probs = self.prob_dist))
print(x)
y = torch.Tensor([-1,0,1])[x]
print(y)
z = torch.Tensor([1]).int()
print(z)
print(z+1)
print(graph_test.calc_prob(['icream']))

print(dir(graph_test))
print()
print(graph_test.entity_list)
print(graph_test.adj_mat)

sp.stats?

def indep_vars(n_samples):
    T_list = []
    C_list = []
    P_list = []
    for i in range(0,n_samples):
        #x = pyro.sample("x_{}".format(i), pyro.distributions.Normal(20,5))
        #T_temp = pyro.distributions.Normal(20,5).sample()
        #C_temp = 0.5*pyro.distributions.Beta(1,1+T_temp/10).sample() + 0.5*pyro.distributions.Uniform(0,1).sample()
        #P_temp = (0.5*pyro.distributions.Exponential(1).sample()
        #    + 0.5*pyro.distributions.Exponential(1/(C_temp+1)).sample())
        T_list.append(pyro.sample("T_{}".format(i), pyro.distributions.Normal(20,5)))
        C_list.append(0.5*pyro.sample("C1_{}".format(i),pyro.distributions.Beta(1,1+T_list[-1]/10))
            + 0.5*pyro.sample("C2_{}".format(i),pyro.distributions.Uniform(0,1)))
        P_list.append(0.5*pyro.sample("P1_{}".format(i), pyro.distributions.Exponential(1))
            + 0.5*pyro.sample("P2_{}".format(i),pyro.distributions.Exponential(1/(C_list[-1]+1))))  # fix: sample name was "P2.{}"
    return T_list,C_list,P_list

def dep_vars(T_list,C_list,P_list):
    n_pts = len(T_list)
    I_list = []
    for i in range(0,n_pts):
        T_temp = T_list[i]
        C_temp = C_list[i]
        P_temp = P_list[i]
        if P_temp > 2.5 or T_temp < 15:
            I_list.append(1e-6*pyro.sample("I_{}".format(i),pyro.distributions.Bernoulli(1)))
        else:
            I_list.append(pyro.sample("I_{}".format(i),
                pyro.distributions.Beta(2*(2.5-P_temp)*(T_temp-12)/(2.5*12),2)))
        #I_temp = torch.tensor(0.5)
    return I_list

n_data = 10000
temp,cloud,precip = indep_vars(n_data)
icream = dep_vars(temp,cloud,precip)

# trinarize data relative to baseline
T_base = 20
C_base = 0.38
P_base = 1.2
I_base = 0.23
T_sig = 1.0
C_sig = 0.02
P_sig = 0.06
I_sig = 0.01

n_count_tri = np.zeros((3,3,3))
p_ave_tri = np.zeros((3,3,3,3))

def cond_test(val,base,sig):
    conds = [val < base-sig,val > base-sig and val < base+sig,val > base+sig]
    return conds.index(True)-1

T_ind = []
C_ind = []
P_ind = []
I_ind = []
for ind in range(0,n_data):
    T_ind.append(cond_test(temp[ind],T_base,T_sig))
    C_ind.append(cond_test(cloud[ind],C_base,C_sig))
    P_ind.append(cond_test(precip[ind],P_base,P_sig))
    I_ind.append(cond_test(icream[ind],I_base,I_sig))

tot_data = np.asarray([T_ind,C_ind,P_ind,I_ind]).T
print(np.shape(tot_data))
print(tot_data[0:5,:])
print(type(tot_data))
```
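One subtlety in the `cond_test` helper above: a value that lands exactly on a band edge makes every condition false, so `conds.index(True)` raises `ValueError`. A minimal standalone sketch of the same trinarization rule with closed boundaries — this `cond_test` is a re-statement for illustration, not the notebook's exact definition:

```
# Map a value to -1/0/+1 relative to a baseline band [base-sig, base+sig].
# Boundary values are folded into the middle band so no case falls through.
def cond_test(val, base, sig):
    conds = [val < base - sig,
             base - sig <= val <= base + sig,
             val > base + sig]
    return conds.index(True) - 1

print(cond_test(18.0, 20, 1.0))  # below the band -> -1
print(cond_test(20.3, 20, 1.0))  # inside the band -> 0
print(cond_test(22.5, 20, 1.0))  # above the band -> 1
```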
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# AI Platform (Unified) SDK: Train and deploy an SKLearn model with pre-built containers (formerly hosted runtimes)

## Installation

Install the latest (preview) version of AI Platform (Unified) SDK.

```
! pip3 install -U google-cloud-aiplatform --user
```

Install the Google *cloud-storage* library as well.

```
! pip3 install google-cloud-storage
```

### Restart the Kernel

Once you've installed the AI Platform (Unified) SDK and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.

```
import os

if not os.getenv("AUTORUN"):
    # Automatically restart kernel after installs
    import IPython
    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```

## Before you begin

### GPU run-time

*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**

### Set up your GCP project

**The following steps are required, regardless of your notebook environment.**

1. [Select or create a GCP project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the AI Platform APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [Google Cloud SDK](https://cloud.google.com/sdk) is already installed in AI Platform Notebooks.
5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.

**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.

```
PROJECT_ID = "[your-project-id]" #@param {type:"string"}

if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)

! gcloud config set project $PROJECT_ID
```

#### Region

You can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for AI Platform (Unified). We recommend, when possible, choosing the region closest to you.

- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`

You cannot use a Multi-Regional Storage bucket for training with AI Platform. Not all regions provide support for all AI Platform services. For the latest support per region, see [Region support for AI Platform (Unified) services](https://cloud.google.com/ai-platform-unified/docs/general/locations)

```
REGION = 'us-central1' #@param {type: "string"}
```

#### Timestamp

If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, you create a timestamp for each instance session and append it to the names of the resources you create in this tutorial.
```
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```

### Authenticate your GCP account

**If you are using AI Platform Notebooks**, your environment is already authenticated. Skip this step.

*Note: If you are on an AI Platform notebook and run the cell, the cell knows to skip executing the authentication steps.*

```
import os
import sys

# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# If on AI Platform, then don't execute this code
if not os.path.exists('/opt/deeplearning/metadata/env_version'):
    if 'google.colab' in sys.modules:
        from google.colab import auth as google_auth
        google_auth.authenticate_user()

    # If you are running this tutorial in a notebook locally, replace the string
    # below with the path to your service account key and run this cell to
    # authenticate your Google Cloud account.
    else:
        %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json

        # Log in to your account on Google Cloud
        ! gcloud auth login
```

### Create a Cloud Storage bucket

**The following steps are required, regardless of your notebook environment.**

This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.

Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.

```
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
    BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
```

**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.

```
! gsutil mb -l $REGION gs://$BUCKET_NAME
```

Finally, validate access to your Cloud Storage bucket by examining its contents:

```
! gsutil ls -al gs://$BUCKET_NAME
```

### Set up variables

Next, set up some variables used throughout the tutorial.

### Import libraries and define constants

#### Import AI Platform (Unified) SDK

Import the AI Platform (Unified) SDK into our Python environment.

```
import os
import sys
import time

from google.cloud.aiplatform import gapic as aip
from google.protobuf.struct_pb2 import Value
from google.protobuf.struct_pb2 import Struct
from google.protobuf.json_format import MessageToJson
from google.protobuf.json_format import ParseDict
```

#### AI Platform (Unified) constants

Set up the following constants for AI Platform (Unified):

- `API_ENDPOINT`: The AI Platform (Unified) API service endpoint for dataset, model, job, pipeline and endpoint services.
- `PARENT`: The AI Platform (Unified) location root path for dataset, model and endpoint resources.

```
# API Endpoint
API_ENDPOINT = "{0}-aiplatform.googleapis.com".format(REGION)

# AI Platform (Unified) location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
```

## Clients

The AI Platform (Unified) SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AI Platform).

You will use several clients in this tutorial, so set them all up upfront.

- Model Service for managed models.
- Endpoint Service for deployment.
- Job Service for batch jobs and custom training.
- Prediction Service for serving.
```
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}

def create_model_client():
    client = aip.ModelServiceClient(
        client_options=client_options
    )
    return client

def create_endpoint_client():
    client = aip.EndpointServiceClient(
        client_options=client_options
    )
    return client

def create_prediction_client():
    client = aip.PredictionServiceClient(
        client_options=client_options
    )
    return client

def create_job_client():
    client = aip.JobServiceClient(
        client_options=client_options
    )
    return client

clients = {}
clients['model'] = create_model_client()
clients['endpoint'] = create_endpoint_client()
clients['prediction'] = create_prediction_client()
clients['job'] = create_job_client()

for client in clients.items():
    print(client)
```

## Prepare a trainer script

### Package assembly

```
# Make folder for python training script
! rm -rf custom
! mkdir custom

# Add package information
! touch custom/README.md

setup_cfg = "[egg_info]\n\
tag_build =\n\
tag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg

setup_py = "import setuptools\n\
setuptools.setup(\n\
    install_requires=[\n\
    ],\n\
    packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py

pkg_info = "Metadata-Version: 1.0\n\
Name: Custom Census Income\n\
Version: 0.0.0\n\
Summary: Demonstration training script\n\
Home-page: www.google.com\n\
Author: Google\n\
Author-email: aferlitsch@google.com\n\
License: Public\n\
Description: Demo\n\
Platform: AI Platform (Unified)"
! echo "$pkg_info" > custom/PKG-INFO

# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
```

### Task.py contents

```
%%writefile custom/trainer/task.py
# Single Instance Training for Census Income

from sklearn.ensemble import RandomForestClassifier
import joblib
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer
import datetime
import pandas as pd
from google.cloud import storage
import numpy as np
import argparse
import os
import sys

parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
                    default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
args = parser.parse_args()

print('Python Version = {}'.format(sys.version))

# Public bucket holding the census data
bucket = storage.Client().bucket('cloud-samples-data')
# Path to the data inside the public bucket
blob = bucket.blob('ai-platform/sklearn/census_data/adult.data')
# Download the data
blob.download_to_filename('adult.data')

# Define the format of your input data including unused columns
# (These are the columns from the census data files)
COLUMNS = (
    'age', 'workclass', 'fnlwgt', 'education', 'education-num',
    'marital-status', 'occupation', 'relationship', 'race', 'sex',
    'capital-gain', 'capital-loss', 'hours-per-week', 'native-country',
    'income-level'
)

# Categorical columns are columns that need to be turned into a numerical value to be used by scikit-learn
CATEGORICAL_COLUMNS = (
    'workclass', 'education', 'marital-status', 'occupation',
    'relationship', 'race', 'sex', 'native-country'
)

# Load the training census dataset
with open('./adult.data', 'r') as train_data:
    raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)

# Remove the column we are trying to predict ('income-level') from our features list
# Convert the Dataframe to lists of lists
train_features = raw_training_data.drop('income-level', axis=1).values.tolist()
# Create our training labels list, convert the Dataframe to lists of lists
train_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()

# Since the census data set has categorical features, we need to convert
# them to numerical values. We'll use a list of pipelines to convert each
# categorical column and then use FeatureUnion to combine them before calling
# the RandomForestClassifier.
categorical_pipelines = []

# Each categorical column needs to be extracted individually and converted to a numerical value.
# To do this, each categorical column will use a pipeline that extracts one feature column via
# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.
# A scores array (created below) will select and extract the feature column. The scores array is
# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.
for i, col in enumerate(COLUMNS[:-1]):
    if col in CATEGORICAL_COLUMNS:
        # Create a scores array to get the individual categorical column.
        # Example:
        #  data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical',
        #          'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']
        #  scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        #
        #  Returns: [['State-gov']]
        # Build the scores array.
        scores = [0] * len(COLUMNS[:-1])
        # This column is the categorical column we want to extract.
        scores[i] = 1
        skb = SelectKBest(k=1)
        skb.scores_ = scores
        # Convert the categorical column to a numerical value
        lbn = LabelBinarizer()
        r = skb.transform(train_features)
        lbn.fit(r)
        # Create the pipeline to extract the categorical feature
        categorical_pipelines.append(
            ('categorical-{}'.format(i), Pipeline([
                ('SKB-{}'.format(i), skb),
                ('LBN-{}'.format(i), lbn)])))

# Create pipeline to extract the numerical features
skb = SelectKBest(k=6)
# From COLUMNS use the features that are numerical
skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]
categorical_pipelines.append(('numerical', skb))

# Combine all the features using FeatureUnion
preprocess = FeatureUnion(categorical_pipelines)

# Create the classifier
classifier = RandomForestClassifier()

# Transform the features and fit them to the classifier
classifier.fit(preprocess.transform(train_features), train_labels)

# Create the overall model as a single pipeline
pipeline = Pipeline([
    ('union', preprocess),
    ('classifier', classifier)
])

# Split path into bucket and subdirectory
bucket = args.model_dir.split('/')[2]
subdir = args.model_dir.split('/')[-1]

# Write model to a local file
joblib.dump(pipeline, 'model.joblib')

# Upload the model to GCS
bucket = storage.Client().bucket(bucket)
blob = bucket.blob(subdir + '/model.joblib')
blob.upload_from_filename('model.joblib')
```

### Store training script on your Cloud Storage bucket

```
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz gs://$BUCKET_NAME/census.tar.gz
```

## Train a model

### [projects.locations.customJobs.create](https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1beta1/projects.locations.trainingPipelines/create)

#### Request

```
TRAIN_IMAGE = 'gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest'

JOB_NAME = "custom_job_SKL" + TIMESTAMP

WORKER_POOL_SPEC = [
    {
        "replica_count": 1,
        "machine_spec": {
            "machine_type": 'n1-standard-4'
        },
        "python_package_spec": {
            "executor_image_uri": TRAIN_IMAGE,
            "package_uris": ["gs://" + BUCKET_NAME + "/census.tar.gz"],
            "python_module": "trainer.task",
            "args": [
                "--model-dir=" + 'gs://{}/{}'.format(BUCKET_NAME, JOB_NAME)
            ]
        }
    }
]

training_job = aip.CustomJob(
    display_name=JOB_NAME,
    job_spec={
        "worker_pool_specs": WORKER_POOL_SPEC
    }
)

print(MessageToJson(
    aip.CreateCustomJobRequest(
        parent=PARENT,
        custom_job=training_job
    ).__dict__["_pb"])
)
```

*Example output*:
```
{
  "parent": "projects/migration-ucaip-training/locations/us-central1",
  "customJob": {
    "displayName": "custom_job_SKL20210323185534",
    "jobSpec": {
      "workerPoolSpecs": [
        {
          "machineSpec": {
            "machineType": "n1-standard-4"
          },
          "replicaCount": "1",
          "pythonPackageSpec": {
            "executorImageUri": "gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest",
            "packageUris": [
              "gs://migration-ucaip-trainingaip-20210323185534/census.tar.gz"
            ],
            "pythonModule": "trainer.task",
            "args": [
              "--model-dir=gs://migration-ucaip-trainingaip-20210323185534/custom_job_SKL20210323185534"
            ]
          }
        }
      ]
    }
  }
}
```

#### Call

```
request = clients["job"].create_custom_job(
    parent=PARENT,
    custom_job=training_job
)
```

#### Response

```
print(MessageToJson(request.__dict__["_pb"]))
```

*Example output*:
```
{
  "name": "projects/116273516712/locations/us-central1/customJobs/3216493723709865984",
  "displayName": "custom_job_SKL20210323185534",
  "jobSpec": {
    "workerPoolSpecs": [
      {
        "machineSpec": {
          "machineType": "n1-standard-4"
        },
        "replicaCount": "1",
        "diskSpec": {
          "bootDiskType": "pd-ssd",
          "bootDiskSizeGb": 100
        },
        "pythonPackageSpec": {
          "executorImageUri": "gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest",
          "packageUris": [
            "gs://migration-ucaip-trainingaip-20210323185534/census.tar.gz"
          ],
          "pythonModule": "trainer.task",
          "args": [
            "--model-dir=gs://migration-ucaip-trainingaip-20210323185534/custom_job_SKL20210323185534"
          ]
        }
      }
    ]
  },
  "state": "JOB_STATE_PENDING",
  "createTime": "2021-03-23T18:55:41.688375Z",
  "updateTime": "2021-03-23T18:55:41.688375Z"
}
```

```
# The full unique ID for the custom training job
custom_training_id = request.name
# The short numeric ID for the custom training job
custom_training_short_id = custom_training_id.split('/')[-1]

print(custom_training_id)
```

### [projects.locations.customJobs.get](https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1beta1/projects.locations.trainingPipelines/get)

#### Call

```
request = clients['job'].get_custom_job(
    name=custom_training_id
)
```

#### Response

```
print(MessageToJson(request.__dict__["_pb"]))
```

*Example output*:
```
{
  "name": "projects/116273516712/locations/us-central1/customJobs/3216493723709865984",
  "displayName": "custom_job_SKL20210323185534",
  "jobSpec": {
    "workerPoolSpecs": [
      {
        "machineSpec": {
          "machineType": "n1-standard-4"
        },
        "replicaCount": "1",
        "diskSpec": {
          "bootDiskType": "pd-ssd",
          "bootDiskSizeGb": 100
        },
        "pythonPackageSpec": {
          "executorImageUri": "gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest",
          "packageUris": [
            "gs://migration-ucaip-trainingaip-20210323185534/census.tar.gz"
          ],
          "pythonModule": "trainer.task",
          "args": [
            "--model-dir=gs://migration-ucaip-trainingaip-20210323185534/custom_job_SKL20210323185534"
          ]
        }
      }
    ]
  },
  "state": "JOB_STATE_PENDING",
  "createTime": "2021-03-23T18:55:41.688375Z",
  "updateTime": "2021-03-23T18:55:41.688375Z"
}
```

```
while True:
    response = clients["job"].get_custom_job(name=custom_training_id)
    # fix: custom jobs report aip.JobState (e.g. JOB_STATE_PENDING above), not aip.PipelineState
    if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        if response.state == aip.JobState.JOB_STATE_FAILED:
            break
    else:
        print("Training Time:", response.end_time - response.start_time)
        break
    time.sleep(60)

# model artifact output directory on Google Cloud Storage
model_artifact_dir = response.job_spec.worker_pool_specs[0].python_package_spec.args[0].split("=")[-1]
print("artifact location " + model_artifact_dir)
```

## Deploy the model

### [projects.locations.models.upload](https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1beta1/projects.locations.models/upload)

#### Request

```
DEPLOY_IMAGE = 'gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest'

model = {
    "display_name": "custom_job_SKL" + TIMESTAMP,
    "artifact_uri": model_artifact_dir,
    "container_spec": {
        "image_uri": DEPLOY_IMAGE,
        "ports": [{"container_port": 8080}]
    }
}

print(MessageToJson(
    aip.UploadModelRequest(
        parent=PARENT,
        model=model
    ).__dict__["_pb"])
)
```

*Example output*:
```
{
  "parent": "projects/migration-ucaip-training/locations/us-central1",
  "model": {
    "displayName": "custom_job_SKL20210323185534",
    "containerSpec": {
      "imageUri": "gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest",
      "ports": [
        {
          "containerPort": 8080
        }
      ]
    },
    "artifactUri": "gs://migration-ucaip-trainingaip-20210323185534/custom_job_SKL20210323185534"
  }
}
```

#### Call

```
request = clients['model'].upload_model(
    parent=PARENT,
    model=model
)
```

#### Response

```
result = request.result()

print(MessageToJson(result.__dict__["_pb"]))
```

*Example output*:
```
{
  "model": "projects/116273516712/locations/us-central1/models/5984808915752189952"
}
```

```
model_id = result.model
```

## Make batch predictions

### Make a batch prediction file

```
import json

import tensorflow as tf

INSTANCES = [
    [25, "Private", 226802, "11th", 7, "Never-married", "Machine-op-inspct",
     "Own-child", "Black", "Male", 0, 0, 40, "United-States"],
    [38, "Private", 89814, "HS-grad", 9, "Married-civ-spouse", "Farming-fishing",
     "Husband", "White", "Male", 0, 0, 50, "United-States"],
    [28, "Local-gov", 336951, "Assoc-acdm", 12, "Married-civ-spouse", "Protective-serv",
     "Husband", "White", "Male", 0, 0, 40, "United-States"],
    [44, "Private", 160323, "Some-college", 10, "Married-civ-spouse", "Machine-op-inspct",
     "Husband", "Black", "Male", 7688, 0, 40, "United-States"],
    [18, "?", 103497, "Some-college", 10, "Never-married", "?",
     "Own-child", "White", "Female", 0, 0, 30, "United-States"],
    [34, "Private", 198693, "10th", 6, "Never-married", "Other-service",
     "Not-in-family", "White", "Male", 0, 0, 30, "United-States"],
    [29, "?", 227026, "HS-grad", 9, "Never-married", "?",
     "Unmarried", "Black", "Male", 0, 0, 40, "United-States"],
    [63, "Self-emp-not-inc", 104626, "Prof-school", 15, "Married-civ-spouse", "Prof-specialty",
     "Husband", "White", "Male", 3103, 0, 32, "United-States"],
    [24, "Private", 369667, "Some-college", 10, "Never-married", "Other-service",
     "Unmarried", "White", "Female", 0, 0, 40, "United-States"],
    [55, "Private", 104996, "7th-8th", 4, "Married-civ-spouse", "Craft-repair",
     "Husband", "White", "Male", 0, 0, 10, "United-States"]
]

gcs_input_uri = "gs://" + BUCKET_NAME + "/" + "test.jsonl"

with tf.io.gfile.GFile(gcs_input_uri, 'w') as f:
    for i in INSTANCES:
        f.write(json.dumps(i) + '\n')

! gsutil cat $gcs_input_uri
```

*Example output*:
```
[25, "Private", 226802, "11th", 7, "Never-married", "Machine-op-inspct", "Own-child", "Black", "Male", 0, 0, 40, "United-States"]
[38, "Private", 89814, "HS-grad", 9, "Married-civ-spouse", "Farming-fishing", "Husband", "White", "Male", 0, 0, 50, "United-States"]
[28, "Local-gov", 336951, "Assoc-acdm", 12, "Married-civ-spouse", "Protective-serv", "Husband", "White", "Male", 0, 0, 40, "United-States"]
[44, "Private", 160323, "Some-college", 10, "Married-civ-spouse", "Machine-op-inspct", "Husband", "Black", "Male", 7688, 0, 40, "United-States"]
[18, "?", 103497, "Some-college", 10, "Never-married", "?", "Own-child", "White", "Female", 0, 0, 30, "United-States"]
[34, "Private", 198693, "10th", 6, "Never-married", "Other-service", "Not-in-family", "White", "Male", 0, 0, 30, "United-States"]
[29, "?", 227026, "HS-grad", 9, "Never-married", "?", "Unmarried", "Black", "Male", 0, 0, 40, "United-States"]
[63, "Self-emp-not-inc", 104626, "Prof-school", 15, "Married-civ-spouse", "Prof-specialty", "Husband", "White", "Male", 3103, 0, 32, "United-States"]
[24, "Private", 369667, "Some-college", 10, "Never-married", "Other-service", "Unmarried", "White", "Female", 0, 0, 40, "United-States"]
[55, "Private", 104996, "7th-8th", 4, "Married-civ-spouse", "Craft-repair", "Husband", "White", "Male", 0, 0, 10, "United-States"]
```

### [projects.locations.batchPredictionJobs.create](https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1beta1/projects.locations.batchPredictionJobs/create)

#### Request

```
model_parameters = Value(struct_value=Struct(
    fields={
        "confidence_threshold": Value(number_value=0.5),
        "max_predictions": Value(number_value=10000.0)
    }
))

batch_prediction_job = {
    "display_name": "custom_job_SKL" + TIMESTAMP,
    "model": model_id,
    "input_config": {
        "instances_format": "jsonl",
        "gcs_source": {
            "uris": [gcs_input_uri]
        }
    },
    "model_parameters": model_parameters,
    "output_config": {
        "predictions_format": "jsonl",
"gcs_destination": { "output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/" } }, "dedicated_resources": { "machine_spec": { "machine_type": "n1-standard-2" }, "starting_replica_count": 1, "max_replica_count": 1 } } print(MessageToJson( aip.CreateBatchPredictionJobRequest( parent=PARENT, batch_prediction_job=batch_prediction_job ).__dict__["_pb"]) ) ``` *Example output*: ``` { "parent": "projects/migration-ucaip-training/locations/us-central1", "batchPredictionJob": { "displayName": "custom_job_SKL20210323185534", "model": "projects/116273516712/locations/us-central1/models/5984808915752189952", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { "uris": [ "gs://migration-ucaip-trainingaip-20210323185534/test.jsonl" ] } }, "modelParameters": { "confidence_threshold": 0.5, "max_predictions": 10000.0 }, "outputConfig": { "predictionsFormat": "jsonl", "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210323185534/batch_output/" } }, "dedicatedResources": { "machineSpec": { "machineType": "n1-standard-2" }, "startingReplicaCount": 1, "maxReplicaCount": 1 } } } ``` #### Call ``` request = clients["job"].create_batch_prediction_job( parent=PARENT, batch_prediction_job=batch_prediction_job ) ``` #### Response ``` print(MessageToJson(request.__dict__["_pb"])) ``` *Example output*: ``` { "name": "projects/116273516712/locations/us-central1/batchPredictionJobs/2509428582212698112", "displayName": "custom_job_SKL20210323185534", "model": "projects/116273516712/locations/us-central1/models/5984808915752189952", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { "uris": [ "gs://migration-ucaip-trainingaip-20210323185534/test.jsonl" ] } }, "modelParameters": { "max_predictions": 10000.0, "confidence_threshold": 0.5 }, "outputConfig": { "predictionsFormat": "jsonl", "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210323185534/batch_output/" } }, "dedicatedResources": { "machineSpec": { "machineType": 
"n1-standard-2" }, "startingReplicaCount": 1, "maxReplicaCount": 1 }, "manualBatchTuningParameters": {}, "state": "JOB_STATE_PENDING", "createTime": "2021-03-23T19:05:07.344290Z", "updateTime": "2021-03-23T19:05:07.344290Z" } ``` ``` # The fully qualified ID for the batch job batch_job_id = request.name # The short numeric ID for the batch job batch_job_short_id = batch_job_id.split('/')[-1] print(batch_job_id) ``` ### [projects.locations.batchPredictionJobs.get](https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1beta1/projects.locations.batchPredictionJobs/get) #### Call ``` request = clients["job"].get_batch_prediction_job( name=batch_job_id ) ``` #### Response ``` print(MessageToJson(request.__dict__["_pb"])) ``` *Example output*: ``` { "name": "projects/116273516712/locations/us-central1/batchPredictionJobs/2509428582212698112", "displayName": "custom_job_SKL20210323185534", "model": "projects/116273516712/locations/us-central1/models/5984808915752189952", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { "uris": [ "gs://migration-ucaip-trainingaip-20210323185534/test.jsonl" ] } }, "modelParameters": { "confidence_threshold": 0.5, "max_predictions": 10000.0 }, "outputConfig": { "predictionsFormat": "jsonl", "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210323185534/batch_output/" } }, "dedicatedResources": { "machineSpec": { "machineType": "n1-standard-2" }, "startingReplicaCount": 1, "maxReplicaCount": 1 }, "manualBatchTuningParameters": {}, "state": "JOB_STATE_PENDING", "createTime": "2021-03-23T19:05:07.344290Z", "updateTime": "2021-03-23T19:05:07.344290Z" } ``` ``` def get_latest_predictions(gcs_out_dir): ''' Get the latest prediction subfolder using the timestamp in the subfolder name''' folders = !gsutil ls $gcs_out_dir latest = "" for folder in folders: subfolder = folder.split('/')[-2] if subfolder.startswith('prediction-'): if subfolder > latest: latest = folder[:-1] return latest while True: 
response = clients["job"].get_batch_prediction_job(name=batch_job_id) if response.state != aip.JobState.JOB_STATE_SUCCEEDED: print("The job has not completed:", response.state) if response.state == aip.JobState.JOB_STATE_FAILED: break else: folder = get_latest_predictions(response.output_config.gcs_destination.output_uri_prefix) ! gsutil ls $folder/prediction* ! gsutil cat -h $folder/prediction* break time.sleep(60) ``` *Example output*: ``` ==> gs://migration-ucaip-trainingaip-20210323185534/batch_output/prediction-custom_job_SKL20210323185534-2021_03_23T12_05_07_282Z/prediction.errors_stats-00000-of-00001 <== ==> gs://migration-ucaip-trainingaip-20210323185534/batch_output/prediction-custom_job_SKL20210323185534-2021_03_23T12_05_07_282Z/prediction.results-00000-of-00001 <== {"instance": [25, "Private", 226802, "11th", 7, "Never-married", "Machine-op-inspct", "Own-child", "Black", "Male", 0, 0, 40, "United-States"], "prediction": false} {"instance": [38, "Private", 89814, "HS-grad", 9, "Married-civ-spouse", "Farming-fishing", "Husband", "White", "Male", 0, 0, 50, "United-States"], "prediction": false} {"instance": [28, "Local-gov", 336951, "Assoc-acdm", 12, "Married-civ-spouse", "Protective-serv", "Husband", "White", "Male", 0, 0, 40, "United-States"], "prediction": false} {"instance": [44, "Private", 160323, "Some-college", 10, "Married-civ-spouse", "Machine-op-inspct", "Husband", "Black", "Male", 7688, 0, 40, "United-States"], "prediction": true} {"instance": [18, "?", 103497, "Some-college", 10, "Never-married", "?", "Own-child", "White", "Female", 0, 0, 30, "United-States"], "prediction": false} {"instance": [34, "Private", 198693, "10th", 6, "Never-married", "Other-service", "Not-in-family", "White", "Male", 0, 0, 30, "United-States"], "prediction": false} {"instance": [29, "?", 227026, "HS-grad", 9, "Never-married", "?", "Unmarried", "Black", "Male", 0, 0, 40, "United-States"], "prediction": false} {"instance": [63, "Self-emp-not-inc", 104626, "Prof-school", 
15, "Married-civ-spouse", "Prof-specialty", "Husband", "White", "Male", 3103, 0, 32, "United-States"], "prediction": false} {"instance": [24, "Private", 369667, "Some-college", 10, "Never-married", "Other-service", "Unmarried", "White", "Female", 0, 0, 40, "United-States"], "prediction": false} {"instance": [55, "Private", 104996, "7th-8th", 4, "Married-civ-spouse", "Craft-repair", "Husband", "White", "Male", 0, 0, 10, "United-States"], "prediction": false} ``` ## Make online predictions ### [projects.locations.endpoints.create](https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1beta1/projects.locations.endpoints/create) #### Request ``` endpoint = { "display_name": "custom_job_SKL" + TIMESTAMP } print(MessageToJson( aip.CreateEndpointRequest( parent=PARENT, endpoint=endpoint ).__dict__["_pb"]) ) ``` *Example output*: ``` { "parent": "projects/migration-ucaip-training/locations/us-central1", "endpoint": { "displayName": "custom_job_SKL20210323185534" } } ``` #### Call ``` request = clients["endpoint"].create_endpoint( parent=PARENT, endpoint=endpoint ) ``` #### Response ``` result = request.result() print(MessageToJson(result.__dict__["_pb"])) ``` *Example output*: ``` { "name": "projects/116273516712/locations/us-central1/endpoints/695823734614786048" } ``` ``` # The full unique ID for the endpoint endpoint_id = result.name # The short numeric ID for the endpoint endpoint_short_id = endpoint_id.split('/')[-1] print(endpoint_id) ``` ### [projects.locations.endpoints.deployModel](https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1beta1/projects.locations.endpoints/deployModel) #### Request ``` deployed_model = { "model": model_id, "display_name": "custom_job_SKL" + TIMESTAMP, "dedicated_resources": { "min_replica_count": 1, "max_replica_count": 1, "machine_spec": { "machine_type": 'n1-standard-4', "accelerator_count": 0 } } } print(MessageToJson( aip.DeployModelRequest( endpoint=endpoint_id, deployed_model=deployed_model, 
traffic_split={"0": 100} ).__dict__["_pb"]) ) ``` *Example output*: ``` { "endpoint": "projects/116273516712/locations/us-central1/endpoints/695823734614786048", "deployedModel": { "model": "projects/116273516712/locations/us-central1/models/5984808915752189952", "displayName": "custom_job_SKL20210323185534", "dedicatedResources": { "machineSpec": { "machineType": "n1-standard-4" }, "minReplicaCount": 1, "maxReplicaCount": 1 } }, "trafficSplit": { "0": 100 } } ``` #### Call ``` request = clients["endpoint"].deploy_model( endpoint=endpoint_id, deployed_model=deployed_model, traffic_split={"0": 100} ) ``` #### Response ``` result = request.result() print(MessageToJson(result.__dict__["_pb"])) ``` *Example output*: ``` { "deployedModel": { "id": "6653241616695820288" } } ``` ``` # The unique ID for the deployed model deployed_model_id = result.deployed_model.id print(deployed_model_id) ``` ### [projects.locations.endpoints.predict](https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1beta1/projects.locations.endpoints/predict) ### Prepare file for online prediction ``` INSTANCES = [ [25, "Private", 226802, "11th", 7, "Never-married", "Machine-op-inspct", "Own-child", "Black", "Male", 0, 0, 40, "United-States"], [38, "Private", 89814, "HS-grad", 9, "Married-civ-spouse", "Farming-fishing", "Husband", "White", "Male", 0, 0, 50, "United-States"], [28, "Local-gov", 336951, "Assoc-acdm", 12, "Married-civ-spouse", "Protective-serv", "Husband", "White", "Male", 0, 0, 40, "United-States"], [44, "Private", 160323, "Some-college", 10, "Married-civ-spouse", "Machine-op-inspct", "Husband", "Black", "Male", 7688, 0, 40, "United-States"], [18, "?", 103497, "Some-college", 10, "Never-married", "?", "Own-child", "White", "Female", 0, 0, 30, "United-States"], [34, "Private", 198693, "10th", 6, "Never-married", "Other-service", "Not-in-family", "White", "Male", 0, 0, 30, "United-States"], [29, "?", 227026, "HS-grad", 9, "Never-married", "?", "Unmarried", "Black", "Male", 
0, 0, 40, "United-States"], [63, "Self-emp-not-inc", 104626, "Prof-school", 15, "Married-civ-spouse", "Prof-specialty", "Husband", "White", "Male", 3103, 0, 32, "United-States"], [24, "Private", 369667, "Some-college", 10, "Never-married", "Other-service", "Unmarried", "White", "Female", 0, 0, 40, "United-States"], [55, "Private", 104996, "7th-8th", 4, "Married-civ-spouse", "Craft-repair", "Husband", "White", "Male", 0, 0, 10, "United-States"] ] ``` #### Request ``` prediction_request = aip.PredictRequest(endpoint=endpoint_id) prediction_request.instances.append(INSTANCES) print(MessageToJson( prediction_request.__dict__["_pb"]) ) ``` *Example output*: ``` { "endpoint": "projects/116273516712/locations/us-central1/endpoints/695823734614786048", "instances": [ [ [ 25.0, "Private", 226802.0, "11th", 7.0, "Never-married", "Machine-op-inspct", "Own-child", "Black", "Male", 0.0, 0.0, 40.0, "United-States" ], [ 38.0, "Private", 89814.0, "HS-grad", 9.0, "Married-civ-spouse", "Farming-fishing", "Husband", "White", "Male", 0.0, 0.0, 50.0, "United-States" ], [ 28.0, "Local-gov", 336951.0, "Assoc-acdm", 12.0, "Married-civ-spouse", "Protective-serv", "Husband", "White", "Male", 0.0, 0.0, 40.0, "United-States" ], [ 44.0, "Private", 160323.0, "Some-college", 10.0, "Married-civ-spouse", "Machine-op-inspct", "Husband", "Black", "Male", 7688.0, 0.0, 40.0, "United-States" ], [ 18.0, "?", 103497.0, "Some-college", 10.0, "Never-married", "?", "Own-child", "White", "Female", 0.0, 0.0, 30.0, "United-States" ], [ 34.0, "Private", 198693.0, "10th", 6.0, "Never-married", "Other-service", "Not-in-family", "White", "Male", 0.0, 0.0, 30.0, "United-States" ], [ 29.0, "?", 227026.0, "HS-grad", 9.0, "Never-married", "?", "Unmarried", "Black", "Male", 0.0, 0.0, 40.0, "United-States" ], [ 63.0, "Self-emp-not-inc", 104626.0, "Prof-school", 15.0, "Married-civ-spouse", "Prof-specialty", "Husband", "White", "Male", 3103.0, 0.0, 32.0, "United-States" ], [ 24.0, "Private", 369667.0, "Some-college", 
10.0, "Never-married", "Other-service", "Unmarried", "White", "Female", 0.0, 0.0, 40.0, "United-States" ], [ 55.0, "Private", 104996.0, "7th-8th", 4.0, "Married-civ-spouse", "Craft-repair", "Husband", "White", "Male", 0.0, 0.0, 10.0, "United-States" ] ] ] } ``` #### Call ``` request = clients["prediction"].predict( endpoint=endpoint_id, instances=INSTANCES ) ``` #### Response ``` print(MessageToJson(request.__dict__["_pb"])) ``` *Example output*: ``` { "predictions": [ false, false, false, true, false, false, false, false, false, false ], "deployedModelId": "6653241616695820288" } ``` ### [projects.locations.endpoints.undeployModel](https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1beta1/projects.locations.endpoints/undeployModel) #### Call ``` request = clients['endpoint'].undeploy_model( endpoint=endpoint_id, deployed_model_id=deployed_model_id, traffic_split={} ) ``` #### Response ``` result = request.result() print(MessageToJson(result.__dict__["_pb"])) ``` *Example output*: ``` {} ``` # Cleaning up To clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial. 
```
delete_model = True
delete_endpoint = True
delete_pipeline = True
delete_batchjob = True
delete_bucket = True

# Delete the model using the AI Platform (Unified) fully qualified identifier for the model
try:
    if delete_model:
        clients['model'].delete_model(name=model_id)
except Exception as e:
    print(e)

# Delete the endpoint using the AI Platform (Unified) fully qualified identifier for the endpoint
try:
    if delete_endpoint:
        clients['endpoint'].delete_endpoint(name=endpoint_id)
except Exception as e:
    print(e)

# Delete the custom training job using the AI Platform (Unified) fully qualified identifier for the custom training job
try:
    if delete_pipeline:
        clients['job'].delete_custom_job(name=custom_training_id)
except Exception as e:
    print(e)

# Delete the batch prediction job using the AI Platform (Unified) fully qualified identifier for the batch job
try:
    if delete_batchjob:
        clients['job'].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
    print(e)

if delete_bucket and 'BUCKET_NAME' in globals():
    ! gsutil rm -r gs://$BUCKET_NAME
```
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
%matplotlib inline

df = pd.read_csv("Case Study 1 - Dataset - Personal Loan Propensity.csv")
df.head()
df.describe()
df.columns
df.isnull().sum()

lst = ['V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10',
       'V11', 'V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20',
       'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27', 'V28', 'V29', 'V30',
       'V31', 'V32']
# for item in lst:
#     sns.countplot(x="V32", hue=item, data=df)
sns.countplot(x="V32", data=df)
df.info()
df["V32"].value_counts()
baseline = 262356/282924
baseline

import datetime as DT
now = pd.Timestamp('now')
df["V3"] = pd.to_datetime(df["V3"], format='%m%d%y')
df['V3'] = df['V3'].where(df['V3'] < now, df['V3'] - np.timedelta64(100, 'Y'))  # 2
df['age'] = (now - df['V3']).astype('<m8[Y]')  # 3
print(df[["age", "V3"]])
df.columns
df.drop(["V3"], axis=1)

lst_quant = df.select_dtypes(include="number")
lst_quant.columns
lst_categorical = df.select_dtypes(include="object")
lst_categorical.columns
# for item in lst_categorical:
#     sns.countplot(x=item, data=df)
#     plt.show()
# %matplotlib inline
sns.countplot(x="V32", hue='V6', data=df)
# sns.pairplot(lst_quant)
for item in lst_categorical:
    print(item)
    print(df[item].value_counts())
    # df = pd.DataFrame({item, df[item].value_counts()})

sex = pd.get_dummies(df["V2"], drop_first=True)
residence = pd.get_dummies(df["V6"])
product = pd.get_dummies(df["V7"])
loan_type = pd.get_dummies(df["V11"])
payment = pd.get_dummies(df["V12"], drop_first=True)
df.head()
# df = df.drop(["V2", "V6", "V11", "V12"], axis=1)
# df = df.drop(["V7"], axis=1)
df = df.drop(["V1", "V4"], axis=1)
df.columns
df = pd.concat([df, sex, residence, product, loan_type, payment], axis=1)
sex.head()
residence.head()
payment.head()
loan_type.head()
product.head()
df[["V3"]].value_counts()  # was df[["V3"]].value_count (missing "s" and call parentheses)

from datetime import date
today = date.today()

def calculateAge(birthDate):
    # subtract the Timestamps first, then take .days on the resulting Timedelta
    age = (pd.to_datetime('today') - pd.to_datetime(birthDate[0])).days / 365.2425
    return age

age = df[["V3"]].apply(calculateAge, axis=1)
df.columns
len(df["V4"].values)
type(lst_categorical)
plt.subplots(figsize=(15, 15))
sns.heatmap(lst_quant.corr())
df.columns

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_curve, roc_auc_score
y = df["V32"]
X = df.drop(["V32", "V3"], axis=1)
X.columns
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)
log_model = LogisticRegression()
log_model.fit(x_train, y_train)
prediction = log_model.predict(x_test)
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(y_test, prediction))
print(accuracy_score(prediction, y_test))
0.9273020316410061
from sklearn.svm import SVC
SVC_model = SVC()
SVC_model.fit(x_train, y_train)
svc_prediction = SVC_model.predict(x_test)
print(classification_report(y_test, svc_prediction))
print(accuracy_score(y_test, svc_prediction))
print(roc_auc_score(prediction, y_test))  # was log_prediction, an undefined name
# print(roc_auc_score(svc_prediction, y_test))
from sklearn.ensemble import Ra
```
``` # tools for handling files import sys import os # pandas/numpy for handling data import pandas as pd import numpy as np from pandas import ExcelWriter from pandas import ExcelFile # seaborn/matplotlib for graphing import matplotlib.pyplot as plt from matplotlib.ticker import StrMethodFormatter from matplotlib import colors from matplotlib.ticker import PercentFormatter import seaborn as sns from ptitprince import PtitPrince as pt # statistics from statistics import mean import statsmodels.api as sm from statsmodels.formula.api import ols from scipy import stats # for reading individual telomere length data from files from ast import literal_eval # for grabbing individual cells import more_itertools chr_data = '../excel data/Chromosome_Aberrations_telodGH_unrelatedAstros+SK_complete_TeloAberr_astros125___graphs_7_17_19_227pm.xlsx' nasa_chr_data = pd.read_excel(chr_data) #pre-flight telomeric aberr data pre_f_telo_aberr = nasa_chr_data.iloc[0:90, 147:155] mid_f1_telo_aberr = nasa_chr_data.iloc[0:90, 155:163] mid_f2_telo_aberr = nasa_chr_data.iloc[0:90, 163:171] post_f_telo_aberr = nasa_chr_data.iloc[0:90, 179:187] print(pre_f_telo_aberr.columns) display( pre_f_telo_aberr.head(1), mid_f1_telo_aberr.head(1), mid_f2_telo_aberr.head(1), post_f_telo_aberr.head(1)) telo_aberr_cols = ['astro id', 'flight status', 'Cell Number', 'Image File Number', '# of Fragile Telos', '# of STL-complete', '# of STL-hetero', '# of sat associations'] pre_f_telo_aberr.columns = telo_aberr_cols mid_f1_telo_aberr.columns = telo_aberr_cols mid_f2_telo_aberr.columns = telo_aberr_cols post_f_telo_aberr.columns = telo_aberr_cols all_astro_telo_aberr = pd.concat([pre_f_telo_aberr, mid_f1_telo_aberr, mid_f2_telo_aberr, post_f_telo_aberr], axis=0, ignore_index=True) all_astro_telo_aberr = all_astro_telo_aberr.drop(['Cell Number', 'Image File Number'], axis=1) print(all_astro_telo_aberr.shape) all_astro_telo_aberr.head(4) all_astro_telo_aberr.to_csv('./excel 
data/All_astronauts_telomeric_aberrations.csv') all_astro_telo_aberr.to_excel('./excel data/All_astronauts_telomeric_aberrations.xlsx') # testing = pd.read_csv('./excel data/all_astronauts_telomeric_aberrations.csv', index_col=0) melt_all_astro_telo_aberr = pd.melt(all_astro_telo_aberr, id_vars=['astro id', 'flight status'], var_name='aberration type', value_name='count per cell') melt_all_astro_telo_aberr.head(4) melt_all_astro_telo_aberr['flight status test'] = melt_all_astro_telo_aberr['flight status'] melt_all_astro_telo_aberr.to_csv('./excel data/All_astronauts_telomeric_aberrations_tidy_data.csv') melt_all_astro_telo_aberr.to_excel('./excel data/All_astronauts_telomeric_aberrations_tidy_data.xlsx') sns.set_style(style="darkgrid",rc= {'patch.edgecolor': 'black'}) data=melt_all_astro_telo_aberr[melt_all_astro_telo_aberr['aberration type'] != '# of sat associations'] x='aberration type' y='count per cell' hue='flight status' ax1= sns.catplot(x=x, y=y, hue=hue, data=data, kind='bar', height=4, aspect=3) # ax2 = plt.twinx() data2=melt_all_astro_telo_aberr[melt_all_astro_telo_aberr['aberration type'] == '# of sat associations'] ax2= sns.catplot(x=x, y=y, data=data2, hue=hue, kind='bar', height=4, aspect=3) #pre-flight chr aberr data pre_f = nasa_chr_data.iloc[0:404, 1:15] #grabbing column names to avoid formatting errors from excel chr_aberr_cols = pre_f.columns chr_aberr_cols = ['astro id', 'flight status', 'cell number', 'image file number', 'dicentrics', 'translocations', 'inversions', 'terminal inversions', 'terminal SCEs paint cis', 'terminal SCEs dark cis', 'subtelo SCEs', 'sister chromatid exchanges', 'insertions', 'satellite associations'] pre_f.columns = chr_aberr_cols #mid-flight 1 chr aberr data mid_f_1 = nasa_chr_data.iloc[35:109, 25:39] mid_f_1.columns = chr_aberr_cols #mid-flight 2 chr aberr data mid_f_2 = nasa_chr_data.iloc[0:111, 49:63] mid_f_2.columns = chr_aberr_cols #post-flight chr aberr data post_f = nasa_chr_data.iloc[0:400, 96:110] 
post_f.columns = chr_aberr_cols all_astro_chr_aberr = pd.concat([pre_f, mid_f_1, mid_f_2, post_f], axis=0, ignore_index=True) all_astro_chr_aberr = all_astro_chr_aberr.drop(['cell number', 'image file number', 'insertions'], axis=1).dropna().reset_index(drop=True) print(all_astro_chr_aberr.columns) all_astro_chr_aberr.head(4) all_astro_chr_aberr['total inversions'] = all_astro_chr_aberr['terminal inversions'] + all_astro_chr_aberr['inversions'] all_astro_chr_aberr['terminal SCEs'] = all_astro_chr_aberr['terminal SCEs paint cis'] + all_astro_chr_aberr['terminal SCEs dark cis'] combine_inv_termSCEs_all_astro_chr_aberr = all_astro_chr_aberr.drop(['terminal inversions', 'inversions', 'terminal SCEs paint cis', 'terminal SCEs dark cis'], axis=1) # all_astro_chr_aberr melt_all_astro_chr_aberr = pd.melt(combine_inv_termSCEs_all_astro_chr_aberr, id_vars=['astro id', 'flight status'], var_name='aberration type', value_name='count per cell') melt_all_astro_chr_aberr.head(4) order_cat=['dicentrics', 'translocations', 'total inversions', 'terminal SCEs', 'sister chromatid exchanges', 'subtelo SCEs', 'satellite associations'] ax = sns.set(font_scale=1) ax = sns.set_style(style="darkgrid",rc= {'patch.edgecolor': 'black'}) ax = sns.catplot(x='aberration type', y='count per cell', hue='flight status', kind='bar', order=order_cat, orient='v', height=4, aspect=3, data=melt_all_astro_chr_aberr) plt.title('chr aberr by subtelo dgh: 11 astros, pre-, mid1/2-, post-', fontsize=16) # height=10, aspect=2.5, mid_combined1 = all_astro_chr_aberr.replace('mid-flight 1', 'mid-flight') mid_combined = mid_combined1.replace('mid-flight 2', 'mid-flight') # int64 to avoid making a str ending w/ .0, i.e 2171.0 mid_combined['astro id'] = mid_combined['astro id'].astype('int64') # then to str to enable string matching mid_combined['astro id'] = mid_combined['astro id'].astype('str') mid_combined = pd.melt(mid_combined, id_vars=['astro id', 'flight status'], var_name='aberration type', value_name='count 
per cell') # ax = sns.set(font_scale=1) # ax = sns.set_style(style="darkgrid",rc= {'patch.edgecolor': 'black'}) # ax = sns.catplot(x='aberration type', y='count per cell', # hue='flight status', kind='bar', order=order_cat, # orient='v', height=4, aspect=3, data=mid_combined) # plt.title('chr aberr by subtelo dgh: 11 astros, pre-, mid-, post-', fontsize=16) mid_flight_only_astros = mid_combined[(mid_combined['astro id'] == '2171') | (mid_combined['astro id'] == '1536') | (mid_combined['astro id'] == '5163')] print(mid_flight_only_astros[(mid_flight_only_astros['flight status'] == 'pre-flight') & (mid_flight_only_astros['aberration type'] == 'total inversions')]['count per cell'].mean(axis=0), mid_flight_only_astros[(mid_flight_only_astros['flight status'] == 'mid-flight') & (mid_flight_only_astros['aberration type'] == 'total inversions')]['count per cell'].mean(axis=0)) # define the two groups compared above; pre and mid were undefined in the original cell pre = mid_flight_only_astros[(mid_flight_only_astros['flight status'] == 'pre-flight') & (mid_flight_only_astros['aberration type'] == 'total inversions')]['count per cell'] mid = mid_flight_only_astros[(mid_flight_only_astros['flight status'] == 'mid-flight') & (mid_flight_only_astros['aberration type'] == 'total inversions')]['count per cell'] stats.ttest_ind(pre, mid, equal_var=True, axis=0) mid_combined['aberration type'].unique() order_cat=['dicentrics', 'translocations', 'total inversions', 'terminal SCEs', 'sister chromatid exchanges', 'subtelo SCEs', 'satellite associations'] ax = sns.set(font_scale=1) ax = sns.set_style(style="darkgrid",rc= {'patch.edgecolor': 'black'}) ax = sns.catplot(x='aberration type', y='count per cell', hue='flight status', kind='bar', order=order_cat, orient='v', height=4, aspect=3, data=mid_flight_only_astros) plt.title('chr aberr by subtelo dgh: 3 astros, pre-, mid-, post-', fontsize=16) mid_flight_removed = mid_combined[mid_combined['flight status'] != 'mid-flight'] ax = sns.set(font_scale=1) ax = sns.set_style(style="darkgrid",rc= {'patch.edgecolor': 'black'}) ax = sns.catplot(x='aberration type', y='count per cell', hue='flight status', kind='bar', order=order_cat, orient='v', height=4, aspect=3, data=mid_flight_removed) plt.title('chr aberr by subtelo dgh: 11 astros, pre-, post-', fontsize=16) all_astro_chr_aberr.to_csv('./excel data/All_astronauts_chromosome_aberration_data.csv') 
all_astro_chr_aberr.to_excel('./excel data/All_astronauts_chromosome_aberration_data.xlsx') melt_all_astro_chr_aberr.to_csv('./excel data/All_astronauts_chromosome_aberration_data_tidy_data.csv') melt_all_astro_chr_aberr.to_excel('./excel data/All_astronauts_chromosome_aberration_data_tidy_data.xlsx') ```
[[source]](../api/alibi.explainers.anchor_tabular.rst) # Anchors ## Overview The anchor algorithm is based on the [Anchors: High-Precision Model-Agnostic Explanations](https://homes.cs.washington.edu/~marcotcr/aaai18.pdf) paper by Ribeiro et al. and builds on the open source [code](https://github.com/marcotcr/anchor) from the paper's first author. The algorithm provides model-agnostic (*black box*) and human interpretable explanations suitable for classification models applied to images, text and tabular data. The idea behind anchors is to explain the behaviour of complex models with high-precision rules called *anchors*. These anchors are locally sufficient conditions to ensure a certain prediction with a high degree of confidence. Anchors address a key shortcoming of local explanation methods like [LIME](https://arxiv.org/abs/1602.04938) which proxy the local behaviour of the model in a linear way. It is however unclear to what extent the explanation holds up in the region around the instance to be explained, since both the model and data can exhibit non-linear behaviour in the neighborhood of the instance. This approach can easily lead to overconfidence in the explanation and misleading conclusions on unseen but similar instances. The anchor algorithm tackles this issue by incorporating coverage, the region where the explanation applies, into the optimization problem. A simple example from sentiment classification illustrates this (Figure 1). Dependent on the sentence, the occurrence of the word *not* is interpreted as positive or negative for the sentiment by LIME. It is clear that the explanation using *not* is very local. Anchors however aim to maximize the coverage, and require *not* to occur together with *good* or *bad* to ensure respectively negative or positive sentiment. 
![LIMEsentiment](lime_sentiment.png) Ribeiro et al., *Anchors: High-Precision Model-Agnostic Explanations*, 2018 As highlighted by the above example, an anchor explanation consists of *if-then rules*, called the anchors, which sufficiently guarantee the explanation locally and try to maximize the area for which the explanation holds. This means that as long as the anchor holds, the prediction should remain the same regardless of the values of the features not present in the anchor. Going back to the sentiment example: as long as *not good* is present, the sentiment is negative, regardless of the other words in the movie review. ### Text For text classification, an interpretable anchor consists of the words that need to be present to ensure a prediction, regardless of the other words in the input. The words that are not present in a candidate anchor can be sampled in 2 ways: * Replace word token by UNK token. * Replace word token by sampled token from a corpus with the same POS tag and probability proportional to the similarity in the embedding space. By sampling similar words, we keep more context than simply using the UNK token. ### Tabular Data Anchors are also suitable for tabular data with both categorical and continuous features. The continuous features are discretized into quantiles (e.g. deciles), so they become more interpretable. The features in a candidate anchor are kept constant (same category or bin for discretized features) while we sample the other features from a training set. As a result, anchors for tabular data need access to training data. Let's illustrate this with an example. Say we want to predict whether a person makes less or more than £50,000 per year based on the person's characteristics including age (continuous variable) and marital status (categorical variable). The following would then be a potential anchor: Hugo makes more than £50,000 because he is married and his age is between 35 and 45 years. 
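The discretization step described above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not alibi's internal code; the `age` array and decile choice are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
age = rng.integers(18, 90, size=1000)  # toy continuous feature, e.g. age

# decile edges estimated from the data (9 internal cut points)
edges = np.percentile(age, np.arange(10, 100, 10))

# each value becomes the index of its decile bin (0-9);
# an anchor can then fix a bin rather than an exact value
bins = np.digitize(age, edges)
```

Once the bin edges are translated back to raw values, an anchor condition on a continuous feature reads like the example above, e.g. "age is between 35 and 45".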
### Images Similar to LIME, images are first segmented into superpixels, maintaining local image structure. The interpretable representation then consists of the presence or absence of each superpixel in the anchor. It is crucial to generate meaningful superpixels in order to arrive at interpretable explanations. The algorithm supports a number of standard image segmentation algorithms ([felzenszwalb, slic and quickshift](https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_segmentations.html#sphx-glr-auto-examples-segmentation-plot-segmentations-py)) and allows the user to provide a custom segmentation function. The superpixels not present in a candidate anchor can be masked in 2 ways: * Take the average value of that superpixel. * Use the pixel values of a superimposed picture over the masked superpixels. ![anchorimage](anchor_image.png) Ribeiro et al., *Anchors: High-Precision Model-Agnostic Explanations*, 2018 ### Efficiently Computing Anchors The anchor needs to return the same prediction as the original instance with a minimal confidence of e.g. 95%. If multiple candidate anchors satisfy this constraint, we go with the anchor that has the largest coverage. Because the number of potential anchors is exponential in the feature space, we need a faster approximate solution. The anchors are constructed bottom-up in combination with [beam search](https://en.wikipedia.org/wiki/Beam_search). We start with an empty rule or anchor, and incrementally add an *if-then* rule in each iteration until the minimal confidence constraint is satisfied. If multiple valid anchors are found, the one with the largest coverage is returned. In order to select the best candidate anchors for the beam width efficiently during each iteration, we formulate the problem as a [pure exploration multi-armed bandit](https://www.cse.iitb.ac.in/~shivaram/papers/kk_colt_2013.pdf) problem. This limits the number of model prediction calls which can be a computational bottleneck. 
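The two quantities being traded off — precision and coverage — can be estimated from perturbed samples. The helper below is a toy sketch under made-up names (it is not how alibi computes them internally, and it ignores the multi-armed bandit machinery): it counts how often a candidate anchor applies, and how often the prediction is preserved when it does.

```python
def anchor_stats(samples, anchor_holds, predict, original_pred):
    """Estimate precision and coverage of a candidate anchor from samples.

    precision: fraction of covered samples that keep the original prediction.
    coverage:  fraction of all samples the anchor applies to.
    """
    covered = [s for s in samples if anchor_holds(s)]
    coverage = len(covered) / len(samples)
    if not covered:
        return 0.0, coverage
    precision = sum(predict(s) == original_pred for s in covered) / len(covered)
    return precision, coverage

# toy 1-d example: anchor "x >= 5", model predicts 1 iff x >= 7
samples = list(range(10))
precision, coverage = anchor_stats(samples, lambda x: x >= 5, lambda x: int(x >= 7), 1)
print(precision, coverage)  # 0.6 0.5
```

A valid anchor is one whose estimated precision clears the confidence threshold; among those, the one with the largest coverage wins.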
For more details, we refer the reader to the original [paper](https://homes.cs.washington.edu/~marcotcr/aaai18.pdf). ## Usage While each data type has specific requirements to initialize the explainer and return explanations, the underlying algorithm to construct the anchors is the same. In order to efficiently generate anchors, the following hyperparameters need to be set to sensible values when calling the `explain` method: * `threshold`: the previously discussed minimal confidence level. `threshold` defines the minimum fraction of samples for a candidate anchor that need to lead to the same prediction as the original instance. A higher value gives more confidence in the anchor, but also leads to more computation time. The default value is 0.95. * `tau`: determines when we assume convergence for the multi-armed bandit. A bigger value for `tau` means faster convergence but also looser anchor conditions. By default equal to 0.15. * `beam_size`: the size of the beam width. A bigger beam width can lead to a better overall anchor at the expense of more computation time. * `batch_size`: the batch size used for sampling. A bigger batch size gives more confidence in the anchor, again at the expense of computation time since it involves more model prediction calls. The default value is 100. * `coverage_samples`: number of samples used to compute the coverage of the anchor. By default set to 10000. ### Text #### Initialization Since the explainer works on black box models, only access to a predict function is needed. 
The model below is a simple logistic regression trained on movie reviews with negative or positive sentiment and pre-processed with a CountVectorizer: ```python predict_fn = lambda x: clf.predict(vectorizer.transform(x)) ``` If we choose to sample similar words from a corpus, we first need to load a spaCy model: ```python import spacy from alibi.utils.download import spacy_model model = 'en_core_web_md' spacy_model(model=model) nlp = spacy.load(model) ``` We can now initialize our explainer: ```python explainer = AnchorText(nlp, predict_fn) ``` #### Explanation Let's define the instance we want to explain and verify that the sentiment prediction on the original instance is positive: ```python text = 'This is a good book .' class_names = ['negative', 'positive'] pred = class_names[predict_fn([text])[0]] ``` Now we can explain the instance: ```python explanation = explainer.explain(text, threshold=0.95, use_similarity_proba=False, use_unk=True, sample_proba=0.5) ``` We set the confidence `threshold` at 95%. `use_unk` equals True means that we replace words outside of the candidate anchor with UNK tokens with a sample probability equal to `sample_proba`. Instead of using UNK tokens, we can sample from the `top_n` similar words to the ground truth word in the corpus by setting `use_unk` to False. ```python explanation = explainer.explain(text, threshold=0.95, use_unk=False, sample_proba=0.5, top_n=100) ``` It is also possible to sample words from the corpus proportional to the word similarity with the ground truth word by setting `use_similarity_proba` to True and `use_unk` to False. We can put more weight on similar words by decreasing the `temperature` argument. The following explanation perturbs original tokens with probability equal to `sample_proba`. The perturbed tokens are then sampled from the `top_n` most similar tokens in the corpus with sample probability proportional to the word similarity with the original token. 
```python
explanation = explainer.explain(text, threshold=0.95, use_similarity_proba=True,
                                use_unk=False, sample_proba=0.5, top_n=20,
                                temperature=0.2)
```

The `explain` method returns an `Explanation` object with the following attributes:

* *anchor*: a list of words in the anchor.
* *precision*: the fraction of sampled instances on which the anchor holds that yield the same prediction as the original instance. The precision will always be $\geq$ `threshold` for a valid anchor.
* *coverage*: the fraction of sampled instances the anchor applies to.

The *raw* attribute is a dictionary which also contains example instances where the anchor holds and the prediction is the same as on the original instance, as well as examples where the anchor holds but the prediction changed, to give the user a sense of where the anchor fails. *raw* also stores information on the *anchor*, *precision* and *coverage* of partial anchors. This allows the user to track the improvement in, for instance, the *precision* as more features (words in the case of text) are added to the anchor.

### Tabular Data

#### Initialization and fit

To initialize the explainer, we provide a predict function, a list with the feature names to make the anchors easy to understand as well as an optional mapping from the encoded categorical features to a description of the category. An example for `categorical_names` would be *category_map = {0: ["married", "divorced"], 3: ["high school diploma", "master's degree"]}*. Each key in *category_map* refers to the column index in the input for the relevant categorical variable, while the values are lists with the options for each categorical variable.
To make it easy, we provide a utility function `gen_category_map` to generate this map automatically from a Pandas dataframe: ```python from alibi.utils.data import gen_category_map category_map = gen_category_map(df) ``` Then initialize the explainer: ```python predict_fn = lambda x: clf.predict(preprocessor.transform(x)) explainer = AnchorTabular(predict_fn, feature_names, categorical_names=category_map) ``` Tabular data requires a fit step to map the ordinal features into quantiles and therefore needs access to a representative set of the training data. `disc_perc` is a list with percentiles used for binning: ```python explainer.fit(X_train, disc_perc=[25, 50, 75]) ``` #### Explanation Let's check the prediction of the model on the original instance and explain: ```python class_names = ['<=50K', '>50K'] pred = class_names[explainer.predict_fn(X)[0]] explanation = explainer.explain(X, threshold=0.95) ``` The returned `Explanation` object contains the same attributes as the text explainer, so you could explain a prediction as follows: ``` Prediction: <=50K Anchor: Marital Status = Never-Married AND Relationship = Own-child Precision: 1.00 Coverage: 0.13 ``` ### Images #### Initialization Besides the predict function, we also need to specify either a built in or custom superpixel segmentation function. The built in methods are [felzenszwalb](https://scikit-image.org/docs/dev/api/skimage.segmentation.html#skimage.segmentation.felzenszwalb), [slic](https://scikit-image.org/docs/dev/api/skimage.segmentation.html#skimage.segmentation.slic) and [quickshift](https://scikit-image.org/docs/dev/api/skimage.segmentation.html#skimage.segmentation.quickshift). It is important to create sensible superpixels in order to speed up convergence and generate interpretable explanations. Tuning the hyperparameters of the segmentation method is recommended. 
```python explainer = AnchorImage(predict_fn, image_shape, segmentation_fn='slic', segmentation_kwargs={'n_segments': 15, 'compactness': 20, 'sigma': .5}, images_background=None) ``` Example of superpixels generated for the Persian cat picture using the *slic* method: ![persiancat](persiancat.png) ![persiancatsegm](persiancatsegm.png) The following function would be an example of a custom segmentation function dividing the image into rectangles. ```python def superpixel(image, size=(4, 7)): segments = np.zeros([image.shape[0], image.shape[1]]) row_idx, col_idx = np.where(segments == 0) for i, j in zip(row_idx, col_idx): segments[i, j] = int((image.shape[1]/size[1]) * (i//size[0]) + j//size[1]) return segments ``` The `images_background` parameter allows the user to provide images used to superimpose on the masked superpixels, not present in the candidate anchor, instead of taking the average value of the masked superpixel. The superimposed images need to have the same shape as the explained instance. #### Explanation We can then explain the instance in the usual way: ```python explanation = explainer.explain(image, p_sample=.5) ``` `p_sample` determines the fraction of superpixels that are either changed to the average superpixel value or that are superimposed. The `Explanation` object again contains information about the anchor's *precision*, *coverage* and examples where the anchor does or does not hold. On top of that, it also contains a masked image with only the anchor superpixels visible under the *anchor* attribute (see image below) as well as the image's superpixels under *segments*. 
![persiancatanchor](persiancatanchor.png) ## Examples ### Image [Anchor explanations for ImageNet](../examples/anchor_image_imagenet.nblink) [Anchor explanations for fashion MNIST](../examples/anchor_image_fashion_mnist.nblink) ### Tabular Data [Anchor explanations on the Iris dataset](../examples/anchor_tabular_iris.nblink) [Anchor explanations for income prediction](../examples/anchor_tabular_adult.nblink) ### Text [Anchor explanations for movie sentiment](../examples/anchor_text_movie.nblink)
# [Module 5.1] Developing a Model Building Pipeline with HPO (All Steps of the SageMaker Model Building Pipeline)

This notebook proceeds through the following sections. Running everything end to end takes about **30 minutes**.

- 0. Overview of the SageMaker Model Building Pipeline
- 1. Pipeline variables and environment setup
- 2. Defining the pipeline steps
    - (1) Preprocessing step
    - (2) Training step for model training
    - (3) Model evaluation step
    - (4) Model registration step
    - (5) SageMaker model creation step
    - (6) HPO step
    - (7) Condition step
- 3. Defining and running the model building pipeline
- 4. Running the pipeline with caching and parameters
- 5. Cleanup

---

# 0. Overview of the SageMaker Model Building Pipeline
- Refer to the previous notebook if needed: scratch/8.5.All-Pipeline.ipynb

# 1. Pipeline Variables and Environment Setup
```
import boto3
import sagemaker
import pandas as pd

region = boto3.Session().region_name
sagemaker_session = sagemaker.session.Session()
role = sagemaker.get_execution_role()
sm_client = boto3.client('sagemaker', region_name=region)

%store -r
```
## Pipeline Variable Setup
```
from sagemaker.workflow.parameters import (
    ParameterInteger,
    ParameterString,
    ParameterFloat,
)

processing_instance_count = ParameterInteger(
    name="ProcessingInstanceCount",
    default_value=1
)
processing_instance_type = ParameterString(
    name="ProcessingInstanceType",
    default_value="ml.m5.xlarge"
)
training_instance_type = ParameterString(
    name="TrainingInstanceType",
    default_value="ml.m5.xlarge"
)
training_instance_count = ParameterInteger(
    name="TrainingInstanceCount",
    default_value=1
)
model_eval_threshold = ParameterFloat(
    name="model2eval2threshold",
    default_value=0.85
)
input_data = ParameterString(
    name="InputData",
    default_value=input_data_uri,
)
model_approval_status = ParameterString(
    name="ModelApprovalStatus", default_value="PendingManualApproval"
)
```
## Cache Definition
- Reference: [Caching Pipeline Steps](https://docs.aws.amazon.com/ko_kr/sagemaker/latest/dg/pipelines-caching.html)
```
from sagemaker.workflow.steps import CacheConfig

cache_config = CacheConfig(enable_caching=True, expire_after="7d")
```
# 2. Defining the Pipeline Steps

# (1) Preprocessing Step
- Preprocesses the input data given by input_data_uri.
```
from sagemaker.sklearn.processing import SKLearnProcessor

split_rate = 0.2
framework_version = "0.23-1"

sklearn_processor = SKLearnProcessor(
    framework_version=framework_version,
    instance_type=processing_instance_type,
    instance_count=processing_instance_count,
    base_job_name="sklearn-fraud-process",
    role=role,
)
print("input_data: \n", input_data)

from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.workflow.steps import ProcessingStep

step_process = ProcessingStep(
    name="FraudScratchProcess",
    processor=sklearn_processor,
    inputs=[
        # ProcessingInput(source=input_data_uri, destination='/opt/ml/processing/input'),
        ProcessingInput(source=input_data, destination='/opt/ml/processing/input'),
    ],
    outputs=[ProcessingOutput(output_name="train", source='/opt/ml/processing/output/train'),
             ProcessingOutput(output_name="test", source='/opt/ml/processing/output/test')],
    job_arguments=["--split_rate", f"{split_rate}"],
    code='src/preprocessing.py',
    cache_config=cache_config,  # cache definition
)
```
## (2) Defining the Training Step for Model Training

### Basic training variables and hyperparameter settings
```
from sagemaker.xgboost.estimator import XGBoost

bucket = sagemaker_session.default_bucket()
prefix = 'fraud2train'
estimator_output_path = f's3://{bucket}/{prefix}/training_jobs'

base_hyperparameters = {
    "scale_pos_weight": "29",
    "max_depth": "6",
    "alpha": "0",
    "eta": "0.3",
    "min_child_weight": "1",
    "objective": "binary:logistic",
    "num_round": "100",
}

xgb_train = XGBoost(
    entry_point="xgboost_script.py",
    source_dir="src",
    output_path=estimator_output_path,
    code_location=estimator_output_path,
    hyperparameters=base_hyperparameters,
    role=role,
    instance_count=training_instance_count,
    instance_type=training_instance_type,
    framework_version="1.0-1")
```
The training input is provided by the output of the previous preprocessing step.
- `step_process.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri`
```
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import TrainingStep

step_train = TrainingStep(
    name="FraudScratchTrain",
    estimator=xgb_train,
    inputs={
        "train": TrainingInput(
            s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
                "train"
            ].S3Output.S3Uri,
            # s3_data= train_preproc_dir_artifact,
            content_type="text/csv"
        ),
    },
    cache_config=cache_config,  # cache definition
)
```
## (3) Model Evaluation Step

### Specifying the base Docker container for the ScriptProcessor
Scikit-learn is used as the base image for the ScriptProcessor's Docker container.
- A user-defined Docker container can also be used.
```
from sagemaker.processing import ScriptProcessor

script_eval = SKLearnProcessor(
    framework_version="0.23-1",
    role=role,
    instance_type=processing_instance_type,
    instance_count=1,
    base_job_name="script-fraud-scratch-eval",
)

from sagemaker.workflow.properties import PropertyFile
from sagemaker.workflow.steps import ProcessingStep

evaluation_report = PropertyFile(
    name="EvaluationReport", output_name="evaluation", path="evaluation.json"
)

step_eval = ProcessingStep(
    name="FraudEval",
    processor=script_eval,
    inputs=[
        ProcessingInput(
            source=step_train.properties.ModelArtifacts.S3ModelArtifacts,
            destination="/opt/ml/processing/model"
        ),
        ProcessingInput(
            source=step_process.properties.ProcessingOutputConfig.Outputs[
                "test"
            ].S3Output.S3Uri,
            destination="/opt/ml/processing/test"
        )
    ],
    outputs=[
        ProcessingOutput(output_name="evaluation", source="/opt/ml/processing/evaluation"),
    ],
    code="src/evaluation.py",
    cache_config=cache_config,  # cache definition
    property_files=[evaluation_report],  # currently this line causes an error
)
```
## (4) Model Registration Step

### Creating a model group
- References
    - Model group listing API: [ListModelPackageGroups](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ListModelPackageGroups.html)
    - Registering model metrics: [Model Quality Metrics](https://docs.aws.amazon.com/ko_kr/sagemaker/latest/dg/model-monitor-model-quality-metrics.html)
```
model_package_group_name = f"{project_prefix}"
model_package_group_input_dict = {
    "ModelPackageGroupName": model_package_group_name,
    "ModelPackageGroupDescription": "Sample model package group"
}

response = sm_client.list_model_package_groups(NameContains=model_package_group_name)
if len(response['ModelPackageGroupSummaryList']) == 0:
    print("No model group exists")
    print("Create model group")

    create_model_package_group_response = sm_client.create_model_package_group(**model_package_group_input_dict)
    print('ModelPackageGroup Arn : {}'.format(create_model_package_group_response['ModelPackageGroupArn']))
else:
    print(f"{model_package_group_name} exists")

from sagemaker.workflow.step_collections import RegisterModel
from sagemaker.model_metrics import MetricsSource, ModelMetrics

model_metrics = ModelMetrics(
    model_statistics=MetricsSource(
        s3_uri="{}/evaluation.json".format(
            step_eval.arguments["ProcessingOutputConfig"]["Outputs"][0]["S3Output"]["S3Uri"]
        ),
        content_type="application/json"
    )
)

step_register = RegisterModel(
    name="FraudScratcRegisterhModel",
    estimator=xgb_train,
    image_uri=step_train.properties.AlgorithmSpecification.TrainingImage,
    model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
    content_types=["text/csv"],
    response_types=["text/csv"],
    inference_instances=["ml.t2.medium", "ml.m5.xlarge"],
    transform_instances=["ml.m5.xlarge"],
    model_package_group_name=model_package_group_name,
    approval_status=model_approval_status,
    model_metrics=model_metrics,
)
```
## (5) Creating the SageMaker Model Step
- The two parameters below take their inputs from the results of the previous step.
- image_uri= step_train.properties.AlgorithmSpecification.TrainingImage,
- model_data= step_train.properties.ModelArtifacts.S3ModelArtifacts,
```
from sagemaker.model import Model

model = Model(
    image_uri=step_train.properties.AlgorithmSpecification.TrainingImage,
    model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
    sagemaker_session=sagemaker_session,
    role=role,
)

from sagemaker.inputs import CreateModelInput
from sagemaker.workflow.steps import CreateModelStep

inputs = CreateModelInput(
    instance_type="ml.m5.large",
    # accelerator_type="ml.eia1.medium",
)
step_create_model = CreateModelStep(
    name="FraudScratchModel",
    model=model,
    inputs=inputs,
)
```
## (6) HPO Step
```
from sagemaker.tuner import (
    IntegerParameter,
    CategoricalParameter,
    ContinuousParameter,
    HyperparameterTuner,
)

hyperparameter_ranges = {
    "eta": ContinuousParameter(0, 1),
    "min_child_weight": ContinuousParameter(1, 10),
    "alpha": ContinuousParameter(0, 2),
    "max_depth": IntegerParameter(1, 10),
}

objective_metric_name = "validation:auc"

tuner = HyperparameterTuner(
    xgb_train, objective_metric_name, hyperparameter_ranges,
    max_jobs=5,
    max_parallel_jobs=5,
)

from sagemaker.workflow.steps import TuningStep

step_tuning = TuningStep(
    name="HPTuning",
    tuner=tuner,
    inputs={
        "train": TrainingInput(
            s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
                "train"
            ].S3Output.S3Uri,
            # s3_data= train_preproc_dir_artifact,
            content_type="text/csv"
        ),
    },
    cache_config=cache_config,  # cache definition
)
```
## (7) Condition Step
```
from sagemaker.workflow.conditions import ConditionLessThanOrEqualTo
from sagemaker.workflow.condition_step import (
    ConditionStep,
    JsonGet,
)

cond_lte = ConditionLessThanOrEqualTo(
    left=JsonGet(
        step=step_eval,
        property_file=evaluation_report,
        json_path="binary_classification_metrics.auc.value",
    ),
    # right=8.0
    right=model_eval_threshold
)

step_cond = ConditionStep(
    name="FruadScratchCond",
    conditions=[cond_lte],
    if_steps=[step_tuning],
    else_steps=[step_register,
                step_create_model],
)
```
# 3. Defining and Running the Model Building Pipeline

We define the pipeline with the four steps defined above.
- steps=[step_process, step_train, step_create_model, step_deploy],
- The following takes about 20 minutes.
```
from sagemaker.workflow.pipeline import Pipeline

project_prefix = 'sagemaker-pipeline-phase2-step-by-step'

pipeline_name = project_prefix
pipeline = Pipeline(
    name=pipeline_name,
    parameters=[
        processing_instance_type,
        processing_instance_count,
        training_instance_type,
        training_instance_count,
        input_data,
        model_eval_threshold,
        model_approval_status,
    ],
    # steps=[step_process, step_train, step_register, step_eval, step_cond],
    steps=[step_process, step_train, step_eval, step_cond],
)

import json

definition = json.loads(pipeline.definition())
# definition
```
### Submitting the pipeline to SageMaker and running it
```
pipeline.upsert(role_arn=role)
```
Run the pipeline using the default values.
```
execution = pipeline.start()
```
### Pipeline operations: waiting for the pipeline and checking the execution status

Let's look at the execution status of the workflow.
```
execution.describe()
execution.wait()
```
Wait until the execution completes.

List the executed steps. This shows the steps started or completed by the pipeline's step execution service.
```
execution.list_steps()
```
# 4. Running the Pipeline with Caching and Parameters
- As of July 2021, caching applies to the Training, Processing, and Transform steps.
- For details, see: [Caching Pipeline Steps](https://docs.aws.amazon.com/ko_kr/sagemaker/latest/dg/pipelines-caching.html)
```
is_cache = True

%%time

from IPython.display import display as dp
import time

if is_cache:
    execution = pipeline.start(
        parameters=dict(
            model2eval2threshold=0.8,
        )
    )
    # execution = pipeline.start()
    time.sleep(10)
    dp(execution.list_steps())
    execution.wait()

if is_cache:
    dp(execution.list_steps())
```
### Chapter 21
## Saving and Loading Trained Models

### 21.0 Introduction

In the last 20 chapters and around 200 recipes, we have covered how to take raw data and use machine learning to create well-performing predictive models. However, for all our work to be worthwhile we eventually need to do something with our model, such as integrating it with an existing software application. To accomplish this goal, we need to be able to both save our models after training and load them when they are needed by an application. This is the focus of the final chapter.

### 21.1 Saving and Loading a scikit-learn Model

#### Problem
You have trained a scikit-learn model and want to save it and load it elsewhere.

#### Solution
Save the model as a pickle file:
```
# load libraries
from sklearn.ensemble import RandomForestClassifier
from sklearn import datasets
from sklearn.externals import joblib

# load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# create random forest classifier object
classifier = RandomForestClassifier()

# train model
model = classifier.fit(features, target)

# save model as pickle file
joblib.dump(model, "model.pkl")
```
Once the model is saved we can use scikit-learn in our destination application (e.g., web application) to load the model:
```
# load model from file
classifier = joblib.load("model.pkl")
```
And use it to make predictions:
```
# create new observation
new_observation = [[5.2, 3.2, 1.1, 0.1]]

# predict observation's class
classifier.predict(new_observation)
```
### Discussion
The first step in using a model in production is to save that model as a file that can be loaded by another application or workflow. We can accomplish this by saving the model as a pickle file, a Python-specific data format. Specifically, to save the model we use `joblib`, which is a library extending pickle for cases when we have large NumPy arrays--a common occurrence for trained models in scikit-learn.
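The roundtrip behaviour `joblib` builds on can be sketched with the standard library alone. The `ThresholdModel` class below is a made-up stand-in for a trained estimator; this is an illustration of the pickle protocol, not of scikit-learn internals:

```python
import pickle

class ThresholdModel:
    """A stand-in for a trained estimator: any picklable object behaves the same way."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, xs):
        return [int(x >= self.threshold) for x in xs]

model = ThresholdModel(0.5)
blob = pickle.dumps(model)      # roughly what joblib.dump writes (plus NumPy optimizations)
restored = pickle.loads(blob)   # roughly what joblib.load reconstructs

print(restored.predict([0.2, 0.9]))  # [0, 1] -- identical to the original model
```

The loaded object reproduces the original's predictions exactly, which is the property the destination application relies on.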
When saving scikit-learn models, be aware that saved models might not be compatible between versions of scikit-learn; therefore, it can be helpful to include the version of scikit-learn used in the model in the filename:
```
# import library
import sklearn

# get scikit-learn version
scikit_version = sklearn.__version__

# save model as pickle file
joblib.dump(model, "model_{version}.pkl".format(version=scikit_version))
```
### 21.2 Saving and Loading a Keras Model

#### Problem
You have a trained Keras model and want to save it and load it elsewhere.

#### Solution
Save the model as HDF5:
```
# load libraries
import numpy as np
from keras.datasets import imdb
from keras.preprocessing.text import Tokenizer
from keras import models
from keras import layers
from keras.models import load_model

# set random seed
np.random.seed(0)

# set the number of features we want
number_of_features = 1000

# load data and target vector from movie review data
(train_data, train_target), (test_data, test_target) = imdb.load_data(num_words=number_of_features)

# convert movie review data to a one-hot encoded feature matrix
tokenizer = Tokenizer(num_words=number_of_features)
train_features = tokenizer.sequences_to_matrix(train_data, mode="binary")
test_features = tokenizer.sequences_to_matrix(test_data, mode="binary")

# start neural network
network = models.Sequential()

# add fully connected layer with ReLU activation function
network.add(layers.Dense(units=16, activation="relu", input_shape=(number_of_features,)))

# add fully connected layer with a sigmoid activation function
network.add(layers.Dense(units=1, activation="sigmoid"))

# compile neural network
network.compile(loss="binary_crossentropy",
                optimizer="rmsprop",
                metrics=["accuracy"])

# train neural network
history = network.fit(train_features,
                      train_target,
                      epochs=3,
                      verbose=0,
                      batch_size=100,
                      validation_data=(test_features, test_target))

# save neural network
network.save("model.h5")
```
We can then load the model either in another
application or for additional training:
```
# load neural network
network = load_model("model.h5")
```
#### Discussion
Unlike scikit-learn, Keras does not recommend you save models using pickle. Instead, models are saved as an HDF5 file. The HDF5 file contains everything you need to not only load the model to make predictions (i.e., architecture and trained parameters), but also to restart training (i.e., loss and optimizer settings and the current state).
```
from grid_search_tools import GSTools
from ptstrategy_cointegration_kalman import CointKalmanStrategy
from custom_analyzer import Metrics
from pandas_datafeed import PandasData
from pair_selector import *

import backtrader as bt
import backtrader.feeds as btfeeds
import pandas as pd
import warnings
import glob
import os
import uuid
import itertools
import json

%load_ext autoreload
%autoreload 2

# INPUT PARAMETERS
DIR = "../ib-data/nyse-daily-tech/"
BT_START_DT = '2018-03-19'
TEST_PERIOD = 200
PRE_PAIR_FORMATION = 252 + 252 + 252 + 52 - 60 - 52
PAIR_FORMATION_LEN = 60

# top PCT percentage of the pairs with lowest distance will be backtested
PCT = 0.9

# STRATEGY PARAMETERS
ENTER_THRESHOLD_SIZE = [2]
EXIT_THRESHOLD_SIZE = [0.25, 0.5]
LOSS_LIMIT = [-0.005]
MAX_LOOKBACK = 52
CONSIDER_BORROW_COST = False
CONSIDER_COMMISSION = True

# ADDITIONAL INFO
OTHER_INFO = ""

# where to save the outputs
DST_DIR = "../backtest-results/cointegration-kalman/experiment3-y3/"

CONFIG = {
    'DIR': DIR,
    'BT_START_DT': BT_START_DT,
    'TEST_PERIOD': TEST_PERIOD,
    'PAIR_FORMATION_LEN': PAIR_FORMATION_LEN,
    'PCT': PCT,
    'ENTER_THRESHOLD_SIZE': ENTER_THRESHOLD_SIZE,
    'EXIT_THRESHOLD_SIZE': EXIT_THRESHOLD_SIZE,
    'LOSS_LIMIT': LOSS_LIMIT,
    'MAX_LOOKBACK': MAX_LOOKBACK,
    'CONSIDER_BORROW_COST': CONSIDER_BORROW_COST,
    'CONSIDER_COMMISSION': CONSIDER_COMMISSION,
    'DST_DIR': DST_DIR,
    'OTHER_INFO': OTHER_INFO,
}

# create json string
CONFIG_JSON_STR = json.dumps(CONFIG)

# create directory if necessary
if not os.path.exists(DST_DIR):
    os.makedirs(DST_DIR)

# save json string to a file
with open(DST_DIR + 'config.json', 'w') as outfile:
    json.dump(CONFIG_JSON_STR, outfile)

print("---------------------------------------------------------------------")

###################################################################################################################
# Load data
data = GSTools.load_csv_files(DIR)
dt_idx = GSTools.get_trading_dates(data)
print("Initial number of datafeeds: " + str(len(dt_idx)) + ".")
################################################################################################################### # get position of intended start date of backtest bt_start_idx = dt_idx.get_loc(BT_START_DT) size = PRE_PAIR_FORMATION + PAIR_FORMATION_LEN + MAX_LOOKBACK + (len(dt_idx) - bt_start_idx) print("To fulfill BT_START_DT, PAIR_FORMATION_LEN and MAX_LOOKBACK, size = " + str(size) + ".") # get datafeeds which fulfill size requirement data = GSTools.cut_datafeeds(data, size=size) print("After cutting datafeeds, " + str(len(data.keys())) + " datafeeds remaining.") ################################################################################################################### # just to be safe, sync the start end dates of the dataframes data, start_dt, end_dt = GSTools.sync_start_end(data) dt_idx = GSTools.get_trading_dates(data) print("Backtest start date: " + str(dt_idx[PRE_PAIR_FORMATION + PAIR_FORMATION_LEN + MAX_LOOKBACK])) print("Backtest end date: " + str(dt_idx[PRE_PAIR_FORMATION + PAIR_FORMATION_LEN + MAX_LOOKBACK + TEST_PERIOD - 1])) ################################################################################################################### # get aggregated close prices close_df = GSTools.get_aggregated(data, col='close') if close_df.isnull().values.any(): warnings.warn("There are null values in the aggregated close price df.") else: print("No null values detected in aggregated close price df.") ################################################################################################################### # total number of stocks remaining N = len(data.keys()) # number of pairs of interest K = int(PCT * N * (N-1) / 2) ################################################################################################################### # pair selection good_pairs = coint(df=close_df[PRE_PAIR_FORMATION:PRE_PAIR_FORMATION + PAIR_FORMATION_LEN], intercept=True, sig_level=0.005) good_pairs.sort(key=lambda x: x[2]) good_pairs = good_pairs[0 : K] 
print("From " + str(int(N * (N-1) / 2)) + " pairs, " + str(len(good_pairs)) + " pairs passed the cointegration test.")
print("---------------------------------------------------------------------")

GSTools.get_aggregated_with_dates(data, col='close').set_index("date").loc["2015-01-02":"2015-01-10"]

# combinations of parameters
param_combinations = list(itertools.product(ENTER_THRESHOLD_SIZE, EXIT_THRESHOLD_SIZE, LOSS_LIMIT))

# list to store MACRO results
macro_results = []

for i, params in enumerate(param_combinations, 1):
    # set params
    print("Running " + str(i) + "/" + str(len(param_combinations)))
    print("Backtesting all pairs using parameters " + str(params))

    # list to store MICRO results
    results = []

    for pair in good_pairs:
        # get names of both stocks
        stk0, stk1, _ = pair

        # get data of both stocks
        stk0_df, stk1_df = data[stk0], data[stk1]
        stk0_df_test = stk0_df[PRE_PAIR_FORMATION + PAIR_FORMATION_LEN : PRE_PAIR_FORMATION + PAIR_FORMATION_LEN + MAX_LOOKBACK + TEST_PERIOD]
        stk1_df_test = stk1_df[PRE_PAIR_FORMATION + PAIR_FORMATION_LEN : PRE_PAIR_FORMATION + PAIR_FORMATION_LEN + MAX_LOOKBACK + TEST_PERIOD]

        # Create a cerebro
        cerebro = bt.Cerebro()

        # Create data feeds
        data0 = bt.feeds.PandasData(dataname=stk0_df_test, timeframe=(bt.TimeFrame.Days), datetime=0)
        data1 = bt.feeds.PandasData(dataname=stk1_df_test, timeframe=(bt.TimeFrame.Days), datetime=0)

        # add data feeds to cerebro
        cerebro.adddata(data0)
        cerebro.adddata(data1)

        # Add the strategy
        cerebro.addstrategy(CointKalmanStrategy,
                            max_lookback=MAX_LOOKBACK,
                            enter_threshold_size=params[0],
                            exit_threshold_size=params[1],
                            loss_limit=params[2],
                            consider_borrow_cost=CONSIDER_BORROW_COST,
                            consider_commission=CONSIDER_COMMISSION
                            )

        # Add analyzers
        cerebro.addanalyzer(bt.analyzers.SharpeRatio, _name='mysharpe')
        cerebro.addanalyzer(Metrics, lookback=MAX_LOOKBACK, _name='metrics')

        # set the broker's starting cash
        cerebro.broker.setcash(1000000)

        # And run it
        strat = cerebro.run()

        # get MICRO metrics
sharperatio = strat[0].analyzers.mysharpe.get_analysis()['sharperatio'] returnstd = strat[0].analyzers.metrics.returns_std() startcash = cerebro.getbroker().startingcash endcash = cerebro.getbroker().getvalue() profit = (endcash - startcash) / startcash results.append((stk0 + "-" + stk1, sharperatio, profit, returnstd)) # convert to dataframe results_df = pd.DataFrame(results) results_df.columns = ['pair', 'sharpe_ratio', 'overall_return', 'returns_std'] # save as csv uuid_str = str(uuid.uuid4()) path = DST_DIR + str(uuid_str) + ".csv" results_df.to_csv(path_or_buf=path, index=False) # calculate MACRO attributes avg_sharpe_ratio = results_df['sharpe_ratio'].mean() median_sharpe_ratio = results_df['sharpe_ratio'].median() avg_overall_return = results_df['overall_return'].mean() median_overall_return = results_df['overall_return'].median() overall_return_std = results_df['overall_return'].std() macro_results.append((params[0], params[1], params[2], avg_sharpe_ratio, median_sharpe_ratio, avg_overall_return, median_overall_return, overall_return_std, uuid_str )) # nextline print("") macro_results_df = pd.DataFrame(macro_results) macro_results_df.columns = ['enter_threshold_size', 'exit_threshold_size', 'loss_limit', 'avg_sharpe_ratio', 'median_sharpe_ratio', 'avg_overall_return', 'median_overall_return', 'overall_return_std', 'uuid'] macro_results_df.to_csv(DST_DIR + 'summary.csv', index=False) macro_results_df = pd.read_csv(DST_DIR + 'summary.csv') macro_results_df macro_results_df[macro_results_df['avg_overall_return'] == max(macro_results_df['avg_overall_return'])] ``` ### Test single pair ``` # set your params _STK0 = 'ATEN' _STK1 = 'DQ' _MAX_LOOKBACK = 252 _ENTER_THRESHOLD_SIZE = 1.25 _EXIT_THRESHOLD_SIZE = 0.5 _LOSS_LIMIT = -0.02 def backtest_single_pair(_stk0, _stk1, _max_lookback, _enter_threshold_size, _exit_threshold_size, _loss_limit): # get data of both stock stk0_df, stk1_df = data[_stk0], data[_stk1] stk0_df_test = stk0_df[PRE_PAIR_FORMATION + 
PAIR_FORMATION_LEN : PRE_PAIR_FORMATION + PAIR_FORMATION_LEN + MAX_LOOKBACK + TEST_PERIOD] stk1_df_test = stk1_df[PRE_PAIR_FORMATION + PAIR_FORMATION_LEN : PRE_PAIR_FORMATION + PAIR_FORMATION_LEN + MAX_LOOKBACK + TEST_PERIOD] # Create a cerebro cerebro = bt.Cerebro() # Create data feeds data0 = bt.feeds.PandasData(dataname=stk0_df_test, timeframe=(bt.TimeFrame.Days), datetime=0) data1 = bt.feeds.PandasData(dataname=stk1_df_test, timeframe=(bt.TimeFrame.Days), datetime=0) # add data feeds to cerebro cerebro.adddata(data0) cerebro.adddata(data1) # Add the strategy cerebro.addstrategy(CointKalmanStrategy, max_lookback=_max_lookback, enter_threshold_size=_enter_threshold_size, exit_threshold_size=_exit_threshold_size, loss_limit=_loss_limit, print_bar=False, print_msg=True, print_transaction=True, ) # Add analyzers cerebro.addanalyzer(bt.analyzers.SharpeRatio, _name='mysharpe') cerebro.addanalyzer(Metrics, lookback=MAX_LOOKBACK, _name='metrics') # Add the commission - only stocks like a for each operation cerebro.broker.setcash(1000000) # And run it strat = cerebro.run() return (cerebro, strat) cerebro, strat = backtest_single_pair(_STK0, _STK1, _MAX_LOOKBACK, _ENTER_THRESHOLD_SIZE, _EXIT_THRESHOLD_SIZE, _LOSS_LIMIT) cerebro.plot() sharperatio = strat[0].analyzers.mysharpe.get_analysis()['sharperatio'] startcash = cerebro.getbroker().startingcash endcash = cerebro.getbroker().getvalue() profit = (endcash - startcash) / startcash print(sharperatio) print(profit) n_trades = strat[0].analyzers.metrics.n_trades n_resolved_trades = strat[0].analyzers.metrics.n_resolved_trades n_unresolved_trades = strat[0].analyzers.metrics.n_unresolved_trades avg_holding_period = strat[0].analyzers.metrics.avg_holding_period len_unresolved_trade= strat[0].analyzers.metrics.len_unresolved_trade returns = strat[0].analyzers.metrics.returns pv = strat[0].analyzers.metrics.pv print(n_trades, n_resolved_trades, n_unresolved_trades, avg_holding_period, len_unresolved_trade) plt.plot(returns) 
plt.show() plt.plot(pv) plt.show() ```
``` import io import json import os import pickle from collections import Counter, OrderedDict from collections import defaultdict import numpy as np import pandas as pd import torch from nltk.tokenize import sent_tokenize, word_tokenize from torch.utils.data import Dataset from torchtext.data.utils import get_tokenizer dir_dataset = os.path.expanduser('~/Documents/master3/leomed_scratch/files_small') fn_findings = os.path.join(dir_dataset, 'train_findings.csv') print(os.path.exists(fn_findings)) report_findings = pd.read_csv(fn_findings)['findings'] ``` ## Implementing Word2Vec ``` sentence_lengths = [] data = '' for sentence in report_findings: data += sentence sentence_lengths.append(len(sentence)) from matplotlib import pyplot as plt plt.hist(np.array(sentence_lengths), bins = 30) plt.show() print('max_length: ', max(sentence_lengths)) print('mean sent_length: ', np.mean(sentence_lengths)) print(len(data)) tokenizer = get_tokenizer("basic_english") tokens = tokenizer(data) print(len(tokens)) tokens = list(set(tokens)) print(len(tokens)) word2idx = {w: idx for (idx, w) in enumerate(tokens)} idx2word = {idx: w for (idx, w) in enumerate(tokens)} vocab_size = len(tokens) print(idx2word) # todo: this: https://github.com/iffsid/mmvae/blob/public/src/datasets.py class OrderedCounter(Counter, OrderedDict): """Counter that remembers the order elements are first encountered.""" def __repr__(self): return '%s(%r)' % (self.__class__.__name__, OrderedDict(self)) def __reduce__(self): return self.__class__, (OrderedDict(self),) class CUBSentences(Dataset): def __init__(self, root_data_dir: str, split: str, transform=None, **kwargs): """split: 'trainval' or 'test' """ super().__init__() self.data_dir = os.path.join(root_data_dir, 'cub') self.split = split self.max_sequence_length = kwargs.get('max_sequence_length', 32) self.min_occ = kwargs.get('min_occ', 3) self.transform = transform os.makedirs(os.path.join(root_data_dir, "lang_emb"), exist_ok=True) self.gen_dir =
os.path.join(self.data_dir, "oc:{}_msl:{}". format(self.min_occ, self.max_sequence_length)) if split == 'train': self.raw_data_path = os.path.join(self.data_dir, 'text_trainvalclasses.txt') elif split == 'test': self.raw_data_path = os.path.join(self.data_dir, 'text_testclasses.txt') else: raise Exception("Only train or test split is available") os.makedirs(self.gen_dir, exist_ok=True) self.data_file = 'cub.{}.s{}'.format(split, self.max_sequence_length) self.vocab_file = 'cub.vocab' if not os.path.exists(os.path.join(self.gen_dir, self.data_file)): print("Data file not found for {} split at {}. Creating new... (this may take a while)". format(split.upper(), os.path.join(self.gen_dir, self.data_file))) self._create_data() else: self._load_data() def __len__(self): return len(self.data) def __getitem__(self, idx): sent = self.data[str(idx)]['idx'] if self.transform is not None: sent = self.transform(sent) return sent, self.data[str(idx)]['length'] @property def vocab_size(self): return len(self.w2i) @property def pad_idx(self): return self.w2i['<pad>'] @property def eos_idx(self): return self.w2i['<eos>'] @property def unk_idx(self): return self.w2i['<unk>'] def get_w2i(self): return self.w2i def get_i2w(self): return self.i2w def _load_data(self, vocab=True): with open(os.path.join(self.gen_dir, self.data_file), 'rb') as file: self.data = json.load(file) if vocab: self._load_vocab() def _load_vocab(self): if not os.path.exists(os.path.join(self.gen_dir, self.vocab_file)): self._create_vocab() with open(os.path.join(self.gen_dir, self.vocab_file), 'r') as vocab_file: vocab = json.load(vocab_file) self.w2i, self.i2w = vocab['w2i'], vocab['i2w'] def _create_data(self): if self.split == 'train' and not os.path.exists(os.path.join(self.gen_dir, self.vocab_file)): self._create_vocab() else: self._load_vocab() with open(self.raw_data_path, 'r') as file: text = file.read() sentences = sent_tokenize(text) data = defaultdict(dict) pad_count = 0 for i, line in 
enumerate(sentences): words = word_tokenize(line) tok = words[:self.max_sequence_length - 1] tok = tok + ['<eos>'] length = len(tok) if self.max_sequence_length > length: tok.extend(['<pad>'] * (self.max_sequence_length - length)) pad_count += 1 idx = [self.w2i.get(w, self.w2i['<exc>']) for w in tok] id = len(data) data[id]['tok'] = tok data[id]['idx'] = idx data[id]['length'] = length print("{} out of {} sentences are truncated with max sentence length {}.". format(len(sentences) - pad_count, len(sentences), self.max_sequence_length)) with io.open(os.path.join(self.gen_dir, self.data_file), 'wb') as data_file: data = json.dumps(data, ensure_ascii=False) data_file.write(data.encode('utf8', 'replace')) self._load_data(vocab=False) def _create_vocab(self): assert self.split == 'train', "Vocabulary can only be created for training file." with open(self.raw_data_path, 'r') as file: text = file.read() sentences = sent_tokenize(text) occ_register = OrderedCounter() w2i = dict() i2w = dict() special_tokens = ['<exc>', '<pad>', '<eos>'] for st in special_tokens: i2w[len(w2i)] = st w2i[st] = len(w2i) texts = [] unq_words = [] for i, line in enumerate(sentences): words = word_tokenize(line) occ_register.update(words) texts.append(words) for w, occ in occ_register.items(): if occ > self.min_occ and w not in special_tokens: i2w[len(w2i)] = w w2i[w] = len(w2i) else: unq_words.append(w) assert len(w2i) == len(i2w) print("Vocabulary of {} keys created, {} words are excluded (occurrence <= {})."
.format(len(w2i), len(unq_words), self.min_occ)) vocab = dict(w2i=w2i, i2w=i2w) with io.open(os.path.join(self.gen_dir, self.vocab_file), 'wb') as vocab_file: data = json.dumps(vocab, ensure_ascii=False) vocab_file.write(data.encode('utf8', 'replace')) with open(os.path.join(self.gen_dir, 'cub.unique'), 'wb') as unq_file: pickle.dump(np.array(unq_words), unq_file) with open(os.path.join(self.gen_dir, 'cub.all'), 'wb') as a_file: pickle.dump(occ_register, a_file) self._load_vocab() tx = lambda data: torch.Tensor(data) maxSentLen = 32 t_data = CUBSentences('', split='train', transform=tx, max_sequence_length=maxSentLen) ```
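The cells above stop at building `word2idx`/`idx2word` and the dataset class; the skip-gram pair extraction that a Word2Vec model would actually train on is never shown. A minimal sketch for completeness (the function name and default window size are illustrative, not from the notebook):

```python
def skipgram_pairs(token_ids, window=2):
    """Yield (center, context) index pairs for skip-gram Word2Vec training."""
    pairs = []
    for i, center in enumerate(token_ids):
        lo, hi = max(0, i - window), min(len(token_ids), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # every neighbor within the window except the center itself
                pairs.append((center, token_ids[j]))
    return pairs
```

Applied to `[word2idx[w] for w in tokens]` this yields the training pairs; for example, `skipgram_pairs([0, 1, 2], window=1)` gives `[(0, 1), (1, 0), (1, 2), (2, 1)]`.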
# Parallel Monte Carlo options pricing ## Problem setup ``` %matplotlib inline import sys import time import numpy as np from matplotlib import pyplot as plt try: import seaborn except ImportError: pass import ipyparallel as parallel def price_options(S=100.0, K=100.0, sigma=0.25, r=0.05, days=260, paths=10000): """ Price European and Asian options using a Monte Carlo method. Parameters ---------- S : float The initial price of the stock. K : float The strike price of the option. sigma : float The volatility of the stock. r : float The risk free interest rate. days : int The number of days until the option expires. paths : int The number of Monte Carlo paths used to price the option. Returns ------- A tuple of (E. call, E. put, A. call, A. put) option prices. """ import numpy as np from math import exp,sqrt h = 1.0/days const1 = exp((r-0.5*sigma**2)*h) const2 = sigma*sqrt(h) stock_price = S*np.ones(paths, dtype='float64') stock_price_sum = np.zeros(paths, dtype='float64') for j in range(days): growth_factor = const1*np.exp(const2*np.random.standard_normal(paths)) stock_price = stock_price*growth_factor stock_price_sum = stock_price_sum + stock_price stock_price_avg = stock_price_sum/days zeros = np.zeros(paths, dtype='float64') r_factor = exp(-r*h*days) euro_put = r_factor*np.mean(np.maximum(zeros, K-stock_price)) asian_put = r_factor*np.mean(np.maximum(zeros, K-stock_price_avg)) euro_call = r_factor*np.mean(np.maximum(zeros, stock_price-K)) asian_call = r_factor*np.mean(np.maximum(zeros, stock_price_avg-K)) return (euro_call, euro_put, asian_call, asian_put) price = 100.0 # Initial price rate = 0.05 # Interest rate days = 260 # Days to expiration paths = 10000 # Number of MC paths n_strikes = 6 # Number of strike values min_strike = 90.0 # Min strike price max_strike = 110.0 # Max strike price n_sigmas = 5 # Number of volatility values min_sigma = 0.1 # Min volatility max_sigma = 0.4 # Max volatility strike_vals = np.linspace(min_strike, max_strike, n_strikes)
sigma_vals = np.linspace(min_sigma, max_sigma, n_sigmas) ``` ## Parallel computation across strike prices and volatilities The Client is used to setup the calculation and works with all engines. ``` rc = parallel.Client() ``` A LoadBalancedView is an interface to the engines that provides dynamic load balancing at the expense of not knowing which engine will execute the code. ``` view = rc.load_balanced_view() print("Strike prices: ", strike_vals) print("Volatilities: ", sigma_vals) ``` Submit tasks for each (strike, sigma) pair. ``` t1 = time.time() async_results = [] for strike in strike_vals: for sigma in sigma_vals: ar = view.apply_async(price_options, price, strike, sigma, rate, days, paths) async_results.append(ar) print("Submitted tasks: ", len(async_results)) ``` Block until all tasks are completed. ``` rc.wait(async_results) t2 = time.time() t = t2-t1 print("Parallel calculation completed, time = %s s" % t) ``` ## Process and visualize results Get the results using the `get` method: ``` results = [ar.get() for ar in async_results] ``` Assemble the result into a structured NumPy array. ``` prices = np.empty(n_strikes*n_sigmas, dtype=[('ecall',float),('eput',float),('acall',float),('aput',float)] ) for i, price in enumerate(results): prices[i] = tuple(price) prices.shape = (n_strikes, n_sigmas) ``` Plot the value of the European call in (volatility, strike) space. ``` plt.figure() plt.contourf(sigma_vals, strike_vals, prices['ecall']) plt.axis('tight') plt.colorbar() plt.title('European Call') plt.xlabel("Volatility") plt.ylabel("Strike Price") ``` Plot the value of the Asian call in (volatility, strike) space. ``` plt.figure() plt.contourf(sigma_vals, strike_vals, prices['acall']) plt.axis('tight') plt.colorbar() plt.title("Asian Call") plt.xlabel("Volatility") plt.ylabel("Strike Price") ``` Plot the value of the European put in (volatility, strike) space. 
``` plt.figure() plt.contourf(sigma_vals, strike_vals, prices['eput']) plt.axis('tight') plt.colorbar() plt.title("European Put") plt.xlabel("Volatility") plt.ylabel("Strike Price") ``` Plot the value of the Asian put in (volatility, strike) space. ``` plt.figure() plt.contourf(sigma_vals, strike_vals, prices['aput']) plt.axis('tight') plt.colorbar() plt.title("Asian Put") plt.xlabel("Volatility") plt.ylabel("Strike Price") ```
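A cheap correctness check on any such pricer is put-call parity, C - P = S - K*exp(-r*T), which must hold in expectation for any arbitrage-free model. The helper below is my own sanity-check sketch, not part of the notebook: it prices the European pair from the terminal price alone, with T = 1 matching the `h = 1.0/days` convention above (`days` steps of length `1/days`).

```python
import numpy as np

def mc_european(S=100.0, K=100.0, sigma=0.25, r=0.05, paths=200_000, seed=0):
    """European call/put via a single GBM step to T = 1 (= days * h above)."""
    rng = np.random.default_rng(seed)
    T = 1.0
    z = rng.standard_normal(paths)
    # terminal stock price under risk-neutral GBM
    ST = S * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    disc = np.exp(-r * T)
    call = disc * np.maximum(ST - K, 0.0).mean()
    put = disc * np.maximum(K - ST, 0.0).mean()
    return call, put

call, put = mc_european()
parity_gap = (call - put) - (100.0 - 100.0 * np.exp(-0.05))
```

With 200k paths the remaining gap is pure Monte Carlo noise (standard error on the order of 0.06 here), so a gap within a few tenths is consistent with a correct pricer.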
# Dynamics of the spruce budworm

The spruce budworm model is a classic model in ecology. Its dynamics, shaped by predation by birds, are given by the following differential equation:

$$\frac{dB}{dt}=r_B B\left(1-\frac{B}{K_B}\right)-\beta\frac{B^2}{\alpha^2 + B^2}$$

#### Exercise 1: Explain the meaning of the terms in this equation.

```
%display typeset
# dBdt: change in the budworm population over time (individuals/time)
# r_B: growth rate of the budworm population (1/time)
# B: budworm population (individuals)
# K_B: carrying capacity of the system (individuals)
# beta: predation rate (individuals/time)
# alpha: predator efficiency (individuals)
```

#### Exercise 2: Write the model in dimensionless form. There is more than one way to nondimensionalize this model. Discuss the options and justify your choice.

```
var('B t R_B K_B beta alpha')
dBdt = R_B*B*(1-B/K_B) - beta*(B**2/(alpha**2 + B**2))
pretty_print(dBdt)
pretty_print(html("We set:"))
show(html(r"$(R_B B \alpha) / \alpha$ ; and $(B^2 / \alpha^2) / (\alpha^2/\alpha^2 + B^2/\alpha^2)$ , where $u = B/\alpha$"))
# This gives:
var('B t R_B u K_B beta A')
dBdt = R_B*u*A*(1-B/K_B) - beta*(u**2/(1 + u**2))
pretty_print(dBdt)
```

We multiply through by $1/\beta$ and set $v = (R_B \alpha)(1/\beta)$, which gives:

$$\frac{dB}{dt} \frac{1}{\beta} = v u \left(1-\frac{B}{K_B}\right) - \frac{u^2}{1 + u^2}$$

We set $K_B = y \alpha$ and substitute. Since $u = \frac{B}{\alpha}$, we get:

$$\frac{dB}{dt} \frac{1}{\beta} = v u \left(1 - \frac{u}{y}\right) - \frac{u^2}{1 + u^2}$$

Moving the $\frac{1}{\beta}$ to the other side gives:

```
var('beta t v u y')
dBdt = beta * (v*u*(1 - u/y) - (u**2/(1 + u**2)))
pretty_print(dBdt)
```

Only beta still carries a dimension, namely individuals over time. Hence, setting $z = \frac{\beta t}{\alpha}$, we obtain the nondimensionalization:

```
var('z t v u y')
dzdt = (v*u*(1 - u/y) - (u**2/(1 + u**2)))
pretty_print(dzdt)
```

#### Exercise 3: Show that $B=0$ is an unstable equilibrium.
```
# Original equation:
var('B t R_B K_B beta alpha')
dBdt = R_B*B*(1-B/K_B) - beta*(B**2/(alpha**2 + B**2))
pretty_print(solve([dBdt == 0], B))
# Plot:
plot(0.5*B*(1-B/20) - 1*(B**2/(1**2 + B**2)),(B,0,18),ymax=1.5)
```

#### Exercise 4: How many equilibria exist besides $B=0$?

```
# Besides B = 0 there are 3 equilibria (with complex eigenvalues)
```

#### Exercise 5: Plot the bifurcation diagram of this model, using $\beta>0$ as the bifurcation parameter.

```
forget()
import numpy as np

def drawbif(func, l, u):
    pts = []
    for v in np.linspace(l, u, 100):
        g = func(beta=v)
        xvals = solve(g, x)
        # print(xvals)
        pts.extend([(v, n(i.rhs().real_part())) for i in xvals if n(i.rhs().real_part()) > 0])
    show(points(pts), axes_labels=[r"$\beta$", '$B$'], gridlines=True, xmin=0)

var('beta')
R_B = 0.5
K_B = 20
alpha = 1
f = R_B*x*(1-x/K_B) - beta*(x**2/(alpha**2 + x**2))
drawbif(f, 0, 3)
```
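The nonzero equilibria can also be cross-checked numerically: dividing the dimensionless equilibrium condition by u leaves the cubic (v/y)u^3 - v u^2 + (v/y + 1)u - v = 0. Below is a plain NumPy sketch (Python rather than Sage, so it runs standalone; the function name and the values v = R_B*alpha/beta = 0.5, y = K_B/alpha = 20, matching the parameters plotted above, are my own choices):

```python
import numpy as np

def budworm_equilibria(v, y):
    """Positive equilibria of du/dz = v*u*(1 - u/y) - u**2/(1 + u**2), u != 0.

    Dividing the equilibrium condition by u and clearing denominators gives
    (v/y)*u**3 - v*u**2 + (v/y + 1)*u - v = 0; we keep real positive roots.
    """
    roots = np.roots([v / y, -v, v / y + 1.0, -v])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

eq = budworm_equilibria(v=0.5, y=20.0)  # v = R_B*alpha/beta, y = K_B/alpha
```

For these values the cubic has three positive real roots (roughly 0.76, 1.48 and 17.8), consistent with the three equilibria besides zero noted in Exercise 4: a low "refuge" state, an unstable threshold, and a high "outbreak" state.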
<a href="https://colab.research.google.com/github/purnimapatel/Comparing-of-Deep_learning-Neural-Network/blob/main/LSTM_RNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` #import the libraries import numpy as np import pandas as pd #from numpy import mean #from numpy import std #from numpy import dstack from matplotlib import pyplot from keras.models import Sequential from keras.layers import Dense from keras.layers import Flatten from keras.layers import Dropout from sklearn.preprocessing import StandardScaler from keras.layers import LSTM from keras.layers.convolutional import Conv1D from keras.layers.convolutional import MaxPooling1D from keras.utils import to_categorical from keras.models import Sequential from keras.layers.core import Dense, Dropout #opening the zip-file from zipfile import ZipFile file_name1="UCI HAR Dataset.zip" with ZipFile(file_name1,'r') as zip: zip.extractall() print("DONE_1") #those are separate normalised input features for the neural network SIGNALS = [ "body_acc_x", "body_acc_y", "body_acc_z", "body_gyro_x", "body_gyro_y", "body_gyro_z", "total_acc_x", "total_acc_y", "total_acc_z" ] # Output classes to learn how to classify LABELS = [ "WALKING", "WALKING_UPSTAIRS", "WALKING_DOWNSTAIRS", "SITTING", "STANDING", "LAYING" ] # Utility function to read the data from csv file def _read_csv(filename): return pd.read_csv(filename, delim_whitespace=True, header=None) # Utility function to load the load_signals def load_signals(subset): signals_data = [] for signal in SIGNALS: filename = f'/content/UCI HAR Dataset/{subset}/Inertial Signals/{signal}_{subset}.txt' #/content/UCI HAR Dataset/train/Inertial Signals #/content/UCI HAR Dataset/train/Inertial Signals/body_acc_x_train.txt signals_data.append( _read_csv(filename).to_numpy() ) # Transpose is used to change the dimensionality of the output, # aggregating the signals by combination of sample/timestep. 
# Resultant shape is (7352 train/2947 test samples, 128 timesteps, 9 signals) return np.transpose(signals_data, (1, 2, 0)) def load_y(subset): filename = f'/content/UCI HAR Dataset/{subset}/y_{subset}.txt' y = _read_csv(filename)[0] return pd.get_dummies(y).to_numpy() def load_data(): """ Obtain the dataset from multiple files. Returns: X_train, X_test, y_train, y_test """ X_train, X_test = load_signals('train'), load_signals('test') y_train, y_test = load_y('train'), load_y('test') return X_train, y_train, X_test, y_test # Importing tensorflow np.random.seed(42) import tensorflow as tf tf.random.set_seed(42) # Initializing parameters epochs = 30 batch_size = 16 n_hidden = 32 # Utility function to count the number of classes def _count_classes(y): return len(set([tuple(category) for category in y])) # Loading the train and test data X_train, Y_train, X_test, Y_test = load_data() print(X_train.shape) print(X_test.shape) print(Y_train.shape) print(Y_test.shape) training_data_count = len(X_train) test_data_count = len(X_test) print(training_data_count) print(test_data_count) ``` # LSTM_RNN ``` # Importing tensorflow np.random.seed(42) import tensorflow as tf tf.random.set_seed(42) # initialising the parameters epochs = 30 batch_size = 16 n_hidden = 32 timesteps = len(X_train[0]) input_dim = len(X_train[0][0]) n_classes = _count_classes(Y_train) #n_classes be 6 print(timesteps) print(input_dim) print(len(X_train)) print(n_classes) ``` **Base_model of LSTM-RNN** ``` # Initiliazing the sequential model model = Sequential() # Configuring the parameters model.add(LSTM(n_hidden, input_shape=(timesteps, input_dim))) # Adding a dropout layer model.add(Dropout(0.5)) # Adding a dense output layer with sigmoid activation model.add(Dense(n_classes, activation='sigmoid')) model.summary() # Compiling the model model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) # Training the model model.fit(X_train,Y_train, batch_size=batch_size, 
validation_data=(X_test, Y_test), epochs=epochs,shuffle=True) Y_pred = model.predict(X_test) pred = np.argmax(Y_pred,axis = 1) Y_actual = np.argmax(Y_test,axis = 1) from sklearn.metrics import classification_report, confusion_matrix confusion_matrix(Y_actual, pred) print(classification_report(Y_actual, pred)) ``` **Multi_Layer LSTM** ``` # Initiliazing the sequential model model = Sequential() # Configuring the parameters model.add(LSTM(32,return_sequences=True,input_shape=(timesteps, input_dim))) # Adding a dropout layer model.add(Dropout(0.5)) model.add(LSTM(28,input_shape=(timesteps, input_dim))) # Adding a dropout layer model.add(Dropout(0.6)) # Adding a dense output layer with sigmoid activation model.add(Dense(n_classes, activation='sigmoid')) model.summary() # Compiling the model model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) # Training the model model.fit(X_train, Y_train, batch_size=batch_size, validation_data=(X_test, Y_test), epochs=epochs) Y_pred = model.predict(X_test) pred = np.argmax(Y_pred,axis = 1) Y_actual = np.argmax(Y_test,axis = 1) confusion_matrix(Y_actual, pred) print(classification_report(Y_actual, pred)) ```
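The parameter counts reported by `model.summary()` can be checked by hand: an LSTM layer has four gates, each with an input kernel, a recurrent kernel, and a bias. The small helpers below are my own sketch for cross-checking the base model above (`input_dim = 9`, `n_hidden = 32`, `n_classes = 6`):

```python
def lstm_params(input_dim, n_hidden):
    """4 gates * (input kernel + recurrent kernel + bias)."""
    return 4 * (input_dim * n_hidden + n_hidden * n_hidden + n_hidden)

def dense_params(n_in, n_out):
    """Weight matrix plus one bias per output unit."""
    return n_in * n_out + n_out

total = lstm_params(9, 32) + dense_params(32, 6)
```

Here `lstm_params(9, 32)` is 5376 and `dense_params(32, 6)` is 198, giving 5574 trainable parameters, which should match what `model.summary()` prints for the base model.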
``` import pandas as pd df = pd.read_csv('progress.csv') for ind in df.index: print(df['Student Name'][ind],df['Student Email'][ind], df['# of Skill Badges Completed in Track 1'][ind],df['# of Skill Badges Completed in Track 2'][ind]) import smtplib, ssl port = 465 # For SSL smtp_server = "smtp.gmail.com" sender_email = "" # Enter your address password = "" # Enter your password for ind in df.index: print(df['Student Name'][ind],df['Student Email'][ind], df['# of Skill Badges Completed in Track 1'][ind],df['# of Skill Badges Completed in Track 2'][ind]) name=df['Student Name'][ind] mail=df['Student Email'][ind] t1=df['# of Skill Badges Completed in Track 1'][ind] t2=df['# of Skill Badges Completed in Track 2'][ind] if t1==0 and t2 ==0: receiver_email = mail # Enter receiver address message = """\ Subject: Start completing your Qwiklabs quests & skill badges | 30 Days of Google Cloud program Dear {}, Thank you so much for enrolling in the 30 Days of Google Cloud program. We noticed that you have not completed any quests or skill badges in the program so far. Please note that you have until 5th November to complete the milestones mentioned on the prizes rules section here and earn those exciting prizes. Please start completing them ASAP. As always, please feel free to reach out to us on our Whatsapp group - in case of any questions or queries. All the best & happy learning, Your 30 Days of Google Cloud Facilitator .""".format(name) context = ssl.create_default_context() with smtplib.SMTP_SSL(smtp_server, port, context=context) as server: server.login(sender_email, password) server.sendmail(sender_email, receiver_email, message) elif t1==6 and t2 ==6: receiver_email = mail # Enter receiver address message = """\ Subject: Congratulations! You have successfully achieved your milestone | 30 Days of Google Cloud program Dear {}, Congratulations on successfully achieving your milestone in the 30 Days of Google Cloud program. 
We are so excited for and can’t wait for you to receive your prizes. Please note that your prizes will be delivered to you when the program ends i.e. after 5th November. Meanwhile we request you to please not stop your learning journey and keep on working to get more badges on Qwiklabs so that you can become an expert in cloud. As always, please feel free to reach out to us on our Whatsapp group - in case of any questions or queries. All the best & happy learning, Your 30 Days of Google Cloud Facilitator .""".format(name) context = ssl.create_default_context() with smtplib.SMTP_SSL(smtp_server, port, context=context) as server: server.login(sender_email, password) server.sendmail(sender_email, receiver_email, message.encode("utf-8")) elif t1==6 and t2 !=6: receiver_email = mail # Enter receiver address message = """\ Subject: Congratulations! You have successfully achieved your milestone | 30 Days of Google Cloud program Dear {}, Congratulations on successfully achieving your milestone in the 30 Days of Google Cloud program. We are so excited for and can’t wait for you to receive your prizes. Please note that your prizes will be delivered to you when the program ends i.e. after 5th November. Meanwhile we request you to please not stop your learning journey and keep on working to get badges in the track 2. You have completed {} out of 6 skill badges in Track 2. As always, please feel free to reach out to us on our Whatsapp group - in case of any questions or queries. All the best & happy learning, Your 30 Days of Google Cloud Facilitator .""".format(name,t2) context = ssl.create_default_context() with smtplib.SMTP_SSL(smtp_server, port, context=context) as server: server.login(sender_email, password) server.sendmail(sender_email, receiver_email, message.encode("utf-8")) elif t2==6 and t1 !=6: receiver_email = mail # Enter receiver address message = """\ Subject: Congratulations! 
You have successfully achieved your milestone | 30 Days of Google Cloud program Dear {}, Congratulations on successfully achieving your milestone in the 30 Days of Google Cloud program. We are so excited for and can’t wait for you to receive your prizes. Please note that your prizes will be delivered to you when the program ends i.e. after 5th November. Meanwhile we request you to please not stop your learning journey and keep on working to get badges in the track 1. You have completed {} out of 6 skill badges in Track 1. As always, please feel free to reach out to us on our Whatsapp group - in case of any questions or queries. All the best & happy learning, Your 30 Days of Google Cloud Facilitator .""".format(name,t1) context = ssl.create_default_context() with smtplib.SMTP_SSL(smtp_server, port, context=context) as server: server.login(sender_email, password) server.sendmail(sender_email, receiver_email, message.encode("utf-8")) else: receiver_email = mail # Enter receiver address message = """\ Subject: You are almost there to win your prizes | 30 Days of Google Cloud program Dear {}, We noticed that you have completed {} skill badges already in the program's Track 1 and are just {} away from earning your prizes for track 1 also we found out that you have completed {} skill badges already in the program's Track 2 and are just {} away from earning your prizes for track 2. We are so glad that you have made it so far in the program. Please complete the remaining quests and the skill badges ASAP so that you can be entitled to your prizes. Note: You have until 5th November to complete the milestones mentioned on the prizes rules section https://events.withgoogle.com/30daysofgooglecloud/prize-rules/#content and earn those exciting prizes. As always, please feel free to reach out to us on our Whatsapp group - in case of any questions or queries. 
All the best & happy learning, Your 30 Days of Google Cloud Facilitator .""".format(name,t1,6-t1,t2,6-t2) context = ssl.create_default_context() with smtplib.SMTP_SSL(smtp_server, port, context=context) as server: server.login(sender_email, password) server.sendmail(sender_email, receiver_email, message.encode("utf-8")) ```
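The raw triple-quoted message strings above are fragile: the `Subject:` line only works because of the line break that follows it, and the curly apostrophes in the text are why most branches need `message.encode("utf-8")`. A sketch of the same idea using the standard library's `email.message.EmailMessage`, which handles headers and charsets automatically (the helper name and subject text here are illustrative, not from the script):

```python
from email.message import EmailMessage

def build_progress_email(sender, recipient, name, body_text):
    """Build a well-formed email instead of hand-writing header strings."""
    msg = EmailMessage()
    msg["Subject"] = "Your progress | 30 Days of Google Cloud program"
    msg["From"] = sender
    msg["To"] = recipient
    # set_content picks a suitable charset, so no manual .encode() is needed
    msg.set_content(f"Dear {name},\n\n{body_text}\n\nYour 30 Days of Google Cloud Facilitator")
    return msg
```

Sending then becomes `server.send_message(msg)` in place of `server.sendmail(sender_email, receiver_email, message.encode("utf-8"))`.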
``` from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error, r2_score from sklearn.metrics import mean_absolute_error from sklearn.model_selection import GridSearchCV from sklearn.model_selection import KFold from sklearn.model_selection import ShuffleSplit from sklearn.metrics import accuracy_score from keras.layers import Dense from keras.models import Sequential from keras.optimizers import SGD from matplotlib import pyplot as plt import matplotlib as mpl import seaborn as sns import numpy as np import pandas as pd import category_encoders as ce import os import pickle import gc from tqdm import tqdm import pickle from sklearn.svm import SVR from sklearn.linear_model import LinearRegression from sklearn import linear_model from sklearn.neighbors import KNeighborsRegressor from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor from sklearn.ensemble import ExtraTreesRegressor from sklearn import ensemble import xgboost as xgb def encode_text_features(encode_decode, data_frame, encoder_isa=None, encoder_mem_type=None): # Implement Categorical OneHot encoding for ISA and mem-type if encode_decode == 'encode': encoder_isa = ce.one_hot.OneHotEncoder(cols=['isa']) encoder_mem_type = ce.one_hot.OneHotEncoder(cols=['mem-type']) encoder_isa.fit(data_frame, verbose=1) df_new1 = encoder_isa.transform(data_frame) encoder_mem_type.fit(df_new1, verbose=1) df_new = encoder_mem_type.transform(df_new1) encoded_data_frame = df_new else: df_new1 = encoder_isa.transform(data_frame) df_new = encoder_mem_type.transform(df_new1) encoded_data_frame = df_new return encoded_data_frame, encoder_isa, encoder_mem_type def absolute_percentage_error(Y_test, Y_pred): error = 0 for i in range(len(Y_test)): if(Y_test[i]!= 0 ): error = error + 
(abs(Y_test[i] - Y_pred[i]))/Y_test[i] error = error / len(Y_test) return error ``` # Dataset 1: dijkstra_physical ``` def process_all_dijkstra_physical(dataset_path, dataset_name, path_for_saving_data): ################## Data Preprocessing ###################### df = pd.read_csv(dataset_path) encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa = None, encoder_mem_type=None) # total_data = encoded_data_frame.drop(columns = ['arch', 'arch1']) total_data = encoded_data_frame.drop(columns = ['arch']) total_data = total_data.fillna(0) X_columns = total_data.drop(columns = 'runtime').columns X = total_data.drop(columns = ['runtime']).to_numpy() Y = total_data['runtime'].to_numpy() # X_columns = total_data.drop(columns = 'PS').columns # X = total_data.drop(columns = ['runtime','PS']).to_numpy() # Y = total_data['runtime'].to_numpy() print('Data X and Y shape', X.shape, Y.shape) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42) print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape) scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) # transform only: fitting the scaler on the test set leaks test statistics ################## Data Preprocessing ###################### # Put best models here using grid search # 1. SVR best_svr = SVR(C=1000, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.1, kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False) # 2. LR best_lr = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False) # 3. RR best_rr = linear_model.Ridge(alpha=10, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, random_state=None, solver='lsqr', tol=0.001) # 4. KNN best_knn = KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=15, p=1, weights='distance') # 5.
GPR best_gpr = GaussianProcessRegressor(alpha=0.01, copy_X_train=True, kernel=None, n_restarts_optimizer=0, normalize_y=True, optimizer='fmin_l_bfgs_b', random_state=None) # 6. Decision Tree best_dt = DecisionTreeRegressor(criterion='friedman_mse', max_depth=3, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=2, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='best') # 7. Random Forest best_rf = RandomForestRegressor(bootstrap=True, criterion='mae', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start='True') # 8. Extra Trees Regressor best_etr = ExtraTreesRegressor(bootstrap=False, criterion='friedman_mse', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start='True') # 9. GBR best_gbr = ensemble.GradientBoostingRegressor(alpha=0.9, criterion='mae', init=None, learning_rate=0.1, loss='lad', max_depth=3, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_iter_no_change=None, presort='auto', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) # 10. 
XGB best_xgb = xgb.XGBRegressor(alpha=10, base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0, importance_type='gain', learning_rate=0.5, max_delta_step=0, max_depth=4, min_child_weight=1, missing=None, n_estimators=10, n_jobs=1, nthread=None, objective='reg:linear', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, validate_parameters=False, verbosity=1) best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr, best_dt, best_rf, best_etr, best_gbr, best_xgb] best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr', 'best_dt', 'best_rf', 'best_etr' , 'best_gbr', 'best_xgb'] k = 0 df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ]) for model in best_models: print('Running model number:', k+1, 'with Model Name: ', best_models_name[k]) r2_scores = [] mse_scores = [] mape_scores = [] mae_scores = [] # cv = KFold(n_splits = 10, random_state = 42, shuffle = True) cv = ShuffleSplit(n_splits=10, random_state=0) # print(cv) fold = 1 for train_index, test_index in cv.split(X): model_orig = model # print("Train Index: ", train_index, "\n") # print("Test Index: ", test_index) X_train_fold, X_test_fold, Y_train_fold, Y_test_fold = X[train_index], X[test_index], Y[train_index], Y[test_index] # print(X_train_fold.shape, X_test_fold.shape, Y_train_fold.shape, Y_test_fold.shape) model_orig.fit(X_train_fold, Y_train_fold) Y_pred_fold = model_orig.predict(X_test_fold) # save the folds to disk data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold] filename = path_for_saving_data + '/folds_data/' + best_models_name[k] +'_'+ str(fold) + '.pickle' pickle.dump(data, open(filename, 'wb')) # save the model to disk filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav' fold = fold + 1 pickle.dump(model_orig, open(filename, 'wb')) # some time later... 
''' # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) result = loaded_model.score(X_test, Y_test) print(result) ''' # scores.append(best_svr.score(X_test, y_test)) ''' plt.figure() plt.plot(Y_test_fold, 'b') plt.plot(Y_pred_fold, 'r') ''' # print('Accuracy =',accuracy_score(Y_test, Y_pred)) r2_scores.append(r2_score(Y_test_fold, Y_pred_fold)) mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold)) mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold)) mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold)) df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name , 'r2': r2_scores, 'mse': mse_scores, 'mape': mape_scores, 'mae': mae_scores }, ignore_index=True) k = k + 1 print(df.head()) df.to_csv(path_for_saving_data + '.csv') # print('MSE for 10 folds\n', mse_scores) # print('\nR2 scores for 10 folds\n', r2_scores) # print('\nMAPE for 10 folds\n', mape_scores) # print('\nMAE scores for 10 folds\n', mae_scores) # print('\nMean MSE = ', np.mean(mse_scores), '\nMedian MSE = ', np.median(mse_scores)) # print('\nMean R2 score =',np.mean(r2_scores), '\nMedian R2 scores = ', np.median(r2_scores)) # print('\nMean Absolute Percentage Error =',np.mean(mape_scores), # '\nMedian Absolute Percentage Error =', np.median(mape_scores)) # print('\nMean MAE =',np.mean(mae_scores), # '\nMedian MAE =', np.median(mae_scores)) dataset_name = 'dijkstra_physical' dataset_path = '\\Dataset_CSV\\all_datasets\\dijkstra_physical.csv' path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name process_all_dijkstra_physical(dataset_path, dataset_name, path_for_saving_data) ``` # Dataset 2 : dijkstra_simulated ``` def process_all_dijkstra_simulated(dataset_path, dataset_name, path_for_saving_data): ################## Data Preprocessing ###################### df = pd.read_csv(dataset_path) encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa = None, 
                                                                             encoder_mem_type=None)

    total_data = encoded_data_frame.drop(columns=['arch'])
    total_data = total_data.fillna(0)

    X_columns = total_data.drop(columns='runtime').columns
    X = total_data.drop(columns=['runtime']).to_numpy()
    Y = total_data['runtime'].to_numpy()
    print('Data X and Y shape', X.shape, Y.shape)

    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
    print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)

    scaler = StandardScaler()
    X_train = scaler.fit_transform(X_train)
    # Reuse the training-set statistics; refitting the scaler on the test
    # set would leak test-set information.
    X_test = scaler.transform(X_test)
    ################## Data Preprocessing ######################

    # Best hyperparameters found via grid search
    # 1. SVR
    best_svr = SVR(kernel='rbf', C=1000, gamma=0.1, epsilon=0.1)
    # 2. Linear Regression
    best_lr = LinearRegression()
    # 3. Ridge Regression
    best_rr = linear_model.Ridge(alpha=10, solver='lsqr')
    # 4. KNN
    best_knn = KNeighborsRegressor(n_neighbors=15, p=1, weights='distance')
    # 5. GPR
    best_gpr = GaussianProcessRegressor(alpha=0.01, normalize_y=True)
    # 6. Decision Tree
    best_dt = DecisionTreeRegressor(criterion='friedman_mse', max_depth=3,
                                    max_features='sqrt', min_samples_leaf=2)
    # 7. Random Forest (warm_start is left off so every CV fold refits from scratch)
    best_rf = RandomForestRegressor(criterion='mae', max_depth=3, n_estimators=100)
    # 8. Extra Trees Regressor
    best_etr = ExtraTreesRegressor(criterion='friedman_mse', max_depth=3,
                                   n_estimators=10, random_state=42)
    # 9. GBR
    best_gbr = ensemble.GradientBoostingRegressor(criterion='mae', loss='lad',
                                                  n_estimators=50, random_state=42)
    # 10. XGB
    best_xgb = xgb.XGBRegressor(objective='reg:linear', alpha=10, colsample_bytree=0.3,
                                learning_rate=0.5, max_depth=4, n_estimators=10,
                                random_state=0)

    best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr,
                   best_dt, best_rf, best_etr, best_gbr, best_xgb]
    best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr',
                        'best_dt', 'best_rf', 'best_etr', 'best_gbr', 'best_xgb']

    k = 0
    df = pd.DataFrame(columns=['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae'])
    for model in best_models:
        print('Running model number:', k + 1, 'with Model Name:', best_models_name[k])
        r2_scores, mse_scores, mape_scores, mae_scores = [], [], [], []
        cv = ShuffleSplit(n_splits=10, random_state=0)
        fold = 1
        for train_index, test_index in cv.split(X):
            model_orig = model
            X_train_fold, X_test_fold = X[train_index], X[test_index]
            Y_train_fold, Y_test_fold = Y[train_index], Y[test_index]

            model_orig.fit(X_train_fold, Y_train_fold)
            Y_pred_fold = model_orig.predict(X_test_fold)

            # save the folds to disk
            data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold]
            filename = path_for_saving_data + '/folds_data/' + best_models_name[k] + '_' + str(fold) + '.pickle'
            pickle.dump(data, open(filename, 'wb'))

            # save the model to disk; reload later with
            #   loaded_model = pickle.load(open(filename, 'rb'))
            filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav'
            fold = fold + 1
            pickle.dump(model_orig, open(filename, 'wb'))

            r2_scores.append(r2_score(Y_test_fold, Y_pred_fold))
            mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold))
            mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold))
            mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold))

        df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name,
                        'r2': r2_scores, 'mse': mse_scores,
                        'mape': mape_scores, 'mae': mae_scores}, ignore_index=True)
        k = k + 1

    print(df.head())
    df.to_csv(path_for_saving_data + '.csv')


# The original driver reused the dijkstra_physical name and path here by
# mistake; this section runs on the simulated dataset.
dataset_name = 'dijkstra_simulated'
dataset_path = '\\Dataset_CSV\\all_datasets\\dijkstra_simulated.csv'
path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name
process_all_dijkstra_simulated(dataset_path, dataset_name, path_for_saving_data)
```

# Dataset 3 : qsort_physical

```
def process_all_qsort_physical(dataset_path, dataset_name, path_for_saving_data):
    ################## Data Preprocessing ######################
    df = pd.read_csv(dataset_path)
    encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa=None,
                                                                             encoder_mem_type=None)
    total_data = encoded_data_frame.drop(columns=['arch'])
    total_data = total_data.fillna(0)

    X_columns = total_data.drop(columns='runtime').columns
    X = total_data.drop(columns=['runtime']).to_numpy()
    Y = total_data['runtime'].to_numpy()
    print('Data X and Y shape', X.shape, Y.shape)

    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
    print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)

    scaler = StandardScaler()
    X_train = scaler.fit_transform(X_train)
    # Reuse the training-set statistics; refitting the scaler on the test
    # set would leak test-set information.
    X_test = scaler.transform(X_test)
    ################## Data Preprocessing ######################

    # Best hyperparameters found via grid search
    # 1. SVR
    best_svr = SVR(kernel='rbf', C=1000, gamma=0.1, epsilon=0.1)
    # 2. Linear Regression
    best_lr = LinearRegression()
    # 3. Ridge Regression
    best_rr = linear_model.Ridge(alpha=10, solver='lsqr')
    # 4. KNN
    best_knn = KNeighborsRegressor(n_neighbors=15, p=1, weights='distance')
    # 5. GPR
    best_gpr = GaussianProcessRegressor(alpha=0.01, normalize_y=True)
    # 6. Decision Tree
    best_dt = DecisionTreeRegressor(criterion='friedman_mse', max_depth=3,
                                    max_features='sqrt', min_samples_leaf=2)
    # 7. Random Forest (warm_start is left off so every CV fold refits from scratch)
    best_rf = RandomForestRegressor(criterion='mae', max_depth=3, n_estimators=100)
    # 8. Extra Trees Regressor
    best_etr = ExtraTreesRegressor(criterion='friedman_mse', max_depth=3,
                                   n_estimators=10, random_state=42)
    # 9. GBR
    best_gbr = ensemble.GradientBoostingRegressor(criterion='mae', loss='lad',
                                                  n_estimators=50, random_state=42)
    # 10. XGB
    best_xgb = xgb.XGBRegressor(objective='reg:linear', alpha=10, colsample_bytree=0.3,
                                learning_rate=0.5, max_depth=4, n_estimators=10,
                                random_state=0)

    best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr,
                   best_dt, best_rf, best_etr, best_gbr, best_xgb]
    best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr',
                        'best_dt', 'best_rf', 'best_etr', 'best_gbr', 'best_xgb']

    k = 0
    df = pd.DataFrame(columns=['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae'])
    for model in best_models:
        print('Running model number:', k + 1, 'with Model Name:', best_models_name[k])
        r2_scores, mse_scores, mape_scores, mae_scores = [], [], [], []
        cv = ShuffleSplit(n_splits=10, random_state=0)
        fold = 1
        for train_index, test_index in cv.split(X):
            model_orig = model
            X_train_fold, X_test_fold = X[train_index], X[test_index]
            Y_train_fold, Y_test_fold = Y[train_index], Y[test_index]

            model_orig.fit(X_train_fold, Y_train_fold)
            Y_pred_fold = model_orig.predict(X_test_fold)

            # save the folds to disk
            data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold]
            filename = path_for_saving_data + '/folds_data/' + best_models_name[k] + '_' + str(fold) + '.pickle'
            pickle.dump(data, open(filename, 'wb'))

            # save the model to disk; reload later with
            #   loaded_model = pickle.load(open(filename, 'rb'))
            filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav'
            fold = fold + 1
            pickle.dump(model_orig, open(filename, 'wb'))

            r2_scores.append(r2_score(Y_test_fold, Y_pred_fold))
            mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold))
            mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold))
            mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold))

        df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name,
                        'r2': r2_scores, 'mse': mse_scores,
                        'mape': mape_scores, 'mae': mae_scores}, ignore_index=True)
        k = k + 1

    print(df.head())
    df.to_csv(path_for_saving_data + '.csv')


dataset_name = 'qsort_physical'
dataset_path = '\\Dataset_CSV\\all_datasets\\qsort_physical.csv'
path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name
process_all_qsort_physical(dataset_path, dataset_name, path_for_saving_data)
```

# Dataset 4 : qsort_simulated

```
def process_all_qsort_simulated(dataset_path, dataset_name, path_for_saving_data):
    ################## Data Preprocessing ######################
    df = pd.read_csv(dataset_path)
    encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa=None,
                                                                             encoder_mem_type=None)
    #
    total_data = encoded_data_frame.drop(columns=['arch'])
    total_data = total_data.fillna(0)

    X_columns = total_data.drop(columns='runtime').columns
    X = total_data.drop(columns=['runtime']).to_numpy()
    Y = total_data['runtime'].to_numpy()
    print('Data X and Y shape', X.shape, Y.shape)

    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
    print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)

    scaler = StandardScaler()
    X_train = scaler.fit_transform(X_train)
    # Reuse the training-set statistics; refitting the scaler on the test
    # set would leak test-set information.
    X_test = scaler.transform(X_test)
    ################## Data Preprocessing ######################

    # Best hyperparameters found via grid search
    # 1. SVR
    best_svr = SVR(kernel='rbf', C=1000, gamma=0.1, epsilon=0.1)
    # 2. Linear Regression
    best_lr = LinearRegression()
    # 3. Ridge Regression
    best_rr = linear_model.Ridge(alpha=10, solver='lsqr')
    # 4. KNN
    best_knn = KNeighborsRegressor(n_neighbors=15, p=1, weights='distance')
    # 5. GPR
    best_gpr = GaussianProcessRegressor(alpha=0.01, normalize_y=True)
    # 6. Decision Tree
    best_dt = DecisionTreeRegressor(criterion='friedman_mse', max_depth=3,
                                    max_features='sqrt', min_samples_leaf=2)
    # 7. Random Forest (warm_start is left off so every CV fold refits from scratch)
    best_rf = RandomForestRegressor(criterion='mae', max_depth=3, n_estimators=100)
    # 8. Extra Trees Regressor
    best_etr = ExtraTreesRegressor(criterion='friedman_mse', max_depth=3,
                                   n_estimators=10, random_state=42)
    # 9. GBR
    best_gbr = ensemble.GradientBoostingRegressor(criterion='mae', loss='lad',
                                                  n_estimators=50, random_state=42)
    # 10. XGB
    best_xgb = xgb.XGBRegressor(objective='reg:linear', alpha=10, colsample_bytree=0.3,
                                learning_rate=0.5, max_depth=4, n_estimators=10,
                                random_state=0)

    best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr,
                   best_dt, best_rf, best_etr, best_gbr, best_xgb]
    best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr',
                        'best_dt', 'best_rf', 'best_etr', 'best_gbr', 'best_xgb']

    k = 0
    df = pd.DataFrame(columns=['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae'])
    for model in best_models:
        print('Running model number:', k + 1, 'with Model Name:', best_models_name[k])
        r2_scores, mse_scores, mape_scores, mae_scores = [], [], [], []
        cv = ShuffleSplit(n_splits=10, random_state=0)
        fold = 1
        for train_index, test_index in cv.split(X):
            model_orig = model
            X_train_fold, X_test_fold = X[train_index], X[test_index]
            Y_train_fold, Y_test_fold = Y[train_index], Y[test_index]

            model_orig.fit(X_train_fold, Y_train_fold)
            Y_pred_fold = model_orig.predict(X_test_fold)

            # save the folds to disk
            data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold]
            filename = path_for_saving_data + '/folds_data/' + best_models_name[k] + '_' + str(fold) + '.pickle'
            pickle.dump(data, open(filename, 'wb'))

            # save the model to disk; reload later with
            #   loaded_model = pickle.load(open(filename, 'rb'))
            filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav'
            fold = fold + 1
            pickle.dump(model_orig, open(filename, 'wb'))

            r2_scores.append(r2_score(Y_test_fold, Y_pred_fold))
            mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold))
            mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold))
            mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold))

        df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name,
                        'r2': r2_scores, 'mse': mse_scores,
                        'mape': mape_scores, 'mae': mae_scores}, ignore_index=True)
        k = k + 1

    print(df.head())
    df.to_csv(path_for_saving_data + '.csv')


dataset_name = 'qsort_simulated'
dataset_path = '\\Dataset_CSV\\all_datasets\\qsort_simulated.csv'
path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name
process_all_qsort_simulated(dataset_path, dataset_name, path_for_saving_data)
```

# Dataset 5 : mantevominiFE_physical

```
def process_all_mantevominiFE_physical(dataset_path, dataset_name, path_for_saving_data):
    ################## Data Preprocessing ######################
    df = pd.read_csv(dataset_path)
    encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa=None,
encoder_mem_type=None) # total_data = encoded_data_frame.drop(columns = ['arch', 'arch1']) total_data = encoded_data_frame.drop(columns = ['arch']) total_data = total_data.fillna(0) X_columns = total_data.drop(columns = 'runtime').columns X = total_data.drop(columns = ['runtime']).to_numpy() Y = total_data['runtime'].to_numpy() # X_columns = total_data.drop(columns = 'PS').columns # X = total_data.drop(columns = ['runtime','PS']).to_numpy() # Y = total_data['runtime'].to_numpy() print('Data X and Y shape', X.shape, Y.shape) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42) print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape) scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.fit_transform(X_test) ################## Data Preprocessing ###################### # Put best models here using grid search # 1. SVR best_svr = SVR(C=1000, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.1, kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False) # 2. LR best_lr = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False) # 3. RR best_rr = linear_model.Ridge(alpha=10, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, random_state=None, solver='lsqr', tol=0.001) # 4. KNN best_knn = KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=15, p=1, weights='distance') # 5. GPR best_gpr = GaussianProcessRegressor(alpha=0.01, copy_X_train=True, kernel=None, n_restarts_optimizer=0, normalize_y=True, optimizer='fmin_l_bfgs_b', random_state=None) # 6. Decision Tree best_dt = DecisionTreeRegressor(criterion='friedman_mse', max_depth=3, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=2, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='best') # 7. 
Random Forest best_rf = RandomForestRegressor(bootstrap=True, criterion='mae', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start='True') # 8. Extra Trees Regressor best_etr = ExtraTreesRegressor(bootstrap=False, criterion='friedman_mse', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start='True') # 9. GBR best_gbr = ensemble.GradientBoostingRegressor(alpha=0.9, criterion='mae', init=None, learning_rate=0.1, loss='lad', max_depth=3, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_iter_no_change=None, presort='auto', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) # 10. 
XGB best_xgb = xgb.XGBRegressor(alpha=10, base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0, importance_type='gain', learning_rate=0.5, max_delta_step=0, max_depth=4, min_child_weight=1, missing=None, n_estimators=10, n_jobs=1, nthread=None, objective='reg:linear', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, validate_parameters=False, verbosity=1) best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr, best_dt, best_rf, best_etr, best_gbr, best_xgb] best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr', 'best_dt', 'best_rf', 'best_etr' , 'best_gbr', 'best_xgb'] k = 0 df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ]) for model in best_models: print('Running model number:', k+1, 'with Model Name: ', best_models_name[k]) r2_scores = [] mse_scores = [] mape_scores = [] mae_scores = [] # cv = KFold(n_splits = 10, random_state = 42, shuffle = True) cv = ShuffleSplit(n_splits=10, random_state=0) # print(cv) fold = 1 for train_index, test_index in cv.split(X): model_orig = model # print("Train Index: ", train_index, "\n") # print("Test Index: ", test_index) X_train_fold, X_test_fold, Y_train_fold, Y_test_fold = X[train_index], X[test_index], Y[train_index], Y[test_index] # print(X_train_fold.shape, X_test_fold.shape, Y_train_fold.shape, Y_test_fold.shape) model_orig.fit(X_train_fold, Y_train_fold) Y_pred_fold = model_orig.predict(X_test_fold) # save the folds to disk data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold] filename = path_for_saving_data + '/folds_data/' + best_models_name[k] +'_'+ str(fold) + '.pickle' pickle.dump(data, open(filename, 'wb')) # save the model to disk filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav' fold = fold + 1 pickle.dump(model_orig, open(filename, 'wb')) # some time later... 
''' # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) result = loaded_model.score(X_test, Y_test) print(result) ''' # scores.append(best_svr.score(X_test, y_test)) ''' plt.figure() plt.plot(Y_test_fold, 'b') plt.plot(Y_pred_fold, 'r') ''' # print('Accuracy =',accuracy_score(Y_test, Y_pred)) r2_scores.append(r2_score(Y_test_fold, Y_pred_fold)) mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold)) mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold)) mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold)) df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name , 'r2': r2_scores, 'mse': mse_scores, 'mape': mape_scores, 'mae': mae_scores }, ignore_index=True) k = k + 1 print(df.head()) df.to_csv(path_for_saving_data + '.csv') # print('MSE for 10 folds\n', mse_scores) # print('\nR2 scores for 10 folds\n', r2_scores) # print('\nMAPE for 10 folds\n', mape_scores) # print('\nMAE scores for 10 folds\n', mae_scores) # print('\nMean MSE = ', np.mean(mse_scores), '\nMedian MSE = ', np.median(mse_scores)) # print('\nMean R2 score =',np.mean(r2_scores), '\nMedian R2 scores = ', np.median(r2_scores)) # print('\nMean Absolute Percentage Error =',np.mean(mape_scores), # '\nMedian Absolute Percentage Error =', np.median(mape_scores)) # print('\nMean MAE =',np.mean(mae_scores), # '\nMedian MAE =', np.median(mae_scores)) dataset_name = 'runtimes_final_mantevo_miniFE' dataset_path = '\\Dataset_CSV\\all_datasets\\runtimes_final_mantevo_miniFE.csv' path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name process_all_mantevominiFE_physical(dataset_path, dataset_name, path_for_saving_data) ``` # Dataset 6 : npbEP_physical ``` def process_all_npbEP_physical(dataset_path, dataset_name, path_for_saving_data): ################## Data Preprocessing ###################### df = pd.read_csv(dataset_path) encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa =
None, encoder_mem_type=None) # total_data = encoded_data_frame.drop(columns = ['arch', 'arch1']) total_data = encoded_data_frame.drop(columns = ['arch']) total_data = total_data.fillna(0) X_columns = total_data.drop(columns = 'runtime').columns X = total_data.drop(columns = ['runtime']).to_numpy() Y = total_data['runtime'].to_numpy() # X_columns = total_data.drop(columns = 'PS').columns # X = total_data.drop(columns = ['runtime','PS']).to_numpy() # Y = total_data['runtime'].to_numpy() print('Data X and Y shape', X.shape, Y.shape) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42) print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape) scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) # transform only: reuse the training-set statistics to avoid test-set leakage ################## Data Preprocessing ###################### # Put best models here using grid search # 1. SVR best_svr = SVR(C=1000, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.1, kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False) # 2. LR best_lr = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False) # 3. RR best_rr = linear_model.Ridge(alpha=10, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, random_state=None, solver='lsqr', tol=0.001) # 4. KNN best_knn = KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=15, p=1, weights='distance') # 5. GPR best_gpr = GaussianProcessRegressor(alpha=0.01, copy_X_train=True, kernel=None, n_restarts_optimizer=0, normalize_y=True, optimizer='fmin_l_bfgs_b', random_state=None) # 6. Decision Tree best_dt = DecisionTreeRegressor(criterion='friedman_mse', max_depth=3, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=2, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='best') # 7.
Random Forest best_rf = RandomForestRegressor(bootstrap=True, criterion='mae', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start='True') # 8. Extra Trees Regressor best_etr = ExtraTreesRegressor(bootstrap=False, criterion='friedman_mse', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start='True') # 9. GBR best_gbr = ensemble.GradientBoostingRegressor(alpha=0.9, criterion='mae', init=None, learning_rate=0.1, loss='lad', max_depth=3, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_iter_no_change=None, presort='auto', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) # 10. 
XGB best_xgb = xgb.XGBRegressor(alpha=10, base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0, importance_type='gain', learning_rate=0.5, max_delta_step=0, max_depth=4, min_child_weight=1, missing=None, n_estimators=10, n_jobs=1, nthread=None, objective='reg:linear', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, validate_parameters=False, verbosity=1) best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr, best_dt, best_rf, best_etr, best_gbr, best_xgb] best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr', 'best_dt', 'best_rf', 'best_etr' , 'best_gbr', 'best_xgb'] k = 0 df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ]) for model in best_models: print('Running model number:', k+1, 'with Model Name: ', best_models_name[k]) r2_scores = [] mse_scores = [] mape_scores = [] mae_scores = [] # cv = KFold(n_splits = 10, random_state = 42, shuffle = True) cv = ShuffleSplit(n_splits=10, random_state=0) # print(cv) fold = 1 for train_index, test_index in cv.split(X): model_orig = model # print("Train Index: ", train_index, "\n") # print("Test Index: ", test_index) X_train_fold, X_test_fold, Y_train_fold, Y_test_fold = X[train_index], X[test_index], Y[train_index], Y[test_index] # print(X_train_fold.shape, X_test_fold.shape, Y_train_fold.shape, Y_test_fold.shape) model_orig.fit(X_train_fold, Y_train_fold) Y_pred_fold = model_orig.predict(X_test_fold) # save the folds to disk data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold] filename = path_for_saving_data + '/folds_data/' + best_models_name[k] +'_'+ str(fold) + '.pickle' pickle.dump(data, open(filename, 'wb')) # save the model to disk filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav' fold = fold + 1 pickle.dump(model_orig, open(filename, 'wb')) # some time later... 
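# Sketch: the pickle.dump calls above persist each fold's data and fitted model.
# A self-contained round-trip with the same pattern (hypothetical temp path, and
# a plain dict standing in for a fitted estimator):
import os, pickle, tempfile
_demo_obj = {'model_name': 'best_svr', 'fold': 1}
_demo_path = os.path.join(tempfile.gettempdir(), 'demo_fold.pickle')
with open(_demo_path, 'wb') as f:
    pickle.dump(_demo_obj, f)          # serialize to disk, as in the loop above
with open(_demo_path, 'rb') as f:
    _demo_loaded = pickle.load(f)      # reload an equal copy later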
''' # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) result = loaded_model.score(X_test, Y_test) print(result) ''' # scores.append(best_svr.score(X_test, y_test)) ''' plt.figure() plt.plot(Y_test_fold, 'b') plt.plot(Y_pred_fold, 'r') ''' # print('Accuracy =',accuracy_score(Y_test, Y_pred)) r2_scores.append(r2_score(Y_test_fold, Y_pred_fold)) mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold)) mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold)) mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold)) df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name , 'r2': r2_scores, 'mse': mse_scores, 'mape': mape_scores, 'mae': mae_scores }, ignore_index=True) k = k + 1 print(df.head()) df.to_csv(path_for_saving_data + '.csv') # print('MSE for 10 folds\n', mse_scores) # print('\nR2 scores for 10 folds\n', r2_scores) # print('\nMAPE for 10 folds\n', mape_scores) # print('\nMAE scores for 10 folds\n', mae_scores) # print('\nMean MSE = ', np.mean(mse_scores), '\nMedian MSE = ', np.median(mse_scores)) # print('\nMean R2 score =',np.mean(r2_scores), '\nMedian R2 scores = ', np.median(r2_scores)) # print('\nMean Absolute Percentage Error =',np.mean(mape_scores), # '\nMedian Absolute Percentage Error =', np.median(mape_scores)) # print('\nMean MAE =',np.mean(mae_scores), # '\nMedian MAE =', np.median(mae_scores)) dataset_name = 'runtimes_final_npb_ep' dataset_path = '\\Dataset_CSV\\all_datasets\\runtimes_final_npb_ep.csv.csv' path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name process_all_ npbEP_physical(dataset_path, dataset_name, path_for_saving_data) ``` # Dataset 7 : npbMG_physical ``` def process_all_npbMG_physical(dataset_path, dataset_name, path_for_saving_data): ################## Data Preprocessing ###################### df = pd.read_csv(dataset_path) encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa = None, 
encoder_mem_type=None) # total_data = encoded_data_frame.drop(columns = ['arch', 'arch1']) total_data = encoded_data_frame.drop(columns = ['arch']) total_data = total_data.fillna(0) X_columns = total_data.drop(columns = 'runtime').columns X = total_data.drop(columns = ['runtime']).to_numpy() Y = total_data['runtime'].to_numpy() # X_columns = total_data.drop(columns = 'PS').columns # X = total_data.drop(columns = ['runtime','PS']).to_numpy() # Y = total_data['runtime'].to_numpy() print('Data X and Y shape', X.shape, Y.shape) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42) print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape) scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) # transform only: reuse the training-set statistics to avoid test-set leakage ################## Data Preprocessing ###################### # Put best models here using grid search # 1. SVR best_svr = SVR(C=1000, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.1, kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False) # 2. LR best_lr = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False) # 3. RR best_rr = linear_model.Ridge(alpha=10, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, random_state=None, solver='lsqr', tol=0.001) # 4. KNN best_knn = KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=15, p=1, weights='distance') # 5. GPR best_gpr = GaussianProcessRegressor(alpha=0.01, copy_X_train=True, kernel=None, n_restarts_optimizer=0, normalize_y=True, optimizer='fmin_l_bfgs_b', random_state=None) # 6. Decision Tree best_dt = DecisionTreeRegressor(criterion='friedman_mse', max_depth=3, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=2, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='best') # 7.
Random Forest best_rf = RandomForestRegressor(bootstrap=True, criterion='mae', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start='True') # 8. Extra Trees Regressor best_etr = ExtraTreesRegressor(bootstrap=False, criterion='friedman_mse', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start='True') # 9. GBR best_gbr = ensemble.GradientBoostingRegressor(alpha=0.9, criterion='mae', init=None, learning_rate=0.1, loss='lad', max_depth=3, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_iter_no_change=None, presort='auto', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) # 10. 
XGB best_xgb = xgb.XGBRegressor(alpha=10, base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0, importance_type='gain', learning_rate=0.5, max_delta_step=0, max_depth=4, min_child_weight=1, missing=None, n_estimators=10, n_jobs=1, nthread=None, objective='reg:linear', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, validate_parameters=False, verbosity=1) best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr, best_dt, best_rf, best_etr, best_gbr, best_xgb] best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr', 'best_dt', 'best_rf', 'best_etr' , 'best_gbr', 'best_xgb'] k = 0 df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ]) for model in best_models: print('Running model number:', k+1, 'with Model Name: ', best_models_name[k]) r2_scores = [] mse_scores = [] mape_scores = [] mae_scores = [] # cv = KFold(n_splits = 10, random_state = 42, shuffle = True) cv = ShuffleSplit(n_splits=10, random_state=0) # print(cv) fold = 1 for train_index, test_index in cv.split(X): model_orig = model # print("Train Index: ", train_index, "\n") # print("Test Index: ", test_index) X_train_fold, X_test_fold, Y_train_fold, Y_test_fold = X[train_index], X[test_index], Y[train_index], Y[test_index] # print(X_train_fold.shape, X_test_fold.shape, Y_train_fold.shape, Y_test_fold.shape) model_orig.fit(X_train_fold, Y_train_fold) Y_pred_fold = model_orig.predict(X_test_fold) # save the folds to disk data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold] filename = path_for_saving_data + '/folds_data/' + best_models_name[k] +'_'+ str(fold) + '.pickle' pickle.dump(data, open(filename, 'wb')) # save the model to disk filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav' fold = fold + 1 pickle.dump(model_orig, open(filename, 'wb')) # some time later... 
''' # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) result = loaded_model.score(X_test, Y_test) print(result) ''' # scores.append(best_svr.score(X_test, y_test)) ''' plt.figure() plt.plot(Y_test_fold, 'b') plt.plot(Y_pred_fold, 'r') ''' # print('Accuracy =',accuracy_score(Y_test, Y_pred)) r2_scores.append(r2_score(Y_test_fold, Y_pred_fold)) mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold)) mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold)) mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold)) df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name , 'r2': r2_scores, 'mse': mse_scores, 'mape': mape_scores, 'mae': mae_scores }, ignore_index=True) k = k + 1 print(df.head()) df.to_csv(path_for_saving_data + '.csv') # print('MSE for 10 folds\n', mse_scores) # print('\nR2 scores for 10 folds\n', r2_scores) # print('\nMAPE for 10 folds\n', mape_scores) # print('\nMAE scores for 10 folds\n', mae_scores) # print('\nMean MSE = ', np.mean(mse_scores), '\nMedian MSE = ', np.median(mse_scores)) # print('\nMean R2 score =',np.mean(r2_scores), '\nMedian R2 scores = ', np.median(r2_scores)) # print('\nMean Absolute Percentage Error =',np.mean(mape_scores), # '\nMedian Absolute Percentage Error =', np.median(mape_scores)) # print('\nMean MAE =',np.mean(mae_scores), # '\nMedian MAE =', np.median(mae_scores)) dataset_name = 'runtimes_final_npb_mg' dataset_path = '\\Dataset_CSV\\all_datasets\\runtimes_final_npb_mg.csv' path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name process_all_npbMG_physical(dataset_path, dataset_name, path_for_saving_data) ``` # Dataset 8 : sha_physical ``` def process_all_sha_physical(dataset_path, dataset_name, path_for_saving_data): ################## Data Preprocessing ###################### df = pd.read_csv(dataset_path) encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa = None, encoder_mem_type=None) # 
total_data = encoded_data_frame.drop(columns = ['arch', 'arch1']) total_data = encoded_data_frame.drop(columns = ['arch']) total_data = total_data.fillna(0) X_columns = total_data.drop(columns = 'runtime').columns X = total_data.drop(columns = ['runtime']).to_numpy() Y = total_data['runtime'].to_numpy() # X_columns = total_data.drop(columns = 'PS').columns # X = total_data.drop(columns = ['runtime','PS']).to_numpy() # Y = total_data['runtime'].to_numpy() print('Data X and Y shape', X.shape, Y.shape) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42) print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape) scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) # transform only: reuse the training-set statistics to avoid test-set leakage ################## Data Preprocessing ###################### # Put best models here using grid search # 1. SVR best_svr = SVR(C=1000, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.1, kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False) # 2. LR best_lr = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=True) # 3. RR best_rr = linear_model.Ridge(alpha=10, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, random_state=None, solver='lsqr', tol=0.001) # 4. KNN best_knn = KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=13, p=7, weights='distance') # 5. GPR best_gpr = GaussianProcessRegressor(alpha=0.01, copy_X_train=True, kernel=None, n_restarts_optimizer=0, normalize_y=True, optimizer='fmin_l_bfgs_b', random_state=None) # 6. Decision Tree best_dt = DecisionTreeRegressor(criterion='friedman_mse', max_depth=15, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=2, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='random') # 7.
Random Forest best_rf = RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start=False) # 8. Extra Trees Regressor best_etr = ExtraTreesRegressor(bootstrap=False, criterion='mse', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=200, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start=True) # 9. GBR best_gbr = ensemble.GradientBoostingRegressor(alpha=0.9, criterion='mse', init=None, learning_rate=0.1, loss='lad', max_depth=5, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_iter_no_change=None, presort='auto', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) # 10.
XGB best_xgb = xgb.XGBRegressor(alpha=10, base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0, importance_type='gain', learning_rate=0.5, max_delta_step=0, max_depth=4, min_child_weight=1, missing=None, n_estimators=10, n_jobs=1, nthread=None, objective='reg:linear', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, validate_parameters=False, verbosity=1) best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr, best_dt, best_rf, best_etr, best_gbr, best_xgb] best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr', 'best_dt', 'best_rf', 'best_etr' , 'best_gbr', 'best_xgb'] k = 0 df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ]) for model in best_models: print('Running model number:', k+1, 'with Model Name: ', best_models_name[k]) r2_scores = [] mse_scores = [] mape_scores = [] mae_scores = [] # cv = KFold(n_splits = 10, random_state = 42, shuffle = True) cv = ShuffleSplit(n_splits=10, random_state=0) # print(cv) fold = 1 for train_index, test_index in cv.split(X): model_orig = model # print("Train Index: ", train_index, "\n") # print("Test Index: ", test_index) X_train_fold, X_test_fold, Y_train_fold, Y_test_fold = X[train_index], X[test_index], Y[train_index], Y[test_index] # print(X_train_fold.shape, X_test_fold.shape, Y_train_fold.shape, Y_test_fold.shape) model_orig.fit(X_train_fold, Y_train_fold) Y_pred_fold = model_orig.predict(X_test_fold) # save the folds to disk data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold] filename = path_for_saving_data + '/folds_data/' + best_models_name[k] +'_'+ str(fold) + '.pickle' pickle.dump(data, open(filename, 'wb')) # save the model to disk filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav' fold = fold + 1 pickle.dump(model_orig, open(filename, 'wb')) # some time later... 
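# Sketch: the commented-out print block below summarizes the per-fold scores with
# np.mean / np.median; the stdlib statistics module gives the same summaries.
# The fold values here are illustrative placeholders, not measured results.
import statistics
_demo_r2 = [0.91, 0.88, 0.93]  # hypothetical R2 scores from three folds
_demo_mean = statistics.mean(_demo_r2)
_demo_median = statistics.median(_demo_r2)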
''' # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) result = loaded_model.score(X_test, Y_test) print(result) ''' # scores.append(best_svr.score(X_test, y_test)) ''' plt.figure() plt.plot(Y_test_fold, 'b') plt.plot(Y_pred_fold, 'r') ''' # print('Accuracy =',accuracy_score(Y_test, Y_pred)) r2_scores.append(r2_score(Y_test_fold, Y_pred_fold)) mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold)) mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold)) mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold)) df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name , 'r2': r2_scores, 'mse': mse_scores, 'mape': mape_scores, 'mae': mae_scores }, ignore_index=True) k = k + 1 print(df.head()) df.to_csv(path_for_saving_data + '.csv') # print('MSE for 10 folds\n', mse_scores) # print('\nR2 scores for 10 folds\n', r2_scores) # print('\nMAPE for 10 folds\n', mape_scores) # print('\nMAE scores for 10 folds\n', mae_scores) # print('\nMean MSE = ', np.mean(mse_scores), '\nMedian MSE = ', np.median(mse_scores)) # print('\nMean R2 score =',np.mean(r2_scores), '\nMedian R2 scores = ', np.median(r2_scores)) # print('\nMean Absolute Percentage Error =',np.mean(mape_scores), # '\nMedian Absolute Percentage Error =', np.median(mape_scores)) # print('\nMean MAE =',np.mean(mae_scores), # '\nMedian MAE =', np.median(mae_scores)) dataset_name = 'sha_physical' dataset_path = '\\Dataset_CSV\\all_datasets\\sha_physical.csv' path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name process_all_sha_physical(dataset_path, dataset_name, path_for_saving_data) ``` # Dataset 9 : sha_simulated ``` def process_all_sha_simulated(dataset_path, dataset_name, path_for_saving_data): ################## Data Preprocessing ###################### df = pd.read_csv(dataset_path) encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa = None, encoder_mem_type=None) # total_data = 
encoded_data_frame.drop(columns = ['arch', 'arch1']) total_data = encoded_data_frame.drop(columns = ['arch']) total_data = total_data.fillna(0) X_columns = total_data.drop(columns = 'runtime').columns X = total_data.drop(columns = ['runtime']).to_numpy() Y = total_data['runtime'].to_numpy() # X_columns = total_data.drop(columns = 'PS').columns # X = total_data.drop(columns = ['runtime','PS']).to_numpy() # Y = total_data['runtime'].to_numpy() print('Data X and Y shape', X.shape, Y.shape) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42) print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape) scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) # transform only: reuse the training-set statistics to avoid test-set leakage ################## Data Preprocessing ###################### # Put best models here using grid search # 1. SVR best_svr = SVR(C=1000, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.1, kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False) # 2. LR best_lr = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False) # 3. RR best_rr = linear_model.Ridge(alpha=10, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, random_state=None, solver='lsqr', tol=0.001) # 4. KNN best_knn = KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=15, p=1, weights='distance') # 5. GPR best_gpr = GaussianProcessRegressor(alpha=0.01, copy_X_train=True, kernel=None, n_restarts_optimizer=0, normalize_y=True, optimizer='fmin_l_bfgs_b', random_state=None) # 6. Decision Tree best_dt = DecisionTreeRegressor(criterion='friedman_mse', max_depth=3, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=2, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='best') # 7.
Random Forest best_rf = RandomForestRegressor(bootstrap=True, criterion='mae', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start='True') # 8. Extra Trees Regressor best_etr = ExtraTreesRegressor(bootstrap=False, criterion='friedman_mse', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start='True') # 9. GBR best_gbr = ensemble.GradientBoostingRegressor(alpha=0.9, criterion='mae', init=None, learning_rate=0.1, loss='lad', max_depth=3, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_iter_no_change=None, presort='auto', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) # 10. 
XGB best_xgb = xgb.XGBRegressor(alpha=10, base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0, importance_type='gain', learning_rate=0.5, max_delta_step=0, max_depth=4, min_child_weight=1, missing=None, n_estimators=10, n_jobs=1, nthread=None, objective='reg:linear', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, validate_parameters=False, verbosity=1) best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr, best_dt, best_rf, best_etr, best_gbr, best_xgb] best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr', 'best_dt', 'best_rf', 'best_etr' , 'best_gbr', 'best_xgb'] k = 0 df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ]) for model in best_models: print('Running model number:', k+1, 'with Model Name: ', best_models_name[k]) r2_scores = [] mse_scores = [] mape_scores = [] mae_scores = [] # cv = KFold(n_splits = 10, random_state = 42, shuffle = True) cv = ShuffleSplit(n_splits=10, random_state=0) # print(cv) fold = 1 for train_index, test_index in cv.split(X): model_orig = model # print("Train Index: ", train_index, "\n") # print("Test Index: ", test_index) X_train_fold, X_test_fold, Y_train_fold, Y_test_fold = X[train_index], X[test_index], Y[train_index], Y[test_index] # print(X_train_fold.shape, X_test_fold.shape, Y_train_fold.shape, Y_test_fold.shape) model_orig.fit(X_train_fold, Y_train_fold) Y_pred_fold = model_orig.predict(X_test_fold) # save the folds to disk data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold] filename = path_for_saving_data + '/folds_data/' + best_models_name[k] +'_'+ str(fold) + '.pickle' pickle.dump(data, open(filename, 'wb')) # save the model to disk filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav' fold = fold + 1 pickle.dump(model_orig, open(filename, 'wb')) # some time later... 
''' # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) result = loaded_model.score(X_test, Y_test) print(result) ''' # scores.append(best_svr.score(X_test, y_test)) ''' plt.figure() plt.plot(Y_test_fold, 'b') plt.plot(Y_pred_fold, 'r') ''' # print('Accuracy =',accuracy_score(Y_test, Y_pred)) r2_scores.append(r2_score(Y_test_fold, Y_pred_fold)) mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold)) mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold)) mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold)) df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name , 'r2': r2_scores, 'mse': mse_scores, 'mape': mape_scores, 'mae': mae_scores }, ignore_index=True) k = k + 1 print(df.head()) df.to_csv(path_for_saving_data + '.csv') # print('MSE for 10 folds\n', mse_scores) # print('\nR2 scores for 10 folds\n', r2_scores) # print('\nMAPE for 10 folds\n', mape_scores) # print('\nMAE scores for 10 folds\n', mae_scores) # print('\nMean MSE = ', np.mean(mse_scores), '\nMedian MSE = ', np.median(mse_scores)) # print('\nMean R2 score =',np.mean(r2_scores), '\nMedian R2 scores = ', np.median(r2_scores)) # print('\nMean Absolute Percentage Error =',np.mean(mape_scores), # '\nMedian Absolute Percentage Error =', np.median(mape_scores)) # print('\nMean MAE =',np.mean(mae_scores), # '\nMedian MAE =', np.median(mae_scores)) dataset_name = 'sha_simulated' dataset_path = '\\Dataset_CSV\\all_datasets\\sha_simulated.csv' path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name process_all_sha_simulated(dataset_path, dataset_name, path_for_saving_data) ``` # Dataset 10 : stitch_physical ``` def process_all_stitch_physical(dataset_path, dataset_name, path_for_saving_data): ################## Data Preprocessing ###################### df = pd.read_csv(dataset_path) encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa = None, encoder_mem_type=None) # total_data 
= encoded_data_frame.drop(columns = ['arch', 'arch1']) total_data = encoded_data_frame.drop(columns = ['arch']) total_data = total_data.fillna(0) X_columns = total_data.drop(columns = 'runtime').columns X = total_data.drop(columns = ['runtime']).to_numpy() Y = total_data['runtime'].to_numpy() # X_columns = total_data.drop(columns = 'PS').columns # X = total_data.drop(columns = ['runtime','PS']).to_numpy() # Y = total_data['runtime'].to_numpy() print('Data X and Y shape', X.shape, Y.shape) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42) print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape) scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) # transform only: reuse the training-set statistics to avoid test-set leakage ################## Data Preprocessing ###################### # Put best models here using grid search # 1. SVR best_svr = SVR(C=1000, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.1, kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False) # 2. LR best_lr = LinearRegression(copy_X=True, fit_intercept=False, n_jobs=None, normalize=True) # 3. RR best_rr = linear_model.Ridge(alpha=0.1, copy_X=True, fit_intercept=True, max_iter=None, normalize=True, random_state=None, solver='sparse_cg', tol=0.001) # 4. KNN best_knn = KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=13, p=4, weights='distance') # 5. GPR best_gpr = GaussianProcessRegressor(alpha=0.01, copy_X_train=True, kernel=None, n_restarts_optimizer=0, normalize_y=False, optimizer='fmin_l_bfgs_b', random_state=None) # 6. Decision Tree best_dt = DecisionTreeRegressor(criterion='mae', max_depth=9, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='random') # 7.
Random Forest best_rf = RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=4, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start='False') # 8. Extra Trees Regressor best_etr = ExtraTreesRegressor(bootstrap=False, criterion='friedman_mse', max_depth=4, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start='True') # 9. GBR best_gbr = ensemble.GradientBoostingRegressor(alpha=0.9, criterion='mse', init=None, learning_rate=0.1, loss='lad', max_depth=5, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_iter_no_change=None, presort='auto', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) # 10. 
XGB best_xgb = xgb.XGBRegressor(alpha=10, base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0, importance_type='gain', learning_rate=0.5, max_delta_step=0, max_depth=4, min_child_weight=1, missing=None, n_estimators=10, n_jobs=1, nthread=None, objective='reg:linear', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, validate_parameters=False, verbosity=1) best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr, best_dt, best_rf, best_etr, best_gbr, best_xgb] best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr', 'best_dt', 'best_rf', 'best_etr' , 'best_gbr', 'best_xgb'] k = 0 df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ]) for model in best_models: print('Running model number:', k+1, 'with Model Name: ', best_models_name[k]) r2_scores = [] mse_scores = [] mape_scores = [] mae_scores = [] # cv = KFold(n_splits = 10, random_state = 42, shuffle = True) cv = ShuffleSplit(n_splits=10, random_state=0) # print(cv) fold = 1 for train_index, test_index in cv.split(X): model_orig = model # print("Train Index: ", train_index, "\n") # print("Test Index: ", test_index) X_train_fold, X_test_fold, Y_train_fold, Y_test_fold = X[train_index], X[test_index], Y[train_index], Y[test_index] # print(X_train_fold.shape, X_test_fold.shape, Y_train_fold.shape, Y_test_fold.shape) model_orig.fit(X_train_fold, Y_train_fold) Y_pred_fold = model_orig.predict(X_test_fold) # save the folds to disk data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold] filename = path_for_saving_data + '/folds_data/' + best_models_name[k] +'_'+ str(fold) + '.pickle' pickle.dump(data, open(filename, 'wb')) # save the model to disk filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav' fold = fold + 1 pickle.dump(model_orig, open(filename, 'wb')) # some time later... 
''' # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) result = loaded_model.score(X_test, Y_test) print(result) ''' # scores.append(best_svr.score(X_test, y_test)) ''' plt.figure() plt.plot(Y_test_fold, 'b') plt.plot(Y_pred_fold, 'r') ''' # print('Accuracy =',accuracy_score(Y_test, Y_pred)) r2_scores.append(r2_score(Y_test_fold, Y_pred_fold)) mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold)) mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold)) mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold)) df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name , 'r2': r2_scores, 'mse': mse_scores, 'mape': mape_scores, 'mae': mae_scores }, ignore_index=True) k = k + 1 print(df.head()) df.to_csv(path_for_saving_data + '.csv') # print('MSE for 10 folds\n', mse_scores) # print('\nR2 scores for 10 folds\n', r2_scores) # print('\nMAPE for 10 folds\n', mape_scores) # print('\nMAE scores for 10 folds\n', mae_scores) # print('\nMean MSE = ', np.mean(mse_scores), '\nMedian MSE = ', np.median(mse_scores)) # print('\nMean R2 score =',np.mean(r2_scores), '\nMedian R2 scores = ', np.median(r2_scores)) # print('\nMean Absolute Percentage Error =',np.mean(mape_scores), # '\nMedian Absolute Percentage Error =', np.median(mape_scores)) # print('\nMean MAE =',np.mean(mae_scores), # '\nMedian MAE =', np.median(mae_scores)) dataset_name = 'stitch_physical' dataset_path = '\\Dataset_CSV\\all_datasets\\stitch_physical.csv' path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name process_all_stitch_physical(dataset_path, dataset_name, path_for_saving_data) ``` # Dataset 11 : stitch_simulated ``` def process_all_stitch_simulated(dataset_path, dataset_name, path_for_saving_data): ################## Data Preprocessing ###################### df = pd.read_csv(dataset_path) encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa = None, encoder_mem_type=None) # 
total_data = encoded_data_frame.drop(columns = ['arch', 'arch1']) total_data = encoded_data_frame.drop(columns = ['arch']) total_data = total_data.fillna(0) X_columns = total_data.drop(columns = 'runtime').columns X = total_data.drop(columns = ['runtime']).to_numpy() Y = total_data['runtime'].to_numpy() # X_columns = total_data.drop(columns = 'PS').columns # X = total_data.drop(columns = ['runtime','PS']).to_numpy() # Y = total_data['runtime'].to_numpy() print('Data X and Y shape', X.shape, Y.shape) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42) print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape) scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.fit_transform(X_test) ################## Data Preprocessing ###################### # Put best models here using grid search # 1. SVR best_svr = SVR(C=1000, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.1, kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False) # 2. LR best_lr = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False) # 3. RR best_rr = linear_model.Ridge(alpha=10, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, random_state=None, solver='lsqr', tol=0.001) # 4. KNN best_knn = KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=15, p=1, weights='distance') # 5. GPR best_gpr = GaussianProcessRegressor(alpha=0.01, copy_X_train=True, kernel=None, n_restarts_optimizer=0, normalize_y=True, optimizer='fmin_l_bfgs_b', random_state=None) # 6. Decision Tree best_dt = DecisionTreeRegressor(criterion='friedman_mse', max_depth=3, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=2, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='best') # 7. 
Random Forest best_rf = RandomForestRegressor(bootstrap=True, criterion='mae', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start='True') # 8. Extra Trees Regressor best_etr = ExtraTreesRegressor(bootstrap=False, criterion='friedman_mse', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start='True') # 9. GBR best_gbr = ensemble.GradientBoostingRegressor(alpha=0.9, criterion='mae', init=None, learning_rate=0.1, loss='lad', max_depth=3, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_iter_no_change=None, presort='auto', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) # 10. 
XGB best_xgb = xgb.XGBRegressor(alpha=10, base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0, importance_type='gain', learning_rate=0.5, max_delta_step=0, max_depth=4, min_child_weight=1, missing=None, n_estimators=10, n_jobs=1, nthread=None, objective='reg:linear', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, validate_parameters=False, verbosity=1) best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr, best_dt, best_rf, best_etr, best_gbr, best_xgb] best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr', 'best_dt', 'best_rf', 'best_etr' , 'best_gbr', 'best_xgb'] k = 0 df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ]) for model in best_models: print('Running model number:', k+1, 'with Model Name: ', best_models_name[k]) r2_scores = [] mse_scores = [] mape_scores = [] mae_scores = [] # cv = KFold(n_splits = 10, random_state = 42, shuffle = True) cv = ShuffleSplit(n_splits=10, random_state=0) # print(cv) fold = 1 for train_index, test_index in cv.split(X): model_orig = model # print("Train Index: ", train_index, "\n") # print("Test Index: ", test_index) X_train_fold, X_test_fold, Y_train_fold, Y_test_fold = X[train_index], X[test_index], Y[train_index], Y[test_index] # print(X_train_fold.shape, X_test_fold.shape, Y_train_fold.shape, Y_test_fold.shape) model_orig.fit(X_train_fold, Y_train_fold) Y_pred_fold = model_orig.predict(X_test_fold) # save the folds to disk data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold] filename = path_for_saving_data + '/folds_data/' + best_models_name[k] +'_'+ str(fold) + '.pickle' pickle.dump(data, open(filename, 'wb')) # save the model to disk filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav' fold = fold + 1 pickle.dump(model_orig, open(filename, 'wb')) # some time later... 
''' # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) result = loaded_model.score(X_test, Y_test) print(result) ''' # scores.append(best_svr.score(X_test, y_test)) ''' plt.figure() plt.plot(Y_test_fold, 'b') plt.plot(Y_pred_fold, 'r') ''' # print('Accuracy =',accuracy_score(Y_test, Y_pred)) r2_scores.append(r2_score(Y_test_fold, Y_pred_fold)) mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold)) mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold)) mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold)) df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name , 'r2': r2_scores, 'mse': mse_scores, 'mape': mape_scores, 'mae': mae_scores }, ignore_index=True) k = k + 1 print(df.head()) df.to_csv(path_for_saving_data + '.csv') # print('MSE for 10 folds\n', mse_scores) # print('\nR2 scores for 10 folds\n', r2_scores) # print('\nMAPE for 10 folds\n', mape_scores) # print('\nMAE scores for 10 folds\n', mae_scores) # print('\nMean MSE = ', np.mean(mse_scores), '\nMedian MSE = ', np.median(mse_scores)) # print('\nMean R2 score =',np.mean(r2_scores), '\nMedian R2 scores = ', np.median(r2_scores)) # print('\nMean Absolute Percentage Error =',np.mean(mape_scores), # '\nMedian Absolute Percentage Error =', np.median(mape_scores)) # print('\nMean MAE =',np.mean(mae_scores), # '\nMedian MAE =', np.median(mae_scores)) dataset_name = 'stitch_simulated' dataset_path = '\\Dataset_CSV\\all_datasets\\stitch_simulated.csv' path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name process_all_stitch_simulated(dataset_path, dataset_name, path_for_saving_data) ``` # Dataset 12 : svm_physical ``` def process_all_svm_physical(dataset_path, dataset_name, path_for_saving_data): ################## Data Preprocessing ###################### df = pd.read_csv(dataset_path) encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa = None, encoder_mem_type=None) # 
total_data = encoded_data_frame.drop(columns = ['arch', 'arch1']) total_data = encoded_data_frame.drop(columns = ['arch']) total_data = total_data.fillna(0) X_columns = total_data.drop(columns = 'runtime').columns X = total_data.drop(columns = ['runtime']).to_numpy() Y = total_data['runtime'].to_numpy() # X_columns = total_data.drop(columns = 'PS').columns # X = total_data.drop(columns = ['runtime','PS']).to_numpy() # Y = total_data['runtime'].to_numpy() print('Data X and Y shape', X.shape, Y.shape) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42) print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape) scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.fit_transform(X_test) ################## Data Preprocessing ###################### # Put best models here using grid search # 1. SVR best_svr = SVR(C=1000, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.1, kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False) # 2. LR best_lr = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False) # 3. RR best_rr = linear_model.Ridge(alpha=0.1, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, random_state=None, solver='sparse_cg', tol=0.001) # 4. KNN best_knn = KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=2, p=1, weights='distance') # 5. GPR best_gpr = GaussianProcessRegressor(alpha=0.01, copy_X_train=True, kernel=None, n_restarts_optimizer=0, normalize_y=True, optimizer='fmin_l_bfgs_b', random_state=None) # 6. Decision Tree best_dt = DecisionTreeRegressor(criterion='mse', max_depth=5, max_features='log2', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='random') # 7. 
Random Forest best_rf = RandomForestRegressor(bootstrap=True, criterion='mae', max_depth=4, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start='True') # 8. Extra Trees Regressor best_etr = ExtraTreesRegressor(bootstrap=False, criterion='mse', max_depth=5, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start='True') # 9. GBR best_gbr = ensemble.GradientBoostingRegressor(alpha=0.9, criterion='mae', init=None, learning_rate=0.1, loss='lad', max_depth=3, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_iter_no_change=None, presort='auto', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) # 10. 
XGB best_xgb = xgb.XGBRegressor(alpha=10, base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0, importance_type='gain', learning_rate=0.5, max_delta_step=0, max_depth=4, min_child_weight=1, missing=None, n_estimators=10, n_jobs=1, nthread=None, objective='reg:linear', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, validate_parameters=False, verbosity=1) best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr, best_dt, best_rf, best_etr, best_gbr, best_xgb] best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr', 'best_dt', 'best_rf', 'best_etr' , 'best_gbr', 'best_xgb'] k = 0 df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ]) for model in best_models: print('Running model number:', k+1, 'with Model Name: ', best_models_name[k]) r2_scores = [] mse_scores = [] mape_scores = [] mae_scores = [] # cv = KFold(n_splits = 10, random_state = 42, shuffle = True) cv = ShuffleSplit(n_splits=10, random_state=0) # print(cv) fold = 1 for train_index, test_index in cv.split(X): model_orig = model # print("Train Index: ", train_index, "\n") # print("Test Index: ", test_index) X_train_fold, X_test_fold, Y_train_fold, Y_test_fold = X[train_index], X[test_index], Y[train_index], Y[test_index] # print(X_train_fold.shape, X_test_fold.shape, Y_train_fold.shape, Y_test_fold.shape) model_orig.fit(X_train_fold, Y_train_fold) Y_pred_fold = model_orig.predict(X_test_fold) # save the folds to disk data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold] filename = path_for_saving_data + '/folds_data/' + best_models_name[k] +'_'+ str(fold) + '.pickle' pickle.dump(data, open(filename, 'wb')) # save the model to disk filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav' fold = fold + 1 pickle.dump(model_orig, open(filename, 'wb')) # some time later... 
''' # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) result = loaded_model.score(X_test, Y_test) print(result) ''' # scores.append(best_svr.score(X_test, y_test)) ''' plt.figure() plt.plot(Y_test_fold, 'b') plt.plot(Y_pred_fold, 'r') ''' # print('Accuracy =',accuracy_score(Y_test, Y_pred)) r2_scores.append(r2_score(Y_test_fold, Y_pred_fold)) mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold)) mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold)) mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold)) df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name , 'r2': r2_scores, 'mse': mse_scores, 'mape': mape_scores, 'mae': mae_scores }, ignore_index=True) k = k + 1 print(df.head()) df.to_csv(path_for_saving_data + '.csv') # print('MSE for 10 folds\n', mse_scores) # print('\nR2 scores for 10 folds\n', r2_scores) # print('\nMAPE for 10 folds\n', mape_scores) # print('\nMAE scores for 10 folds\n', mae_scores) # print('\nMean MSE = ', np.mean(mse_scores), '\nMedian MSE = ', np.median(mse_scores)) # print('\nMean R2 score =',np.mean(r2_scores), '\nMedian R2 scores = ', np.median(r2_scores)) # print('\nMean Absolute Percentage Error =',np.mean(mape_scores), # '\nMedian Absolute Percentage Error =', np.median(mape_scores)) # print('\nMean MAE =',np.mean(mae_scores), # '\nMedian MAE =', np.median(mae_scores)) dataset_name = 'svm_physical' dataset_path = '\\Dataset_CSV\\all_datasets\\svm_physical.csv' path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name process_all_svm_physical(dataset_path, dataset_name, path_for_saving_data) ``` # Dataset 13 : svm_simulated ``` def process_all_svm_simulated(dataset_path, dataset_name, path_for_saving_data): ################## Data Preprocessing ###################### df = pd.read_csv(dataset_path) encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa = None, encoder_mem_type=None) # total_data = 
encoded_data_frame.drop(columns = ['arch', 'arch1']) total_data = encoded_data_frame.drop(columns = ['arch']) total_data = total_data.fillna(0) X_columns = total_data.drop(columns = 'runtime').columns X = total_data.drop(columns = ['runtime']).to_numpy() Y = total_data['runtime'].to_numpy() # X_columns = total_data.drop(columns = 'PS').columns # X = total_data.drop(columns = ['runtime','PS']).to_numpy() # Y = total_data['runtime'].to_numpy() print('Data X and Y shape', X.shape, Y.shape) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42) print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape) scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.fit_transform(X_test) ################## Data Preprocessing ###################### # Put best models here using grid search # 1. SVR best_svr = SVR(C=1000, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.1, kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False) # 2. LR best_lr = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False) # 3. RR best_rr = linear_model.Ridge(alpha=10, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, random_state=None, solver='lsqr', tol=0.001) # 4. KNN best_knn = KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=15, p=1, weights='distance') # 5. GPR best_gpr = GaussianProcessRegressor(alpha=0.01, copy_X_train=True, kernel=None, n_restarts_optimizer=0, normalize_y=True, optimizer='fmin_l_bfgs_b', random_state=None) # 6. Decision Tree best_dt = DecisionTreeRegressor(criterion='friedman_mse', max_depth=3, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=2, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='best') # 7. 
Random Forest best_rf = RandomForestRegressor(bootstrap=True, criterion='mae', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start='True') # 8. Extra Trees Regressor best_etr = ExtraTreesRegressor(bootstrap=False, criterion='friedman_mse', max_depth=3, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start='True') # 9. GBR best_gbr = ensemble.GradientBoostingRegressor(alpha=0.9, criterion='mae', init=None, learning_rate=0.1, loss='lad', max_depth=3, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_iter_no_change=None, presort='auto', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) # 10. 
XGB best_xgb = xgb.XGBRegressor(alpha=10, base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0, importance_type='gain', learning_rate=0.5, max_delta_step=0, max_depth=4, min_child_weight=1, missing=None, n_estimators=10, n_jobs=1, nthread=None, objective='reg:linear', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, validate_parameters=False, verbosity=1) best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr, best_dt, best_rf, best_etr, best_gbr, best_xgb] best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr', 'best_dt', 'best_rf', 'best_etr' , 'best_gbr', 'best_xgb'] k = 0 df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ]) for model in best_models: print('Running model number:', k+1, 'with Model Name: ', best_models_name[k]) r2_scores = [] mse_scores = [] mape_scores = [] mae_scores = [] # cv = KFold(n_splits = 10, random_state = 42, shuffle = True) cv = ShuffleSplit(n_splits=10, random_state=0) # print(cv) fold = 1 for train_index, test_index in cv.split(X): model_orig = model # print("Train Index: ", train_index, "\n") # print("Test Index: ", test_index) X_train_fold, X_test_fold, Y_train_fold, Y_test_fold = X[train_index], X[test_index], Y[train_index], Y[test_index] # print(X_train_fold.shape, X_test_fold.shape, Y_train_fold.shape, Y_test_fold.shape) model_orig.fit(X_train_fold, Y_train_fold) Y_pred_fold = model_orig.predict(X_test_fold) # save the folds to disk data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold] filename = path_for_saving_data + '/folds_data/' + best_models_name[k] +'_'+ str(fold) + '.pickle' pickle.dump(data, open(filename, 'wb')) # save the model to disk filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav' fold = fold + 1 pickle.dump(model_orig, open(filename, 'wb')) # some time later... 
''' # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) result = loaded_model.score(X_test, Y_test) print(result) ''' # scores.append(best_svr.score(X_test, y_test)) ''' plt.figure() plt.plot(Y_test_fold, 'b') plt.plot(Y_pred_fold, 'r') ''' # print('Accuracy =',accuracy_score(Y_test, Y_pred)) r2_scores.append(r2_score(Y_test_fold, Y_pred_fold)) mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold)) mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold)) mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold)) df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name , 'r2': r2_scores, 'mse': mse_scores, 'mape': mape_scores, 'mae': mae_scores }, ignore_index=True) k = k + 1 print(df.head()) df.to_csv(path_for_saving_data + '.csv') # print('MSE for 10 folds\n', mse_scores) # print('\nR2 scores for 10 folds\n', r2_scores) # print('\nMAPE for 10 folds\n', mape_scores) # print('\nMAE scores for 10 folds\n', mae_scores) # print('\nMean MSE = ', np.mean(mse_scores), '\nMedian MSE = ', np.median(mse_scores)) # print('\nMean R2 score =',np.mean(r2_scores), '\nMedian R2 scores = ', np.median(r2_scores)) # print('\nMean Absolute Percentage Error =',np.mean(mape_scores), # '\nMedian Absolute Percentage Error =', np.median(mape_scores)) # print('\nMean MAE =',np.mean(mae_scores), # '\nMedian MAE =', np.median(mae_scores)) dataset_name = 'svm_simulated' dataset_path = '\\Dataset_CSV\\all_datasets\\svm_simulated.csv' path_for_saving_data = '\\Saved_Models_Data\\' + dataset_name process_all_svm_simulated(dataset_path, dataset_name, path_for_saving_data) ```
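All of the `process_all_*` functions above call a helper `absolute_percentage_error(Y_test_fold, Y_pred_fold)` that is defined earlier in the notebook, outside this excerpt. A plausible NumPy sketch of such a helper is shown below; this is an assumption about its behavior, not the notebook's original definition.

```python
import numpy as np

def absolute_percentage_error(y_true, y_pred):
    # Mean absolute percentage error, in percent; assumes y_true contains no
    # zeros (the runtimes predicted above are strictly positive).
    # NOTE: this is a guess at the helper used in the notebook, not its
    # original implementation.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

print(absolute_percentage_error([100.0, 200.0], [110.0, 180.0]))  # 10.0
```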
# Systems Biology and Medicine Practical: Flux Balance Analysis in Python **Authors**: Thierry D.G.A Mondeel, Stefania Astrologo, Ewelina Weglarz-Tomczak & Hans V. Westerhoff <br/> University of Amsterdam <br/>2016-2018 ## Your work is not automatically saved! These notebooks will remain available online before, during and after the practicals. <span style="color:red">**Note:** ONCE YOU LOG OUT (or click away the tab in your browser) YOUR WORK IS NOT SAVED!</span> ## Two ways to save your work If you wish to keep these notebooks with the adjustments you make, do this: - "File -> Download as -> Notebook". When you open this tutorial at a later date you can upload your saved notebooks in the "Tree view" (where you end up after clicking the Binder image on the initial website). Simply click the "Upload" button in the top-right corner in Tree view and upload your saved notebook. ## Assignments Through the various notebooks we have set up some assignments for you to complete. These will be highlighted in red, occasionally including a time estimate for how long you should maximally spend on this, as follows: <span style="color:red">**Assignment (2 min):**</span> Read the introduction section below. ## Introduction This notebook serves as the hub for the flux balance analysis practicals. The aim of this tutorial is to introduce you to constraint-based modeling and the applications of the human metabolic reconstruction RECON2. At the bottom you will find the table of contents linking to all the various subjects and assignments spread over various notebooks. Return here when you finish each one. **<span style="color:red">Golden rule:</span>** Ask one of the teaching assistants if anything is unclear! There are various software packages that enable the user to perform COnstraint-Based Reconstruction and Analysis (COBRA). We will use the Python based: [Cobrapy](https://github.com/opencobra/cobrapy). 
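Before working through the Cobrapy notebooks, it may help to see that, at its core, flux balance analysis is just a linear program: maximize an objective flux subject to steady-state mass balance and flux bounds. The toy sketch below uses SciPy rather than Cobrapy, and the three-reaction network is hypothetical, not part of RECON2.

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions)
# v1: -> A,  v2: A -> B,  v3: B -> (biomass)
S = np.array([
    [1.0, -1.0,  0.0],   # metabolite A
    [0.0,  1.0, -1.0],   # metabolite B
])

# Steady state (S @ v = 0) with flux bounds 0 <= v_i <= 10
bounds = [(0, 10)] * 3

# Maximize v3; linprog minimizes, so negate the objective
c = np.array([0.0, 0.0, -1.0])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
v = res.x
print("optimal fluxes:", v)  # all three fluxes reach the upper bound of 10
```

Mass balance forces v1 = v2 = v3, so maximizing v3 pushes the whole pathway to its bound; Cobrapy solves exactly this kind of problem, only on genome-scale stoichiometric matrices.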
<span style="color:red">**Assignment (3 min):**</span> Visit the [COBRA website](https://opencobra.github.io/) and read the section "What is COBRA?".

To perform flux balance analysis with Cobrapy you need to understand at least the basics of Python. Therefore, the first two tutorials will cover the basics of the Jupyter interface and Python. If you already have experience with Python feel free to skim (not skip) the first part. Even if you already have some experience, there may be some tips and tricks in the first part that will come in handy later.

After the Python introduction we will introduce Cobrapy and how to do computational analysis on the human metabolic reconstruction. Happy learning!

## Table of Contents

The links below will take you to various parts of this tutorial.

- [Getting to know the Jupyter notebook interface](./FBA_tutorials/0_running_code_in_notebook.ipynb) ~ < 20 min
- [Python essentials](./FBA_tutorials/1_introduction_to_python.ipynb) ~ 40 min
- [A crash course on flux balance analysis](./FBA_tutorials/2_introduction_to_FBA_FVA_RECON2.ipynb) ~ 60 min
- [Integrating cancer cell line transcriptomics on the human metabolic map](./FBA_tutorials/6_Cancer_cell_line_models.ipynb) ~ 60 min

## Concluding remarks

We hope this tutorial achieved its aims to:

* Introduce you to programming with Python
* Illustrate the uses and pitfalls of flux balance analysis
* Show you some real-life examples of using metabolic maps and flux balance analysis
* Illustrate how genome-wide metabolic maps may be used to go beyond "textbook understanding" of pathways and show novel, unexpected alternative ways to achieve metabolic tasks.
* Highlight one interesting way to use metabolic maps by integrating transcriptomics data of cancer cell lines
# Normalizing Flows

In this tutorial we will implement Normalizing Flows. Normalizing Flows are a general framework for defining expressive probability distributions. They can be defined with a simple base distribution and a series of invertible transformations, and are trained using MLE.

Let's begin with a simple task: learn a mapping between two simple distributions.

```
import torch
from torch import distributions as dist
import matplotlib.pyplot as plt
%matplotlib inline

# base distribution with known density
base = dist.Normal(torch.zeros(2), torch.ones(2))

# target distribution with "unknown" density
target = dist.Uniform(low=torch.tensor([4., 0.]), high=torch.tensor([6., 5.]))

plt.figure(dpi=120)
y = base.sample([1000])
plt.scatter(y.data.numpy().T[0], y.data.numpy().T[1], label="base")
y = target.sample([1000])
plt.scatter(y.data.numpy().T[0], y.data.numpy().T[1], label="target")
plt.legend()
```

## Change of variables: the change of volume rule

Let $z \in \mathbb{R}^d$ be a random variable with distribution $p_z(z)$ and $f: \mathbb{R}^d \rightarrow \mathbb{R}^d$ an invertible mapping from $z$ to $y$. We can define the distribution of the resulting variable $y = f(z)$ as follows:

$$ p_y(y) = p_z(f^{-1}(y)) \cdot |\det J(f^{-1}(y))| $$

We can also compose a series of invertible mappings $f_1 \dotso f_n$ together. In this case the distribution of the resulting variable $y = f_n \circ \dotsb \circ f_1(z) = F(z)$ will be:

$$ p_y(y) = p_z(F^{-1}(y)) \cdot \prod_{n=1}^{N} |\det J(f_n^{-1}(y))| $$

Now the composition of these invertible mappings is called a "Normalizing Flow" and we can train it to learn the distribution of our data by maximizing the following expression:

$$ \sum_i \log p_y(y_i) = \sum_i \Big(\log p_z(F^{-1}(y_i)) + \sum_{n=1}^{N} \log |\det J(f_n^{-1}(y_i))|\Big) $$

So let's build a Normalizing Flow!
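The change of volume rule is easy to check numerically for a simple affine map before implementing a full flow. The sketch below uses NumPy/SciPy rather than the PyTorch code of this tutorial: for $z \sim \mathcal{N}(0, 1)$ and $y = 2z + 1$, the rule must reproduce the known closed-form density of $\mathcal{N}(1, 2)$.

```python
import numpy as np
from scipy.stats import norm

# z ~ N(0, 1), y = f(z) = 2z + 1, so f^{-1}(y) = (y - 1) / 2 and
# |det J(f^{-1}(y))| = 1/2 (the inverse map shrinks volume by a factor of 2).
y = np.linspace(-3.0, 5.0, 101)
p_y_change_of_var = norm.pdf((y - 1.0) / 2.0) * 0.5

# Known closed form: y ~ N(1, 2)
p_y_closed_form = norm(loc=1.0, scale=2.0).pdf(y)

assert np.allclose(p_y_change_of_var, p_y_closed_form)
```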
```
import torch.nn.functional as F
from torch import nn, optim

# abstract flow class
class Flow(nn.Module):
    def __init__(self, size, base):
        super().__init__()
        self.size = size
        self.base = base

    def forward(self, x):
        raise NotImplementedError

    def inverse(self, y):
        raise NotImplementedError

    def sample(self, shape):
        x = self.base.sample(shape)
        return self(x)

    def log_prob(self, y):
        # log p(y) = log p_base(f^{-1}(y)) + log |det J(f^{-1}(y))|
        x, ldij = self.inverse(y)
        return self.base.log_prob(x) + ldij

# basically an affine flow
class LinearFlow(Flow):
    def __init__(self, size, base):
        super().__init__(size, base)
        self.linear = nn.Linear(size, size)

    def forward(self, x):
        return self.linear(x)

    def inverse(self, y):
        inverse_weight = torch.inverse(self.linear.weight)
        x = F.linear(y - self.linear.bias, inverse_weight)
        # log |det J(f^{-1})| = -log |det W|
        ldij = torch.log(1 / self.linear.weight.det().abs()).view(1, 1)
        return x, ldij

class LeakyReLUFlow(Flow):
    def __init__(self, size, base, negative_slope=0.01):
        super().__init__(size, base)
        self.negative_slope = negative_slope

    def forward(self, x):
        return torch.where(x >= 0, x, x * self.negative_slope)

    def inverse(self, y):
        i = torch.ones_like(y)
        j_inv = torch.where(y >= 0, i, 1. / self.negative_slope * i)
        x = torch.where(y >= 0, y, 1. / self.negative_slope * y)
        return x, torch.log(j_inv.abs()).sum(1, keepdim=True)

flow = base
for i in range(2):
    flow = LinearFlow(2, flow)
    flow = LeakyReLUFlow(2, flow)
flow = LinearFlow(2, flow)

optimizer = optim.Adam(flow.parameters(), lr=1e-3)

from IPython.display import clear_output

epoch = 0
loss_history = list()

# interrupt the kernel to stop training
while True:
    optimizer.zero_grad()

    y = target.sample([1000])
    # minimize -log p(data)
    loss = -flow.log_prob(y).mean()
    loss.backward()
    optimizer.step()

    epoch += 1
    loss_history.append(loss.item())

    if epoch % 500 == 0:
        clear_output(True)
        plt.figure(figsize=[12, 6], dpi=120)

        plt.subplot(1, 2, 1)
        plt.title("Distributions")
        y = base.sample([1000])
        plt.scatter(y.data.numpy().T[0], y.data.numpy().T[1], label="base", alpha=.7)
        y = target.sample([1000])
        plt.scatter(y.data.numpy().T[0], y.data.numpy().T[1], label="target", alpha=.7)
        y = flow.sample([1000])
        plt.scatter(y.data.numpy().T[0], y.data.numpy().T[1], label="learnt", alpha=.7)
        plt.legend()

        plt.subplot(1, 2, 2)
        plt.title("Loss function")
        plt.plot(loss_history)
        plt.yscale('log')
        plt.show()
```

Author: [Denis Mazur](https://github.com/deniskamazur)

More on normalizing flows:

* [Normalizing Flows Tutorial by Eric Jang](https://blog.evjang.com/2018/01/nf1.html)
* [An in-depth overview of everything Normalizing Flow related by Papamakarios et al.](https://arxiv.org/pdf/1912.02762.pdf)
* [Convolutional Normalizing Flows for image generation](https://openai.com/blog/glow/)
* [Continuous time Normalizing Flows](https://arxiv.org/pdf/1810.01367.pdf)
```
import os
import glob

# First of all, we need some VHR data, let's use Pleiades data
path = glob.glob(os.path.join("/home", "data", "DATA", "PRODS", "PLEIADES",
                              "5547047101", "IMG_PHR1A_PMS_001"))[0]

# Create logger
import logging

logger = logging.getLogger("eoreader")
logger.setLevel(logging.INFO)

# create console handler and set level to debug
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)

# create formatter
formatter = logging.Formatter('%(message)s')

# add formatter to ch
ch.setFormatter(formatter)

# add ch to logger
logger.addHandler(ch)

from eoreader.reader import Reader

# Create the reader
eoreader = Reader()

# Open your product
prod = eoreader.open(path, remove_tmp=True)
print(f"Acquisition datetime: {prod.datetime}")
print(f"Condensed name: {prod.condensed_name}")

# Please be aware that EOReader will always work in UTM projection, so if you give WGS84 data,
# EOReader will reproject the stacks and this can be time consuming
from eoreader.bands import *
from eoreader.env_vars import DEM_PATH

# Here, if you want to orthorectify or pansharpen your data manually, you can set your stack here.
# If you do not provide this stack but you give a non-orthorectified product to EOReader
# (ie. SEN or PRJ products for Pleiades), you must provide a DEM to correctly orthorectify the data
# prod.ortho_stack = ""
os.environ[DEM_PATH] = os.path.join("/home", "data", "DS2", "BASES_DE_DONNEES", "GLOBAL",
                                    "MERIT_Hydrologically_Adjusted_Elevations", "MERIT_DEM.vrt")

# Open here some more interesting geographical data: extent and footprint
base = prod.extent.plot(color='cyan', edgecolor='black')
prod.footprint.plot(ax=base, color='blue', edgecolor='black', alpha=0.5)

# Select the bands you want to load
bands = [GREEN, NDVI, TIR_1, CLOUDS, HILLSHADE]

# Make sure they exist for the Pleiades sensor
ok_bands = [band for band in bands if prod.has_band(band)]
print(to_str(ok_bands))  # Pleiades doesn't provide TIR and SHADOWS bands

# Load those bands as a dict of xarray.DataArray
band_dict = prod.load(ok_bands)
band_dict[GREEN]
# The nan corresponds to the nodata you see on the footprint

# Plot a subsampled version
band_dict[GREEN][:, ::10, ::10].plot()

# Plot a subsampled version
band_dict[NDVI][:, ::10, ::10].plot()

# Plot a subsampled version
band_dict[CLOUDS][:, ::10, ::10].plot()

# Plot a subsampled version
band_dict[HILLSHADE][:, ::10, ::10].plot()

# You can also stack those bands
stack = prod.stack(ok_bands)
stack

# Plot a subsampled version
import matplotlib.pyplot as plt

nrows = len(stack)
fig, axes = plt.subplots(nrows=nrows, figsize=(2 * nrows, 6 * nrows),
                         subplot_kw={"box_aspect": 1})
for i in range(nrows):
    stack[i, ::10, ::10].plot(x="x", y="y", ax=axes[i])
```
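The `[:, ::10, ::10]` slices used for plotting above simply keep every 10th pixel along each spatial axis of the `(band, y, x)` array. A quick NumPy illustration of that indexing on a synthetic raster (not EOReader data):

```python
import numpy as np

# A fake (band, y, x) raster, same layout as the xarray.DataArray bands above
arr = np.zeros((1, 1000, 1000))

# Keep every 10th pixel in each spatial dimension -> ~100x less data to plot
subsampled = arr[:, ::10, ::10]

assert subsampled.shape == (1, 100, 100)
```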
```
%matplotlib inline

# Imports
from clr import AddReference
AddReference("System")
AddReference("QuantConnect.Common")
AddReference("QuantConnect.Jupyter")
AddReference("QuantConnect.Indicators")
from System import *
from QuantConnect import *
from QuantConnect.Data.Custom import *
from QuantConnect.Data.Market import TradeBar, QuoteBar
from QuantConnect.Jupyter import *
from QuantConnect.Indicators import *
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
import pandas as pd
import numpy as np

def build_events():
    """Return a pandas dataframe of the events (since the csv files always get deleted)."""
    df = pd.DataFrame(columns=['event_date', 'day_0_date', 'ticker'])
    event_string = """1/31/2000,2/1/2000,ALK
11/12/2001,11/12/2001,AAMRQ
4/1/2011,4/2/2011,LUV
8/14/2013,8/14/2013,UPS"""
    event_list = event_string.splitlines()
    for row in event_list:
        event = row.split(',')
        df = df.append([{'event_date': event[0],
                         'day_0_date': event[1],
                         'ticker': event[2]}], ignore_index=True)
    # convert date strings to datetimes
    df['day_0_date'] = pd.to_datetime(df['day_0_date'])
    return df

class DataProvider(object):
    """Provide security price data specifically for this event study."""

    def __init__(self):
        pass

    def get_event_window_columns(self, num_pre_event_window_periods, num_post_event_window_periods):
        """Return a list of column headers for the event window. Ex: ['-2','-1','0','1','2']"""
        cols = []
        for i in range(num_pre_event_window_periods * -1, num_post_event_window_periods + 1):
            cols.append(str(i))
        return cols

    def get_closing_prices(self, ticker, day_0_date, num_pre_event_window_periods, num_post_event_window_periods):
        """Return a pandas DataFrame of closing prices over the event window.

        Returns:
            pandas DataFrame, empty if no data was available.
        """
        raise NotImplementedError('subclasses must override get_closing_prices()!')

class DataProviderESFR(DataProvider):
    """Provide security price data specifically for this event study."""

    def __init__(self):
        pass

    def get_closing_prices(self, ticker, day_0_date, num_pre_event_window_periods, num_post_event_window_periods):
        """Return a pandas DataFrame of closing prices over the event window."""
        columns = self.get_event_window_columns(num_pre_event_window_periods, num_post_event_window_periods)
        closing_prices_df = pd.DataFrame(index=[0], columns=columns)

        # Create a list of prices for the event window and fill with NaN
        prices = [np.nan for i in range(num_pre_event_window_periods + 1 + num_post_event_window_periods)]

        if ticker == 'ALK':
            prices = [14.97, 15.16, 16.03, 15.84, 16.22, 15.84, 15.78, 15.81, 15.47, 15.25, 15.10, 14.94, 14.70]
        elif ticker == 'AAMRQ':
            prices = [18.74, 18.90, 18.36, 18.55, 18.28, 18.13, 16.49, 17.01, 17.94, 18.75, 20.06, 20.77, 19.86]
        elif ticker == 'LUV':
            prices = [12.53, 12.43, 12.36, 12.66, 12.48, 12.52, 12.31, 12.05, 12.01, 11.66, 11.54, 11.64, 11.70]
        elif ticker == 'UPS':
            prices = [86.79, 86.72, 86.96, 86.65, 86.61, 86.55, 86.30, 85.43, 85.52, 85.55, 85.77, 85.56, 86.43]
        elif ticker == 'SPY':
            if day_0_date.strftime('%Y-%m-%d') == '2000-02-01':
                prices = [1401.91, 1410.03, 1404.09, 1398.56, 1360.16, 1394.46, 1409.28, 1409.12, 1424.97, 1424.37, 1423.00, 1441.75, 1411.71]
            elif day_0_date.strftime('%Y-%m-%d') == '2001-11-12':
                prices = [1087.20, 1102.84, 1118.86, 1115.80, 1118.54, 1120.31, 1118.33, 1139.09, 1141.21, 1142.24, 1138.65, 1151.06, 1142.66]
            elif day_0_date.strftime('%Y-%m-%d') == '2011-04-02':
                prices = [1313.80, 1310.19, 1319.44, 1328.26, 1325.83, 1332.41, 1332.87, 1332.63, 1335.54, 1333.51, 1328.17, 1324.46, 1314.16]
            elif day_0_date.strftime('%Y-%m-%d') == '2013-08-14':
                prices = [1697.37, 1690.91, 1697.48, 1691.42, 1689.47, 1694.16, 1685.39, 1661.32, 1655.83, 1646.06, 1652.35, 1642.80, 1656.96]
closing_prices_df.loc[0] = prices return closing_prices_df import matplotlib.pyplot as plt import matplotlib.ticker as mtick import numpy as np class EventStudyResults(object): """Helper class that collects and formats the event study results. Attributes: num_starting_events (int): The number of events in the event list passed into the event study. num_events_processed (int): The number of events that made it into the final calculations. aar (pandas.Series): The average abnormal returns from the event study in a pandas.Series. The index of the series goes from [event_window_start : event_window_end]. ie: ['-6', '-5', '-4', '-3', '-2', '-1', '0', '1', '2', '3', '4', '5', '6'] The values of the Series are float64. The first value will be a nan since these are returns calculated from the previous value. ie: [nan, -0.00029537, 0.00117336, 0.00569039, 0.00568463, -0.01591504, -0.0294841, -0.00043875, 0.0047285, 0.00226918, 0.01414965, 0.00387815, -0.00431594] caar (pandas.Series): The cumulative average abnormal returns from the event study in a pandas.Series. The index of the series goes from [event_window_start : event_window_end]. ie: ['-6', '-5', '-4', '-3', '-2', '-1', '0', '1', '2', '3', '4', '5', '6'] The values of the Series are float64. The first value will be a nan since these are returns calculated from the previous value. 
ie: [nan, -0.00029537, 0.00087799, 0.00656838, 0.01225301, -0.00366204, -0.03314613, -0.03358488, -0.02885638, -0.0265872, -0.01243755, -0.0085594, -0.01287535] """ def __init__(self): self.num_starting_events = 0 self.num_events_processed = 0 self.aar = None self.caar = None self.std_err = None def plot(self, title=None, show=True, pdf_filename=None, show_errorbar=False): plt.clf() plt.figure(figsize=(15, 7.5)) box_props = dict(facecolor='w', alpha=1.0) ax1 = plt.subplot(211) plt.title(title) plt.grid() plt.ylabel('CAAR (%)') ax1.yaxis.set_major_formatter(mtick.PercentFormatter(1.0, symbol='')) ax1.axhline(linewidth=1, color='k') x_ticks = [i for i in range(len(self.caar.index))] x_labels = [str(i) for i in self.caar.index] ax1.set_xticks(x_ticks) ax1.set_xticklabels(x_labels) plt.plot(self.caar.values, label="N=%s" % self.num_events_processed) caar_std_dev = self.caar.std() if show_errorbar: plt.errorbar(self.caar.index, self.caar, yerr=caar_std_dev, linestyle='None', elinewidth=1, ecolor='#1f77b4', capsize=2) plt.legend(loc='upper right') textstr = 'Day 0: {0:.2f}%\nStd: {1:.3f}'.format(self.caar.loc['0'] * 100, caar_std_dev) ax1.text(0.02, 0.05, textstr, transform=ax1.transAxes, verticalalignment='bottom', bbox=box_props) ax2 = plt.subplot(212) plt.grid() x_ticks = [i for i in range(len(self.aar.index))] x_labels = [str(i) for i in self.aar.index] ax2.set_xticks(x_ticks) ax2.set_xticklabels(x_labels) ax2.axhline(linewidth=1, color='k') plt.plot(self.aar.values, label="N=%s" % self.num_events_processed) aar_std_dev = self.aar.std() if show_errorbar: plt.errorbar(self.aar.index, self.aar, yerr=aar_std_dev, linestyle='None', elinewidth=1, ecolor='#1f77b4', capsize=2) plt.legend(loc='upper right') textstr = 'Day 0: {0:.2f}%\nStd: {1:.3f}'.format(self.aar.loc['0'] * 100, aar_std_dev) ax2.text(0.02, 0.05, textstr, transform=ax2.transAxes, verticalalignment='bottom', bbox=box_props) plt.ylabel('AAR (%)') ax2.yaxis.set_major_formatter(mtick.PercentFormatter(1.0, 
symbol='')) plt.xlabel('Event Window') plt.tight_layout(pad=1.0, w_pad=1.0, h_pad=1.0) if pdf_filename is not None: plt.savefig(pdf_filename, format='pdf') if show: plt.show() class EventStudyNaiveModel(object): """Tool that runs an event study, calculating CAAR using the naive benchmark model.""" def __init__(self, data_provider, event_list_df): """Create an event study. Args: data_provider (:obj:`EventStudyESFR`): object that gets security data event_list_df (:obj:`DataFrame`): Pandas DataFrame containing the list of event dates and ticker symbols. The ticker symbol must be in a column 'ticker'. For daily data, the date of day 0 must be column 'day_0_date' as a ISO8601 string (ie: YYYY-MM-DD) """ self.data_provider = data_provider self.event_list_df = event_list_df self.num_pre_event_window_periods = 0 self.num_post_event_window_periods = 0 self.results = None def run_naive_model(self, market_ticker, num_pre_event_window_periods, num_post_event_window_periods): """Run the event study using the naive benchmark model and return the results. Args: market_ticker (str): The ticker of the model's benchmark num_pre_event_window_periods (int): The number of periods before the event num_post_event_window_periods (int): The number of periods after the event Returns: An instance of EventStudyResults. 
""" self.results = EventStudyResults() self.num_pre_event_window_periods = num_pre_event_window_periods self.num_post_event_window_periods = num_post_event_window_periods # Create a DataFrame to hold all the Abnormal Returns which will be used # to calculate the Average Abnormal Return (AAR) columns = self.data_provider.get_event_window_columns(num_pre_event_window_periods, num_post_event_window_periods) all_abnormal_returns_df = pd.DataFrame(columns=columns) #print('\nAll Abnormal Returns: \n{}'.format(all_abnormal_returns_df)) for index, event in self.event_list_df.iterrows(): """ For each event: Get the closing prices of the securities for the event window Get the closing prices of the market benchamark for the event window Calculate the actual returns for the security. Calculate the market returns (ie: normal returns) for the benchmark. Calculate the abnormal return. Calculate the average abnormal return. Calculate the cumulative average abnormal return. """ #print('\nDay Zero Date: {} ticker: {}'.format(event.day_0_date, event.ticker)) # Get prices for the security over the event window security_prices_df = self.data_provider.get_closing_prices(event.ticker, event.day_0_date, self.num_pre_event_window_periods, self.num_post_event_window_periods) if security_prices_df.isnull().values.any(): print('\n**** Prices for {} are missing around date: {} ****'.format(event.ticker, event.day_0_date)) continue #print('\nSecurity prices($) for {} over the event window:\n{}'.format(event.ticker, security_prices_df.to_string(index=False))) # Get prices for the market benchmark over the event window market_prices_df = self.data_provider.get_closing_prices(market_ticker, event.day_0_date, self.num_pre_event_window_periods, self.num_post_event_window_periods) if market_prices_df.isnull().values.any(): print('\n**** Prices for {} are missing around date: {} ****'.format(market_ticker, event.day_0_date)) continue #print('\nMarket prices($) for {} over the event 
window:\n{}'.format(market_ticker, market_prices_df.to_string(index=False))) # Calculate the actual arithmetic return for the security over the event window actual_returns_df = security_prices_df.pct_change(axis='columns') #print('\nSecurity Returns(%) for {} over the event window::\n{}'.format(event.ticker,(actual_returns_df*100).round(2).to_string(index=False))) # Calculate the arithmetic return for the market over the event window. # In the naive model, this becomes the Normal Return. normal_returns_df = market_prices_df.pct_change(axis='columns') #print('\nNormal Returns(%) for {} over the event window:\n{}'.format(market_ticker,(normal_returns_df*100).round(2).to_string(index=False))) # Calculate the Abnormal Return over the event window # AR = Stock Return - Normal Return abnormal_returns_df = actual_returns_df.sub(normal_returns_df) #print('\nAbnormal Returns(%) for {} over the event window:\n{}'.format(event.ticker,(abnormal_returns_df*100).round(2).to_string(index=False))) # Append the AR to the other ARs so we can calculate AAR later all_abnormal_returns_df = pd.concat([all_abnormal_returns_df, abnormal_returns_df], ignore_index=True) #print('\nAR(%) for all securities over the event window:\n{}'.format((all_abnormal_returns_df*100).round(2))) # Calculate the Average Abnormal Returns (AAR) aar = all_abnormal_returns_df.mean() #print('\nAAR(%) for all the securities over the event window:\n{}'.format((aar*100).round(2).to_frame().T.to_string(index=False))) # Calculate the Cumulative Average Abnormal Returns caar = aar.cumsum() #print('\nCAAR(%) for all the securities over the event window:\n{}'.format((caar * 100).round(2).to_frame().T.to_string(index=False))) self.results.aar = aar self.results.caar = caar self.results.num_starting_events = self.event_list_df.shape[0] self.results.num_events_processed = all_abnormal_returns_df.shape[0] return self.results def main(): """ This example replicates "Event Studies for Financial Research, chapter 4: A 
simplified example, the effect of air crashes on stock prices" using the eventstudy package. """ event_list_df = build_events() # print('The event list:\n {}'.format(event_list_df)) data_provider = DataProviderESFR() event_study = EventStudyNaiveModel(data_provider, event_list_df) # Run the event study looking 6 periods before the event and 6 periods after the event num_pre_event_window_periods = num_post_event_window_periods = 6 market_ticker = 'SPY' results = event_study.run_naive_model(market_ticker, num_pre_event_window_periods, num_post_event_window_periods) print('\nStarted with {} events and processed {} events.'.format(results.num_starting_events, results.num_events_processed)) print('\nAAR (%) for all the securities over the event window:\n{}'.format( (results.aar * 100).round(2).to_frame().T.to_string(index=False))) print('\nCAAR (%) for all the securities over the event window:\n{}'.format( (results.caar * 100).round(2).to_frame().T.to_string(index=False))) results.plot("Airline Crashes and their impact on stock returns") main() ```
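The naive model's arithmetic (AR = actual return − market return, AAR = mean of the ARs across events per window day, CAAR = running sum of the AAR) can be reproduced on a tiny made-up dataset. The prices below are illustrative only, not from the study:

```python
import numpy as np
import pandas as pd

# Hypothetical 3-day event windows for two events (and the matching market prices)
security_prices = pd.DataFrame([[100., 110., 121.],
                                [100., 100., 110.]], columns=['-1', '0', '1'])
market_prices = pd.DataFrame([[100., 105., 105.],
                              [100., 105., 105.]], columns=['-1', '0', '1'])

actual_returns = security_prices.pct_change(axis='columns')
normal_returns = market_prices.pct_change(axis='columns')

abnormal_returns = actual_returns.sub(normal_returns)  # AR = actual - normal
aar = abnormal_returns.mean()                          # average across events, per window day
caar = aar.cumsum()                                    # cumulative over the window

# Day 0: ARs of +5% and -5% cancel out; day 1: both events beat the flat market by 10%
print(aar.round(3))
print(caar.round(3))
```

As in `run_naive_model`, the first window column is `NaN` because returns are computed from the previous column.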
```
import dask
from dask_kubernetes import KubeCluster
import numpy as np
import pyarrow

# Specify a remote deployment using a load balancer if we are running the NB outside of the cluster
#dask.config.set({"kubernetes.scheduler-service-type": "LoadBalancer"})
cluster = KubeCluster.from_yaml('worker-spec.yaml', namespace='dask') # deploy_mode='remote')
cluster.adapt(minimum=1, maximum=10)

# Example usage
from dask.distributed import Client
import dask.array as da

# Connect Dask to the cluster
client = Client(cluster)
client # the repr gives us useful links

client.scheduler_comm.comm.handshake_info()

# Create a large array and calculate the mean
array = da.ones((1000, 1000, 1000))
print(array.mean().compute())  # Should print 1.0
```

So now we know the cluster is doing ok :)

Configure dask to talk to our local MinIO

```
# The anon false wasted so much time
minio_storage_options = {
    # "anon": "false",
    "key": "YOURACCESSKEY",
    "secret": "YOURSECRETKEY",
    "client_kwargs": {
        "endpoint_url": "http://minio-1602984784.minio.svc.cluster.local:9000",
        "region_name": 'us-east-1'
    },
    "config_kwargs": {"s3": {"signature_version": 's3v4'}},
}

#tag::minio_storage_options[]
minio_storage_options = {
    "key": "YOURACCESSKEY",
    "secret": "YOURSECRETKEY",
    "client_kwargs": {
        "endpoint_url": "http://minio-1602984784.minio.svc.cluster.local:9000",
        "region_name": 'us-east-1'
    },
    "config_kwargs": {"s3": {"signature_version": 's3v4'}},
}
#end::minio_storage_options[]
```

Download the GH archive data

```
import datetime
import dask.dataframe as dd

current_date = datetime.datetime(2020, 10, 1, 1)

#tag::make_file_list[]
gh_archive_files = []
while current_date < datetime.datetime.now() - datetime.timedelta(days=1):
    current_date = current_date + datetime.timedelta(hours=1)
    datestring = f'{current_date.year}-{current_date.month:02}-{current_date.day:02}-{current_date.hour}'
    gh_url = f'http://data.githubarchive.org/{datestring}.json.gz'
    gh_archive_files.append(gh_url)
#end::make_file_list[]
gh_archive_files[0]

#tag::load_data[]
df = dd.read_json(gh_archive_files, compression='gzip')
df.columns
#end::load_data[]

len(df)

# What kind of file systems are supported?
#tag::known_fs[]
from fsspec.registry import known_implementations
known_implementations
#end::known_fs[]
#tag::known_fs_result[]
#end::known_fs_result[]
```

What does our data look like?

```
df.columns
h = df.head()
h
import pandas as pd
j = pd.io.json.json_normalize(h.repo)
j
j.name
```

Since we want to partition on the repo name, we need to extract that to its own column

```
data_bag = df.to_bag()

# The records returned by the bag are tuples not named tuples, so use the df columns to look up the tuple index
cols = df.columns

def parse_record(record):
    r = {
        "repo": pd.io.json.json_normalize(record[cols.get_loc("repo")]),
        "repo_name": record[cols.get_loc("repo")]["name"],
        "type": record[cols.get_loc("type")],
        "id": record[cols.get_loc("id")],
        "created_at": record[cols.get_loc("created_at")],
        "payload": pd.io.json.json_normalize(record[cols.get_loc("payload")])}
    return r

#tag::cleanup[]
def clean_record(record):
    r = {
        "repo": record[cols.get_loc("repo")],
        "repo_name": record[cols.get_loc("repo")]["name"],
        "type": record[cols.get_loc("type")],
        "id": record[cols.get_loc("id")],
        "created_at": record[cols.get_loc("created_at")],
        "payload": record[cols.get_loc("payload")]}
    return r

cleaned_up_bag = data_bag.map(clean_record)
res = cleaned_up_bag.to_dataframe()
#end::cleanup[]

parsed_bag = data_bag.map(parse_record)
cleaned_up_bag.take(1)
res = cleaned_up_bag.to_dataframe()
parsed_res = parsed_bag.to_dataframe()
h = res.head()
h_parsed = parsed_res.head()
h
type(h.iloc[0]["repo"])
type(h_parsed.iloc[0]["repo"])

#to_csv brings it back locally, lets try parquet. csv doesn't handle nesting so well so use original json inside
df.to_csv("s3://dask-test/boop-test-csv", storage_options=minio_storage_options)
# This will probably still bring everything back to the client? I'm guessing though.
res.to_parquet("s3://dask-test/boop-test-pq", compression="gzip", storage_options=minio_storage_options, engine="pyarrow") parsed_res.to_parquet("s3://dask-test/boop-test-pq-p-nested", compression="gzip", storage_options=minio_storage_options, engine="fastparquet") #tag::write[] res.to_parquet("s3://dask-test/boop-test-partioned", partition_on=["type", "repo_name"], # Based on " there will be no global groupby." I think this is the value we want. compression="gzip", storage_options=minio_storage_options, engine="pyarrow") #end::write[] ```
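The `clean_record` mapping above only promotes the nested repo name to a top-level key so it can later serve as a `partition_on` column. Its effect can be sketched on a single hypothetical record with plain dicts, no Dask required (the sample event below is made up):

```python
# A made-up GH Archive-style event record
record = {
    "id": "1234",
    "type": "PushEvent",
    "created_at": "2020-10-01T01:00:00Z",
    "repo": {"id": 42, "name": "example/repo"},
    "payload": {"size": 1},
}

def clean_record(record):
    # Promote the nested repo name to its own top-level key so it can be
    # used as a parquet partition column later
    out = dict(record)
    out["repo_name"] = record["repo"]["name"]
    return out

cleaned = clean_record(record)
assert cleaned["repo_name"] == "example/repo"
```

The bag version does the same thing per tuple element, using `cols.get_loc` to translate column names into tuple indices.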
# AlphaFold Colab

This Colab notebook allows you to easily predict the structure of a protein using a slightly simplified version of [AlphaFold v2.1.0](https://doi.org/10.1038/s41586-021-03819-2).

**Differences to AlphaFold v2.1.0**

In comparison to AlphaFold v2.1.0, this Colab notebook uses **no templates (homologous structures)** and a selected portion of the [BFD database](https://bfd.mmseqs.com/). We have validated these changes on several thousand recent PDB structures. While accuracy will be near-identical to the full AlphaFold system on many targets, a small fraction have a large drop in accuracy due to the smaller MSA and lack of templates. For best reliability, we recommend instead using the [full open source AlphaFold](https://github.com/deepmind/alphafold/), or the [AlphaFold Protein Structure Database](https://alphafold.ebi.ac.uk/).

**This Colab has a small drop in average accuracy for multimers compared to local AlphaFold installation, for full multimer accuracy it is highly recommended to run [AlphaFold locally](https://github.com/deepmind/alphafold#running-alphafold).** Moreover, AlphaFold-Multimer requires searching for MSA for every unique sequence in the complex, hence it is substantially slower. If your notebook times out due to the slow multimer MSA search, we recommend either using Colab Pro or running AlphaFold locally.

Please note that this Colab notebook is provided as an early-access prototype and is not a finished product. It is provided for theoretical modelling only and caution should be exercised in its use.

**Citing this work**

Any publication that discloses findings arising from using this notebook should [cite](https://github.com/deepmind/alphafold/#citing-this-work) the [AlphaFold paper](https://doi.org/10.1038/s41586-021-03819-2).
**Licenses** This Colab uses the [AlphaFold model parameters](https://github.com/deepmind/alphafold/#model-parameters-license) which are subject to the Creative Commons Attribution 4.0 International ([CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode)) license. The Colab itself is provided under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). See the full license statement below. **More information** You can find more information about how AlphaFold works in the following papers: * [AlphaFold methods paper](https://www.nature.com/articles/s41586-021-03819-2) * [AlphaFold predictions of the human proteome paper](https://www.nature.com/articles/s41586-021-03828-1) * [AlphaFold-Multimer paper](https://www.biorxiv.org/content/10.1101/2021.10.04.463034v1) FAQ on how to interpret AlphaFold predictions are [here](https://alphafold.ebi.ac.uk/faq). ## Setup Start by running the 2 cells below to set up AlphaFold and all required software. ``` #@title 1. Install third-party software #@markdown Please execute this cell by pressing the _Play_ button #@markdown on the left to download and import third-party software #@markdown in this Colab notebook. (See the [acknowledgements](https://github.com/deepmind/alphafold/#acknowledgements) in our readme.) #@markdown **Note**: This installs the software on the Colab #@markdown notebook in the cloud and not on your computer. from IPython.utils import io import os import subprocess import tqdm.notebook TQDM_BAR_FORMAT = '{l_bar}{bar}| {n_fmt}/{total_fmt} [elapsed: {elapsed} remaining: {remaining}]' try: with tqdm.notebook.tqdm(total=100, bar_format=TQDM_BAR_FORMAT) as pbar: with io.capture_output() as captured: # Uninstall default Colab version of TF. %shell pip uninstall -y tensorflow %shell sudo apt install --quiet --yes hmmer pbar.update(6) # Install py3dmol. %shell pip install py3dmol pbar.update(2) # Install OpenMM and pdbfixer. 
%shell rm -rf /opt/conda %shell wget -q -P /tmp \ https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \ && bash /tmp/Miniconda3-latest-Linux-x86_64.sh -b -p /opt/conda \ && rm /tmp/Miniconda3-latest-Linux-x86_64.sh pbar.update(9) PATH=%env PATH %env PATH=/opt/conda/bin:{PATH} %shell conda update -qy conda \ && conda install -qy -c conda-forge \ python=3.7 \ openmm=7.5.1 \ pdbfixer pbar.update(80) # Create a ramdisk to store a database chunk to make Jackhmmer run fast. %shell sudo mkdir -m 777 --parents /tmp/ramdisk %shell sudo mount -t tmpfs -o size=9G ramdisk /tmp/ramdisk pbar.update(2) %shell wget -q -P /content \ https://git.scicore.unibas.ch/schwede/openstructure/-/raw/7102c63615b64735c4941278d92b554ec94415f8/modules/mol/alg/src/stereo_chemical_props.txt pbar.update(1) except subprocess.CalledProcessError: print(captured) raise #@title 2. Download AlphaFold #@markdown Please execute this cell by pressing the *Play* button on #@markdown the left. GIT_REPO = 'https://github.com/deepmind/alphafold' SOURCE_URL = 'https://storage.googleapis.com/alphafold/alphafold_params_colab_2022-03-02.tar' PARAMS_DIR = './alphafold/data/params' PARAMS_PATH = os.path.join(PARAMS_DIR, os.path.basename(SOURCE_URL)) try: with tqdm.notebook.tqdm(total=100, bar_format=TQDM_BAR_FORMAT) as pbar: with io.capture_output() as captured: %shell rm -rf alphafold %shell git clone --branch main {GIT_REPO} alphafold pbar.update(8) # Install the required versions of all dependencies. %shell pip3 install -r ./alphafold/requirements.txt # Run setup.py to install only AlphaFold. %shell pip3 install --no-dependencies ./alphafold pbar.update(10) # Apply OpenMM patch. %shell pushd /opt/conda/lib/python3.7/site-packages/ && \ patch -p0 < /content/alphafold/docker/openmm.patch && \ popd # Make sure stereo_chemical_props.txt is in all locations where it could be searched for. 
%shell mkdir -p /content/alphafold/alphafold/common %shell cp -f /content/stereo_chemical_props.txt /content/alphafold/alphafold/common %shell mkdir -p /opt/conda/lib/python3.7/site-packages/alphafold/common/ %shell cp -f /content/stereo_chemical_props.txt /opt/conda/lib/python3.7/site-packages/alphafold/common/ %shell mkdir --parents "{PARAMS_DIR}" %shell wget -O "{PARAMS_PATH}" "{SOURCE_URL}" pbar.update(27) %shell tar --extract --verbose --file="{PARAMS_PATH}" \ --directory="{PARAMS_DIR}" --preserve-permissions %shell rm "{PARAMS_PATH}" pbar.update(55) except subprocess.CalledProcessError: print(captured) raise import jax if jax.local_devices()[0].platform == 'tpu': raise RuntimeError('Colab TPU runtime not supported. Change it to GPU via Runtime -> Change Runtime Type -> Hardware accelerator -> GPU.') elif jax.local_devices()[0].platform == 'cpu': raise RuntimeError('Colab CPU runtime not supported. Change it to GPU via Runtime -> Change Runtime Type -> Hardware accelerator -> GPU.') else: print(f'Running with {jax.local_devices()[0].device_kind} GPU') # Make sure everything we need is on the path. import sys sys.path.append('/opt/conda/lib/python3.7/site-packages') sys.path.append('/content/alphafold') # Make sure all necessary environment variables are set. import os os.environ['TF_FORCE_UNIFIED_MEMORY'] = '1' os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '2.0' ``` ## Making a prediction Please paste the sequence of your protein in the text box below, then run the remaining cells via _Runtime_ > _Run after_. You can also run the cells individually by pressing the _Play_ button on the left. Note that the search against databases and the actual prediction can take some time, from minutes to hours, depending on the length of the protein and what type of GPU you are allocated by Colab (see FAQ below). ``` #@title 3. 
Enter the amino acid sequence(s) to fold ⬇️ #@markdown Enter the amino acid sequence(s) to fold: #@markdown * If you enter only a single sequence, the monomer model will be used. #@markdown * If you enter multiple sequences, the multimer model will be used. from alphafold.notebooks import notebook_utils sequence_1 = 'MAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH' #@param {type:"string"} sequence_2 = '' #@param {type:"string"} sequence_3 = '' #@param {type:"string"} sequence_4 = '' #@param {type:"string"} sequence_5 = '' #@param {type:"string"} sequence_6 = '' #@param {type:"string"} sequence_7 = '' #@param {type:"string"} sequence_8 = '' #@param {type:"string"} input_sequences = (sequence_1, sequence_2, sequence_3, sequence_4, sequence_5, sequence_6, sequence_7, sequence_8) MIN_SINGLE_SEQUENCE_LENGTH = 16 MAX_SINGLE_SEQUENCE_LENGTH = 2500 MAX_MULTIMER_LENGTH = 2500 # Validate the input. sequences, model_type_to_use = notebook_utils.validate_input( input_sequences=input_sequences, min_length=MIN_SINGLE_SEQUENCE_LENGTH, max_length=MAX_SINGLE_SEQUENCE_LENGTH, max_multimer_length=MAX_MULTIMER_LENGTH) #@title 4. Search against genetic databases #@markdown Once this cell has been executed, you will see #@markdown statistics about the multiple sequence alignment #@markdown (MSA) that will be used by AlphaFold. In particular, #@markdown you’ll see how well each residue is covered by similar #@markdown sequences in the MSA. 
# --- Python imports --- import collections import copy from concurrent import futures import json import random from urllib import request from google.colab import files from matplotlib import gridspec import matplotlib.pyplot as plt import numpy as np import py3Dmol from alphafold.model import model from alphafold.model import config from alphafold.model import data from alphafold.data import feature_processing from alphafold.data import msa_pairing from alphafold.data import pipeline from alphafold.data import pipeline_multimer from alphafold.data.tools import jackhmmer from alphafold.common import protein from alphafold.relax import relax from alphafold.relax import utils from IPython import display from ipywidgets import GridspecLayout from ipywidgets import Output # Color bands for visualizing plddt PLDDT_BANDS = [(0, 50, '#FF7D45'), (50, 70, '#FFDB13'), (70, 90, '#65CBF3'), (90, 100, '#0053D6')] # --- Find the closest source --- test_url_pattern = 'https://storage.googleapis.com/alphafold-colab{:s}/latest/uniref90_2021_03.fasta.1' ex = futures.ThreadPoolExecutor(3) def fetch(source): request.urlretrieve(test_url_pattern.format(source)) return source fs = [ex.submit(fetch, source) for source in ['', '-europe', '-asia']] source = None for f in futures.as_completed(fs): source = f.result() ex.shutdown() break JACKHMMER_BINARY_PATH = '/usr/bin/jackhmmer' DB_ROOT_PATH = f'https://storage.googleapis.com/alphafold-colab{source}/latest/' # The z_value is the number of sequences in a database. 
MSA_DATABASES = [
    {'db_name': 'uniref90',
     'db_path': f'{DB_ROOT_PATH}uniref90_2021_03.fasta',
     'num_streamed_chunks': 59,
     'z_value': 135_301_051},
    {'db_name': 'smallbfd',
     'db_path': f'{DB_ROOT_PATH}bfd-first_non_consensus_sequences.fasta',
     'num_streamed_chunks': 17,
     'z_value': 65_984_053},
    {'db_name': 'mgnify',
     'db_path': f'{DB_ROOT_PATH}mgy_clusters_2019_05.fasta',
     'num_streamed_chunks': 71,
     'z_value': 304_820_129},
]

# Search UniProt and construct the all_seq features only for heteromers, not homomers.
if model_type_to_use == notebook_utils.ModelType.MULTIMER and len(set(sequences)) > 1:
  MSA_DATABASES.extend([
      # Swiss-Prot and TrEMBL are concatenated together as UniProt.
      {'db_name': 'uniprot',
       'db_path': f'{DB_ROOT_PATH}uniprot_2021_03.fasta',
       'num_streamed_chunks': 98,
       'z_value': 219_174_961 + 565_254},
  ])

TOTAL_JACKHMMER_CHUNKS = sum([cfg['num_streamed_chunks'] for cfg in MSA_DATABASES])

MAX_HITS = {
    'uniref90': 10_000,
    'smallbfd': 5_000,
    'mgnify': 501,
    'uniprot': 50_000,
}


def get_msa(fasta_path):
  """Searches for MSA for the given sequence using chunked Jackhmmer search."""
  # Run the search against chunks of genetic databases (since the genetic
  # databases don't fit in Colab disk).
  raw_msa_results = collections.defaultdict(list)
  with tqdm.notebook.tqdm(total=TOTAL_JACKHMMER_CHUNKS, bar_format=TQDM_BAR_FORMAT) as pbar:
    def jackhmmer_chunk_callback(i):
      pbar.update(n=1)

    for db_config in MSA_DATABASES:
      db_name = db_config['db_name']
      pbar.set_description(f'Searching {db_name}')
      jackhmmer_runner = jackhmmer.Jackhmmer(
          binary_path=JACKHMMER_BINARY_PATH,
          database_path=db_config['db_path'],
          get_tblout=True,
          num_streamed_chunks=db_config['num_streamed_chunks'],
          streaming_callback=jackhmmer_chunk_callback,
          z_value=db_config['z_value'])
      # Group the results by database name.
      raw_msa_results[db_name].extend(jackhmmer_runner.query(fasta_path))

  return raw_msa_results


features_for_chain = {}
raw_msa_results_for_sequence = {}
for sequence_index, sequence in enumerate(sequences, start=1):
  print(f'\nGetting MSA for sequence {sequence_index}')

  fasta_path = f'target_{sequence_index}.fasta'
  with open(fasta_path, 'wt') as f:
    f.write(f'>query\n{sequence}')

  # Don't do redundant work for multiple copies of the same chain in the multimer.
  if sequence not in raw_msa_results_for_sequence:
    raw_msa_results = get_msa(fasta_path=fasta_path)
    raw_msa_results_for_sequence[sequence] = raw_msa_results
  else:
    raw_msa_results = copy.deepcopy(raw_msa_results_for_sequence[sequence])

  # Extract the MSAs from the Stockholm files.
  # NB: deduplication happens later in pipeline.make_msa_features.
  single_chain_msas = []
  uniprot_msa = None
  for db_name, db_results in raw_msa_results.items():
    merged_msa = notebook_utils.merge_chunked_msa(
        results=db_results, max_hits=MAX_HITS.get(db_name))
    if merged_msa.sequences and db_name != 'uniprot':
      single_chain_msas.append(merged_msa)
      msa_size = len(set(merged_msa.sequences))
      print(f'{msa_size} unique sequences found in {db_name} for sequence {sequence_index}')
    elif merged_msa.sequences and db_name == 'uniprot':
      uniprot_msa = merged_msa

  notebook_utils.show_msa_info(single_chain_msas=single_chain_msas, sequence_index=sequence_index)

  # Turn the raw data into model features.
  feature_dict = {}
  feature_dict.update(pipeline.make_sequence_features(
      sequence=sequence, description='query', num_res=len(sequence)))
  feature_dict.update(pipeline.make_msa_features(msas=single_chain_msas))
  # We don't use templates in AlphaFold Colab notebook, add only empty placeholder features.
  feature_dict.update(notebook_utils.empty_placeholder_template_features(
      num_templates=0, num_res=len(sequence)))

  # Construct the all_seq features only for heteromers, not homomers.
  if model_type_to_use == notebook_utils.ModelType.MULTIMER and len(set(sequences)) > 1:
    valid_feats = msa_pairing.MSA_FEATURES + ('msa_species_identifiers',)
    all_seq_features = {
        f'{k}_all_seq': v for k, v in pipeline.make_msa_features([uniprot_msa]).items()
        if k in valid_feats}
    feature_dict.update(all_seq_features)

  features_for_chain[protein.PDB_CHAIN_IDS[sequence_index - 1]] = feature_dict


# Do further feature post-processing depending on the model type.
if model_type_to_use == notebook_utils.ModelType.MONOMER:
  np_example = features_for_chain[protein.PDB_CHAIN_IDS[0]]

elif model_type_to_use == notebook_utils.ModelType.MULTIMER:
  all_chain_features = {}
  for chain_id, chain_features in features_for_chain.items():
    all_chain_features[chain_id] = pipeline_multimer.convert_monomer_features(
        chain_features, chain_id)

  all_chain_features = pipeline_multimer.add_assembly_features(all_chain_features)

  np_example = feature_processing.pair_and_merge(
      all_chain_features=all_chain_features)

  # Pad MSA to avoid zero-sized extra_msa.
  np_example = pipeline_multimer.pad_msa(np_example, min_num_seq=512)

#@title 5. Run AlphaFold and download prediction
#@markdown Once this cell has been executed, a zip-archive with
#@markdown the obtained prediction will be automatically downloaded
#@markdown to your computer.

#@markdown In case you are having issues with the relaxation stage, you can disable it below.
#@markdown Warning: This means that the prediction might have distracting
#@markdown small stereochemical violations.
run_relax = True  #@param {type:"boolean"}

# --- Run the model ---
if model_type_to_use == notebook_utils.ModelType.MONOMER:
  model_names = config.MODEL_PRESETS['monomer'] + ('model_2_ptm',)
elif model_type_to_use == notebook_utils.ModelType.MULTIMER:
  model_names = config.MODEL_PRESETS['multimer']

output_dir = 'prediction'
os.makedirs(output_dir, exist_ok=True)

plddts = {}
ranking_confidences = {}
pae_outputs = {}
unrelaxed_proteins = {}

with tqdm.notebook.tqdm(total=len(model_names) + 1, bar_format=TQDM_BAR_FORMAT) as pbar:
  for model_name in model_names:
    pbar.set_description(f'Running {model_name}')

    cfg = config.model_config(model_name)
    if model_type_to_use == notebook_utils.ModelType.MONOMER:
      cfg.data.eval.num_ensemble = 1
    elif model_type_to_use == notebook_utils.ModelType.MULTIMER:
      cfg.model.num_ensemble_eval = 1
    params = data.get_model_haiku_params(model_name, './alphafold/data')
    model_runner = model.RunModel(cfg, params)
    processed_feature_dict = model_runner.process_features(np_example, random_seed=0)
    prediction = model_runner.predict(processed_feature_dict,
                                      random_seed=random.randrange(sys.maxsize))

    mean_plddt = prediction['plddt'].mean()

    if model_type_to_use == notebook_utils.ModelType.MONOMER:
      if 'predicted_aligned_error' in prediction:
        pae_outputs[model_name] = (prediction['predicted_aligned_error'],
                                   prediction['max_predicted_aligned_error'])
      else:
        # Monomer models are sorted by mean pLDDT. Do not put monomer pTM models
        # here as they should never get selected.
        ranking_confidences[model_name] = prediction['ranking_confidence']
        plddts[model_name] = prediction['plddt']
    elif model_type_to_use == notebook_utils.ModelType.MULTIMER:
      # Multimer models are sorted by pTM+ipTM.
      ranking_confidences[model_name] = prediction['ranking_confidence']
      plddts[model_name] = prediction['plddt']
      pae_outputs[model_name] = (prediction['predicted_aligned_error'],
                                 prediction['max_predicted_aligned_error'])

    # Set the b-factors to the per-residue plddt.
    final_atom_mask = prediction['structure_module']['final_atom_mask']
    b_factors = prediction['plddt'][:, None] * final_atom_mask
    unrelaxed_protein = protein.from_prediction(
        processed_feature_dict,
        prediction,
        b_factors=b_factors,
        remove_leading_feature_dimension=(
            model_type_to_use == notebook_utils.ModelType.MONOMER))
    unrelaxed_proteins[model_name] = unrelaxed_protein

    # Delete unused outputs to save memory.
    del model_runner
    del params
    del prediction
    pbar.update(n=1)

  # --- AMBER relax the best model ---

  # Find the best model according to the mean pLDDT.
  best_model_name = max(ranking_confidences.keys(), key=lambda x: ranking_confidences[x])

  if run_relax:
    pbar.set_description('AMBER relaxation')
    amber_relaxer = relax.AmberRelaxation(
        max_iterations=0,
        tolerance=2.39,
        stiffness=10.0,
        exclude_residues=[],
        max_outer_iterations=3,
        use_gpu=True)
    relaxed_pdb, _, _ = amber_relaxer.process(prot=unrelaxed_proteins[best_model_name])
  else:
    print('Warning: Running without the relaxation stage.')
    relaxed_pdb = protein.to_pdb(unrelaxed_proteins[best_model_name])
  pbar.update(n=1)  # Finished AMBER relax.
# Construct multiclass b-factors to indicate confidence bands
# 0=very low, 1=low, 2=confident, 3=very high
banded_b_factors = []
for plddt in plddts[best_model_name]:
  for idx, (min_val, max_val, _) in enumerate(PLDDT_BANDS):
    if plddt >= min_val and plddt <= max_val:
      banded_b_factors.append(idx)
      break
banded_b_factors = np.array(banded_b_factors)[:, None] * final_atom_mask
to_visualize_pdb = utils.overwrite_b_factors(relaxed_pdb, banded_b_factors)

# Write out the prediction
pred_output_path = os.path.join(output_dir, 'selected_prediction.pdb')
with open(pred_output_path, 'w') as f:
  f.write(relaxed_pdb)


# --- Visualise the prediction & confidence ---
show_sidechains = True

def plot_plddt_legend():
  """Plots the legend for pLDDT."""
  thresh = ['Very low (pLDDT < 50)',
            'Low (70 > pLDDT > 50)',
            'Confident (90 > pLDDT > 70)',
            'Very high (pLDDT > 90)']

  colors = [x[2] for x in PLDDT_BANDS]

  plt.figure(figsize=(2, 2))
  for c in colors:
    plt.bar(0, 0, color=c)
  plt.legend(thresh, frameon=False, loc='center', fontsize=20)
  plt.xticks([])
  plt.yticks([])
  ax = plt.gca()
  ax.spines['right'].set_visible(False)
  ax.spines['top'].set_visible(False)
  ax.spines['left'].set_visible(False)
  ax.spines['bottom'].set_visible(False)
  plt.title('Model Confidence', fontsize=20, pad=20)
  return plt

# Show the structure coloured by chain if the multimer model has been used.
if model_type_to_use == notebook_utils.ModelType.MULTIMER:
  multichain_view = py3Dmol.view(width=800, height=600)
  multichain_view.addModelsAsFrames(to_visualize_pdb)
  multichain_style = {'cartoon': {'colorscheme': 'chain'}}
  multichain_view.setStyle({'model': -1}, multichain_style)
  multichain_view.zoomTo()
  multichain_view.show()

# Color the structure by per-residue pLDDT
color_map = {i: bands[2] for i, bands in enumerate(PLDDT_BANDS)}
view = py3Dmol.view(width=800, height=600)
view.addModelsAsFrames(to_visualize_pdb)
style = {'cartoon': {'colorscheme': {'prop': 'b', 'map': color_map}}}
if show_sidechains:
  style['stick'] = {}
view.setStyle({'model': -1}, style)
view.zoomTo()

grid = GridspecLayout(1, 2)
out = Output()
with out:
  view.show()
grid[0, 0] = out

out = Output()
with out:
  plot_plddt_legend().show()
grid[0, 1] = out

display.display(grid)

# Display pLDDT and predicted aligned error (if output by the model).
if pae_outputs:
  num_plots = 2
else:
  num_plots = 1

plt.figure(figsize=[8 * num_plots, 6])
plt.subplot(1, num_plots, 1)
plt.plot(plddts[best_model_name])
plt.title('Predicted LDDT')
plt.xlabel('Residue')
plt.ylabel('pLDDT')

if num_plots == 2:
  plt.subplot(1, 2, 2)
  pae, max_pae = list(pae_outputs.values())[0]
  plt.imshow(pae, vmin=0., vmax=max_pae, cmap='Greens_r')
  plt.colorbar(fraction=0.046, pad=0.04)

  # Display lines at chain boundaries.
  best_unrelaxed_prot = unrelaxed_proteins[best_model_name]
  total_num_res = best_unrelaxed_prot.residue_index.shape[-1]
  chain_ids = best_unrelaxed_prot.chain_index
  for chain_boundary in np.nonzero(chain_ids[:-1] - chain_ids[1:]):
    if chain_boundary.size:
      plt.plot([0, total_num_res], [chain_boundary, chain_boundary], color='red')
      plt.plot([chain_boundary, chain_boundary], [0, total_num_res], color='red')

  plt.title('Predicted Aligned Error')
  plt.xlabel('Scored residue')
  plt.ylabel('Aligned residue')

# Save the predicted aligned error (if it exists).
pae_output_path = os.path.join(output_dir, 'predicted_aligned_error.json')
if pae_outputs:
  # Save predicted aligned error in the same format as the AF EMBL DB.
  pae_data = notebook_utils.get_pae_json(pae=pae, max_pae=max_pae.item())
  with open(pae_output_path, 'w') as f:
    f.write(pae_data)

# --- Download the predictions ---
!zip -q -r {output_dir}.zip {output_dir}
files.download(f'{output_dir}.zip')
```

### Interpreting the prediction

In general, predicted LDDT (pLDDT) is best used for intra-domain confidence, whereas Predicted Aligned Error (PAE) is best used for determining between-domain or between-chain confidence.

Please see the [AlphaFold methods paper](https://www.nature.com/articles/s41586-021-03819-2), the [AlphaFold predictions of the human proteome paper](https://www.nature.com/articles/s41586-021-03828-1), and the [AlphaFold-Multimer paper](https://www.biorxiv.org/content/10.1101/2021.10.04.463034v1), as well as [our FAQ](https://alphafold.ebi.ac.uk/faq), on how to interpret AlphaFold predictions.

## FAQ & Troubleshooting

* How do I get a predicted protein structure for my protein?
  * Click on the _Connect_ button on the top right to get started.
  * Paste the amino acid sequence of your protein (without any headers) into the “Enter the amino acid sequence to fold” field.
  * Run all cells in the Colab, either by running them individually (with the play button on the left side) or via _Runtime_ > _Run all_. Make sure you run all 5 cells in order.
  * The predicted protein structure will be downloaded once all cells have been executed. Note: This can take minutes to hours - see below.
* How long will this take?
  * Downloading the AlphaFold source code can take up to a few minutes.
  * Downloading and installing the third-party software can take up to a few minutes.
  * The search against genetic databases can take minutes to hours.
  * Running AlphaFold and generating the prediction can take minutes to hours, depending on the length of your protein and on which GPU-type Colab has assigned you.
* My Colab no longer seems to be doing anything, what should I do?
  * Some steps may take minutes to hours to complete.
  * If nothing happens or if you receive an error message, try restarting your Colab runtime via _Runtime_ > _Restart runtime_.
  * If this doesn’t help, try resetting your Colab runtime via _Runtime_ > _Factory reset runtime_.
* How does this compare to the open-source version of AlphaFold?
  * This Colab version of AlphaFold searches a selected portion of the BFD dataset and currently doesn’t use templates, so its accuracy is reduced in comparison to the full version of AlphaFold that is described in the [AlphaFold paper](https://doi.org/10.1038/s41586-021-03819-2) and [Github repo](https://github.com/deepmind/alphafold/) (the full version is available via the inference script).
* What is a Colab?
  * See the [Colab FAQ](https://research.google.com/colaboratory/faq.html).
* I received a warning “Notebook requires high RAM”, what do I do?
  * The resources allocated to your Colab vary. See the [Colab FAQ](https://research.google.com/colaboratory/faq.html) for more details.
  * You can execute the Colab nonetheless.
* I received an error “Colab CPU runtime not supported” or “No GPU/TPU found”, what do I do?
  * Colab CPU runtime is not supported. Try changing your runtime via _Runtime_ > _Change runtime type_ > _Hardware accelerator_ > _GPU_.
  * The type of GPU allocated to your Colab varies. See the [Colab FAQ](https://research.google.com/colaboratory/faq.html) for more details.
  * If you receive “Cannot connect to GPU backend”, you can try again later to see if Colab allocates you a GPU.
  * [Colab Pro](https://colab.research.google.com/signup) offers priority access to GPUs.
* I received an error “ModuleNotFoundError: No module named ...”, even though I ran the cell that imports it, what do I do?
  * Colab notebooks on the free tier time out after a certain amount of time. See the [Colab FAQ](https://research.google.com/colaboratory/faq.html#idle-timeouts). Try rerunning the whole notebook from the beginning.
* Does this tool install anything on my computer?
  * No, everything happens in the cloud on Google Colab.
  * At the end of the Colab execution, a zip-archive with the obtained prediction will be automatically downloaded to your computer.
* How should I share feedback and bug reports?
  * Please share any feedback and bug reports as an [issue](https://github.com/deepmind/alphafold/issues) on Github.

## Related work

Take a look at these Colab notebooks provided by the community (please note that these notebooks may vary from our validated AlphaFold system and we cannot guarantee their accuracy):

* The [ColabFold AlphaFold2 notebook](https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/AlphaFold2.ipynb) by Sergey Ovchinnikov, Milot Mirdita and Martin Steinegger, which uses an API hosted at the Södinglab based on the MMseqs2 server ([Mirdita et al. 2019, Bioinformatics](https://academic.oup.com/bioinformatics/article/35/16/2856/5280135)) for the multiple sequence alignment creation.

# License and Disclaimer

This is not an officially-supported Google product.

This Colab notebook and other information provided is for theoretical modelling only; caution should be exercised in its use. It is provided ‘as-is’ without any warranty of any kind, whether expressed or implied. Information is not intended to be a substitute for professional medical advice, diagnosis, or treatment, and does not constitute medical or other professional advice.

Copyright 2021 DeepMind Technologies Limited.

## AlphaFold Code License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

## Model Parameters License

The AlphaFold parameters are made available under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license. You can find details at: https://creativecommons.org/licenses/by/4.0/legalcode

## Third-party software

Use of the third-party software, libraries or code referred to in the [Acknowledgements section](https://github.com/deepmind/alphafold/#acknowledgements) in the AlphaFold README may be governed by separate terms and conditions or license provisions. Your use of the third-party software, libraries or code is subject to any such terms and you should check that you can comply with any applicable restrictions or terms and conditions before use.

## Mirrored Databases

The following databases have been mirrored by DeepMind, and are available with reference to the following:

* UniProt: v2021\_03 (unmodified), by The UniProt Consortium, available under a [Creative Commons Attribution-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nd/4.0/).
* UniRef90: v2021\_03 (unmodified), by The UniProt Consortium, available under a [Creative Commons Attribution-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nd/4.0/).
* MGnify: v2019\_05 (unmodified), by Mitchell AL et al., available free of all copyright restrictions and made fully and freely available for both non-commercial and commercial use under [CC0 1.0 Universal (CC0 1.0) Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/).
* BFD: (modified), by Steinegger M. and Söding J., modified by DeepMind, available under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by/4.0/). See the Methods section of the [AlphaFold proteome paper](https://www.nature.com/articles/s41586-021-03828-1) for details.
```
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import svm
from sklearn.svm import SVC, LinearSVC

# Read in the data
data = pd.read_csv('Full_Data.csv', encoding="ISO-8859-1")
data.head(1)

train = data[data['Date'] < '20150101']
test = data[data['Date'] > '20141231']

# Removing punctuations
slicedData = train.iloc[:, 2:27]
slicedData.replace(to_replace="[^a-zA-Z]", value=" ", regex=True, inplace=True)

# Renaming column names for ease of access
list1 = [i for i in range(25)]
new_Index = [str(i) for i in list1]
slicedData.columns = new_Index
slicedData.head(5)

# Converting headlines to lower case
for index in new_Index:
    slicedData[index] = slicedData[index].str.lower()
slicedData.head(1)

headlines = []
for row in range(0, len(slicedData.index)):
    headlines.append(' '.join(str(x) for x in slicedData.iloc[row, 0:25]))
headlines[0]

basicvectorizer = CountVectorizer(ngram_range=(1, 1))
basictrain = basicvectorizer.fit_transform(headlines)

%%time
basicmodel = svm.LinearSVC(C=0.1, class_weight='balanced')
basicmodel = basicmodel.fit(basictrain, train["Label"])

testheadlines = []
for row in range(0, len(test.index)):
    testheadlines.append(' '.join(str(x) for x in test.iloc[row, 2:27]))

%%time
basictest = basicvectorizer.transform(testheadlines)
predictions = basicmodel.predict(basictest)
predictions

pd.crosstab(test["Label"], predictions, rownames=["Actual"], colnames=["Predicted"])
print(basictrain.shape)

from sklearn.metrics import classification_report
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix

print(classification_report(test["Label"], predictions))
print(accuracy_score(test["Label"], predictions))

basicvectorizer2 = CountVectorizer(ngram_range=(2, 2))
basictrain2 = basicvectorizer2.fit_transform(headlines)

%%time
basicmodel2 = svm.LinearSVC(C=0.1, class_weight='balanced')
basicmodel2 = basicmodel2.fit(basictrain2, train["Label"])

%%time
basictest2 = basicvectorizer2.transform(testheadlines)
predictions2 = basicmodel2.predict(basictest2)

pd.crosstab(test["Label"], predictions2, rownames=["Actual"], colnames=["Predicted"])
print(basictrain2.shape)
print(classification_report(test["Label"], predictions2))
print(accuracy_score(test["Label"], predictions2))

basicvectorizer3 = CountVectorizer(ngram_range=(3, 3))
basictrain3 = basicvectorizer3.fit_transform(headlines)

%%timeit
basicmodel3 = svm.LinearSVC(C=0.1, class_weight='balanced')
basicmodel3 = basicmodel3.fit(basictrain3, train["Label"])

%%timeit
basictest3 = basicvectorizer3.transform(testheadlines)
predictions3 = basicmodel3.predict(basictest3)

pd.crosstab(test["Label"], predictions3, rownames=["Actual"], colnames=["Predicted"])
print(basictrain3.shape)
print(classification_report(test["Label"], predictions3))
print(accuracy_score(test["Label"], predictions3))
```
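The unigram, bigram, and trigram experiments above repeat the same vectorize–fit–score pattern. As a sketch, that pattern can be factored into one helper; the function name `run_ngram_svm` and the toy headlines below are illustrative, not part of the original notebook, which uses the Dow Jones headline columns instead.

```python
# Hedged sketch: one helper for the repeated n-gram + LinearSVC experiment above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.svm import LinearSVC

def run_ngram_svm(train_texts, train_labels, test_texts, test_labels, n):
    """Fit a bag-of-n-grams LinearSVC and return its test accuracy."""
    vectorizer = CountVectorizer(ngram_range=(n, n))
    X_train = vectorizer.fit_transform(train_texts)   # learn vocabulary on train only
    X_test = vectorizer.transform(test_texts)         # reuse that vocabulary on test
    model = LinearSVC(C=0.1, class_weight='balanced')
    model.fit(X_train, train_labels)
    return accuracy_score(test_labels, model.predict(X_test))

# Toy data just to show the call shape.
train_texts = ["stocks rally on earnings", "markets fall on fears",
               "shares surge after report", "indexes drop on news"]
train_labels = [1, 0, 1, 0]
test_texts = ["stocks surge on earnings", "markets drop on fears"]
test_labels = [1, 0]

for n in (1, 2, 3):
    acc = run_ngram_svm(train_texts, train_labels, test_texts, test_labels, n)
    print(f"{n}-gram accuracy: {acc:.2f}")
```

With a helper like this, the three experiments collapse into a loop over `n`, which also makes it harder to accidentally score one model against another model's vectorizer.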
## XYZ Pro Features

This notebook demonstrates some of the pro features of the XYZ Hub API. XYZ paid features can be found here: [xyz pro features](https://www.here.xyz/xyz_pro/). XYZ plans can be found here: [xyz plans](https://developer.here.com/pricing).

### Virtual Space

A virtual space is described by a definition which references other existing spaces (the upstream spaces). Queries being done to a virtual space will return the features of its upstream spaces combined. Below are different predefined operations for how to combine the features of the upstream spaces.

- [group](#group_cell)
- [merge](#merge_cell)
- [override](#override_cell)
- [custom](#custom_cell)

```
# Make necessary imports.
import os
import json
import warnings

from xyzspaces.datasets import get_chicago_parks_data, get_countries_data
from xyzspaces.exceptions import ApiError
import xyzspaces
```

<div class="alert alert-block alert-warning">
<b>Warning:</b> Before running below cells please make sure you have XYZ Token to interact with xyzspaces. Please see README.md in notebooks folder for more info on XYZ_TOKEN
</div>

```
# Make a XYZ object
try:
    xyz_token = os.environ["XYZ_TOKEN"]
except KeyError:
    xyz_token = "MY-FANCY-XYZ-TOKEN"

if xyz_token == "MY-FANCY-XYZ-TOKEN":
    warnings.warn(
        "Please either set your actual token to env variable XYZ_TOKEN or "
        "just assign value of your actual token to variable xyz_token above."
    )

xyz = xyzspaces.XYZ(credentials=xyz_token)

# Create two spaces which will act as upstream spaces for the virtual space created later.
title1 = "Testing xyzspaces"
description1 = "Temporary space containing countries data."
space1 = xyz.spaces.new(title=title1, description=description1)

# Add some data to space1
gj_countries = get_countries_data()
space1.add_features(features=gj_countries)
space_id1 = space1.info["id"]

title2 = "Testing xyzspaces"
description2 = "Temporary space containing Chicago parks data."
space2 = xyz.spaces.new(title=title2, description=description2)

# Add some data to space2
with open('./data/chicago_parks.geo.json', encoding="utf-8-sig") as json_file:
    gj_chicago = json.load(json_file)

space2.add_features(features=gj_chicago)
space_id2 = space2.info["id"]
```

<a id='group_cell'></a>
#### Group

Group means to combine the content of the specified spaces. All objects of each space will be part of the response when the virtual space is queried by the user. The information about which object came from which space can be found in the XYZ-namespace in the properties of each feature. When writing these objects back to the virtual space, they'll be written back to the upstream space from which they actually came.

```
# Create a new virtual space by grouping the two spaces created above.
title = "Virtual Space for countries and Chicago parks data"
description = "Test group functionality of virtual space"

upstream_spaces = [space_id1, space_id2]
kwargs = {"virtualspace": dict(group=upstream_spaces)}

vspace = xyz.spaces.virtual(title=title, description=description, **kwargs)
print(json.dumps(vspace.info, indent=2))

# Reading a particular feature from space1 via the virtual space.
vfeature1 = vspace.get_feature(feature_id="FRA")
feature1 = space1.get_feature(feature_id="FRA")
assert vfeature1 == feature1

# Reading a particular feature from space2 via the virtual space.
vfeature2 = vspace.get_feature(feature_id="LP")
feature2 = space2.get_feature(feature_id="LP")
assert vfeature2 == feature2

# Deleting a feature from the virtual space deletes the corresponding feature from its upstream space.
vspace.delete_feature(feature_id="FRA")
try:
    space1.get_feature("FRA")
except ApiError as err:
    print(err)

# Delete temporary spaces created.
vspace.delete()
space1.delete()
space2.delete()
```

<a id='merge_cell'></a>
#### Merge

Merge means that objects with the same ID will be merged together.
If there are duplicate feature-IDs in the various data of the upstream spaces, the duplicates will be merged to build a single feature. The result will be a response that is guaranteed to have no features with duplicate IDs. The merge will happen in the order of the space-references in the specified array. That means objects coming from the second space will overwrite potentially existing property values of objects coming from the first space. The information about which object came from which space(s) can be found in the XYZ-namespace in the properties of each feature. When writing these objects back to the virtual space, they'll be written back to the upstream space from which they actually came, or the last one in the list if none was specified. When deleting features from the virtual space, a new pseudo-deleted feature is written to the last space in the list. Trying to read the feature with that ID from the virtual space is not possible afterward.

```
# Create two spaces with duplicate data
title1 = "Testing xyzspaces"
description1 = "Temporary space containing Chicago parks data."
space1 = xyz.spaces.new(title=title1, description=description1)

with open('./data/chicago_parks.geo.json', encoding="utf-8-sig") as json_file:
    gj_chicago = json.load(json_file)

# Add some data to space1
space1.add_features(features=gj_chicago)
space_id1 = space1.info["id"]

title2 = "Testing xyzspaces duplicate"
description2 = "Temporary space containing Chicago parks data duplicate"
space2 = xyz.spaces.new(title=title2, description=description2)

# Add some data to space2
space2.add_features(features=gj_chicago)
space_id2 = space2.info["id"]

# Update a particular feature of the second space so that after the merge the virtual space will have this feature merged.
lp = space2.get_feature("LP")
space2.update_feature(feature_id="LP", data=lp, add_tags=["foo", "bar"])

# Create a new virtual space by merging the two spaces created above.
title = "Virtual Space for countries and Chicago parks data"
description = "Test merge functionality of virtual space"

upstream_spaces = [space_id1, space_id2]
kwargs = {"virtualspace": dict(merge=upstream_spaces)}
vspace = xyz.spaces.virtual(title=title, description=description, **kwargs)
print(vspace.info)

vfeature1 = vspace.get_feature(feature_id="LP")
assert vfeature1["properties"]["@ns:com:here:xyz"]["tags"] == ["foo", "bar"]

bp = space2.get_feature("BP")
space2.update_feature(feature_id="BP", data=bp, add_tags=["foo1", "bar1"])
vfeature2 = vspace.get_feature(feature_id="BP")
assert vfeature2["properties"]["@ns:com:here:xyz"]["tags"] == ["foo1", "bar1"]

space1.delete()
space2.delete()
vspace.delete()
```

<a id='override_cell'></a>
#### Override

Override means that objects with the same ID will be overridden completely. If there are duplicate feature-IDs in the various data of the upstream spaces, the duplicates will be overridden to result in a single feature. The result will be a response that is guaranteed to have no features with duplicate IDs. The override will happen in the order of the space-references in the specified array. That means objects coming from the second space will override potentially existing features coming from the first space. The information about which object came from which space can be found in the XYZ-namespace in the properties of each feature. When writing these objects back to the virtual space, they'll be written back to the upstream space from which they actually came. When deleting features from the virtual space, the same rules as for merge apply.

```
# Create two spaces with duplicate data
title1 = "Testing xyzspaces"
description1 = "Temporary space containing Chicago parks data."
space1 = xyz.spaces.new(title=title1, description=description1)

with open('./data/chicago_parks.geo.json', encoding="utf-8-sig") as json_file:
    gj_chicago = json.load(json_file)

# Add some data to space1
space1.add_features(features=gj_chicago)
space_id1 = space1.info["id"]

title2 = "Testing xyzspaces duplicate"
description2 = "Temporary space containing Chicago parks data duplicate"
space2 = xyz.spaces.new(title=title2, description=description2)

# Add some data to space2
space2.add_features(features=gj_chicago)
space_id2 = space2.info["id"]

# Create a new virtual space by the override operation.
title = "Virtual Space for countries and Chicago parks data"
description = "Test override functionality of virtual space"

upstream_spaces = [space_id1, space_id2]
kwargs = {"virtualspace": dict(override=upstream_spaces)}
vspace = xyz.spaces.virtual(title=title, description=description, **kwargs)
print(vspace.info)

bp = space2.get_feature("BP")
space2.update_feature(feature_id="BP", data=bp, add_tags=["foo1", "bar1"])
vfeature2 = vspace.get_feature(feature_id="BP")
assert vfeature2["properties"]["@ns:com:here:xyz"]["tags"] == ["foo1", "bar1"]

space1.delete()
space2.delete()
vspace.delete()
```

### Applying clustering in space

```
# Create a space containing countries data.
title1 = "Testing xyzspaces"
description1 = "Temporary space containing countries data."
space1 = xyz.spaces.new(title=title1, description=description1)

# Add some data to space1
gj_countries = get_countries_data()
space1.add_features(features=gj_countries)
space_id1 = space1.info["id"]

# Generate clustering for the space
space1.cluster(clustering='hexbin')

# Delete created space
space1.delete()
```

### Rule based Tagging

Rule-based tagging applies tags to multiple features in a space at once, based on rules expressed as JSON-path expressions.
Users can update a space with a map of rules, where each key is the tag to be applied to all features matching the JSON-path expression given as the value. If multiple rules match, multiple tags will be applied to the corresponding matched sets of features. It can even happen that a feature is matched by multiple rules, in which case multiple tags will get added to it.

```
# Create a new space
title = "Testing xyzspaces"
description = "Temporary space containing Chicago parks data."
space = xyz.spaces.new(title=title, description=description)

# Add data to the space.
with open('./data/chicago_parks.geo.json', encoding="utf-8-sig") as json_file:
    gj_chicago = json.load(json_file)

_ = space.add_features(features=gj_chicago)

# Update the space to add tagging rules to it.
tagging_rules = {
    "large": "$.features[?(@.properties.area>=500)]",
    "small": "$.features[?(@.properties.area<500)]",
}
_ = space.update(tagging_rules=tagging_rules)

# Verify that features are tagged correctly based on the rules.
large_parks = space.search(tags=["large"])
for park in large_parks:
    assert park["id"] in ["LP", "BP", "JP"]

small_parks = space.search(tags=["small"])
for park in small_parks:
    assert park["id"] in ["MP", "GP", "HP", "DP", "CP", "COP"]

# Delete created space
space.delete()
```

### Activity Log

The activity log enables tracking of changes in your space. To activate it, create a space with the listener added and `enable_uuid` set to `True`. More information on the activity log can be found [here](https://www.here.xyz/api/devguide/activitylogguide/).
``` title = "Activity-Log Test" description = "Activity-Log Test" listeners = { "id": "activity-log", "params": {"states": 5, "storageMode": "DIFF_ONLY", "writeInvalidatedAt": "true"}, "eventTypes": ["ModifySpaceEvent.request"], } space = xyz.spaces.new( title=title, description=description, enable_uuid=True, listeners=listeners, ) from time import sleep # As activity log is async operation adding sleep to get info sleep(5) print(json.dumps(space.info, indent=2)) space.delete() ```
``` import cudf import cupy as cp import kvikio import numpy as np import pandas as pd import time import zarr # conda install -c conda-forge zarr HOST_LZ4_MAX = 2013929216 # 2113929216 sizes = list(map(lambda x: HOST_LZ4_MAX//(2**x), np.arange(20))) print(sizes) input_size = [] cascaded_size = [] cascaded_temp_size = [] cascaded_round_trip_time = [] lz4_gpu_size = [] lz4_gpu_temp_size = [] lz4_gpu_round_trip_time = [] bitcomp_gpu_size = [] bitcomp_gpu_temp_size = [] bitcomp_gpu_round_trip_time = [] lz4_size = [] lz4_round_trip_time = [] # !wget 'http://textfiles.com/etext/NONFICTION/kjv10.txt' text = open('kjv10.txt').read() bib = np.frombuffer(bytes(text, 'utf-8'), dtype=np.int8) data_buffer = np.tile(bib, 500) # TARGET selects one of the three keys below; it sets the arrangement of test data for a full run of the notebook. TARGET = "Ascending" DTYPE = cp.int32 data = { "Ascending": np.arange(0, HOST_LZ4_MAX, dtype=np.int32), "Random": np.random.randint(0, 100, HOST_LZ4_MAX, dtype=np.int32), "Text": data_buffer } def get_host_data(offset, dtype): exemplar = np.array([1], dtype=dtype) print(offset) print(exemplar.itemsize) print(data[TARGET].itemsize) index = offset // data[TARGET].itemsize index = index - (index % exemplar.itemsize) print(index) return data[TARGET][0:index].view(dtype) input_size = [] cascaded_size = [] cascaded_temp_size = [] cascaded_round_trip_time = [] lz4_gpu_size = [] lz4_gpu_temp_size = [] lz4_gpu_round_trip_time = [] lz4_size = [] lz4_round_trip_time = [] for size in sizes: data_host = get_host_data(size, DTYPE) data_gpu = cp.array(data_host) """Cascaded GPU""" t_gpu = time.time() compressor = kvikio.nvcomp.CascadedCompressor(data_gpu.dtype) compressed = compressor.compress(data_gpu) output_size = compressor.compress_out_size temp_size = compressor.compress_temp_size decompressed = compressor.decompress(compressed) decompressed_size = decompressed.size * decompressed.itemsize input_size.append(data_gpu.size * data_gpu.itemsize) 
cascaded_round_trip_time.append(time.time() - t_gpu) cascaded_size.append(output_size.item()) cascaded_temp_size.append(temp_size[0]) print('-----') print('Input size: ', data_gpu.size * data_gpu.itemsize) print('Cascaded GPU compressor output size: ', output_size) print('Cascaded GPU decompressor temp size: ', temp_size) print('Cascaded GPU decompressor output size: ', decompressed_size) print('Cascaded GPU compress/decompress round trip time: ',time.time() - t_gpu) del compressor """LZ4 Host""" lz4 = zarr.LZ4() t_host = time.time() host_compressed = lz4.encode(data_gpu.get()) del data_gpu print(len(host_compressed)) host_compressed = host_compressed[:2113929216] host_decompressed = lz4.decode(host_compressed) print('Lz4 zarr time: ', time.time() - t_host) print('Lz4 compressed size: ', len(host_compressed)) lz4_size.append(len(host_compressed)) lz4_round_trip_time.append(time.time() - t_host) lz4_gpu_size = [] lz4_gpu_temp_size = [] lz4_gpu_round_trip_time = [] for size in sizes: data_host = get_host_data(size, DTYPE) data_gpu = cp.array(data_host) """LZ4 GPU""" data_gpu = cp.array(data_host) t_gpu = time.time() compressor = kvikio.nvcomp.LZ4Compressor(data_gpu.dtype) compressed = compressor.compress(data_gpu) output_size = compressor.compress_out_size temp_size = compressor.compress_temp_size decompressed = compressor.decompress(compressed) decompressed_size = decompressed.size * decompressed.itemsize lz4_gpu_round_trip_time.append(time.time() - t_gpu) lz4_gpu_size.append(output_size.item()) lz4_gpu_temp_size.append(temp_size[0]) print('lz4 GPU compressor output size: ', output_size) print('lz4 GPU decompressor temp size: ', temp_size) print('lz4 GPU decompressor output size: ', decompressed_size) print('lz4 GPU compress/decompress round trip time: ',time.time() - t_gpu) # zarr lz4 max buffer size is 264241152 int64s # zarr lz4 max buffer size is 2113929216 bytes # cascaded max buffer size is 2147483640 bytes # cascaded max buffer size is 268435456 int64s 
print(input_size) print(cascaded_size) print(cascaded_temp_size) print(cascaded_round_trip_time) print(lz4_gpu_size) print(lz4_gpu_temp_size) print(lz4_gpu_round_trip_time) print(lz4_size) print(lz4_round_trip_time) df = pd.DataFrame({ 'Input Size (Bytes)': input_size, 'cascaded_size': cascaded_size, 'cascaded_temp_size': cascaded_temp_size, 'cascaded_round_trip_time': cascaded_round_trip_time, 'lz4_gpu_size': lz4_gpu_size, 'lz4_gpu_temp_size': lz4_gpu_temp_size, 'lz4_gpu_round_trip_time': lz4_gpu_round_trip_time, 'lz4_size': lz4_size, 'lz4_round_trip_time': lz4_round_trip_time }) ### You'll need the following to display the upcoming plots. ### # !conda install -c conda-forge plotly # !npm install require df['Cascaded Compression Ratio'] = df['Input Size (Bytes)'] / df['cascaded_size'] df['Lz4 Gpu Compression Ratio'] = df['Input Size (Bytes)'] / df['lz4_gpu_size'] df['Lz4 Host Compression Ratio'] = df['Input Size (Bytes)'] / df['lz4_size'] df['Cascaded Temp Buffer Size Ratio'] = df['cascaded_temp_size'] / df['Input Size (Bytes)'] df['Lz4 Temp Buffer Size Ratio'] = df['lz4_gpu_temp_size'] / df['Input Size (Bytes)'] df['Cascaded Speedup'] = df['lz4_round_trip_time'] / df['cascaded_round_trip_time'] df['Lz4 Gpu Speedup'] = df['lz4_round_trip_time'] / df['lz4_gpu_round_trip_time'] print(df.columns) import plotly.express as px title = 'Gpu Acceleration over Zarr Lz4 - ' + TARGET + " " + str(DTYPE) subtitle = 'Includes host->gpu copy time' fig = px.line(df, x='Input Size (Bytes)', y=['Cascaded Speedup', 'Lz4 Gpu Speedup'], labels={'value': 'Multiple Faster'}, title=title) fig.update_xaxes(type='category') fig.show() import plotly.express as px title = 'Compression - ' + TARGET + " " + str(DTYPE) fig = px.line(df, x='Input Size (Bytes)', y=[ 'Lz4 Gpu Compression Ratio', 'Cascaded Compression Ratio', 'Lz4 Host Compression Ratio' ], labels={'value': 'Compression Factor'}, title=title) fig.update_xaxes(type='category') fig.show() import plotly.express as px title = 'Temp 
Buffer - ' + TARGET fig = px.line(df, x='Input Size (Bytes)', y=[ 'Cascaded Temp Buffer Size Ratio', 'Lz4 Temp Buffer Size Ratio' ], labels={'value': 'temp / input size'}, title=title) fig.update_xaxes(type='category') fig.show() ```
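The derived DataFrame columns above (compression ratios, temp-buffer ratios, speedups) are plain element-wise divisions; a pure-Python sketch with made-up sample numbers shows the arithmetic:

```python
# Hypothetical measurements (bytes and milliseconds); not real benchmark output.
input_size = [1000, 2000]
cascaded_size = [100, 400]
lz4_round_trip_ms = [800, 1600]
cascaded_round_trip_ms = [200, 200]

# Compression ratio: input bytes per compressed byte (higher is better).
compression_ratio = [i / c for i, c in zip(input_size, cascaded_size)]
# Speedup: host LZ4 round-trip time divided by GPU round-trip time.
speedup = [h / g for h, g in zip(lz4_round_trip_ms, cascaded_round_trip_ms)]
print(compression_ratio, speedup)  # [10.0, 5.0] [4.0, 8.0]
```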
``` from copy import deepcopy import itertools import jsonlines from pathlib import Path from sklearn.metrics import classification_report from sklearn.metrics import cohen_kappa_score from tasks.pattern_matching import WinomtPatternMatchingTask from tasks.pattern_matching.winomt_utils.language_predictors.util import WB_GENDER_TYPES, GENDER from tasks.contrastive_conditioning import WinomtContrastiveConditioningTask from translation_models.fairseq_models import load_sota_evaluator from translation_models.testing_models import DictTranslationModel # Define data paths data_path = Path(".") / "data" winomt_ende_translations_path = data_path / "aws.de.full.txt" winomt_ende_annotator1_path = data_path / "en-de.annotator1.jsonl" winomt_ende_annotator2_path = data_path / "en-de.annotator2.jsonl" # Load annotations with jsonlines.open(winomt_ende_annotator1_path) as f: annotations1 = {line["Sample ID"]: line for line in f} with jsonlines.open(winomt_ende_annotator2_path) as f: annotations2 = {line["Sample ID"]: line for line in f} # Flatten labels for key in annotations1: annotations1[key]["label"] = annotations1[key]["label"][0] for key in annotations2: annotations2[key]["label"] = annotations2[key]["label"][0] # Remove samples that were only partially annotated for key in list(annotations1.keys()): if key not in annotations2: del annotations1[key] for key in list(annotations2.keys()): if key not in annotations1: del annotations2[key] # Inter-annotator agreement before data cleaning keys = list(annotations1.keys()) labels1 = [annotations1[key]["label"] for key in keys] labels2 = [annotations2[key]["label"] for key in keys] kappa = cohen_kappa_score(labels1, labels2) print(kappa) # Clean data for annotations in [annotations1, annotations2]: for key in keys: # Treat neutral as correct if annotations[key]["label"] == "Both / Neutral / Ambiguous": annotations[key]["label"] = annotations[key]["Gold Gender"].title() # Treat bad as incorrect if annotations[key]["label"] == 
"Translation too bad to tell": annotations[key]["label"] = "Male" if annotations[key]["Gold Gender"] == "female" else "Female" # Inter-annotator agreement after data cleaning keys = list(annotations1.keys()) labels1 = [annotations1[key]["label"] for key in keys] labels2 = [annotations2[key]["label"] for key in keys] kappa = cohen_kappa_score(labels1, labels2) print(kappa) # Merge annotations annotations = list(itertools.chain(annotations1.values(), annotations2.values())) # Load translations with open(winomt_ende_translations_path) as f: translations = {line.split(" ||| ")[0].strip(): line.split(" ||| ")[1].strip() for line in f} # Run classic (pattern-matching) WinoMT winomt_pattern_matching = WinomtPatternMatchingTask( tgt_language="de", skip_neutral_gold=False, verbose=True, ) pattern_matching_evaluated_samples = winomt_pattern_matching.evaluate(DictTranslationModel(translations)).samples # Run contrastive conditioning evaluator_model = load_sota_evaluator("de") winomt_contrastive_conditioning = WinomtContrastiveConditioningTask( evaluator_model=evaluator_model, skip_neutral_gold=False, category_wise_weighting=True, ) contrastive_conditioning_weighted_evaluated_samples = winomt_contrastive_conditioning.evaluate(DictTranslationModel(translations)).samples # Create unweighted contrastive conditioning samples contrastive_conditioning_unweighted_evaluated_samples = deepcopy(contrastive_conditioning_weighted_evaluated_samples) for sample in contrastive_conditioning_unweighted_evaluated_samples: sample.weight = 1 # Evaluate for evaluated_samples in [ pattern_matching_evaluated_samples, contrastive_conditioning_unweighted_evaluated_samples, contrastive_conditioning_weighted_evaluated_samples, ]: predicted_labels = [] gold_labels = [] weights = [] for annotation in annotations: gold_labels.append(WB_GENDER_TYPES[annotation["label"].lower()].value) sample_index = int(annotation["Index"]) evaluated_sample = evaluated_samples[sample_index] assert evaluated_sample.sentence 
== annotation["Source Sentence"] if hasattr(evaluated_sample, "predicted_gender"): predicted_gender = evaluated_sample.predicted_gender.value # Convert neutral or unknown to gold in order to treat pattern-matching WinoMT as fairly as possible if predicted_gender in {GENDER.neutral.value, GENDER.unknown.value}: predicted_gender = evaluated_sample.gold_gender.value else: if evaluated_sample.is_correct: predicted_gender = WB_GENDER_TYPES[evaluated_sample.gold_gender].value else: predicted_gender = int(not WB_GENDER_TYPES[evaluated_sample.gold_gender].value) predicted_labels.append(predicted_gender) weights.append(getattr(evaluated_sample, "weight", 1)) class_labels = [gender.value for gender in GENDER][:2] target_names = [gender.name for gender in GENDER][:2] print(classification_report( y_true=gold_labels, y_pred=predicted_labels, labels=class_labels, target_names=target_names, sample_weight=weights, zero_division=True, digits=3, )) ```
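Cohen's kappa, used above for inter-annotator agreement, is easy to verify by hand. A minimal pure-Python implementation of the standard formula $\kappa = (p_o - p_e)/(1 - p_e)$, run on toy labels (not the real annotations):

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences."""
    n = len(a)
    # observed agreement: fraction of positions where the raters agree
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # chance agreement: sum over labels of p_rater1(label) * p_rater2(label)
    p_e = sum(ca[k] / n * cb[k] / n for k in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

labels1 = ["M", "M", "F", "F"]
labels2 = ["M", "F", "F", "F"]
print(cohen_kappa(labels1, labels2))  # 0.5
```

Here the raters agree on 3 of 4 items ($p_o = 0.75$) and chance agreement is $p_e = 0.5$, giving $\kappa = 0.5$ — the same value `sklearn.metrics.cohen_kappa_score` returns for these inputs.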
<!--BOOK_INFORMATION--> <img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png"> *This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).* *The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* <!--NAVIGATION--> < [Input and Output History](01.04-Input-Output-History.ipynb) | [Contents](Index.ipynb) | [Errors and Debugging](01.06-Errors-and-Debugging.ipynb) > # IPython and Shell Commands When working interactively with the standard Python interpreter, one of the frustrations is the need to switch between multiple windows to access Python tools and system command-line tools. IPython bridges this gap, and gives you a syntax for executing shell commands directly from within the IPython terminal. The magic happens with the exclamation point: anything appearing after ``!`` on a line will be executed not by the Python kernel, but by the system command-line. The following assumes you're on a Unix-like system, such as Linux or Mac OSX. Some of the examples that follow will fail on Windows, which uses a different type of shell by default (though with the 2016 announcement of native Bash shells on Windows, soon this may no longer be an issue!). If you're unfamiliar with shell commands, I'd suggest reviewing the [Shell Tutorial](http://swcarpentry.github.io/shell-novice/) put together by the always excellent Software Carpentry Foundation. 
## Quick Introduction to the Shell A full intro to using the shell/terminal/command-line is well beyond the scope of this chapter, but for the uninitiated we will offer a quick introduction here. The shell is a way to interact textually with your computer. Ever since the mid 1980s, when Microsoft and Apple introduced the first versions of their now ubiquitous graphical operating systems, most computer users have interacted with their operating system through familiar clicking of menus and drag-and-drop movements. But operating systems existed long before these graphical user interfaces, and were primarily controlled through sequences of text input: at the prompt, the user would type a command, and the computer would do what the user told it to. Those early prompt systems are the precursors of the shells and terminals that most active data scientists still use today. Someone unfamiliar with the shell might ask why you would bother with this, when many results can be accomplished by simply clicking on icons and menus. A shell user might reply with another question: why hunt icons and click menus when you can accomplish things much more easily by typing? While it might sound like a typical tech preference impasse, when moving beyond basic tasks it quickly becomes clear that the shell offers much more control of advanced tasks, though admittedly the learning curve can intimidate the average computer user. 
As an example, here is a sample of a Linux/OSX shell session where a user explores, creates, and modifies directories and files on their system (``osx:~ $`` is the prompt, and everything after the ``$`` sign is the typed command; text that is preceded by a ``#`` is meant just as description, rather than something you would actually type in): ```bash osx:~ $ echo "hello world" # echo is like Python's print function hello world osx:~ $ pwd # pwd = print working directory /home/jake # this is the "path" that we're sitting in osx:~ $ ls # ls = list working directory contents notebooks projects osx:~ $ cd projects/ # cd = change directory osx:projects $ pwd /home/jake/projects osx:projects $ ls datasci_book mpld3 myproject.txt osx:projects $ mkdir myproject # mkdir = make new directory osx:projects $ cd myproject/ osx:myproject $ mv ../myproject.txt ./ # mv = move file. Here we're moving the # file myproject.txt from one directory # up (../) to the current directory (./) osx:myproject $ ls myproject.txt ``` Notice that all of this is just a compact way to do familiar operations (navigating a directory structure, creating a directory, moving a file, etc.) by typing commands rather than clicking icons and menus. Note that with just a few commands (``pwd``, ``ls``, ``cd``, ``mkdir``, and ``cp``) you can do many of the most common file operations. It's when you go beyond these basics that the shell approach becomes really powerful. ## Shell Commands in IPython Any command that works at the command-line can be used in IPython by prefixing it with the ``!`` character. For example, the ``ls``, ``pwd``, and ``echo`` commands can be run as follows: ```ipython In [1]: !ls myproject.txt In [2]: !pwd /home/jake/projects/myproject In [3]: !echo "printing from the shell" printing from the shell ``` ## Passing Values to and from the Shell Shell commands can not only be called from IPython, but can also be made to interact with the IPython namespace. 
For example, you can save the output of any shell command to a Python list using the assignment operator: ```ipython In [4]: contents = !ls In [5]: print(contents) ['myproject.txt'] In [6]: directory = !pwd In [7]: print(directory) ['/Users/jakevdp/notebooks/tmp/myproject'] ``` Note that these results are not returned as lists, but as a special shell return type defined in IPython: ```ipython In [8]: type(directory) IPython.utils.text.SList ``` This looks and acts a lot like a Python list, but has additional functionality, such as the ``grep`` and ``fields`` methods and the ``s``, ``n``, and ``p`` properties that allow you to search, filter, and display the results in convenient ways. For more information on these, you can use IPython's built-in help features. Communication in the other direction–passing Python variables into the shell–is possible using the ``{varname}`` syntax: ```ipython In [9]: message = "hello from Python" In [10]: !echo {message} hello from Python ``` The curly braces contain the variable name, which is replaced by the variable's contents in the shell command. # Shell-Related Magic Commands If you play with IPython's shell commands for a while, you might notice that you cannot use ``!cd`` to navigate the filesystem: ```ipython In [11]: !pwd /home/jake/projects/myproject In [12]: !cd .. In [13]: !pwd /home/jake/projects/myproject ``` The reason is that shell commands in the notebook are executed in a temporary subshell. If you'd like to change the working directory in a more enduring way, you can use the ``%cd`` magic command: ```ipython In [14]: %cd .. /home/jake/projects ``` In fact, by default you can even use this without the ``%`` sign: ```ipython In [15]: cd myproject /home/jake/projects/myproject ``` This is known as an ``automagic`` function, and this behavior can be toggled with the ``%automagic`` magic function. 
Besides ``%cd``, other available shell-like magic functions are ``%cat``, ``%cp``, ``%env``, ``%ls``, ``%man``, ``%mkdir``, ``%more``, ``%mv``, ``%pwd``, ``%rm``, and ``%rmdir``, any of which can be used without the ``%`` sign if ``automagic`` is on. This makes it so that you can almost treat the IPython prompt as if it's a normal shell: ```ipython In [16]: mkdir tmp In [17]: ls myproject.txt tmp/ In [18]: cp myproject.txt tmp/ In [19]: ls tmp myproject.txt In [20]: rm -r tmp ``` This access to the shell from within the same terminal window as your Python session means that there is a lot less switching back and forth between interpreter and shell as you write your Python code. <!--NAVIGATION--> < [Input and Output History](01.04-Input-Output-History.ipynb) | [Contents](Index.ipynb) | [Errors and Debugging](01.06-Errors-and-Debugging.ipynb) >
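As a closing aside: outside IPython, the nearest standard-library analogue to capturing ``!``-command output (as in ``contents = !ls`` earlier) is the ``subprocess`` module, which gives you a plain list of lines rather than IPython's ``SList``:

```python
import subprocess

# Run a command and capture its output as a list of lines,
# roughly what `contents = !echo ...` gives you in IPython.
result = subprocess.run(["echo", "hello from the shell"],
                        capture_output=True, text=True, check=True)
lines = result.stdout.splitlines()
print(lines)  # ['hello from the shell']
```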
# 1. Example of modification attack (Graph Classification) [GRB](https://cogdl.ai/grb/home) supports modification attacks on the graph classification task. Contents - [Load Datasets](##Load-Datasets) - [Prepare Model](##Prepare-Model) - [Modification Attack](##Modification-Attack) ``` import os import torch import numpy as np import scipy.sparse as sp import grb.utils as utils from grb.dataset import CogDLDataset, OGBDataset ``` ## 1.1. Load Datasets ``` dataset_name = "mutag" data_dir="../../data/" dataset = CogDLDataset(name=dataset_name, data_dir=data_dir) ``` ## 1.2. Prepare Model GRB supports models based on pure Pytorch, CogDL or DGL. The following is an example of GCNGC (GCN for Graph Classification) implemented by pure Pytorch. Other models can be found in ``grb/model/torch``, ``grb/model/cogdl``, or ``grb/model/dgl``. ### 1.2.1. GCNGC (Graph Convolutional Network for Graph Classification) ``` from grb.model.torch import GCNGC model_name = "gcngc" model = GCNGC(in_features=dataset.num_features, out_features=dataset.num_classes, hidden_features=64, n_layers=3, residual=False, dropout=0.5) print("Number of parameters: {}.".format(utils.get_num_params(model))) print(model) ``` ### 1.2.2 Training GRB provides ``grb.trainer.trainer`` to facilitate the training process of GNNs. For the graph classification task, mini-batch training on graphs is applied: multiple graphs are merged into one large graph, then the results are pooled to predict a label for each graph. 
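The merge-and-pool step described above can be sketched with plain NumPy: the graphs in a batch become blocks of one block-diagonal adjacency matrix, and node-level outputs are mean-pooled per graph (a toy illustration, not GRB's implementation):

```python
import numpy as np

# Two toy graphs as dense adjacency matrices.
a1 = np.array([[0., 1.], [1., 0.]])                          # 2-node graph
a2 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])    # 3-node graph

# Merge into one block-diagonal adjacency over all 5 nodes.
merged = np.zeros((5, 5))
merged[:2, :2] = a1
merged[2:, 2:] = a2

# Per-node features and a per-node graph index.
x = np.arange(5, dtype=float).reshape(5, 1)
batch = np.array([0, 0, 1, 1, 1])

# Mean-pool node outputs back to one row per graph.
pooled = np.stack([x[batch == g].mean(axis=0) for g in (0, 1)])
print(pooled.ravel())  # graph means: 0.5 and 3.0
```

Because the merged adjacency has no cross-graph edges, message passing on the big graph is equivalent to running it on each graph separately.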
``` save_dir = "./saved_models/{}/{}".format(dataset_name, model_name) save_name = "model.pt" device = "cuda:0" batch_size = 20 from grb.trainer.trainer import GraphTrainer trainer = GraphTrainer(dataset=dataset, batch_size=batch_size, optimizer=torch.optim.Adam(model.parameters(), lr=0.01), loss=torch.nn.functional.cross_entropy, lr_scheduler=False, early_stop=True, early_stop_patience=50, device=device) trainer.train(model=model, n_epoch=200, eval_every=1, save_after=0, save_dir=save_dir, save_name=save_name, verbose=False) ``` ### 1.2.3 Inference ``` model = torch.load(os.path.join(save_dir, save_name)) model = model.to(device) model.eval() # by trainer pred = trainer.inference(model) ``` ### 1.2.4 Evaluation (without attack) ``` # by trainer test_score = trainer.evaluate(model, dataset.index_test) print("Test score: {:.4f}".format(test_score)) ``` ## 1.3. Modification Attack ``` from grb.attack.modification.flip import FLIP adj_attack_list = [] for i in dataset.index_test: print("Attacking graph {}".format(i)) graph = dataset.graphs[i] adj = utils.build_adj(graph.edge_attr, graph.edge_index) n_edge_test = adj.getnnz() n_mod_ratio = 0.5 n_edge_mod = int(n_edge_test * n_mod_ratio) # degree flipping attack = FLIP(n_edge_mod, flip_type="deg", mode="descend", device=device) adj_attack = attack.attack(adj, index_target=np.arange(graph.num_nodes)) adj_attack_list.append(adj_attack) logits = torch.zeros((len(adj_attack_list), dataset.num_classes)).to(device) for i in range(len(adj_attack_list)): adj = utils.adj_preprocess(adj_attack_list[i], device=device) logits[i] = model(dataset.graphs[dataset.index_test[i]].x.to(device), adj) score = trainer.eval_metric(logits, dataset.labels[dataset.index_test].to(device)) print("Test score (after attack): {:.4f}".format(score)) ``` For further information, please refer to the [GRB Documentation](https://grb.readthedocs.io/en/latest/).
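As a rough sketch of what a degree-based flip in "descend" mode does (a hypothetical simplification, not GRB's actual implementation): rank undirected edges by the sum of their endpoint degrees and remove the top ``n_edge_mod``:

```python
import numpy as np

def degree_flip(adj, n_edge_mod):
    """Remove the n_edge_mod undirected edges with the largest
    endpoint-degree sum (a toy 'deg'/'descend' flip sketch)."""
    adj = adj.copy()
    deg = adj.sum(axis=1)
    rows, cols = np.nonzero(np.triu(adj))        # each undirected edge once
    order = np.argsort(-(deg[rows] + deg[cols]), kind="stable")
    for i in order[:n_edge_mod]:
        adj[rows[i], cols[i]] = adj[cols[i], rows[i]] = 0
    return adj

# Star graph (node 0 is the hub) plus the extra edge (1, 2).
a = np.zeros((4, 4))
for r, c in [(0, 1), (0, 2), (0, 3), (1, 2)]:
    a[r, c] = a[c, r] = 1.0
attacked = degree_flip(a, 1)
print(int(attacked.sum()) // 2)  # 3 edges remain; (0, 1) was removed first
```

Edges touching the hub have the highest degree sums, so high-degree structure is attacked first, which is what makes degree-descending flips damaging to hub-centric graphs.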
# Describing quantum computers This chapter will introduce the different mathematical objects and notations that we’re going to be using to describe quantum computers. Symbols, equations and specialist vocabulary allow us to communicate and work with maths in a very concise way, they’re incredibly powerful tools, but they also come at a cost; they are difficult to understand if you don’t know what all the symbols mean, and this can alienate people. To counter this, in this textbook, equations are interactive. You can move your mouse over symbols to see what they mean. We will also slowly sprinkle in some [esoteric](gloss:esoteric) words so you can start speaking the language of mathematics and quantum computing, and you can see explanations of these words by moving your mouse over these words too. ## Amplitudes A classical probability is often represented by a [real number](gloss:real-number) between 0 and 1, but amplitudes also have a direction. A natural candidate to represent an amplitude is a [complex number](gloss:complex-number), as a complex number can also be completely described by both a magnitude and a direction, but in this course we will only work with amplitudes that can point in two directions (e.g. left and right) and we won’t worry about anything else. ![Image comparing amplitudes and probabilities](images/quantum-states/prob-vs-amp.svg) This makes the maths a lot simpler, as we can now describe any amplitude as a number between -1 and +1; if the number is positive, the amplitude is facing forward, and if it's negative, it's facing backwards. It turns out that this is still enough to do interesting things! <!-- ::: q-block.exercise --> ### Quick quiz <!-- ::: q-quiz(goal="intro-describing-0") --> <!-- ::: .question --> Which of these is a valid amplitude but _not_ a valid probability? <!-- ::: --> <!-- ::: .option(correct) --> 1. $-1$ <!-- ::: --> <!-- ::: .option --> 2. $1/3$ <!-- ::: --> <!-- ::: .option --> 3. 
$1.01$ <!-- ::: --> <!-- ::: .option --> 3. $\sqrt{-2}$ <!-- ::: --> <!-- ::: --> <!-- ::: --> ## State vectors We saw in the last page that we can predict the behaviour of a quantum system by keeping track of the probability amplitudes for each outcome at each point in our computation. We also saw that, for n qubits, there are $2^n$ possible outcomes, and we can store these amplitudes in lists of length $2^n$ which we call vectors. Since these vectors describe the state of our qubits, we call them “state vectors”. Here is an example of a state vector for a quantum computer with two qubits: $$\class{x-ket}{|x\rangle} \class{def-equal}{:=} \begin{bmatrix}\cssId{_amp-0}{\sqrt{\tfrac{1}{2}}} \\ \cssId{_amp-1}{\sqrt{\tfrac{1}{2}}} \\ \cssId{_amp-2}{0} \\ \cssId{_amp-3}{0} \end{bmatrix}$$ Spend some time reading the tooltips on the equation above, then answer the questions below. <!-- ::: q-block.exercise --> ### Quick quiz <!-- ::: q-quiz(goal="intro-describing-1") --> <!-- ::: .question --> In the state vector above, what is the _amplitude_ of the outcome ‘01’? <!-- ::: --> <!-- ::: .option --> 1. $1$ <!-- ::: --> <!-- ::: .option(correct) --> 2. $\sqrt{\tfrac{1}{2}}$ <!-- ::: --> <!-- ::: .option --> 3. $1/2$ <!-- ::: --> <!-- ::: .option --> 3. $0$ <!-- ::: --> <!-- ::: --> *** <!-- ::: q-quiz(goal="intro-describing-2") --> <!-- ::: .question --> If the state vector above described the state of some qubits, what would be the _probability_ of measuring ‘00’? <!-- ::: --> <!-- ::: .option --> 1. $1$ <!-- ::: --> <!-- ::: .option --> 2. $\sqrt{\tfrac{1}{2}}$ <!-- ::: --> <!-- ::: .option(correct) --> 3. $1/2$ <!-- ::: --> <!-- ::: .option --> 3. $0$ <!-- ::: --> <!-- ::: --> <!-- ::: --> ## Adding and multiplying vectors If you study other areas of mathematics, you’ll find that lots of things can be considered vectors. We’ve introduced vectors as ‘lists of numbers’, because this is how we will consider them both in this textbook and in Qiskit. 
But what separates a vector from any old list of numbers is that mathematicians decided on some well-defined rules for adding two vectors, and for multiplying vectors by [scalars](gloss:scalar-gloss). ### Multiplying vectors by scalars For example, here is a vector multiplied by a scalar: $$ \cssId{_number-three}{3} \begin{bmatrix} \class{_vec-el-0}{1} \\ \class{_vec-el-1}{2} \\ \class{_vec-el-2}{-1} \\ \class{_vec-el-3}{\tfrac{1}{2}} \\ \end{bmatrix} \class{equals}{=} \begin{bmatrix} \class{_vec-el-0}{3} \\ \class{_vec-el-1}{6} \\ \class{_vec-el-2}{-3} \\ \class{_vec-el-3}{\tfrac{3}{2}} \\ \end{bmatrix} $$ We can see that each element of the vector has been multiplied by 3. The more general rule for a vector with $N$ elements is: $$ \class{scalar}{s} \begin{bmatrix} \class{_vec-el-0}{e_0} \\ \class{_vec-el-1}{e_1} \\ \class{_vec-el-2}{e_2} \\ \class{dots}{\vdots} \\ \class{_vec-el-n}{e_{N-1}} \\ \end{bmatrix} \class{equals}{=} \begin{bmatrix} \class{_vec-el-0}{s\times e_0} \\ \class{_vec-el-1}{s\times e_1} \\ \class{_vec-el-2}{s\times e_2} \\ \class{dots}{\vdots} \\ \class{_vec-el-n}{s\times e_{N-1}} \\ \end{bmatrix} $$ So we could have written the state vector $|x\rangle$ we defined above more neatly like this: $$ \class{x-ket}{|x\rangle} = \class{scalar}{\sqrt{\tfrac{1}{2}}} \begin{bmatrix} \cssId{_amp-0-1}{1} \\ \cssId{_amp-1-1}{1} \\ \cssId{_amp-2-1}{0} \\ \cssId{_amp-3-1}{0} \\ \end{bmatrix} $$ ### Adding two vectors The second rule is for adding two vectors together. This is only defined when the two vectors have the same number of elements, and gives a new vector with the same number of elements. 
Here is the general rule: $$ \begin{bmatrix} \class{_vec-el-0}{a_0} \\ \class{_vec-el-1}{a_1} \\ \class{_vec-el-2}{a_2} \\ \class{_vec-el-3}{a_3} \\ \class{dots}{\vdots} \\ \class{_vec-el-n}{a_{N-1}} \\ \end{bmatrix} + \begin{bmatrix} \class{_vec-el-0}{b_0} \\ \class{_vec-el-1}{b_1} \\ \class{_vec-el-2}{b_2} \\ \class{_vec-el-3}{b_3} \\ \class{dots}{\vdots} \\ \class{_vec-el-n}{b_{N-1}} \\ \end{bmatrix} \class{equals}{=} \begin{bmatrix} \class{_vec-el-0}{a_0 + b_0} \\ \class{_vec-el-1}{a_1 + b_1} \\ \class{_vec-el-2}{a_2 + b_2} \\ \class{_vec-el-3}{a_3 + b_3} \\ \class{dots}{\vdots} \\ \class{_vec-el-n}{a_{N-1} + b_{N-1}} \\ \end{bmatrix} $$ This means we can add and subtract vectors to make new vectors. For example, if we define the vectors $|00\rangle$ and $|01\rangle$ like so: $$ \class{def-00}{|00\rangle} \class{def-equal}{:=} \begin{bmatrix} \class{_amp-0-general}{1} \\ \class{_amp-1-general}{0} \\ \class{_amp-2-general}{0} \\ \class{_amp-3-general}{0} \end{bmatrix}, \quad \class{def-01}{|01\rangle} \class{def-equal}{:=} \begin{bmatrix} \class{_amp-0-general}{0} \\ \class{_amp-1-general}{1} \\ \class{_amp-2-general}{0} \\ \class{_amp-3-general}{0} \end{bmatrix} $$ We can write $\class{x-ket}{|x\rangle}$ in the form: $$\class{x-ket}{|x\rangle} = \sqrt{\tfrac{1}{2}}(\class{def-00}{|00\rangle} + \class{def-01}{|01\rangle})$$ We call adding quantum states like this “superposing” them, so we can say “$|x\rangle$ is a superposition of the states $|00\rangle$ and $|01\rangle$.” In fact, it’s convention in quantum computing to define the computational basis states like so: $$ \class{def-00}{|00\rangle} \class{def-equal}{:=} \begin{bmatrix} \class{_amp-0-general}{1} \\ \class{_amp-1-general}{0} \\ \class{_amp-2-general}{0} \\ \class{_amp-3-general}{0} \end{bmatrix}, \quad \class{def-01}{|01\rangle} \class{def-equal}{:=} \begin{bmatrix} \class{_amp-0-general}{0} \\ \class{_amp-1-general}{1} \\ \class{_amp-2-general}{0} \\ \class{_amp-3-general}{0} \end{bmatrix}, 
\quad \class{def-10}{|10\rangle} \class{def-equal}{:=} \begin{bmatrix} \class{_amp-0-general}{0} \\ \class{_amp-1-general}{0} \\ \class{_amp-2-general}{1} \\ \class{_amp-3-general}{0} \end{bmatrix}, \quad \class{def-11}{|11\rangle} \class{def-equal}{:=} \begin{bmatrix} \class{_amp-0-general}{0} \\ \class{_amp-1-general}{0} \\ \class{_amp-2-general}{0} \\ \class{_amp-3-general}{1} \end{bmatrix} $$ And we can write any quantum state as a superposition of these state vectors, if we multiply each vector by the correct number and add them together: $$ \cssId{_psi-ket}{|\psi\rangle} = \class{_amp-0-general}{a_{00}}\class{def-00}{|00\rangle} + \class{_amp-1-general}{a_{01}}\class{def-01}{|01\rangle} + \class{_amp-2-general}{a_{10}}\class{def-10}{|10\rangle} + \class{_amp-3-general}{a_{11}}\class{def-11}{|11\rangle} \class{equals}{=} \begin{bmatrix} \class{_amp-0-general}{a_{00}} \\ \class{_amp-1-general}{a_{01}} \\ \class{_amp-2-general}{a_{10}} \\ \class{_amp-3-general}{a_{11}} \\ \end{bmatrix} $$ Since we can write any vector as a combination of these four vectors, we say these four vectors form a basis, which we will call the _computational basis_. The computational basis is not the only basis. 
For single qubits, a popular basis is formed by the vectors $\class{plus-ket}{|{+}\rangle}$ and $\class{minus-ket}{|{-}\rangle}$: <!-- ::: column --> ![image showing both the |0>, |1> basis and the |+>, |-> basis on the same plane](images/quantum-states/basis.svg) <!-- ::: column --> $$ \class{plus-ket}{|{+}\rangle} = \sqrt{\tfrac{1}{2}} \begin{bmatrix} \class{_sq-amp0}{1} \\ \class{_sq-amp1}{1} \end{bmatrix} $$ $$ \class{minus-ket}{|{-}\rangle} = \sqrt{\tfrac{1}{2}} \begin{bmatrix} \class{_sq-amp0}{1} \\ \class{_sq-amp1}{-1} \end{bmatrix} $$ <!-- ::: --> <!-- ::: q-block.exercise --> ### Try it Find values for $\alpha$, $\beta$, $\gamma$ and $\delta$ such that these equations are true: - $\alpha|{+}\rangle + \beta|{-}\rangle = |0\rangle$ - $\gamma|{+}\rangle + \delta|{-}\rangle = |1\rangle$ <!-- ::: --> ## How many different state vectors are there? We know that we can represent any quantum state using vectors, but is any vector a valid quantum state? In our case, no; since we square our amplitudes to find the probability of outcomes occurring, we need these squares to add to one, otherwise it doesn't make sense. $$ \cssId{sum}{\sum^{N-1}_{i=0}} \cssId{_amp-i}{a_i}^2 = 1 $$ <!-- ::: q-block.exercise --> ### Quick quiz <!-- ::: q-quiz(goal="quiz2") --> <!-- ::: .question --> Which of these is a valid quantum state? (Try adding up the squared amplitudes.) <!-- ::: --> <!-- ::: .option(correct) --> 1. $\sqrt{\tfrac{1}{3}}\begin{bmatrix} 1 \\\\ -1 \\\\ 1 \\\\ 0 \end{bmatrix}$ <!-- ::: --> <!-- ::: .option --> 2. $\sqrt{\tfrac{1}{2}}\begin{bmatrix} 1 \\\\ -1 \\\\ -1 \\\\ 1 \end{bmatrix}$ <!-- ::: --> <!-- ::: .option --> 3. $\tfrac{1}{2}\begin{bmatrix} 1 \\\\ 1 \end{bmatrix}$ <!-- ::: --> <!-- ::: --> <!-- ::: --> Another factor is something we call "global phases" of state vector. Since we only know phase exists because of the interference effects it produces, we can only ever measure phase _differences_. 
If we rotated all the amplitudes in a state vector by the same amount, we'd still see the exact same behaviour.

<!-- ::: column -->

![image showing interference effects with different starting phases](images/quantum-states/global-phase-L.svg)

<!-- ::: column -->

![image showing interference effects with different starting phases](images/quantum-states/global-phase-R.svg)

<!-- ::: -->

For example, there is no experiment we could perform that would be able to distinguish between these two states:

<!-- ::: column -->

$$ |a\rangle = \sqrt{\tfrac{1}{2}}\begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} $$

<!-- ::: column -->

$$ -|a\rangle = \sqrt{\tfrac{1}{2}}\begin{bmatrix} -1 \\ 0 \\ 0 \\ -1 \end{bmatrix} $$

<!-- ::: -->

This is because the differences between each of the amplitudes are the same. You could say these two vectors are different _mathematically_, but the same _physically_.

## Quantum operations

So now we know all about the different states our qubits can be in, it’s time to look at how we represent the operations that transform one state to another. In the same way there is a transition probability that a certain action will transform a coin from heads to tails, there is a transition amplitude for each starting state and end state of our qubits. We can describe any quantum operation through these transition amplitudes.

![Image showing two state vectors before and after an operation](images/quantum-states/quantum-operation.svg)

So, what possible transformations are there? Say we have a starting state $|a\rangle$ that’s transformed to a new state $|b\rangle$. If we want our representation to cover every possible transformation, then each amplitude in $|a\rangle$ must have a transition amplitude to each amplitude in $|b\rangle$.

<!-- ::: q-block.exercise -->

### Quick quiz

<!-- ::: q-quiz(goal="intro-describing-3") -->

<!-- ::: .question -->

An $n$-qubit state vector can contain up to $2^n$ amplitudes.
What’s the largest number of transition amplitudes we’d need to represent any quantum operation on $n$ qubits?

<!-- ::: -->

<!-- ::: .option -->

1. $2\cdot 2^n$

<!-- ::: -->

<!-- ::: .option(correct) -->

2. $(2^n)^2$

<!-- ::: -->

<!-- ::: .option -->

3. $4^n$

<!-- ::: -->

<!-- ::: .option -->

4. $2^{2^n}$

<!-- ::: -->

<!-- ::: -->

<!-- ::: -->

Drawing lines like this is a bit of a messy way of doing it, so instead we can put all these numbers into a [matrix](gloss:matrix):

$$
\cssId{u-gate}{U} = \begin{bmatrix} \class{_t_amp_00_00}{t_{00\to 00}} & \class{_t_amp_01_00}{t_{01\to 00}} & \class{_t_amp_10_00}{t_{10\to 00}} & \class{_t_amp_11_00}{t_{11\to 00}} \\ \class{_t_amp_00_01}{t_{00\to 01}} & \class{_t_amp_01_01}{t_{01\to 01}} & \class{_t_amp_10_01}{t_{10\to 01}} & \class{_t_amp_11_01}{t_{11\to 01}} \\ \class{_t_amp_00_10}{t_{00\to 10}} & \class{_t_amp_01_10}{t_{01\to 10}} & \class{_t_amp_10_10}{t_{10\to 10}} & \class{_t_amp_11_10}{t_{11\to 10}} \\ \class{_t_amp_00_11}{t_{00\to 11}} & \class{_t_amp_01_11}{t_{01\to 11}} & \class{_t_amp_10_11}{t_{10\to 11}} & \class{_t_amp_11_11}{t_{11\to 11}} \\ \end{bmatrix}
$$

For example, here is the matrix that represents the CNOT operation we saw in the atoms of computation:

$$
\cssId{_cnot-gate}{\text{CNOT}} = \begin{bmatrix} \class{_t_amp_00_00}{1} & \class{_t_amp_01_00}{0} & \class{_t_amp_10_00}{0} & \class{_t_amp_11_00}{0} \\ \class{_t_amp_00_01}{0} & \class{_t_amp_01_01}{0} & \class{_t_amp_10_01}{0} & \class{_t_amp_11_01}{1} \\ \class{_t_amp_00_10}{0} & \class{_t_amp_01_10}{0} & \class{_t_amp_10_10}{1} & \class{_t_amp_11_10}{0} \\ \class{_t_amp_00_11}{0} & \class{_t_amp_01_11}{1} & \class{_t_amp_10_11}{0} & \class{_t_amp_11_11}{0} \\ \end{bmatrix}
$$

<!-- ::: q-block.exercise -->

### Quick quiz

<!-- ::: q-quiz(goal="intro-maths-0") -->

<!-- ::: .question -->

What is the transition amplitude of the CNOT operation (as shown above) transforming the state $|10\rangle$ to $|01\rangle$?

<!-- ::: -->

<!-- ::: .option -->

1.
$1$

<!-- ::: -->

<!-- ::: .option(correct) -->

2. $0$

<!-- ::: -->

<!-- ::: .option -->

3. $\begin{bmatrix} 1 & 0 & 0 & 0\end{bmatrix}$

<!-- ::: -->

<!-- ::: .option -->

4. $\begin{bmatrix} 0 \\\\ 0 \\\\ 1 \\\\ 0\end{bmatrix}$

<!-- ::: -->

<!-- ::: -->

<!-- ::: -->

And here is the matrix for the H-gate we saw in the previous page:

$$
H = \sqrt{\tfrac{1}{2}} \begin{bmatrix} \class{_t_amp_0_0}{1} & \class{_t_amp_1_0}{1} \\ \class{_t_amp_0_1}{1} & \class{_t_amp_1_1}{-1} \\ \end{bmatrix}
$$

(we use the same rule for multiplying a matrix by a scalar as we do with vectors). And when we want to see what effect an operation will have on some qubits, we multiply each transition amplitude by the amplitude of each state in our input state vector, and then add up the amplitudes for each state to get our output state vector. This is exactly the same as multiplying along each branch in a probability (or amplitude) tree and adding up the total probabilities (or amplitudes) at the end. For any mathematicians in the audience, this is just standard matrix multiplication.

$$
H|0\rangle = \sqrt{\tfrac{1}{2}} \begin{bmatrix} \class{_t_amp_0_0}{1} & \class{_t_amp_1_0}{ 1} \\ \class{_t_amp_0_1}{1} & \class{_t_amp_1_1}{-1} \\ \end{bmatrix} \begin{bmatrix} \class{_sq-amp0}{1} \\ \class{_sq-amp1}{0} \\ \end{bmatrix} = \sqrt{\tfrac{1}{2}} \begin{bmatrix} (1 \class{dot}{\cdot} 1) & + & (1 \class{dot}{\cdot} 0) \\ (1 \class{dot}{\cdot} 1) & + & (-1 \class{dot}{\cdot} 0) \\ \end{bmatrix} = \sqrt{\tfrac{1}{2}} \begin{bmatrix} \class{_sq-amp0}{1} \\ \class{_sq-amp0}{1} \\ \end{bmatrix}
$$

![image showing how the H-gate transforms the state |0> into the state |+>](images/quantum-states/h-gate.svg)

## Rules of quantum operations

In the same way that not every vector is a valid state vector, not every matrix is a valid quantum operation. If a matrix is to make sense as a real operation, it needs to keep the total probability of the output states equal to 1.
So for example, this couldn’t be a real operation: $$ \begin{bmatrix} \class{_t_amp_0_0}{1} & \class{_t_amp_1_0}{0} \\ \class{_t_amp_0_1}{1} & \class{_t_amp_1_1}{0} \\ \end{bmatrix} $$ Because if it acts on the state $|0\rangle$ we get: $$ \begin{bmatrix} \class{_t_amp_0_0}{1} & \class{_t_amp_1_0}{0} \\ \class{_t_amp_0_1}{1} & \class{_t_amp_1_1}{0} \\ \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \class{_sq-amp0}{1} \\ \class{_sq-amp1}{1} \end{bmatrix} $$ and the total probabilities add to two, which doesn't make any sense. Alternatively, if it acted on the state $|1\rangle$, then the total probabilities would add up to zero, which also doesn't make sense. To preserve the total probability in all cases, our operations need to be reversible. This means we can perform our quantum gates backwards to 'undo' them (remembering to reverse any rotations) and be left with the state we started with. We say matrices with this property are _unitary_. You will often see quantum gates referred to as 'unitaries' or 'unitary gates'.
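To make the unitarity condition concrete, here is a small numerical check (a sketch, not part of the original text): a matrix $U$ is unitary exactly when $UU^\dagger = I$, and the invalid matrix from the example above fails this test.

```python
import numpy as np

# The H-gate from the text
H = np.sqrt(1 / 2) * np.array([[1, 1],
                               [1, -1]])

# A unitary matrix satisfies U @ U.conj().T == I
print(np.allclose(H @ H.conj().T, np.eye(2)))  # -> True

# The invalid "operation" from the example above is not unitary,
# which is why it fails to preserve total probability
M = np.array([[1, 0],
              [1, 0]])
print(np.allclose(M @ M.conj().T, np.eye(2)))  # -> False
```

Running the same check on the CNOT matrix above would also return `True`, since CNOT is its own inverse.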
# If / Else in a Buzzfeed Quiz

- Hello, Pedro, show it. Again, open the Jupyter notebook, if it isn't open yet. I have several, which I use as notebooks for stories and drafts of investigations.
- I'm going to do things step by step here; I hope you're following along/copying. At the end of the module you'll be able to use all the files. But please, you have to do it yourself.

---

- VERY WELL, if/else is one of the most important things
- A Buzzfeed quiz is, too.
- A quiz is nothing more than a program. It asks for one or several inputs from the user, stores them in a variable and, depending on them, shows a result.
- I'll check 3 options. Run it. The answer is "that I am X"
- Can we recreate that?
- Let's look again:
- In other words: if something is zero, the answer is one thing; if it's greater than zero, the answer is another.
- Let's create a variable called insuportabilidade ("insufferability"). = 0
- If / else statement

The Buzzfeed site became famous for its quizzes. An example: [How INSUFFERABLE are you?](https://www.buzzfeed.com/rafaelcapanema/quao-insuportavel-voce-e?utm_term=.kgPNzPa9D#.ebkRDjmBG). Most of these tests give "points" for each answer checked, producing a classification. In the quiz in question, for example, if I don't check any option and click "show result", I get the following message:

![](https://i.imgur.com/f6RZpAv.png)

If I check all 30 answers, including, yes, "you tickle people", the classification is unprintable, with several words in caps lock.

A quiz is nothing more than a simple program that uses a single **variable** to produce the result. Let's call it the insufferability index:

```
insuportabilidade = 0
```

At the start of the quiz, every player begins with an insufferability level of "0". As they check answers, this number increases.
At the end, when the player clicks "see result", the site runs a small program, more or less like this:

```
insuportabilidade = 3

if insuportabilidade == 0:
    print('Você é a pessoa MAIS LEGAL DO MUNDO')
else:
    print('Você é um pouco insuportável!')
```

*(try changing the value of insuportabilidade and the phrases to see the different results)*

A reminder: we use `==` to make an exact comparison. In the statement above, we ask the program to evaluate: if the variable `insuportabilidade` is **exactly** equal to zero (`if ==`), it shows one result on the screen. If it's anything other than that (`else`), the program shows another result.

A good quiz has more than two gradations. Nobody is all-or-nothing. So we need to add intermediate conditions:

```
insuportabilidade = 19

if insuportabilidade == 0:
    print('Você é a pessoa MAIS LEGAL DO MUNDO')
elif 0 < insuportabilidade < 10:
    print('Você pode ser insuportável às vezes!')
elif insuportabilidade >= 10 and insuportabilidade < 20:
    print('Você é meio chatinho(a), hein?')
else:
    print('Deus me livre!')

## Try assigning different values to insuportabilidade to see different results

if insuportabilidade == 0:
    print()
elif insuportabilidade > 0 and insuportabilidade < 10:
    print()
else:
    print()
```

If the program needs to evaluate more than two conditions, we use `elif` (a good explanation, in English, [here](https://www.tutorialspoint.com/python/python_decision_making.htm)).

Note that we are evaluating two conditions simultaneously (and) in each of the statements. For the program to evaluate `0 < insuportabilidade < 10` as `True`, it does the following, in order:

`insuportabilidade > 0`
`insuportabilidade < 10`

This is the same as writing:

`insuportabilidade > 0 and insuportabilidade < 10`

If both statements are true, it returns `True` and shows the result we expect.
If either one is false, it moves on to the next statement (`elif`) and evaluates whether both of its conditions are true or false.

## Bonus: Adding interactivity

Python 3 has an `input` command, which stores a value given by your program's user, in the same way the Buzzfeed site stores the user's answers before computing the score at the end. Let's look at an example:

```
insuportabilidade = input('Você gosta de fazer cócegas nas pessoas? Responda "x" se sim, qualquer outra coisa se não')

if insuportabilidade == 'x':
    print('Você é terrível')
else:
    print('Você é legal!')
```

Using input, we don't declare a value for the variable `insuportabilidade` right away; instead we wait for the user's `input`. The value of `insuportabilidade` is whatever the person types, followed by enter.

```
cor = input('qual a sua cor favorita? ')
print('A sua cor favorita é ' + cor)
```

But of course our quiz has several "right" answers, and the value of `insuportabilidade` varies along a scale. To build something like a complete quiz, as on the Buzzfeed page, we could do:

## USE OPERATORS <, >, !=, >=, or, and, etc. // We'll come back to this

```
# The value starts at 0
insuportabilidade = 0

# We ask a series of questions, where the accepted answers are 's' for yes or anything else for no.
resposta_1 = input('Você gosta de fazer cócegas?')
if resposta_1 == 's':
    insuportabilidade = insuportabilidade + 1

resposta_2 = input('Você tem o hábito de interromper os outros?')
if resposta_2 == 's':
    insuportabilidade = insuportabilidade + 1

resposta_3 = input('Estende a mão pra cumprimentar alguém e tira no estilo "deixa que eu toco sozinho"')
if resposta_3 == 's':
    insuportabilidade = insuportabilidade + 1

# Now we give the result:
if insuportabilidade == 3:
    print('Você não está habilitado a viver em sociedade.')
else:
    print('Você é ok!')
```

* 's' is different from 'S'

## Recapping:

- Try to break a program into smaller processes;
- Test after every change;
- Study if / elif / else. Variations of this logic are the heart of many programs
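Since the operators `<`, `>`, `!=`, `>=`, `and` and `or` come back later, here is a minimal sketch of how they behave (the value 12 is arbitrary, chosen only for illustration):

```python
insuportabilidade = 12  # arbitrary score, just for illustration

# Chained comparison: same as (0 < insuportabilidade) and (insuportabilidade < 10)
print(0 < insuportabilidade < 10)                          # -> False

# Combining two conditions with `and`: both must be True
print(insuportabilidade >= 10 and insuportabilidade < 20)  # -> True

# `or` needs only one side to be True; `!=` means "not equal"
print(insuportabilidade == 0 or insuportabilidade != 12)   # -> False
```

Each expression evaluates to `True` or `False`, which is exactly what an `if`/`elif` statement checks.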
## Machine Learning CSE3008 Lab Experiment
## Name: Simran Anand
## Reg no: 19BCD7243

### 1. Assuming a set of documents that need to be classified, use the naïve Bayesian Classifier model to perform this task.

```
import pandas as pd
msg=pd.read_csv('C:/Users/simran/Downloads/naivetext.csv',names=['message','label'])

print('The dimensions of the dataset',msg.shape)

msg['labelnum']=msg.label.map({'pos':1,'neg':0})
X=msg.message
y=msg.labelnum
print(X)
print(y)

#splitting the dataset into train and test data
from sklearn.model_selection import train_test_split
xtrain,xtest,ytrain,ytest=train_test_split(X,y)

print ('\n The total number of Training Data :',ytrain.shape)
print ('\n The total number of Test Data :',ytest.shape)

#output of count vectoriser is a sparse matrix
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
xtrain_dtm = count_vect.fit_transform(xtrain)
xtest_dtm=count_vect.transform(xtest)
print('\n The words or Tokens in the text documents \n')
print(count_vect.get_feature_names())

df=pd.DataFrame(xtrain_dtm.toarray(),columns=count_vect.get_feature_names())

# Training Naive Bayes (NB) classifier on training data.
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(xtrain_dtm,ytrain)
predicted = clf.predict(xtest_dtm)

#printing accuracy, Confusion matrix, Precision and Recall
from sklearn import metrics
print('Accuracy metrics')
print('Accuracy of the classifier is',metrics.accuracy_score(ytest,predicted))
print('Confusion matrix')
print(metrics.confusion_matrix(ytest,predicted))
print('Recall and Precision')
print(metrics.recall_score(ytest,predicted))
print(metrics.precision_score(ytest,predicted))
```

### 2. Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets.
#### Initializing variable values

```
import numpy as np
x = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
print("small x",x)
#original output
y = np.array(([92], [86], [89]), dtype=float)
X = x/np.amax(x,axis=0) #scale by the maximum along the first axis
print("Capital X",X)

#Defining Sigmoid Function for output
def sigmoid (x):
    return (1/(1 + np.exp(-x)))

#Derivative of Sigmoid Function
def derivatives_sigmoid(x):
    return x * (1 - x)

#Variables initialization
epoch=7000 #Setting training iterations
lr=0.1 #Setting learning rate
inputlayer_neurons = 2 #number of input layer neurons
hiddenlayer_neurons = 3 #number of hidden layer neurons
output_neurons = 1 #number of neurons at output layer
```

Note:

1) In this code, we have defined the sigmoid function and its derivative.
2) Because we train the neural network many times on the same data, we need a number of epochs.
3) Below that we have defined the number of neurons in each layer.

```
#Defining weights and biases for hidden and output layer
wh=np.random.uniform(size=(inputlayer_neurons,hiddenlayer_neurons))
bh=np.random.uniform(size=(1,hiddenlayer_neurons))
wout=np.random.uniform(size=(hiddenlayer_neurons,output_neurons))
bout=np.random.uniform(size=(1,output_neurons))
```

Note:

1. Here we have defined random weights and biases.
2. We first define the weights and bias for the hidden layer (here we have only one hidden layer).
3. After that we define the weights and bias for the output layer.
4. Keep in mind that a weight matrix has shape (number of neurons in the previous layer, number of neurons in the layer the weights feed into).
5. A bias has shape (1, number of neurons in the layer for which the bias is defined).
```
#Forward Propagation
for i in range(epoch):
    hinp1=np.dot(X,wh)
    hinp=hinp1 + bh
    hlayer_act = sigmoid(hinp)
    outinp1=np.dot(hlayer_act,wout)
    outinp= outinp1+ bout
    output = sigmoid(outinp)
```

Note:

1. Here we are just calculating the output of our model, first for the hidden layer and then for the output layer, which finally gives us the output.
2. np.dot is used for the dot product of two matrices.

```
#Backpropagation Algorithm
#(in a single-cell training loop, this block would sit inside the epoch loop
# above, so the weights are updated once per epoch rather than once in total)
EO = y-output
outgrad = derivatives_sigmoid(output)
d_output = EO* outgrad
EH = d_output.dot(wout.T)
hiddengrad = derivatives_sigmoid(hlayer_act) #how much hidden layer wts contributed to error
d_hiddenlayer = EH * hiddengrad

wout += hlayer_act.T.dot(d_output) *lr # dot product of next-layer error and current-layer output
bout += np.sum(d_output, axis=0,keepdims=True) *lr

#Updating Weights
wh += X.T.dot(d_hiddenlayer) *lr

print("Actual Output: \n" + str(y))
print("Predicted Output: \n" ,output)
```

Note:

1. In this code we first calculated the error of the output layer and then the error of the hidden layer.
2. As we know from the formula, we have to find out how much the hidden layer contributed to the total error, and also the contribution of each weight to the total error.
3. After that we update our weights and biases, until we reach a minimum error.
4. X.T is used to take the transpose of a matrix.
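One detail worth checking: `derivatives_sigmoid` receives the *output* of the sigmoid, so the formula `s * (1 - s)` is already the derivative with respect to the pre-activation. This sketch (not part of the lab code) verifies that against a numerical central difference:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# As in the notebook above: takes the sigmoid *output* s and returns s*(1-s)
def derivatives_sigmoid(s):
    return s * (1 - s)

z = 0.7                      # an arbitrary pre-activation value
s = sigmoid(z)
analytic = derivatives_sigmoid(s)

# Numerical derivative of sigmoid at z (central difference)
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)

print(abs(analytic - numeric) < 1e-8)  # -> True
```

The two values agree to well below 1e-8, confirming that feeding the sigmoid output into `derivatives_sigmoid` gives the correct gradient term.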
# Data Structures

> David Quintanar Pérez

## Lists

Objects of type list are ordered collections of objects, which are numerically indexable and mutable.

```
juegos_zelda = ["ocarina of time", "majora's mask", "breath of the wild", "twilight princess", "wind waker", "skyward sword"]
juegos_mario = ["super mario 64", "super mario sunshine", "super mario galaxy", "paper mario", "super mario world"]
```

> ### Accessing the elements inside the list [start:end-1:step]

```
"""
[0]
[0:3]
[1:5:2]
[::2]
[::-1]
"""
juegos_zelda[0]
```

> ### Number of elements in a list

```
len(juegos_zelda)
len(juegos_mario)
```

> ### Operations with lists

> #### &nbsp;&nbsp;&nbsp;&nbsp;Repeat or multiply lists with *

```
juegos_zelda * 2
```

> #### &nbsp;&nbsp;&nbsp;&nbsp;Concatenate with +

```
juegos_nintendo = juegos_mario + juegos_zelda
juegos_nintendo
```

> #### &nbsp;&nbsp;&nbsp;&nbsp;Add elements

```
juegos_mario.append("mario kart")
juegos_mario
```

> #### &nbsp;&nbsp;&nbsp;&nbsp;Modify elements

```
# One at a time [position]
juegos_mario[-2] += ": the origami king"
juegos_mario

# Multiple [:]
juegos_super_smash_bros = ["nintendo 64", "melee", "brawl", "ultimate"]

len_juegos_nintendo = len(juegos_nintendo)
juegos_nintendo[len_juegos_nintendo::] = juegos_super_smash_bros
juegos_nintendo
```

> #### &nbsp;&nbsp;&nbsp;&nbsp;Check whether an element exists

```
"ocarina of time" in juegos_nintendo
"Ocarina of time" in juegos_nintendo
```

> #### &nbsp;&nbsp;&nbsp;&nbsp;Find the position of an element

```
juegos_nintendo.index("ocarina of time")
juegos_nintendo[5]
```

> #### &nbsp;&nbsp;&nbsp;&nbsp;Remove an element

```
juegos_nintendo.pop(juegos_nintendo.index("paper mario"))
"paper mario" in juegos_nintendo
```

> #### &nbsp;&nbsp;&nbsp;&nbsp;Sort elements

```
juegos_nintendo.sort()
juegos_nintendo
```

> #### &nbsp;&nbsp;&nbsp;&nbsp;Generate numeric lists

```
list(range(10))
```

> #### &nbsp;&nbsp;&nbsp;&nbsp;Join elements (convert to
strings)

```
str_juegos_nintendo = ",".join(juegos_nintendo)
str_juegos_nintendo

type(str_juegos_nintendo)
```

## Tuples

Objects of type tuple are ordered collections of objects, which are numerically indexable and immutable.

```
nombres_perros = ("jacob", "ritchie", "piglet", "blacky", "boopi", "rocket")
nombres_perros

nombres_perros[0] = "jacov"  # raises TypeError: tuples do not support item assignment
```

## Dictionaries

The dict type is a mutable collection of <key>:<value> pairs, which is indexable by each element's key.

```
compras_amazon = {
    "echo dot": 1500,
    "echo show": 900,
    "kindle": 1000
}

compras_amazon.keys()
compras_amazon.values()

list(compras_amazon.keys())[0]
compras_amazon["echo dot"]
"echo dot" in compras_amazon

dic_juegos_nintendo = {
    "zelda": juegos_zelda,
    "mario": juegos_mario,
    "super smash bros": juegos_super_smash_bros
}
dic_juegos_nintendo
```

## Sets

The elements contained in a set object must be of an immutable type.

```
registro = {1, 'dos', 3.0, '4'}
registro

registro.add(False)
registro

registro.remove(False)
registro

registro.discard(3)
registro

registro.pop()
registro.pop()
registro.pop()
registro.pop()

registro = {1, 'dos', 3.0, '4'}
segundo_registro = registro.copy()
segundo_registro

registro == segundo_registro
registro is segundo_registro

registro.clear()
registro
```

## Counter

```
from collections import Counter

votos = [
    'the legend of zelda ocarina of time',
    'super smash bros',
    'donkey kong 64',
    'super mario 64',
    'super smash bros',
    'super mario 64',
    'super smash bros',
    'super mario 64',
    'the legend of zelda ocarina of time',
    'the legend of zelda ocarina of time',
    'super mario 64',
    'the legend of zelda ocarina of time',
    'Kirby 64'
]

recuento_votos = Counter(votos)
print(recuento_votos)
```
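Beyond printing the whole Counter, `most_common` returns the elements sorted by count, most frequent first. A small self-contained sketch (the vote list here is a shortened, made-up version of the one above):

```python
from collections import Counter

# Shortened, made-up vote list for illustration
votes = [
    'super mario 64', 'super smash bros', 'super mario 64',
    'the legend of zelda ocarina of time', 'super mario 64',
]
counts = Counter(votes)

# most_common(n) returns the n (element, count) pairs with the highest counts
print(counts.most_common(1))  # -> [('super mario 64', 3)]

# A Counter also behaves like a dict: missing keys return 0 instead of raising
print(counts['donkey kong 64'])  # -> 0
```

This is handy for announcing a winner without sorting the dictionary by hand.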
# Explore weather data ``` %load_ext autoreload %autoreload 2 import sys sys.path.append('../') import os from glob import glob import matplotlib.pyplot as plt %matplotlib inline import pandas as pd import numpy as np from datetime import datetime import pickle weather_data_file = '../data/weather_data/ny_jfk_weather.csv' june_18_weather_data_file = '../data/weather_data/ny_jfk_weather_2018-06.csv' weather_data = pd.read_csv(weather_data_file, sep=',') weather_data.head(10) weather_data.columns weather_data.drop(columns=['STATION', 'NAME', 'LATITUDE', 'LONGITUDE', 'ELEVATION', 'PGTM', 'TMAX', 'TMIN', 'WDF2', 'WDF5', 'WSF2', 'WSF5', 'WT01', 'WT02', 'WT03', 'WT04', 'WT06', 'WT08', 'WT09'], inplace=True) ``` ### Graph weather over the sample period ``` f, axarr = plt.subplots(5, sharex=True, figsize=(10, 10)) axarr[0].plot(weather_data.AWND) axarr[0].set_title('Average Wind Speed') axarr[1].plot(weather_data.TAVG) axarr[1].set_title('Average Temperature') axarr[2].plot(weather_data.PRCP) axarr[2].set_title('Average Precipitation') axarr[3].plot(weather_data.SNWD) axarr[3].set_title('Snow Depth') axarr[4].plot(weather_data.SNOW) axarr[4].set_title('Snow') ``` ### Create weather meta-data ``` months = [] for i, weather in weather_data.iterrows(): month = str(datetime.strptime(weather.DATE, '%Y-%m-%d').strftime('%m')) months.append(month) weather_data['month'] = months weather_data.head(10) weather_meta = pd.DataFrame() for month in weather_data.month.unique(): min_temp = weather_data[weather_data.month == month].TAVG.min() max_temp = weather_data[weather_data.month == month].TAVG.max() mean_temp = weather_data[weather_data.month == month].TAVG.mean() min_wind = weather_data[weather_data.month == month].AWND.min() max_wind = weather_data[weather_data.month == month].AWND.max() mean_wind = weather_data[weather_data.month == month].AWND.mean() min_prcp = weather_data[weather_data.month == month].PRCP.min() max_prcp = weather_data[weather_data.month == month].PRCP.max() 
mean_prcp = weather_data[weather_data.month == month].PRCP.mean() min_snwd = weather_data[weather_data.month == month].SNWD.min() max_snwd = weather_data[weather_data.month == month].SNWD.max() mean_snwd = weather_data[weather_data.month == month].SNWD.mean() min_snow = weather_data[weather_data.month == month].SNOW.min() max_snow = weather_data[weather_data.month == month].SNOW.max() mean_snow = weather_data[weather_data.month == month].SNOW.mean() weather_meta = weather_meta.append({ 'month': month, 'min_temp': min_temp, 'max_temp': max_temp, 'mean_temp': mean_temp, 'min_wind': min_wind, 'max_wind': max_wind, 'mean_wind': mean_wind, 'min_prcp': min_prcp, 'max_prcp': max_prcp, 'mean_prcp': mean_prcp, 'min_snwd': min_snwd, 'max_snwd': max_snwd, 'mean_snwd': mean_snwd, 'min_snow': min_snow, 'max_snow': max_snow, 'mean_snow': mean_snow }, ignore_index=True) weather_meta.set_index('month') ``` * Save weather meta-data for later ``` pickle.dump(weather_meta, open('../data/weather_data/weather_meta.p', "wb")) ``` ### Extract June 18 data ``` weather_data_june_18 = weather_data[weather_data['month'].isin(['05', '06', '07', '08'])] weather_data_june_18 weather_data_june_18.to_csv(june_18_weather_data_file) ```
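The per-month loop above can also be written as a single `groupby`/`agg` call with named aggregations. A sketch on a toy frame (made-up values, same column names), since the original CSV isn't bundled here:

```python
import pandas as pd

# Toy stand-in for weather_data (same column names as above)
toy = pd.DataFrame({
    'month': ['01', '01', '02', '02'],
    'TAVG':  [30, 34, 40, 44],
    'AWND':  [10.0, 12.0, 8.0, 6.0],
})

# One groupby/agg call replaces the per-month min/max/mean loop
meta = toy.groupby('month').agg(
    min_temp=('TAVG', 'min'),
    max_temp=('TAVG', 'max'),
    mean_temp=('TAVG', 'mean'),
    mean_wind=('AWND', 'mean'),
)
print(meta.loc['01', 'mean_temp'])  # -> 32.0
```

Besides being shorter, this avoids the repeated boolean filtering of `weather_data[weather_data.month == month]` for every statistic.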
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.patches as patches import os import itertools from multiprocessing import Pool from itertools import product %run config.py %run shared.py valid = pd.read_pickle(PATH_VALID_DEVICES) ``` # Category ranking **RQ:** What do users see when they open the notification drawer (or the lock screen)? How many notifications are there and from what kind of apps (categories)? ``` MAX_POSITIONS = 5 def get_packages_at_positions(lst): lst = lst[:MAX_POSITIONS] result = [] for n in lst: result.append(n['packageName']) result += ['_NONE_' for x in range(MAX_POSITIONS - len(result))] return result def worker_ranking(uuid): df = pd.read_pickle('%s%s.pkl.gz' % (PATH_DEVICES_DIR, uuid), compression='gzip') df.Active = df.Active.apply(filter_active) df.Active = df.Active.apply(get_packages_at_positions) result = pd.DataFrame(columns=['UUID', 'Position', 'PackageName', 'Percentage']) for i in range(MAX_POSITIONS): counts = df.Active.apply(lambda x: x[i]).value_counts(dropna=False, normalize=True) for index, value in counts.iteritems(): result = result.append({ 'UUID': uuid, 'Position': i + 1, 'PackageName': index, 'Percentage': value }, ignore_index=True) return result uuids = valid.UUID.tolist() p = Pool(NUM_CORES) lst = p.map(worker_ranking, uuids) ranking = pd.concat(lst) ranking = ranking.reset_index(drop=True) ranking.tail() ranking['Category'] = ranking.PackageName.apply(lambda x: category_mapping[x] if x in category_mapping else 'UNKNOWN') ranking.tail() ranking.Category.value_counts() ``` ## Unknown category ``` ranking[(ranking.Category == 'UNKNOWN')].PackageName.value_counts().head(10) ranking[(ranking.Category == 'UNKNOWN')] \ .groupby('PackageName') \ .Percentage.mean() \ .sort_values(ascending=False) \ .head(10) ranking[(ranking.Category == 'UNKNOWN')].PackageName.apply(lambda x: '.'.join(x.split('.')[:4])).value_counts().head(10) ``` ## Category distribution ``` ranking.tail() 
ranking2 = pd.DataFrame(list(product(ranking.UUID.unique(), range(1, MAX_POSITIONS + 1))), columns=['UUID', 'Position']) for category in ranking.Category.unique(): ranking2[category] = pd.Series() ranking2 = ranking2.set_index(['UUID', 'Position']) ranking2.tail() g = ranking.groupby(['UUID', 'Position', 'Category'], as_index=False).sum() g.tail() for index, row in g.iterrows(): if index % 10000 == 0: print(index) ranking2.loc[(row['UUID'], row['Position']), row['Category']] = row['Percentage'] ranking2.tail() ranking2 = ranking2.fillna(0) ranking2 = ranking2.groupby('Position').mean() ranking2 ranking2.sum(axis=1) ranking2 = ranking2[[ 'PHONE', 'SMS/IM', 'SOCIAL', 'EMAIL', 'CALENDAR/REMINDER', 'SYSTEM', 'TOOL', 'NAVIGATION', 'MEDIA', 'SHOPPING', 'GAME', 'NEWS', 'HEALTH/FITNESS', # 'WEATHER', 'UNKNOWN', 'NONE' ]] len(category_mapping) len(set(category_mapping.values())) ``` ## Plot category ranking ``` ranking_plot = ranking2.copy() # Remove capslock from column names ranking_plot.columns = pd.Series(ranking_plot.columns).apply(str.title).tolist() # Sort by column name ranking_plot = ranking_plot.reindex(sorted(ranking_plot.columns[:-2]) + list(ranking_plot.columns[-2:]), axis=1) # Rename specific columns ranking_plot = ranking_plot.rename(columns={ 'Shopping': 'Shopping &\nFinance', 'Calendar/Reminder': 'Calendar &\nReminder', 'Health/Fitness': 'Health &\nFitness', 'Sms/Im': 'SMS & IM', 'Unknown': 'Uncategorized', 'Social': 'Social &\nDating', 'None': 'No notification\nshown', 'News': 'News &\nWeather' }) fig, ax = plt.subplots(figsize=(12, 4)) im = ax.imshow(np.log(ranking_plot.values), cmap='GnBu') # , vmax=1 # Ticks ax.set_xticks(range(len(ranking_plot.columns))) ax.set_yticks(range(len(ranking_plot.index))) # Tick labels ax.set_xticklabels(ranking_plot.columns) ax.set_yticklabels(range(1, 6)) # Rotate labels plt.setp(ax.get_xticklabels(), rotation=45, ha='right', rotation_mode='anchor') # Labels ax.set_xlabel('', fontsize=10) ax.set_ylabel('Position', 
fontsize=10) ax.tick_params(labelsize=10) # Values for i in range(len(ranking_plot.index)): for j in range(len(ranking_plot.columns)): value = ranking_plot.values[i, j] color = 'k' #if value < 0.2 else 'w' text = str(np.round(value * 100, 1)) + '%' ax.text(j, i, text, ha='center', va='center', color=color, fontsize=10) ax.set_yticks([x + 0.5 for x in range(0, 5)], minor=True) ax.grid(which='minor', color='k', linestyle='-', linewidth=1) ax.tick_params(which='minor', bottom=False, left=False) plt.savefig('figures/category_positions.png', bbox_inches='tight', pad_inches=0, dpi=300) plt.savefig('figures/category_positions.pdf', bbox_inches='tight', pad_inches=0) plt.show() for i in range(0, 5): print('Position %s\n-----------------------------' % (i + 1)) display(ranking2.iloc[i].T.sort_values(ascending=False)) print('') test = ranking2.copy() del test['NONE'] for index in test.index: print('%s: %s' % (index, test.loc[index].T.sort_values(ascending=False).index.tolist()[:5])) ``` ## Category counts ``` category_counts = pd.Series(list(category_mapping.values())).value_counts() category_counts category_counts_2 = category_counts[(category_counts.index != 'NONE')] # & (category_counts.index != 'UNKNOWN') category_counts_2 = pd.DataFrame(category_counts_2) category_counts_2['Category'] = category_counts_2.index category_counts_2 = category_counts_2.rename(columns={ 0: 'Count' }) category_counts_2 = category_counts_2.reset_index(drop=True) category_counts_2 len(category_counts_2) category_counts_2.Category = category_counts_2.Category.apply(str.title) category_counts_2 = category_counts_2.sort_values('Category', ascending=True) category_counts_2.Category = category_counts_2.Category.replace({ 'Shopping': 'Shopping & Finance', 'Calendar/Reminder': 'Calendar & Reminder', 'Health/Fitness': 'Health & Fitness', 'Sms/Im': 'SMS & IM', 'Unknown': 'Uncategorized', 'Social': 'Social & Dating', 'None': 'No notification shown' }) category_counts_2 print(category_counts_2[['Category', 
'Count']].to_latex(index=False)) ax = category_counts_2.plot( kind='bar', x='Category', y='Count', ylim=(0, 200), figsize=(10, 4), legend=False, grid=True ) # Grid ax.set_axisbelow(True) ax.xaxis.grid(alpha=0.2, linestyle='dashed') ax.yaxis.grid(alpha=0.2, linestyle='dashed') # Tweak font sizes ax.set_xlabel('', fontsize=10) ax.set_ylabel('# Apps', fontsize=10) ax.tick_params(labelsize=10) # Rotate labels plt.setp(ax.get_xticklabels(), rotation=45, ha='right', rotation_mode='anchor') #plt.savefig('figures/category_counts.png', bbox_inches='tight', pad_inches=0, dpi=300) #plt.savefig('figures/category_counts.pdf', bbox_inches='tight', pad_inches=0) plt.show() ```
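The row-by-row loop that fills `ranking2` from the grouped frame `g` can be replaced by a single `pivot_table` call. A sketch on a toy stand-in for `g` (made-up values):

```python
import pandas as pd

# Toy stand-in for the grouped ranking frame `g` above
g = pd.DataFrame({
    'UUID':       ['a', 'a', 'b', 'b'],
    'Position':   [1, 1, 1, 2],
    'Category':   ['PHONE', 'EMAIL', 'PHONE', 'SMS/IM'],
    'Percentage': [0.6, 0.4, 1.0, 1.0],
})

# Build the (UUID, Position) x Category grid in one call; fill_value=0
# plays the role of the fillna(0) step in the loop-based version
wide = g.pivot_table(index=['UUID', 'Position'],
                     columns='Category',
                     values='Percentage',
                     fill_value=0)
print(wide.loc[('a', 1), 'PHONE'])  # -> 0.6
```

From there, `wide.groupby('Position').mean()` reproduces the final per-position averages without iterating over rows.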
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interpn
import os
import glob
import random

import config
import utils

from keras.models import Model
from keras.layers import BatchNormalization, Conv2D, Conv2DTranspose, MaxPooling2D, Dropout, UpSampling2D, Input, concatenate, Activation, LeakyReLU
from keras import backend as K
import tensorflow as tf
from keras.callbacks import ModelCheckpoint
from keras.optimizers import Adam, SGD

# Set model_filename either to the name of a file to which a newly trained model
# should be saved, or to the name of a file from which trained model weights
# should be loaded
model_filename = 'regresja1.h5'

# Read all simulated profiles for a regular grid of primary beam parameters,
# for fields 3x3, 10x10, and 30x30
dataPoints = [(str(e), str(se), str(s), str(an))
              for e in config.simulatedEnergies
              for se in config.simulatedEnergyDispersions
              for s in config.simulatedSourceSizes
              for an in config.simulatedAngularDivergences]
random.seed(config.SEED)
random.shuffle(dataPoints)

profiles = utils.readProfiles(config.profileDIR, dataPoints)
profiles = np.asarray(profiles)
y_train = np.asarray(dataPoints, dtype=np.float64)  # np.float is deprecated in recent NumPy
print(profiles.shape, y_train.shape)

X_train = []
for nfield, (field, Ranges) in enumerate(zip(config.analyzedProfiles, config.analyzedRanges)):
    if field is not None:
        for profile, Range in zip(field, Ranges):
            inp = profiles[nfield, :, profile, Range[0]:Range[1]]
            inp = np.reshape(inp, inp.shape + (1,))
            print(inp.shape)
            X_train.append(inp)

from keras.layers import Conv1D, MaxPooling1D, Concatenate, Flatten, Dense

def conv_block(inp, filters=16, kernel_size=3, strides=1, kernel_initializer='glorot_uniform', padding='same', activation='relu'):
    c = Conv1D(filters, kernel_size, kernel_initializer=kernel_initializer, padding=padding, activation=activation, strides=strides)(inp)
    c = Conv1D(filters, kernel_size, kernel_initializer=kernel_initializer, padding=padding, activation=activation, strides=strides)(c)
    c = MaxPooling1D(2)(c)
    return c

inputs = []
outputs = []
for nmod in range(len(config.allRanges)):
    shape = (config.allRanges[nmod][1] - config.allRanges[nmod][0], 1)
    print(shape)
    W = int(np.log(shape[0] / 3) / np.log(2))
    inp = Input(shape)
    F = 16
    x = inp
    for w in range(W):
        x = conv_block(x, filters=F)
        F *= 2
    out = Conv1D(1, 1)(x)
    inputs.append(inp)
    outputs.append(out)

c = Concatenate(axis=1)(outputs)
c = Flatten()(c)
c = Dense(100, activation='relu')(c)
c = Dense(100, activation='relu')(c)
out = Dense(4, activation=None)(c)

model = Model(inputs=inputs, outputs=[out])
#model.summary()

callback_checkpoint = ModelCheckpoint(
    model_filename,
    verbose=1,
    monitor='val_loss',
    save_best_only=True
)

model.compile(
    optimizer=Adam(lr=0.0001),
    loss='mse',
    metrics=[tf.keras.metrics.MeanAbsoluteError()]
)

# Run only if the trained model does not yet exist
history = model.fit(
    X_train, y_train,
    steps_per_epoch=100,
    epochs=300,
    validation_split=0.2,
    validation_steps=10,
    callbacks=[callback_checkpoint]
)

# Run if the trained model already exists
model.load_weights(model_filename)

groundTruthFilename = config.testProfilesDIR + config.groundTruthFileName
testGoals = open(groundTruthFilename)
lines = testGoals.readlines()
y_test = [l[:-1].split()[1:5] for l in lines[:-1]]
fileNames = [l[:-1].split()[0] for l in lines[:-1]]
testGoals.close()

fields = ['fields', 'fields10', 'fields30']
testProfiles3 = []
testProfiles10 = []
testProfiles30 = []
for fileName in fileNames:
    name = config.testProfilesDIR + fileName + '_' + fields[0] + '.npz'
    file = np.load(name)
    testProfiles3.append(file[file.files[1]])
    name = config.testProfilesDIR + fileName + '_' + fields[1] + '.npz'
    file = np.load(name)
    testProfiles10.append(file[file.files[1]])
    name = config.testProfilesDIR + fileName + '_' + fields[2] + '.npz'
    file = np.load(name)
    testProfiles30.append(file[file.files[1]])

testProfiles = []
testProfiles.append(testProfiles3)
testProfiles.append(testProfiles10)
testProfiles.append(testProfiles30)
testProfiles = np.asarray(testProfiles, dtype=np.float64)
y_test = np.asarray(y_test, dtype=np.float64)
print(testProfiles.shape, y_test.shape)

X_test = []
for nfield, (field, Ranges) in enumerate(zip(config.analyzedProfiles, config.analyzedRanges)):
    if field is not None:
        for profile, Range in zip(field, Ranges):
            inp = testProfiles[nfield, :, profile, Range[0]:Range[1]]
            inp = np.reshape(inp, inp.shape + (1,))
            print(inp.shape)
            X_test.append(inp)

y_predicted = model.predict(X_test)
print(y_predicted.shape)

from scipy.stats import pearsonr

for par in range(4):
    plt.figure(figsize=(10, 10))
    corr, _ = pearsonr(y_test[:, par], y_predicted[:, par])
    plt.plot(y_test[:, par], y_predicted[:, par], 'og', label='R=' + str(np.round(corr, 3)))
    plt.legend(loc='upper left')
    plt.show()
```
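As a sanity check on the correlation values reported in the scatter plots above, Pearson's R can also be computed directly with NumPy. This is a minimal sketch on synthetic data, not the notebook's actual profiles:

```python
import numpy as np

def pearson_r(a, b):
    # Pearson correlation: covariance of a and b divided by the
    # product of their standard deviations
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    a_c = a - a.mean()
    b_c = b - b.mean()
    return float((a_c * b_c).sum() / np.sqrt((a_c ** 2).sum() * (b_c ** 2).sum()))

# a perfectly linear relation gives R = 1
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = 2.0 * y_true + 5.0
print(round(pearson_r(y_true, y_pred), 3))  # → 1.0
```

This matches `scipy.stats.pearsonr` up to floating-point error, so either can be used to label the plots.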
### Colleges Chosen by Non-Matrics

This notebook was created to explore the new data on where accepted students decide to go instead of Siena. The bulk of the plots in this notebook were generated using [Altair](https://altair-viz.github.io/).

Import necessary libraries.

```
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import pickle as pkl
import os
import sys
import altair as alt
alt.renderers.enable('notebook')
from vega_datasets import data

import warnings
warnings.filterwarnings('ignore')

sys.path.insert(0, '../src/visualization/')
import visualize as vis
```

Load the processed .csv file as a DataFrame; the three raw DataFrames of college data have already been concatenated and mapped to `df['College_chosen_by_non-matrics']`.

```
df = pd.read_csv('../data/processed/CriticalPath_Data_EM_Confidential_lessNoise.csv').drop(columns='Unnamed: 0')
```

Create a DataFrame that groups students by the college they chose over Siena, as well as what major these students were.

```
college_by_major = df.groupby(["College_chosen_by_non-matrics", "Major"]).count().rename(columns={"Unique_student_ID": "# Students"})
college_by_major = college_by_major.reset_index()
```

Create a barplot showing the breakdown of students that chose UAlbany over Siena College.

```
alt.Chart(college_by_major[college_by_major['College_chosen_by_non-matrics'] == 'SUNY UNIVERSITY AT ALBANY'].iloc[:15]).mark_bar().encode(
    x='# Students:Q',
    y=alt.Y(
        'Major:O',
        sort=alt.EncodingSortField(
            field='# Students',
            op="sum",
            order="descending"
        )
    )
).properties(height=200, width=300, title='Applicants who go to UAlbany instead of Siena: Last 3 Years')
```

All students who were accepted to Siena, but chose another college, broken down by major. This barplot seems to match up well with the applicants-by-major barplot in [01-st-exploratory.ipynb](https://github.com/stibbs1998/admissions_internship/blob/master/notebooks/01-st-exploratory.ipynb).
``` alt.Chart(college_by_major.groupby("Major").sum().reset_index( ).sort_values("# Students",ascending=False).iloc[:30]).mark_bar().encode( x='# Students:Q', y=alt.Y( 'Major:O', sort = alt.EncodingSortField( field='# Students:Q', op = "sum", order = "descending" ) ) ).properties(height=400,width=400,title="Applicants Who Don't Attend Siena by Major (Last 3 Years)") ``` Create a DataFrame to breakdown applicants by their `'CollegeCode'`. That is, are they applying to the School of Science, Business, or Liberal Arts. ``` college_by_school = df[~df['College_chosen_by_non-matrics'].isnull()] college_by_school = college_by_school.groupby(["College_chosen_by_non-matrics", "CollegeCode"]).count().rename(columns={"Unique_student_ID":"# Students"}) college_by_school = college_by_school.reset_index().rename(columns={"CollegeCode":"School"}) college_by_school['School'] = college_by_school['School'].map({"AD":"School of Art","BD":"School of Business","SD":"School of Science"}) ``` Create a barplot of the top thirty colleges by the total students who chose to go there over Siena. Further break this down by the number who applied to the School of Science, Business, and Liberal Arts. 
``` num_colleges = 30 height = 500 width = 500 top_choices = college_by_school.groupby("College_chosen_by_non-matrics").sum().sort_values("# Students", ascending=False).iloc[:num_colleges].index.values _source = college_by_school.set_index("College_chosen_by_non-matrics").loc[top_choices].reset_index() def popular_college_by_school(source,title): bars = alt.Chart(source).mark_bar().encode( x=alt.X('# Students:Q', stack='zero'), y=alt.Y('College_chosen_by_non-matrics:O',axis=alt.Axis(title=''), sort=alt.EncodingSortField( field="yield", # The field to use for the sort op="sum", # The operation to run on the field prior to sorting order="ascending" # The order to sort in )), color=alt.Color('School') ).properties(height=height,width=width,title=title) text = alt.Chart(source).mark_text( dx=-10, dy=3, color='white').encode( x=alt.X('# Students:Q', stack='zero'), y=alt.Y('College_chosen_by_non-matrics:O',sort=alt.EncodingSortField( field="yield", # The field to use for the sort op="sum", # The operation to run on the field prior to sorting order="ascending" # The order to sort in )), detail='School:O', text=alt.Text('# Students:Q', format='.0f') ).properties(height=height,width=width) return bars + text popular_college_by_school(_source,title='College Breakdown by Department: Last Three Years') ``` Barplot of where undeclared liberal arts majors go. ``` alt.Chart(college_by_major[college_by_major['Major']=='UNAR'].groupby( "College_chosen_by_non-matrics").sum().reset_index( ).sort_values("# Students",ascending=False).iloc[:30]).mark_bar().encode( x='# Students:Q', y=alt.Y( 'College_chosen_by_non-matrics:O', sort = alt.EncodingSortField( field='# Students', op = "sum", order = "descending" ) ) ).properties(height=400,width=400,title='Colleges Chosen by Undeclared Arts Majors: Last 3 Years').configure_mark( opacity=0.5,color='blue') ``` We can even look at each individual major to find where other students tend to go. 
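The groupby-count pattern used above to build `college_by_major` and `college_by_school` can be illustrated on a tiny synthetic frame (the toy data below is made up; only the column names mirror the notebook):

```python
import pandas as pd

# toy stand-in for the admissions data
df_toy = pd.DataFrame({
    "Unique_student_ID": [1, 2, 3, 4],
    "College_chosen_by_non-matrics": ["UAlbany", "UAlbany", "Cornell", "UAlbany"],
    "Major": ["BIOL", "BIOL", "PHYS", "CSIS"],
})

# count students per (college, major) pair, mirroring the notebook
counts = (df_toy.groupby(["College_chosen_by_non-matrics", "Major"])
                .count()
                .rename(columns={"Unique_student_ID": "# Students"})
                .reset_index())
print(counts)
```

Each row of `counts` is one (college, major) pair with the number of students who fell into it, which is exactly the shape Altair's bar charts expect.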
Below we define a function that takes in a major and returns the corresponding barplot.

```
def major_breakdown(major, n=20, col='blue'):
    return alt.Chart(college_by_major.groupby(["College_chosen_by_non-matrics", "Major"]).sum().reset_index(
    ).sort_values("# Students", ascending=False)[college_by_major['Major'] == major][:n]).mark_bar(color=col).encode(
        x='# Students:Q',
        y=alt.Y(
            'College_chosen_by_non-matrics:O', axis=alt.Axis(title=''),
            sort=alt.EncodingSortField(
                field='# Students',
                op="sum",
                order="descending"
            )
        )
    ).properties(height=300, width=200, title=f"Where else do {major} Majors go?")
```

Side-by-side barplots of where both Physics and Business majors decide to attend over Siena College.

```
major_breakdown('PHYS', n=10, col='green') | major_breakdown('BUSI', n=10, col='gold')
```

Is there a way to find the average distance from someone's house to the school they go to? Using the [Haversine Formula](https://en.wikipedia.org/wiki/Haversine_formula), we can calculate the distance from one point to another in miles. This is done in the source code found [here].

Haversine Formula:

$$ d = 3958.8\ \text{mi} \cdot c $$
$$ c = 2 \cdot \operatorname{atan2}( \sqrt{a}, \sqrt{1-a} ) $$
$$ a = \sin^2 (\Delta \phi /2) + \cos\phi_1 \cdot \cos\phi_2 \cdot \sin^2(\Delta \lambda /2) $$

* $d$ is the distance from A $\to$ B
* $\phi$ is the latitude (North/South)
* $\lambda$ is the longitude (East/West)

What is the median distance to Siena College of students, broken down by admission status?

```
alt.Chart(df.groupby("Admission_status").median().reset_index()
).mark_bar().encode(
    x=alt.X('Dist_to_Siena:Q', axis=alt.Axis(title='Distance to Siena (mi)')),
    y=alt.Y(
        'Admission_status:O',
        title='Admission Status',
        sort=alt.EncodingSortField(
            field='Dist_to_Siena',
            op="sum",
            order="descending",
        )
    ),
    color='Admission_status:O'
).properties(height=300, width=300, title="Median Distance to Siena College").configure_mark()
```

Create a layered, normalized histogram of distance to Siena by admission status.
``` f, axes = plt.subplots(figsize=(10,6)) mile_limit = 500 bins = 50 sns.distplot(df[(df['Admission_status']=='Applied') & df['Dist_to_Siena'].le(mile_limit)]['Dist_to_Siena'], color='skyblue',label='Applied',hist_kws={"alpha":0.5},bins=bins); sns.distplot(df[(df['Admission_status']=='Accepted') & df['Dist_to_Siena'].le(mile_limit)]['Dist_to_Siena'], color='red',label='Accepted',hist_kws={"alpha":0.4}, bins=bins); sns.distplot(df[(df['Admission_status']=='Enrolled') & df['Dist_to_Siena'].le(mile_limit)]['Dist_to_Siena'], color='gold',label='Enrolled',hist_kws={"alpha":0.3}, bins=bins); plt.legend(loc='best'); plt.ylabel('Kernel Density Estimate') plt.xlabel("Distance to Siena College (mi)") plt.title('Distance to Siena by Admission Status'); ``` For the 20 most popular colleges selected by accepted applicants to Siena, how does distance from Siena vs the distance to other colleges affect their popularity? Using [this](https://altair-viz.github.io/gallery/selection_histogram.html) as the boilerplate for the code, we are able to select ***ANY*** range of distance to Siena College, and generate the barplot for attendance at school this far away. 
```
top_choices = df.groupby("College_chosen_by_non-matrics").sum().sort_values("Unique_student_ID", ascending=False).iloc[:20].index.values
source = df.set_index("College_chosen_by_non-matrics").loc[top_choices].reset_index()
source = source[(source['Dist_to_Siena'] < 500) & (source['Dist_to_Ccbnm'] < 1000)]
source['index'] = source.index
source['Year_of_entry'] = (source['Year_of_entry'] - 30) / 100

brush = alt.selection(type='interval')

points = alt.Chart(source).mark_point().encode(
    y=alt.Y('Dist_to_Ccbnm:Q', axis=alt.Axis(title='Distance to College Attended (mi)')),
    x=alt.X('Dist_to_Siena:Q', axis=alt.Axis(title='Distance to Siena (mi)')),
    color=alt.condition(brush, 'CollegeCode:N', alt.value('lightgray'))
).add_selection(
    brush
).properties(height=800, width=800)

bars = alt.Chart(source).mark_bar().encode(
    y=alt.Y('College_chosen_by_non-matrics:N', sort=alt.EncodingSortField(
        field="College_chosen_by_non-matrics:Q",
        op="count",
        order="descending")
    ),
    color='CollegeCode:N',
    x=alt.X('count(College_chosen_by_non-matrics):Q')
).transform_filter(
    brush
).properties(height=800, width=800)

text = alt.Chart(source).mark_text(
    dx=-10, dy=3, color='white').encode(
    x=alt.X('count(College_chosen_by_non-matrics):Q', stack='zero', title='# Students'),
    y=alt.Y('College_chosen_by_non-matrics:N', axis=alt.Axis(title=''),
            sort=alt.EncodingSortField(
                field="College_chosen_by_non-matrics:Q",
                op="count",
                order="descending")),
    detail='CollegeCode:O',
    text=alt.Text('count(CollegeCode):Q', format='.0f')
).transform_filter(
    brush
).properties(height=800, width=800)

(points | (bars + text)).save('../reports/Dist2Siena_Ccbnm.html')
```

Now let's look at the distribution of distances another way. Let's add the ability to mouse over any college and obtain a histogram detailing the distribution of distance to Siena.
``` alt.data_transformers.enable('json') selector = alt.selection_single(empty='all', fields=['College_chosen_by_non-matrics']) states = alt.topo_feature(data.us_10m.url, feature='states') source = df[['ccbnm_long','ccbnm_lat','College_chosen_by_non-matrics', 'Dist_to_Ccbnm','ccbnm_for_dist']].dropna(subset=['ccbnm_for_dist']) base = alt.Chart(source).properties( width=800, height=800 ).add_selection(selector) background = alt.Chart(states).mark_geoshape( fill='lightgray', stroke='white' ).properties(title="Colleges Chosen by Non-Matrics", width=800, height=800 ).project('albersUsa') points = base.mark_circle(size=20,color='steelblue').encode( longitude='ccbnm_long:Q', latitude='ccbnm_lat:Q', tooltip=['College_chosen_by_non-matrics','ccbnm_lat','ccbnm_long'] ).add_selection( selector ) hists = base.mark_bar(opacity=0.5, thickness=100).encode( x=alt.X('Dist_to_Ccbnm', axis=alt.Axis(title='Distance to College (mi)'), bin=alt.Bin(step=50)), y=alt.Y('count()', axis=alt.Axis(title='Number of Students'), stack=None) ).transform_filter( selector ).properties(width=800,height=800) ((background + points) | hists).save('../reports/College_Map_Histogram.html') ```
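As a cross-check of the distance columns used throughout this notebook, the Haversine formula quoted earlier can be implemented directly. This is a sketch; the notebook's actual implementation lives in the linked source code:

```python
import math

def haversine_mi(lat1, lon1, lat2, lon2):
    # great-circle distance in miles, Earth radius R = 3958.8 mi
    R = 3958.8
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    return R * c

# one degree of longitude along the equator is roughly 69.1 miles
print(round(haversine_mi(0.0, 0.0, 0.0, 1.0), 1))  # → 69.1
```

Applying this to each (student latitude/longitude, college latitude/longitude) pair yields columns like `Dist_to_Siena` and `Dist_to_Ccbnm`.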
```
# import everything up front so we don't have to remember later
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from collections import Counter, defaultdict
import cv2
import numpy as np
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision.models import resnet18
from torchvision.datasets import VOCDetection
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from tqdm.auto import tqdm, trange
```

# Object Detection

We now have enough tools and knowledge to try object detection.

## First, let's pin down the problem statement

Suppose we have images:

![Images](./img/plain-img.png)

We ask people to go through the images and mark every object of interest. This can be done, for example, in the following ways:

- bounding box (bbox or bb) -- a rectangle `[x, y, w, h]` with sides parallel to the image edges (the simplest option)
- instance/semantic mask -- an image of zeros and ones the same size as the original, showing which pixels belong to the object (the most expensive option)

![Annotation](./img/img-with-annotation.png)

Other annotation schemes can be used for specific needs:

- center + radius `[x, y, r]`
- rotated bbox `[angle, x, y, w, h]`
- a polygonal outline `[(x_0, y_0), ... (x_n, y_n)]`
- whatever

**So: we want to build a network that, from a single image, predicts the position (bbox) and class of several objects (out of N classes).**

## Interpreting feature maps

![](./img/backbone.png)

If we run an image `[3, W, H]` through a familiar convolutional network (say, a classification net up to the GAP layer), we get a tensor with many channels and a small spatial size `[ch, w, h]` (the spatial dimensions shrink by a factor of K; K=32 for resnet50 **TODO: check the sizes**).
Roughly speaking, each pixel of the output tensor is responsible for a `KxK` patch of the input image (the receptive field, however, covers the whole image with room to spare).

![](./img/img-with-grid.png)

We have already tried to interpret the pixels of this tensor as a Class Activation Map. It shows something interesting, but it clearly won't work as a detector. However, we can formulate detection as an optimization problem and train the model specifically for it.

Let's start from the CAM idea and attach to each pixel of the output map several heads that predict the quantities needed for detection. We agree that a pixel belongs to an object only if the object's center (more precisely, the center of its bbox) falls into that pixel's cell.

![](./img/image-ssd.png)

To get detection going we need the following heads:

- `bbox regression` - regressing the bbox
- `objectness` - whether the pixel belongs to an object or not
- `clf` - if it does, let's predict the class

As we discussed earlier, a Conv1x1 acts on each pixel exactly like an FC layer. So for our purposes it is enough to add to the model a Conv1x1 with `4 + 1 + N_classes` output channels.

```
def monkey_forward(net, x):
    x = net.conv1(x)
    x = net.bn1(x)
    x = net.relu(x)
    x = net.maxpool(x)

    x = net.layer1(x)
    x = net.layer2(x)
    x = net.layer3(x)
    x = net.layer4(x)
    x = net.final(x)  # [bs, 512, 7, 7]

    # x = net.avgpool(x)  # [bs, 512, 1, 1]
    # [..., dim]
    # x = torch.flatten(x, 1)  # []
    # x = net.fc(x)
    return x
```

# Adapt torchvision classification model for detection

At first glance we only need to drop the avgpool, but the [forward method in torchvision.model.resnet is written inconveniently for our purposes](https://pytorch.org/vision/stable/_modules/torchvision/models/resnet.html#resnet18), so we'll have to patch it.
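A toy illustration of the same monkey-patching trick on a made-up class (the names here are hypothetical, not torchvision's):

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()

# replace the bound method on this one instance at runtime,
# just like model.forward = lambda x: monkey_forward(model, x)
g.greet = lambda: "patched hello"

print(g.greet())          # → patched hello
print(Greeter().greet())  # → hello  (other instances are untouched)
```

The instance attribute shadows the class method, so only the patched object changes behavior — which is also exactly why such patches are easy to lose track of.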
**JFYI: swapping out logic at runtime (aka [monkey patching](https://en.wikipedia.org/wiki/Monkey_patch)) is a common but _dangerous_ practice**

# Prediction transformations

**How do we turn the predicted numbers into bbox coordinates?**

Say one pixel of the output tensor corresponds to a (kx, ky) patch of the input image. BBoxes live in the space of the original images, while the raw network outputs live in the space of the output tensors. For convenience, let's represent a bbox as center coordinates (easier to regress) plus width and height. We predict 4 numbers: $t_x, t_y, t_w, t_h$. `scale` is the cost of one output pixel measured in input pixels.

$$ x_c = s \cdot (\tanh t_x + i + 0.5)\\ y_c = s \cdot (\tanh t_y + j + 0.5)\\ w = s \cdot \exp{t_w}\\ h = s \cdot \exp{t_h} $$

**NB: In practice, so-called anchors -- seed bboxes -- are usually used; see whiteboard**

```
class VeryModel(nn.Module):
    def __init__(self, n_classes=12, cfg=None):
        super().__init__()
        self.n_classes = n_classes
        self.cfg = cfg
        model = resnet18(pretrained=True)
        # add a layer to the model and swap out its forward
        model.final = nn.Conv2d(model.fc.in_features, 4 + 1 + n_classes, 1)
        model.forward = lambda x: monkey_forward(model, x)
        self.inner = model

    def forward(self, x):
        output = self.inner(x)  # [bs, 5+N, 7, 7]
        _, _, W, H = x.shape
        bs, ch, w, h = output.shape
        # 5 + N = 4 (bbox) + obj + clf
        scale_x = W / w
        scale_y = H / h
        ox = torch.arange(w).view([1, 1, w, 1]).repeat(1, 1, 1, h) + 0.5
        oy = torch.arange(h).view([1, 1, 1, h]).repeat(1, 1, w, 1) + 0.5
        cx = scale_x * (torch.tanh(output[:, 0:1, ...]) + ox)
        cy = scale_y * (torch.tanh(output[:, 1:2, ...]) + oy)
        ww = scale_x * torch.exp(output[:, 2:3, ...])
        hh = scale_y * torch.exp(output[:, 3:4, ...])
        bb = torch.cat([cx, cy, ww, hh], dim=1)
        obj = output[:, 4:5, ...]
        clf = output[:, 5:, ...]
        return dict(
            bb=bb,
            obj=obj,
            clf=clf,
        )

    def compute_all(self, batch, device=None):
        # sketch: loss = [obj > th](loc + clf) + obj
        pass

net = VeryModel()
net = net.eval()

with torch.no_grad():
    x = torch.zeros((1, 3, 416, 246))
    out = net(x)

print(x.shape)
for k, v in out.items():
    print(k, v.shape)
```

# WOK _Pascal VOC EDA_

Just a quick look at what's in the dataset.

```
ds = VOCDetection("./voc", image_set="train")
# print(ds[0])

some_stats = defaultdict(list)
N = len(ds)
for i in trange(N):
    pic, ddict = ds[i]
    anno = ddict['annotation']
    w = int(anno['size']['width'])
    h = int(anno['size']['height'])
    d = int(anno['size']['depth'])
    some_stats['w'].append(w)
    some_stats['h'].append(h)
    some_stats['d'].append(d)
    some_stats['objects_per_image'].append(len(anno['object']))
    for x in anno['object']:
        name = x['name']
        bb = x['bndbox']
        ww = int(bb['xmax']) - int(bb['xmin'])
        hh = int(bb['ymax']) - int(bb['ymin'])
        aspect = (ww / hh + 1e-5)
        rel_ww = ww / w
        rel_hh = hh / h
        some_stats['name'].append(name)
        some_stats['ww'].append(ww)
        some_stats['hh'].append(hh)
        some_stats['aspect'].append(aspect)
        some_stats['rel_ww'].append(rel_ww)
        some_stats['rel_hh'].append(rel_hh)

# names
print(sorted(Counter(some_stats['name']).items(), key=lambda t: -t[1]))
del some_stats['name']

for k, v in some_stats.items():
    plt.figure()
    plt.title(k)
    sns.distplot(v, kde=False)
    plt.show()
```

# Let's prepare data

```
CLASSES = [
    'person', 'chair', 'car', 'dog', 'bottle', 'cat', 'bird', 'pottedplant',
    'sheep', 'boat', 'aeroplane', 'tvmonitor', 'bicycle', 'sofa', 'horse',
    'motorbike', 'diningtable', 'cow', 'train', 'bus',
]
cls2idx = {k: i for i, k in enumerate(CLASSES)}

# unused
def process_image(pil_image):
    img = np.asarray(pil_image)
    img = img.astype(np.float32) / 255.0  # img \in [0, 1]
    mean = np.array([0.485, 0.456, 0.406]).reshape(1, 1, 3)
    std = np.array([0.229, 0.224, 0.225]).reshape(1, 1, 3)
    img = (img - mean) / std
    img = img.astype(np.float32)
    img = np.transpose(img, [2, 0, 1])
    return img

class Verydet:
    def __init__(self, root, image_set="train", download=True, transform_fn=None):
        self.dataset = VOCDetection(root, image_set=image_set, download=download)
        self.transform_fn = transform_fn

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, item):
        pil_image, ddict = self.dataset[item]
        img = np.asarray(pil_image)
        ddict = ddict['annotation']
        category_ids = []
        bboxes = []
        for x in ddict['object']:
            category_ids.append(cls2idx[x['name']])
            bb = {k: int(v) for k, v in x['bndbox'].items()}
            bboxes.append(tuple([bb[_] for _ in ["xmin", "ymin", "xmax", "ymax"]]))
        ret = {"image": img, "category_ids": category_ids, "bboxes": bboxes}
        if self.transform_fn is not None:
            ret = self.transform_fn(**ret)
        return ret

trainset = Verydet("./voc", image_set="train", download=True)
valset = Verydet("./voc", image_set="val", download=True)
trainset[0]
```

# Bbox formats

Cheat sheet from albumentations:

- The `coco` format: `[x_min, y_min, width, height]`, e.g. [97, 12, 150, 200].
- The `pascal_voc` format: `[x_min, y_min, x_max, y_max]`, e.g. [97, 12, 247, 212].
- The `albumentations` format is like `pascal_voc`, but normalized, in other words: `[x_min, y_min, x_max, y_max]`, e.g. [0.2, 0.3, 0.4, 0.5].
- The `yolo` format: `[x, y, width, height]`, e.g. [0.1, 0.2, 0.3, 0.4]; `x`, `y` - normalized bbox center; `width`, `height` - normalized bbox width and height.
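The format cheat sheet above is just simple arithmetic; here is a sketch of converting a `pascal_voc` box into the other conventions (the image size is a made-up example):

```python
def pascal_to_coco(box):
    # [x_min, y_min, x_max, y_max] -> [x_min, y_min, width, height]
    x_min, y_min, x_max, y_max = box
    return [x_min, y_min, x_max - x_min, y_max - y_min]

def pascal_to_yolo(box, img_w, img_h):
    # -> [x_center, y_center, width, height], all normalized by the image size
    x_min, y_min, x_max, y_max = box
    w, h = x_max - x_min, y_max - y_min
    return [(x_min + w / 2) / img_w, (y_min + h / 2) / img_h, w / img_w, h / img_h]

box = [97, 12, 247, 212]    # pascal_voc, the example from the cheat sheet
print(pascal_to_coco(box))  # → [97, 12, 150, 200]
print(pascal_to_yolo(box, 500, 400))
```

Note that the first conversion reproduces the `coco` example from the cheat sheet exactly.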
```
# based on albumentation tutorial
# https://albumentations.ai/docs/examples/example_bboxes2/
BOX_COLOR = (255, 0, 0)  # Red
LINE_COLOR = (100, 100, 255)
TEXT_COLOR = (255, 255, 255)  # White

def visualize_bbox(img, bbox, class_name, color=BOX_COLOR, thickness=2):
    """Visualizes a single bounding box on the image"""
    # x_min, y_min, w, h = bbox
    # x_min, x_max, y_min, y_max = int(x_min), int(x_min + w), int(y_min), int(y_min + h)
    x_min, y_min, x_max, y_max = [int(_) for _ in bbox]
    cv2.rectangle(img, (x_min, y_min), (x_max, y_max), color=color, thickness=thickness)
    ((text_width, text_height), _) = cv2.getTextSize(class_name, cv2.FONT_HERSHEY_SIMPLEX, 0.35, 1)
    cv2.rectangle(img, (x_min, y_min - int(1.3 * text_height)), (x_min + text_width, y_min), BOX_COLOR, -1)
    cv2.putText(
        img,
        text=class_name,
        org=(x_min, y_min - int(0.3 * text_height)),
        fontFace=cv2.FONT_HERSHEY_SIMPLEX,
        fontScale=0.35,
        color=TEXT_COLOR,
        lineType=cv2.LINE_AA,
    )
    return img

def visualize(image, bboxes, category_ids, category_id_to_name, grid=None, centers=False, target_obj_mask=None):
    img = image.copy()
    if grid is not None:
        height, width, _ = img.shape
        nh, nw = grid
        for i in range(1, nw):
            x = (width // nw) * i
            cv2.line(img, (x, 0), (x, height), LINE_COLOR, 1)
        for j in range(1, nh):
            y = (height // nh) * j
            cv2.line(img, (0, y), (width, y), LINE_COLOR, 1)
    for bbox, category_id in zip(bboxes, category_ids):
        class_name = category_id_to_name[category_id]
        print(bbox, class_name)
        img = visualize_bbox(img, bbox, class_name)
    plt.figure(figsize=(12, 12))
    plt.axis('off')
    plt.imshow(img)

entry = trainset[0]
visualize(entry['image'], entry['bboxes'], entry['category_ids'], CLASSES, grid=(7, 7))
```

# Axes, coordinates, images and opencv

The axis order of the tensors we work with is usually called `channels_first`, aka `NCHW`; the image dimensions come in the order height, width. Meanwhile, opencv drawing functions assume that point coordinates are `(x, y)`. This is simply a goldmine of errors.
That's why it is always worth testing models/pipelines on non-square images and keeping an eye on the off-diagonal elements (errors there are hard to catch). How do you check yourself? Through visualization and unit tests.

```
ds = VOCDetection("./voc", image_set="train")
img = np.asarray(ds[0][0])

# Direct pixel manipulation: y-coord, x-coord
img1 = img.copy()
img1[60-10: 60+10, 190-10: 190+10, 1] = 255

# Drawing x-coord, y-coord
img2 = img.copy()
cv2.circle(img2, (190, 60), 10, (255, 0, 0), -1)

plt.figure()
plt.title("ndarray assignment")
plt.imshow(img1)
plt.show()

plt.figure()
plt.title("cv2 drawing")
plt.imshow(img2)
plt.show()

import albumentations as A
import albumentations.pytorch.transforms as APT

np.random.seed(7)
transform = A.Compose(
    [
        A.ShiftScaleRotate(p=0.5),
        A.PadIfNeeded(min_height=420, min_width=420),
        A.RandomCrop(416, 416, always_apply=True),
        # A.Normalize(),
        # APT.ToTensorV2(always_apply=True),
    ],
    bbox_params=A.BboxParams(format='pascal_voc', label_fields=['category_ids']),
)

entry = trainset[200]
transformed = transform(**entry)
print(transformed['image'].shape)
visualize(transformed['image'], transformed['bboxes'], transformed['category_ids'], CLASSES, grid=(7, 7))

def collate_fn(lst):
    tmp = defaultdict(list)
    ret = dict()
    for entry in lst:
        for k, v in entry.items():
            tmp[k].append(v)
    for k, v in tmp.items():
        if isinstance(v[0], np.ndarray):
            for vv in v:
                print(vv.shape)
            v = np.concatenate(v, 0)
        if isinstance(v[0], torch.Tensor):
            v = torch.cat(v, 0)
        ret[k] = v
    return ret

trainset = Verydet("./voc", image_set="train", transform_fn=transform)
# valset = Verydet("./voc", image_set="val", transform_fn=transform)
dl = DataLoader(trainset, shuffle=True, batch_size=4, collate_fn=collate_fn)
batch = next(iter(dl))

from pathlib import Path  # was missing in the original

class Trainer:
    def __init__(self, model: nn.Module, batch_size: int = 128):
        self.model = model
        self.batch_size = batch_size
        run_folder = Path(os.getcwd())
        train_log_folder = run_folder / "train_log"
        val_log_folder = run_folder / "val_log"
        print(f"Run output folder is {run_folder}")
        os.makedirs(run_folder, exist_ok=True)
        os.makedirs(train_log_folder, exist_ok=True)
        os.makedirs(val_log_folder, exist_ok=True)
        self.run_folder = run_folder
        self.device = 'cpu'
        if torch.cuda.is_available():
            self.device = torch.cuda.current_device()
        self.model = self.model.to(self.device)
        self.global_step = 0
        self.train_writer = SummaryWriter(log_dir=str(train_log_folder))
        self.val_writer = SummaryWriter(log_dir=str(val_log_folder))

    def save_checkpoint(self, path):
        torch.save(self.model.state_dict(), path)

    def train(self, num_epochs: int):
        model = self.model
        optimizer = model.get_optimizer()
        train_loader = model.get_loader(train=True)
        val_loader = model.get_loader(train=False)
        best_loss = float('inf')
        for epoch in range(num_epochs):
            model.train()
            for batch in tqdm(train_loader):
                batch = {k: v.to(self.device) for k, v in batch.items()}
                loss, details = model.compute_all(batch)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                model.post_train_batch()
                for k, v in details.items():
                    self.train_writer.add_scalar(k, v, global_step=self.global_step)
                self.global_step += 1
            model.eval()
            val_losses = []
            for batch in tqdm(val_loader):
                batch = {k: v.to(self.device) for k, v in batch.items()}
                loss, details = model.compute_all(batch)
                val_losses.append(loss.item())
            val_loss = np.mean(val_losses)
            model.post_val_stage(val_loss)
            if val_loss < best_loss:
                self.save_checkpoint(str(self.run_folder / "best_checkpoint.pth"))
                best_loss = val_loss

    def find_lr(self, min_lr: float = 1e-6, max_lr: float = 1e-1, num_lrs: int = 20, smooth_beta: float = 0.8) -> dict:
        lrs = np.geomspace(start=min_lr, stop=max_lr, num=num_lrs)
        logs = {'lr': [], 'loss': [], 'avg_loss': []}
        avg_loss = None
        model = self.model
        optimizer = model.get_optimizer()
        train_loader = model.get_loader(train=True)
        model.train()
        for lr, batch in tqdm(zip(lrs, train_loader), desc='finding LR', total=num_lrs):
            # apply new lr
            for param_group in optimizer.param_groups:
                param_group['lr'] = lr
            # train step
            batch = {k: v.to(self.device) for k, v in batch.items()}
            loss, details = model.compute_all(batch)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # calculate smoothed loss (as a plain float, not a tensor)
            if avg_loss is None:
                avg_loss = loss.item()
            else:
                avg_loss = smooth_beta * avg_loss + (1 - smooth_beta) * loss.item()
            # store values into logs
            logs['lr'].append(lr)
            logs['avg_loss'].append(avg_loss)
            logs['loss'].append(loss.item())
        logs.update({key: np.array(val) for key, val in logs.items()})
        return logs
```
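The learning-rate grid used by `find_lr` is just a geometric sweep; a minimal check of what `np.geomspace` produces:

```python
import numpy as np

# 5 learning rates spaced evenly on a log scale between 1e-6 and 1e-2
lrs = np.geomspace(start=1e-6, stop=1e-2, num=5)
print(lrs)

# consecutive ratios are constant, which is what makes the sweep "geometric"
ratios = lrs[1:] / lrs[:-1]
print(np.allclose(ratios, 10.0))  # → True
```

Plotting the smoothed loss from `find_lr` against this grid on a log-x axis is the usual way to pick a learning rate.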
<a href="https://colab.research.google.com/github/anshupandey/Deep-Learning-for-structured-Data/blob/main/code9_airline_passenger_volume_forecast_with_LSTM.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# LSTM - forecasting passenger volume at Changi Airport

## Data Collection and data preparation

```
!wget -q https://storage.data.gov.sg/civil-aircraft-arrivals-departures-passengers-and-mail-changi-airport-monthly/civil-aircraft-arrivals-departures-passengers-and-mail-changi-airport-monthly.zip
!unzip "civil-aircraft-arrivals-departures-passengers-and-mail-changi-airport-monthly.zip"

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

# load data
df = pd.read_csv(r"/content/civil-aircraft-arrivals-departures-and-passengers-changi-airport-monthly.csv")
df.shape
df.head()

df = df[df.level_1 == 'Total Passengers']
df.shape
df.head()

df.index = pd.to_datetime(df.month)
df = df[['value']]
df.head()

plt.figure(figsize=(15, 5))
plt.plot(df)
plt.show()

df2 = df['2005-01-01':'2018-12-01']
df2.shape
df2.head()

plt.figure(figsize=(15, 5))
plt.plot(df2)
plt.show()
```

### Forecasting with sequence size = 1

```
df3 = df2.copy()
df3['features'] = df3['value'].shift(1)
df3.head()

df3.rename(columns={'value': 'target'}, inplace=True)
df3.dropna(inplace=True)
df3 = df3[['features', 'target']]
df3.head()

x = df3.features.values
y = df3.target.values
print(x.shape, y.shape)

# samples, timestamps, features
x = x.reshape(167, 1, 1)
```

#### Modelling LSTM for forecasting

```
from tensorflow.keras import models, layers

ip_layer = layers.Input(shape=(1, 1))
# first lstm layer
lstm1 = layers.LSTM(15, activation='relu')(ip_layer)
h1 = layers.Dense(20, activation='relu')(lstm1)
op = layers.Dense(1)(h1)

model = models.Model(inputs=ip_layer, outputs=op)
model.summary()

model.compile(loss='mae', optimizer='adam')
model.fit(x, y, epochs=2000, shuffle=False)

ytrue = df["2019-01-01":"2019-12-01"]['value']
ytrue.shape ytrue def forecast(value,size): ans = [] for i in range(size): value = np.array(value).reshape(-1,1,1) pred = model.predict(value) ans.append(pred[0]) value = pred return np.array(ans) df['2018-12-01':'2018-12-01']['value'].values ypred = forecast(df['2018-12-01':'2018-12-01']['value'].values,size=12) ypred = pd.DataFrame(ypred,index = ytrue.index) ypred plt.plot(ytrue,c='green') plt.plot(ypred,c='red') plt.show() ``` ### Forecasting with sequence size = 12 ``` from sklearn.preprocessing import MinMaxScaler mm = MinMaxScaler() df3 = df2.copy() df3['value'] = mm.fit_transform(df3[['value']]) df3.head() def split_sequence(sequence,n_steps): x = [] y = [] for i in range(len(sequence)): # get the end index of the pattern end_ix = i + n_steps # check if the iteration is beyond the size of sequence, break the loop if end_ix > len(sequence)-1: break # collect the input and output parts of the pattern seq_x, seq_y = sequence[i:end_ix],sequence[end_ix] x.append(seq_x) y.append(seq_y) return np.array(x),np.array(y) x,y = split_sequence(df3.value,n_steps=12) print(x.shape,y.shape) # samples, features, timestamps x = x.reshape(-1,12,1) x.shape ``` #### Modelling LSTM for forecasting ``` from tensorflow.keras import models,layers ip_layer = layers.Input(shape=(12,1)) # first lstm layer lstm1 = layers.LSTM(50,activation='relu',return_sequences=True)(ip_layer) lstm2 = layers.LSTM(80,activation='relu',return_sequences=True)(lstm1) lstm3 = layers.LSTM(100,activation='relu',return_sequences=False)(lstm2) op = layers.Dense(1)(lstm3) model = models.Model(inputs=ip_layer,outputs=op) model.summary() model.compile(loss='mae',optimizer='adam') model.fit(x,y,epochs=1000,shuffle=False) ytrue = df["2019-01-01":"2019-12-01"]['value'] ytrue.shape ytrue df['2018-01-01':'2018-12-01']['value'].values ip = np.array(mm.transform(df['2018-01-01':'2018-12-01']).reshape(-1,1)) print(ip.shape) ip.flatten().tolist() ans = [] value = ip.flatten().tolist() for i in range(12): ip = 
np.array(value).reshape(1,12,1) pred = model.predict(ip) #print(i,pred,ip) value.pop(0) value.append(pred[0][0]) ans.append(pred[0]) ypred = np.array(ans) ypred ypred = mm.inverse_transform(ypred) ypred = pd.DataFrame(ypred,index = ytrue.index) ypred plt.plot(ytrue,c='green') plt.plot(ypred,c='red') plt.show() ```
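The `split_sequence` helper is the heart of the sequence-size-12 setup. Here is a small self-contained NumPy check of the same windowing logic, run on a toy integer sequence rather than the airport data:

```python
import numpy as np

def split_sequence(sequence, n_steps):
    """Slide a window of length n_steps over the sequence;
    each window is one sample, labelled by the value right after it."""
    x, y = [], []
    for i in range(len(sequence)):
        end_ix = i + n_steps
        if end_ix > len(sequence) - 1:
            break
        x.append(sequence[i:end_ix])
        y.append(sequence[end_ix])
    return np.array(x), np.array(y)

seq = np.arange(10)                    # 0, 1, ..., 9
x, y = split_sequence(seq, n_steps=3)
print(x.shape, y.shape)                # (7, 3) (7,)
print(x[0], y[0])                      # [0 1 2] 3

# reshape to (samples, timesteps, features) before feeding an LSTM
x = x.reshape(-1, 3, 1)
```

A sequence of length `L` yields `L - n_steps` samples, which is why the 168-month training series with `n_steps=12` produces 156 windows.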
## Classes for callback implementors

```
from fastai.gen_doc.nbdoc import *
from fastai.callback import *
from fastai.basics import *
```

fastai provides a powerful *callback* system, which is documented on the [`callbacks`](/callbacks.html#callbacks) page; look on that page if you're just looking for how to use existing callbacks. If you want to create your own, you'll need to use the classes discussed below.

A key motivation for the callback system is that additional functionality can be entirely implemented in a single callback, so that it's easily read. By using this trick, we will have different methods categorized in different callbacks where we will find clearly stated all the interventions the method makes in training. For instance in the [`LRFinder`](/callbacks.lr_finder.html#LRFinder) callback, on top of running the fit function with exponentially growing LRs, it needs to handle some preparation and clean-up, and all this code can be in the same callback so we know exactly what it is doing and where to look if we need to change something.

In addition, it allows our [`fit`](/basic_train.html#fit) function to be very clean and simple, yet still easily extended. So far in implementing a number of recent papers, we haven't yet come across any situation where we had to modify our training loop source code - we've been able to use callbacks every time.

```
show_doc(Callback)
```

To create a new type of callback, you'll need to inherit from this class, and implement one or more methods as required for your purposes. Perhaps the easiest way to get started is to look at the source code for some of the pre-defined fastai callbacks. You might be surprised at how simple they are!
For instance, here is the **entire** source code for [`GradientClipping`](/train.html#GradientClipping):

```python
@dataclass
class GradientClipping(LearnerCallback):
    clip:float
    def on_backward_end(self, **kwargs):
        if self.clip:
            nn.utils.clip_grad_norm_(self.learn.model.parameters(), self.clip)
```

You generally want your custom callback constructor to take a [`Learner`](/basic_train.html#Learner) parameter, e.g.:

```python
@dataclass
class MyCallback(Callback):
    learn:Learner
```

Note that this allows the callback user to just pass your callback name to `callback_fns` when constructing their [`Learner`](/basic_train.html#Learner), since that always passes `self` when constructing callbacks from `callback_fns`. In addition, by passing the learner, this callback will have access to everything: e.g. all the inputs/outputs as they are calculated, the losses, and also the data loaders, the optimizer, etc. At any time:

- Changing `self.learn.data.train_dl` or `self.data.valid_dl` will change them inside the fit function (we just need to pass the [`DataBunch`](/basic_data.html#DataBunch) object to the fit function and not `data.train_dl`/`data.valid_dl`)
- Changing `self.learn.opt.opt` (we have an [`OptimWrapper`](/callback.html#OptimWrapper) on top of the actual optimizer) will change it inside the fit function.
- Changing `self.learn.data` or `self.learn.opt` directly WILL NOT change the data or the optimizer inside the fit function.
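The dispatch mechanism behind these callbacks can be sketched in plain Python: a toy handler that fires an event on each registered callback and merges any returned dict into shared training state. This is an illustration of the pattern only, not fastai's actual implementation, and the `SkipStepper` callback is a made-up example:

```python
class SkipStepper:
    """Toy callback: asks the handler to skip the optimizer
    step on every other batch by returning a state update."""
    def on_batch_begin(self, **state):
        return {'skip_step': state['num_batch'] % 2 == 1}

class ToyHandler:
    def __init__(self, callbacks):
        self.callbacks = callbacks
        self.state = {'num_batch': 0, 'skip_step': False}

    def __call__(self, event):
        # fire the event on every callback; a returned dict updates shared state
        for cb in self.callbacks:
            updates = getattr(cb, event, lambda **kw: None)(**self.state)
            if updates:
                self.state.update(updates)

handler = ToyHandler([SkipStepper()])
steps_taken = []
for batch in range(4):
    handler.state['num_batch'] = batch
    handler('on_batch_begin')
    if not handler.state['skip_step']:
        steps_taken.append(batch)   # pretend this is the optimizer step

print(steps_taken)  # [0, 2]
```

The real `CallbackHandler` works on the same principle, just with many more events and state keys.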
In any of the callbacks you can unpack in the kwargs:

- `n_epochs`, contains the number of epochs the training will take in total
- `epoch`, contains the number of the current epoch
- `iteration`, contains the number of iterations done since the beginning of training
- `num_batch`, contains the number of the batch we're at in the dataloader
- `last_input`, contains the last input that got through the model (eventually updated by a callback)
- `last_target`, contains the last target that got through the model (eventually updated by a callback)
- `last_output`, contains the last output produced by the model (eventually updated by a callback)
- `last_loss`, contains the last loss computed (eventually updated by a callback)
- `smooth_loss`, contains the smoothed version of the loss
- `last_metrics`, contains the last validation loss and metrics computed
- `pbar`, the progress bar
- [`train`](/train.html#train), flag to know if we're in training mode or not
- `stop_training`, that will stop the training at the end of the current epoch if True
- `stop_epoch`, that will break the current epoch loop
- `skip_step`, that will skip the next optimizer step
- `skip_zero`, that will skip the next zero grad

When returning a dictionary with those key names, the state of the [`CallbackHandler`](/callback.html#CallbackHandler) will be updated with any of those changes, so in any [`Callback`](/callback.html#Callback), you can change those values.

### Methods your subclass can implement

All of these methods are optional; your subclass can handle as many or as few as you require.

```
show_doc(Callback.on_train_begin)
```

Here we can initialize anything we need. The optimizer has now been initialized.
We can change any hyper-parameters by typing, for instance:

```
self.opt.lr = new_lr
self.opt.mom = new_mom
self.opt.wd = new_wd
self.opt.beta = new_beta
```

```
show_doc(Callback.on_epoch_begin)
```

This is not technically required since we have `on_train_begin` for epoch 0 and `on_epoch_end` for all the other epochs, yet it makes code that needs to run at the beginning of every epoch easier to write and more readable.

```
show_doc(Callback.on_batch_begin)
```

Here is the perfect place to prepare everything before the model is called. Example: change the values of the hyperparameters (if we don't do it in `on_batch_end` instead).

At the end of this event, `xb` and `yb` will be set to `last_input` and `last_target` in the state of the [`CallbackHandler`](/callback.html#CallbackHandler).

```
show_doc(Callback.on_loss_begin)
```

Here is the place to run some code that needs to be executed after the output has been computed but before the loss computation. Example: putting the output back in FP32 when training in mixed precision.

At the end of this event, the output will be set to `last_output` in the state of the [`CallbackHandler`](/callback.html#CallbackHandler).

```
show_doc(Callback.on_backward_begin)
```

Here is the place to run some code that needs to be executed after the loss has been computed but before the gradient computation. Example: `reg_fn` in RNNs.

At the end of this event, the loss will be set to `last_loss` in the state of the [`CallbackHandler`](/callback.html#CallbackHandler).

```
show_doc(Callback.on_backward_end)
```

Here is the place to run some code that needs to be executed after the gradients have been computed but before the optimizer is called. If `skip_step` is `True` at the end of this event, the optimizer step is skipped.

```
show_doc(Callback.on_step_end)
```

Here is the place to run some code that needs to be executed after the optimizer step but before the gradients are zeroed.
If `skip_zero` is `True` at the end of this event, the gradients are not zeroed.

```
show_doc(Callback.on_batch_end)
```

Here is the place to run some code that needs to be executed after a batch is fully done. Example: change the values of the hyperparameters (if we don't do it in `on_batch_begin` instead).

If `end_epoch` is `True` at the end of this event, the current epoch is interrupted (example: lr_finder stops the training when the loss explodes).

```
show_doc(Callback.on_epoch_end)
```

Here is the place to run some code that needs to be executed at the end of an epoch. Example: save the model if we have a new best validation loss/metric.

If `end_training` is `True` at the end of this event, the training stops (example: early stopping).

```
show_doc(Callback.on_train_end)
```

Here is the place to tidy everything. It's always executed even if there was an error during the training loop, and has an extra kwarg named `exception` to check whether there was an exception or not. Examples: save log files, load the best model found during training.

```
show_doc(Callback.get_state)
```

This is used internally when trying to export a [`Learner`](/basic_train.html#Learner). You won't need to subclass this function, but you can add attribute names to the lists `exclude` or `not_min` of the [`Callback`](/callback.html#Callback) you are designing. Attributes in `exclude` are never saved; attributes in `not_min` only if `minimal=False`.

## Annealing functions

The following functions provide different annealing schedules. You probably won't need to call them directly, but would instead use them as part of a callback.
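The linear, cosine, and exponential schedules follow simple closed forms. Here is a sketch of the three under the `(start, end, pct)` calling convention, using the standard formulas — an illustration, not fastai's exact source:

```python
import math

def annealing_linear(start, end, pct):
    # straight line from start (pct=0) to end (pct=1)
    return start + pct * (end - start)

def annealing_cos(start, end, pct):
    # half-cosine from start to end: slow at both ends, fast in the middle
    return end + (start - end) / 2 * (math.cos(math.pi * pct) + 1)

def annealing_exp(start, end, pct):
    # geometric interpolation; assumes start and end are positive
    return start * (end / start) ** pct

# all three agree at the endpoints
for fn in (annealing_linear, annealing_cos, annealing_exp):
    print(fn.__name__, fn(2, 1e-2, 0.0), fn(2, 1e-2, 1.0))
```

`pct` is the fraction of the schedule elapsed, so a `Stepper` only needs to track iteration counts and feed `pct` into whichever function it was given.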
Here's what each one looks like:

```
annealings = "NO LINEAR COS EXP POLY".split()
fns = [annealing_no, annealing_linear, annealing_cos, annealing_exp, annealing_poly(0.8)]
for fn, t in zip(fns, annealings):
    plt.plot(np.arange(0, 100), [fn(2, 1e-2, o) for o in np.linspace(0.01,1,100)], label=t)
plt.legend();

show_doc(annealing_cos)
show_doc(annealing_exp)
show_doc(annealing_linear)
show_doc(annealing_no)
show_doc(annealing_poly)
show_doc(CallbackHandler)
```

You probably won't need to use this class yourself. It's used by fastai to combine all the callbacks together and call any relevant callback functions for each training stage. The methods below simply call the equivalent method in each callback function in [`self.callbacks`](/callbacks.html#callbacks).

```
show_doc(CallbackHandler.on_backward_begin)
show_doc(CallbackHandler.on_backward_end)
show_doc(CallbackHandler.on_batch_begin)
show_doc(CallbackHandler.on_batch_end)
show_doc(CallbackHandler.on_epoch_begin)
show_doc(CallbackHandler.on_epoch_end)
show_doc(CallbackHandler.on_loss_begin)
show_doc(CallbackHandler.on_step_end)
show_doc(CallbackHandler.on_train_begin)
show_doc(CallbackHandler.on_train_end)
show_doc(CallbackHandler.set_dl)
show_doc(OptimWrapper)
```

This is a convenience class that provides a consistent API for getting and setting optimizer hyperparameters. For instance, for [`optim.Adam`](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) the momentum parameter is actually `betas[0]`, whereas for [`optim.SGD`](https://pytorch.org/docs/stable/optim.html#torch.optim.SGD) it's simply `momentum`. As another example, the details of handling weight decay depend on whether you are using `true_wd` or the traditional L2 regularization approach.

This class also handles setting different WD and LR for each layer group, for discriminative layer training.
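The `betas[0]`-vs-`momentum` difference is exactly the kind of mapping such a wrapper hides. A minimal plain-Python sketch of the idea, using dicts in place of real PyTorch param groups (so this is the pattern, not `OptimWrapper`'s actual code):

```python
class ToyOptimWrapper:
    """Expose a uniform .mom property over optimizers that name
    their momentum differently (Adam: betas[0], SGD: momentum)."""
    def __init__(self, param_groups):
        self.param_groups = param_groups

    @property
    def mom(self):
        pg = self.param_groups[0]
        return pg['betas'][0] if 'betas' in pg else pg['momentum']

    @mom.setter
    def mom(self, val):
        # write the new momentum back under whichever name the optimizer uses
        for pg in self.param_groups:
            if 'betas' in pg:
                pg['betas'] = (val, pg['betas'][1])
            else:
                pg['momentum'] = val

adam_like = ToyOptimWrapper([{'betas': (0.9, 0.999), 'lr': 1e-3}])
sgd_like = ToyOptimWrapper([{'momentum': 0.9, 'lr': 1e-2}])
adam_like.mom = 0.95
sgd_like.mom = 0.95
print(adam_like.mom, sgd_like.mom)  # 0.95 0.95
```

A callback can then write `self.opt.mom = new_mom` without caring which optimizer is underneath, which is what makes the hyper-parameter snippets in `on_train_begin` above possible.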
```
show_doc(OptimWrapper.clear)
show_doc(OptimWrapper.create)
show_doc(OptimWrapper.new)
show_doc(OptimWrapper.read_defaults)
show_doc(OptimWrapper.read_val)
show_doc(OptimWrapper.set_val)
show_doc(OptimWrapper.step)
show_doc(OptimWrapper.zero_grad)
show_doc(SmoothenValue)
```

Used for smoothing loss in [`Recorder`](/basic_train.html#Recorder).

```
show_doc(SmoothenValue.add_value)
show_doc(Stepper)
```

Used for creating annealing schedules, mainly for [`OneCycleScheduler`](/callbacks.one_cycle.html#OneCycleScheduler).

```
show_doc(Stepper.step)
show_doc(AverageMetric)
```

See the documentation on [`metrics`](/metrics.html#metrics) for more information.

### Callback methods

You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.

```
show_doc(AverageMetric.on_epoch_begin)
show_doc(AverageMetric.on_batch_end)
show_doc(AverageMetric.on_epoch_end)
```

## Undocumented Methods - Methods moved below this line will intentionally be hidden

## New Methods - Please document or move to the undocumented section
```
from astropy.io import fits
from scipy.signal import fftconvolve
import numpy as np
import seaborn as sns
import pandas as pd
from pandas import DataFrame as df
import matplotlib.pyplot as plt
from joblib import Parallel, delayed
import multiprocessing

sns.set(rc={'figure.figsize': (50, 50)})
%matplotlib inline

test_set = "/scratch/datasets/astro_deconv_2019/test/"
indexes = list(range(5000, 105000, 5000))
test_start = 9400
test_end = 9700

def open_fits(x):
    return fits.open(x)[0].data.squeeze()

def convolve(image, kernel):
    p = kernel.shape[0]
    r = slice(p//2, -p//2+1)  # uneven PSF needs +2, even psf +1
    return fftconvolve(image, kernel, mode="full")[r, r]

# calculate wsclean baseline
l1_wsclean = []
for number in range(test_start, test_end):
    target_path = "{}{}-skymodel.fits".format(test_set, number)
    wsclean_model_path = "{}{}-wsclean-model.fits".format(test_set, number)
    clean_beam_path = "{}{}-clean-beam.fits".format(test_set, number)

    clean_beam = open_fits(clean_beam_path)
    target = open_fits(target_path)
    wsclean_model = open_fits(wsclean_model_path)

    target_convolved = convolve(target, clean_beam)
    wsclean_convolved = convolve(wsclean_model, clean_beam)
    l1_wsclean.append(np.sum(np.abs(wsclean_convolved - target_convolved)))

wsclean_scaling = 1 / np.average(l1_wsclean)

def compute(neural_output, start, end):
    scores = []
    for number in range(start, end):
        target_path = "{}{}-skymodel.fits".format(test_set, number)
        neural_model_path = "{}{}-outputs.fits".format(neural_output, number)
        clean_beam_path = "{}{}-clean-beam.fits".format(test_set, number)

        clean_beam = open_fits(clean_beam_path)
        target = open_fits(target_path)
        neural_model = open_fits(neural_model_path)

        target_convolved = convolve(target, clean_beam)
        neural_convolved = convolve(neural_model, clean_beam)
        score = np.sum(np.abs(neural_convolved - target_convolved))
        scores.append(score)
    return scores

def doit(wsclean_scaling, type_, run, index):
    print(f"type: {type_} run: {run} index: {index}")
    neural_output = f"/scratch/vacuum-cleaner/final_eval_lr/{type_}/test/run{run}/{index}/fits/"
    scores = compute(neural_output, test_start, test_end)
    #scaled = np.average(scores) * wsclean_scaling
    return scores

num_cores = multiprocessing.cpu_count()

scores = {}
for type_ in ("gan_psf_res",):  #, "gan_psf", "gan_psf_res"):
    for run in range(1, 11):
        p = Parallel(n_jobs=num_cores)
        steps = p(delayed(doit)(wsclean_scaling, type_, run, index) for index in indexes)
        scores[f"{type_}__run{run}"] = steps

data = pd.DataFrame(scores, index=indexes)

plt.figure(figsize=(16, 12))
p = sns.lineplot(data=data, dashes=False)
p.set_yscale('log')

plt.figure(figsize=(16, 12))
avg_data = pd.DataFrame({
    'gan_average': data.drop('gan__run3', axis=1).filter(regex='gan__').mean(axis=1),
    'gan_psf_average': data.filter(regex='gan_psf_').mean(axis=1),
    'gan_psf_res_average': data.filter(regex='gan_psf_res_').mean(axis=1),
})
p = sns.lineplot(data=avg_data, dashes=False)
p.set_yscale('log')

data.mean().sort_values()

selection = ["gan_psf__run8", "gan_psf__run1", "gan_psf__run7", "gan_psf__run5", "gan_psf__run10",
             "gan__run9", "gan__run1", "gan__run2", "gan__run8", "gan__run5"]
plt.figure(figsize=(16, 12))
p = sns.lineplot(data=data[selection], dashes=False)
p.set_yscale('log')

selection = ["gan__run8", "gan_psf__run6", "gan_psf_res__run10"]
plt.figure(figsize=(16, 12))
p = sns.lineplot(data=data[selection], dashes=False)
p.set_yscale('log')
```
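The `convolve` helper crops the `"full"` FFT convolution back to the input size. A quick self-contained sanity check of that slicing: convolving with a centred delta-function PSF should return the image unchanged and at the same shape (random data stands in for the FITS images here):

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve(image, kernel):
    # crop the "full" convolution output back to the input size
    p = kernel.shape[0]
    r = slice(p // 2, -p // 2 + 1)
    return fftconvolve(image, kernel, mode="full")[r, r]

image = np.random.default_rng(0).random((16, 16))
delta = np.zeros((5, 5))
delta[2, 2] = 1.0   # centred delta: convolution with it is the identity

out = convolve(image, delta)
print(out.shape)     # (16, 16) -- same size as the input
```

For an odd-sized kernel of width `p`, the full output has width `16 + p - 1` and `slice(p//2, -p//2 + 1)` trims `p//2` pixels from the leading edge and `p//2 - 1` plus the final pixel from the trailing edge, recovering the original 16.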
```
import torch
import torchvision
import torchvision.transforms as T
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter1d
from cs231n.data_utils import load_imagenet_val
from cs231n.image_utils import SQUEEZENET_MEAN, SQUEEZENET_STD
from PIL import Image
```

## Helper functions

```
def preprocess(img, size=224):
    transform = T.Compose([
        T.Resize(size),
        T.ToTensor(),
        T.Normalize(mean=SQUEEZENET_MEAN.tolist(), std=SQUEEZENET_STD.tolist()),
        T.Lambda(lambda x: x[None]),
    ])
    return transform(img)

def deprocess(img, should_rescale=True):
    transform = T.Compose([
        T.Lambda(lambda x: x[0]),
        T.Normalize(mean=[0, 0, 0], std=(1.0 / SQUEEZENET_STD).tolist()),
        T.Normalize(mean=(-SQUEEZENET_MEAN).tolist(), std=[1, 1, 1]),
        T.Lambda(rescale) if should_rescale else T.Lambda(lambda x: x),
        T.ToPILImage(),
    ])
    return transform(img)

def rescale(x):
    low, high = x.min(), x.max()
    x_rescaled = (x - low) / (high - low)
    return x_rescaled

def blur_image(X, sigma=1):
    X_np = X.cpu().clone().numpy()
    X_np = gaussian_filter1d(X_np, sigma, axis=2)
    X_np = gaussian_filter1d(X_np, sigma, axis=3)
    X.copy_(torch.Tensor(X_np).type_as(X))
    return X
```

## Load a pretrained model

```
# Download and load the pretrained SqueezeNet model.
model = torchvision.models.squeezenet1_1(pretrained=True)

# We don't want to train the model, so tell PyTorch not to compute gradients
# with respect to model parameters.
for param in model.parameters():
    param.requires_grad = False
```

## Load some images

```
X, y, class_names = load_imagenet_val(num=5)

plt.figure(figsize=(12, 6))
for i in range(5):
    plt.subplot(1, 5, i + 1)
    plt.imshow(X[i])
    plt.title(class_names[y[i]])
    plt.axis('off')
plt.gcf().tight_layout()
```

## Saliency maps

### How saliency maps work

A saliency map tells us the degree to which each pixel in the image affects the classification score for that image. To compute it, we compute the gradient of the unnormalized score corresponding to the correct class (which is a scalar) with respect to the pixels of the image.
If the image has shape (3, H, W) then this gradient will also have shape (3, H, W); for each pixel in the image, this gradient tells us the amount by which the classification score will change if the pixel changes by a small amount. To compute the saliency map, we take the absolute value of this gradient, then take the maximum value over the 3 input channels; the final saliency map thus has shape (H, W) and all entries are nonnegative.

In other words, a saliency map measures how each pixel of the image influences the classifier - which pixels the classifier considers important. We compute the gradient with respect to every pixel: if the image has shape (3, H, W), this gradient also has shape (3, H, W), and for each pixel it tells us how much the correct-class score changes when that pixel changes slightly. To compute the saliency map, we take the absolute value of the gradient and then the maximum over the three color channels; the final saliency map therefore has shape (H, W) - a single-channel grayscale image.

```
# Example of using gather to select one entry from each row in PyTorch
# (returns the value at a given column index in each row of a matrix)
def gather_example():
    N, C = 4, 5
    s = torch.randn(N, C)
    y = torch.LongTensor([1, 2, 1, 3])
    print(s)
    print(y)
    print(s.gather(1, y.view(-1, 1)).squeeze())

gather_example()

torch.LongTensor(y).view(-1, 1)

def compute_saliency_maps(X, y, model):
    """
    Compute saliency maps for the images X, given labels y and a classification model.

    Input:
    - X : Input images : Tensor of shape (N, 3, H, W)
    - y : Labels for X : LongTensor of shape (N,)
    - model : A pretrained CNN that will be used to compute the saliency map

    Returns:
    - saliency : A Tensor of shape (N, H, W) giving the saliency maps for the input images
    """
    # make sure the model is in eval mode
    model.eval()
    # make sure X requires gradients
    X.requires_grad_()

    saliency = None
    logits = model.forward(X)
    logits = logits.gather(1, y.view(-1, 1)).squeeze()  # keep only the correct-class scores
    logits.backward(torch.ones_like(logits))            # backprop only through the correct-class scores
    saliency = abs(X.grad.data)                         # absolute value of the gradient on X
    saliency, _ = torch.max(saliency, dim=1)            # max over the 3 color channels
    return saliency.squeeze()

def show_saliency_maps(X, y):
    # Convert X and y from numpy arrays to Torch Tensors
    X_tensor = torch.cat([preprocess(Image.fromarray(x)) for x in X], dim=0)
    y_tensor = torch.LongTensor(y)

    # Compute saliency maps for images in X
    saliency = compute_saliency_maps(X_tensor, y_tensor, model)

    # Convert the saliency map from Torch Tensor to numpy array and show images
    # and saliency maps together.
    saliency = saliency.numpy()
    N = X.shape[0]
    for i in range(N):
        plt.subplot(2, N, i + 1)
        plt.imshow(X[i])
        plt.axis('off')
        plt.title(class_names[y[i]])
        plt.subplot(2, N, N + i + 1)
        plt.imshow(saliency[i], cmap=plt.cm.hot)
        plt.axis('off')
        plt.gcf().set_size_inches(12, 5)
    plt.show()

show_saliency_maps(X, y)
```
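The reduction at the end of `compute_saliency_maps` - absolute gradient, then max over the three channels - is easy to check in isolation. Here a random array stands in for a real gradient tensor:

```python
import numpy as np

# pretend this is dscore/dX for a batch of 2 images, shape (N, C, H, W)
grad = np.random.default_rng(0).normal(size=(2, 3, 8, 8))

# abs, then max over the channel axis: (N, 3, H, W) -> (N, H, W)
saliency = np.abs(grad).max(axis=1)
print(saliency.shape)   # (2, 8, 8)
```

Taking the absolute value first means a pixel is salient whether nudging it raises or lowers the score; taking the channel max collapses the result to one nonnegative value per spatial location, which is what lets us render it as a grayscale heat map.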