**WOW. That's nice code!**

Also: **Naming your baby Daenerys after the hero...** ...is a bad break.
```python
(baby_names
 .query('name in ["Khaleesi","Ramsay","Lyanna","Ellaria","Meera"] & year >= 2000')
 .groupby(['name', 'year'])['count'].sum()  # for each name-year, combine M and F counts
 .reset_index()                             # give us the column names back as they were (makes the plot call easy)
 .pipe((sns.lineplot, 'data'), hue='name', x='year', y='count')
)
plt.axvline(2011, 0, 160, color='red')  # add a line for when the show debuted
plt.title("PEOPLE NAMED THEIR KID KHALEESI")
```
Source: schemesmith/ledatascifi-2021, content/03/02f_chains-Copy1.ipynb (MIT license)
**BUT IT COULD BE WORSE**
```python
(baby_names
 .query('name in ["Krymson"] & year >= 1950')
 .groupby(['name', 'year'])['count'].sum()  # for each name-year, combine M and F counts
 .reset_index()                             # give us the column names back as they were (makes the plot call easy)
 .pipe((sns.lineplot, 'data'), hue='name', x='year', y='count')
)
plt.title("Alabama, wow...Krymson, really?")
```
# "Bookrating (Collaborative-Filtering)"
> "Prediction of tangible books to read using collaborative filtering"

- toc: false
- branch: master
- badges: true
- comments: true
- categories: [jupyter, pytorch, pytorch-lightning]
- hide: false
- search_exclude: true
```python
%%capture
!pip install -U fastai

from google.colab import drive
drive.mount('/content/drive')

from fastai.collab import *
import pandas as pd
import torch.nn as nn

pathr = '/content/drive/MyDrive/my-datasets/collaborative-filtering/BX-Book-Ratings.csv'
pathb = '/content/drive/MyDrive/my-datasets/collaborative-filtering/BX-Books.csv'
pathu = '/content/drive/MyDrive/my-datasets/collaborative-filtering/BX-Users.csv'

dfr = pd.read_csv(pathr, sep=';', error_bad_lines=False, encoding='latin-1')
dfb = pd.read_csv(pathb, sep=';', error_bad_lines=False, encoding='latin-1')
dfu = pd.read_csv(pathu, sep=';', error_bad_lines=False, encoding='latin-1')

dfb = dfb[['ISBN', 'Book-Title', 'Book-Author', 'Year-Of-Publication', 'Publisher']]
dfr.head()
dfb.head()

df = dfr.merge(dfb)
df.head()

dls = CollabDataLoaders.from_df(df, item_name='Book-Title', bs=64)
dls.show_batch()

learn = collab_learner(dls, y_range=(0, 5.5), n_factors=50)
learn.fit_one_cycle(5, 2e-3, wd=0.1)

def recommend(book):
    # latent factors learned for each book
    book_factors = learn.model.i_weight.weight
    idx = dls.classes['Book-Title'].o2i[book]
    # cosine similarity between the chosen book's factors and all others
    dist = nn.CosineSimilarity(dim=1)(book_factors, book_factors[idx][None])
    indices = dist.argsort(descending=True)[1:6]
    return dls.classes['Book-Title'][indices]

res = recommend('Harry Potter and the Prisoner of Azkaban (Book 3)')
for i in res:
    print(i)
```
Harry Potter and the Goblet of Fire (Book 4)
Harry Potter and the Chamber of Secrets (Book 2)
The X-Planes: X-1 to X-45: 3rd Edition
Sanctuary: Finding Moments of Refuge in the Presence of God
Harry Potter and the Sorcerer's Stone (Book 1)
Source: rajivreddy219/ai-projects, _notebooks/2021-01-26-Bookrating.ipynb (Apache-2.0 license)
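The `recommend` helper above ranks books by the cosine similarity between their learned latent factors. The same idea can be shown with a self-contained sketch on a toy factor matrix (the values and the `top_k_similar` helper are illustrative, not taken from the trained model):

```python
import numpy as np

def top_k_similar(factors, idx, k=2):
    """Return indices of the k rows most cosine-similar to row idx,
    excluding the row itself (the most similar item is always itself)."""
    unit = factors / np.linalg.norm(factors, axis=1, keepdims=True)
    sims = unit @ unit[idx]                 # cosine similarity to every row
    order = np.argsort(-sims)               # descending similarity
    return [int(i) for i in order if i != idx][:k]

# toy latent factors for four "books"
factors = np.array([
    [1.0, 0.0],
    [0.9, 0.1],
    [0.0, 1.0],
    [-1.0, 0.0],
])
print(top_k_similar(factors, 0))  # -> [1, 2]
```

Row 1 points in almost the same direction as row 0, so it ranks first; row 3 points the opposite way and ranks last.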
Text classification with attention and synthetic gradients.

Imports and set-up:
```python
%tensorflow_version 2.x
import numpy as np
import tensorflow as tf
import pandas as pd
import subprocess
from sklearn.model_selection import train_test_split
import gensim
import re
import sys
import time

# TODO: actually implement distribution in the training loop
strategy = tf.distribute.get_strategy()
use_mixed_precision = False
tf.config.run_functions_eagerly(False)
tf.get_logger().setLevel('ERROR')

is_tpu = None
try:
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
    is_tpu = True
except ValueError:
    is_tpu = False

if is_tpu:
    print('TPU available.')
    tf.config.experimental_connect_to_cluster(tpu)
    tf.tpu.experimental.initialize_tpu_system(tpu)
    strategy = tf.distribute.TPUStrategy(tpu)
    if use_mixed_precision:
        policy = tf.keras.mixed_precision.experimental.Policy('mixed_bfloat16')
        tf.keras.mixed_precision.experimental.set_policy(policy)
else:
    print('No TPU available.')
    result = subprocess.run(
        ['nvidia-smi', '-L'],
        stdout=subprocess.PIPE).stdout.decode("utf-8").strip()
    if "has failed" in result:
        print("No GPU available.")
    else:
        print(result)
        strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
            tf.distribute.experimental.CollectiveCommunication.NCCL)
        if use_mixed_precision:
            policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
            tf.keras.mixed_precision.experimental.set_policy(policy)
```
Source: tarcey/Synthetic-Gradients-in-TF, synth_grads.ipynb (CC0-1.0 license)
Downloading the data
```shell
# Download the Sentiment140 dataset
mkdir -p data
wget -nc https://nyc3.digitaloceanspaces.com/ml-files-distro/v1/sentiment-analysis-is-bad/data/training.1600000.processed.noemoticon.csv.zip -P data
unzip -n -d data data/training.1600000.processed.noemoticon.csv.zip
```
Loading and splitting the data
```python
sen140 = pd.read_csv(
    "data/training.1600000.processed.noemoticon.csv",
    encoding='latin-1',
    names=["target", "ids", "date", "flag", "user", "text"])
sen140.head()

# shuffle the rows and keep only the text and the sentiment target
sen140 = sen140.sample(frac=1).reset_index(drop=True)
sen140 = sen140[['text', 'target']]

features, targets = sen140.iloc[:, 0].values, sen140.iloc[:, 1].values
print("A random tweet\t:", features[0])

# split between train and test sets
x_train, x_test, y_train, y_test = train_test_split(features, targets,
                                                    test_size=0.33)

# targets are 0 (negative) or 4 (positive); scale them to [0, 1]
y_train = y_train.astype("float32") / 4.0
y_test = y_test.astype("float32") / 4.0
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
```
Preprocessing data
```python
def process_tweet(x):
    x = x.strip()
    x = x.lower()
    # strip characters outside the allowed set (a stray quote at the end of
    # the original pattern prevented this substitution from ever matching)
    x = re.sub(r"[^a-zA-Z0-9üöäÜÖÄß\.,!\?\-%\$€\/ ]+", ' ', x)
    # put spaces around punctuation so it tokenizes into separate words
    x = re.sub(r'([\.,!\?\-%\$€\/])', r' \1 ', x)
    x = re.sub(r'\s{2,}', ' ', x)
    x = x.split()
    x.append("[&END&]")
    return x

tweets_train = []
tweets_test = []
for tweet in x_train:
    tweets_train.append(process_tweet(tweet[0]))
for tweet in x_test:
    tweets_test.append(process_tweet(tweet[0]))

# Build the initial vocab with all words from the training set.
# Each entry is a (index, frequency) tuple.
def add_or_update_word(_vocab, word):
    if word in _vocab:
        entry = _vocab[word]
        entry = (entry[0], entry[1] + 1)
    else:
        entry = (len(_vocab), 1)
    _vocab[word] = entry

vocab_pre = {}
# "[&END&]" is for padding, "[&UNK&]" for unknown words
add_or_update_word(vocab_pre, "[&END&]")
add_or_update_word(vocab_pre, "[&UNK&]")
for tweet in tweets_train:
    for word in tweet:
        add_or_update_word(vocab_pre, word)

# Limit the vocabulary to words that appear at least 3 times in the training
# set. Reduces vocab size to about 1/6th. This makes it harder for the model
# to overfit on words that may only appear in the training data, and also
# makes it learn to handle unknown words (more robust).
vocab = {}
vocab["[&END&]"] = 0
vocab["[&UNK&]"] = 1
for key in vocab_pre.keys():
    freq = vocab_pre[key][1]
    index = vocab_pre[key][0]
    if freq >= 3 and index > 1:
        vocab[key] = len(vocab)

# Replace words that have been removed from the vocabulary with "[&UNK&]" in
# both the training and testing data
def filter_unknown(_in, _vocab):
    for tweet in _in:
        for i in range(len(tweet)):
            if tweet[i] not in _vocab:
                tweet[i] = "[&UNK&]"

filter_unknown(tweets_train, vocab)
filter_unknown(tweets_test, vocab)
```
Using gensim word2vec to get a good word embedding.
```python
# train the embedding
embedding_dims = 128
embedding = gensim.models.Word2Vec(tweets_train, size=embedding_dims,
                                   min_count=0)

def tokenize(_in, _vocab):
    _out = []
    for tweet in _in:
        wordlist = []
        for word in tweet:
            wordlist.append(_vocab[word].index)
        _out.append(wordlist)
    return _out

tokens_train = tokenize(tweets_train, embedding.wv.vocab)
tokens_test = tokenize(tweets_test, embedding.wv.vocab)
```
Creating modules and defining the model.
```python
class SequenceCollapseAttention(tf.Module):
    '''
    Collapses a sequence of arbitrary length into num_out_entries entries
    from the sequence according to dot-product attention. So, a variable
    length sequence is reduced to a sequence of a fixed, known length.
    '''
    def __init__(self, num_out_entries,
                 initializer=tf.keras.initializers.HeNormal, name=None):
        super().__init__(name=name)
        self.is_built = False
        self.num_out_entries = num_out_entries
        self.initializer = initializer()

    def __call__(self, keys, query):
        if not self.is_built:
            self.weights = tf.Variable(
                self.initializer([query.shape[-1], self.num_out_entries]),
                trainable=True)
            self.biases = tf.Variable(tf.zeros([self.num_out_entries]),
                                      trainable=True)
            self.is_built = True
        scores = tf.linalg.matmul(query, self.weights) + self.biases
        scores = tf.transpose(scores, perm=(0, 2, 1))
        scores = tf.nn.softmax(scores)
        output = tf.linalg.matmul(scores, keys)
        return output


class WordEmbedding(tf.Module):
    '''
    Creates a word-embedding module from a provided embedding matrix.
    '''
    def __init__(self, embedding_matrix, trainable=False, name=None):
        super().__init__(name=name)
        self.embedding = tf.Variable(embedding_matrix, trainable=trainable)

    def __call__(self, x):
        return tf.nn.embedding_lookup(self.embedding, x)


testvar = None

class PositionalEncoding1D(tf.Module):
    '''
    Positional encoding as in the Attention Is All You Need paper. I hope.
    For experimentation, the weight by which the positional information is
    mixed into the input vectors is learned.
    '''
    def __init__(self, axis=-2, base=1000, name=None):
        super().__init__(name=name)
        self.axis = axis
        self.base = base
        self.encoding_weight = tf.Variable([2.0], trainable=True)
        testvar = self.encoding_weight

    def __call__(self, x):
        d = tf.shape(x)[-1]
        T = tf.shape(x)[self.axis]
        pos_enc = tf.range(0, d / 2, delta=1, dtype=tf.float32)
        pos_enc = (-2.0 / tf.cast(d, dtype=tf.float32)) * pos_enc
        base = tf.cast(tf.fill(tf.shape(pos_enc), self.base),
                       dtype=tf.float32)
        pos_enc = tf.math.pow(base, pos_enc)
        pos_enc = tf.expand_dims(pos_enc, axis=0)
        pos_enc = tf.tile(pos_enc, [T, 1])
        t = tf.expand_dims(tf.range(1, T + 1, delta=1, dtype=tf.float32),
                           axis=-1)
        pos_enc = tf.math.multiply(pos_enc, t)
        pos_enc_sin = tf.expand_dims(tf.math.sin(pos_enc), axis=-1)
        pos_enc_cos = tf.expand_dims(tf.math.cos(pos_enc), axis=-1)
        pos_enc = tf.concat((pos_enc_sin, pos_enc_cos), axis=-1)
        pos_enc = tf.reshape(pos_enc, [T, d])
        return x + (pos_enc * self.encoding_weight)


class MLP_Block(tf.Module):
    '''
    A regular old multilayer perceptron; hidden shapes are defined by the
    "shapes" argument. With batch normalization before the activations.
    '''
    def __init__(self, shapes, initializer=tf.keras.initializers.HeNormal,
                 name=None, activation=tf.nn.swish,
                 trainable_batch_norms=False):
        super().__init__(name=name)
        self.is_built = False
        self.shapes = shapes
        self.initializer = initializer()
        self.weights = [None] * len(shapes)
        self.biases = [None] * len(shapes)
        self.bnorms = [None] * len(shapes)
        self.activation = activation
        self.trainable_batch_norms = trainable_batch_norms

    def _build(self, x):
        for n in range(0, len(self.shapes)):
            in_shape = x.shape[-1] if n == 0 else self.shapes[n - 1]
            # crelu doubles the width of its input
            factor = 1 if self.activation != tf.nn.crelu or n == 0 else 2
            self.weights[n] = tf.Variable(
                self.initializer([in_shape * factor, self.shapes[n]]),
                trainable=True)
            self.biases[n] = tf.Variable(tf.zeros([self.shapes[n]]),
                                         trainable=True)
            self.bnorms[n] = tf.keras.layers.BatchNormalization(
                trainable=self.trainable_batch_norms)
        self.is_built = True

    def __call__(self, x, training=False):
        if not self.is_built:
            self._build(x)
        h = x
        for n in range(len(self.shapes)):
            h = tf.linalg.matmul(h, self.weights[n]) + self.biases[n]
            h = self.bnorms[n](h, training=training)
            h = self.activation(h)
        return h


class SyntheticGradient(tf.Module):
    '''
    An implementation of synthetic gradients. When added to a model, this
    module will intercept incoming gradients and replace them by learned,
    synthetic ones.

    If you encounter NaNs, try setting the sg_output_scale parameter to a
    lower value, or increase the number of initial_epochs or epochs.
    When the model using this module does not learn, the generator might be
    too simple, the sg_output_scale might be too low, the learning rate of
    the generator might be too large or too low, or the number of epochs
    might be too large or too low. If the number of initial epochs is too
    large, the generator can get stuck in a local minimum and fail to learn.

    The relative_generator_hidden_shapes list defines the shapes of the
    hidden layers of the generator as a multiple of its input dimension.
    For an affine transformation, pass an empty list.
    '''
    def __init__(self, initializer=tf.keras.initializers.GlorotUniform,
                 activation=tf.nn.tanh,
                 relative_generator_hidden_shapes=[6, ],
                 learning_rate=0.01, epochs=1, initial_epochs=16,
                 sg_output_scale=1, name=None):
        super().__init__(name=name)
        self.is_built = False
        self.initializer = initializer
        self.activation = activation
        self.relative_generator_hidden_shapes = relative_generator_hidden_shapes
        self.initial_epochs = initial_epochs
        self.epochs = epochs
        self.sg_output_scale = sg_output_scale
        self.optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)

    def build(self, xy, dy):
        '''
        Builds the gradient generator on its first run, and trains on the
        first incoming batch of gradients for a number of epochs to avoid
        bad results (including NaNs) in the first few batches where the
        generator still outputs bad approximations. To further reduce NaNs
        due to bad gradients, a fixed scaler for the outputs of the
        generator is computed based on the first batch.
        '''
        if self.is_built:
            return
        if len(self.relative_generator_hidden_shapes) > 0:
            generator_shape = [
                xy.shape[-1] * mult
                for mult in self.relative_generator_hidden_shapes]
            self.generator_hidden = MLP_Block(
                generator_shape, activation=self.activation,
                initializer=self.initializer, trainable_batch_norms=False)
        else:
            self.generator_hidden = tf.identity
        self.generator_out = MLP_Block(
            [dy.shape[-1]], activation=tf.identity,
            initializer=self.initializer, trainable_batch_norms=False)
        # calculate a static scaler for the generated gradients to avoid
        # overflows due to too large gradients
        self.generator_out_scale = 1.0
        x = self.generate_gradient(xy) / self.sg_output_scale
        mag_y = tf.math.sqrt(tf.math.reduce_sum(tf.math.square(dy), axis=-1))
        mag_x = tf.math.sqrt(tf.math.reduce_sum(tf.math.square(x), axis=-1))
        mag_scale = tf.math.reduce_mean(mag_y / mag_x,
                                        axis=tf.range(0, tf.rank(dy) - 1))
        self.generator_out_scale = tf.Variable(mag_scale, trainable=False)
        # train for a number of epochs on the first run, by default 16, to
        # avoid bad results in the beginning of training.
        for i in range(self.initial_epochs):
            self.train_generator(xy, dy)
        self.is_built = True

    def generate_gradient(self, x):
        '''
        Just an MLP, or an affine transformation if the hidden shape in the
        constructor is set to be empty.
        '''
        x = self.generator_hidden(x)
        out = self.generator_out(x)
        out = out * self.generator_out_scale
        return out * self.sg_output_scale

    def train_generator(self, x, target):
        '''
        Gradient descent for the gradient generator. This is called every
        time a gradient comes in, although in theory (especially with
        deeper gradient generators) once the gradients are modeled
        sufficiently, it could be OK to stop training on incoming
        gradients, thus fully decoupling the lower parts of the network
        from the upper parts relative to this SG module.
        '''
        with tf.GradientTape() as tape:
            l2_loss = target - self.generate_gradient(x)
            l2_loss = tf.math.reduce_sum(tf.math.square(l2_loss), axis=-1)
            # l2_loss = tf.math.sqrt(l2_dist)
        grads = tape.gradient(l2_loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))

    @tf.custom_gradient
    def sg(self, x, y):
        '''
        In the forward pass it is essentially a no-op (identity). In the
        backwards pass it replaces the incoming gradient by a synthetic one.
        '''
        x = tf.identity(x)

        def grad(dy):
            # concat x and the label to be inputs for the generator:
            xy = self.concat_x_and_y(x, y)
            if not self.is_built:
                self.build(xy, dy)
            # train the generator on the incoming gradient:
            for i in range(self.epochs):
                self.train_generator(xy, dy)
            # return the gradient. The second return value is the gradient
            # for y, which should be zero since we only need y (labels) to
            # generate the synthetic gradients
            dy = self.generate_gradient(xy)
            return dy, tf.zeros(tf.shape(y))

        return x, grad

    def __call__(self, x, y):
        return self.sg(x, y)

    def concat_x_and_y(self, x, y):
        '''
        Probably an overly complex yet incomplete solution to a rather
        small inconvenience.
        Inconvenience: The gradient generators take the output of the last
        module AND the target/labels of the network as inputs. But those
        two tensors can be of different shapes. The obvious solution would
        be to manually reshape the targets so they can be concatenated with
        the outputs of the past state. But because I wanted this SG module
        to be as "plug-and-play" as possible, I tried to attempt automatic
        reshaping. Should work for 1d->1d, and 1d-sequence -> 1d, possibly
        1d seq->seq, unsure about the rest.
        '''
        # insert as many dims before the last dim of y to give it the same
        # rank as x
        amount = tf.math.maximum(tf.rank(x) - tf.rank(y), 0)
        new_shape = tf.concat((tf.shape(y)[:-1], tf.tile([1], [amount]),
                               [tf.shape(y)[-1]]), axis=-1)
        y = tf.reshape(y, new_shape)
        # tile the added dims such that x and y can be concatenated.
        # In order to tile only the added dims, I need to set the dimensions
        # with a length of 1 (except the last) to the length of the
        # corresponding dimensions in x, while setting the rest to 1.
        # This is waiting to break.
        mask = tf.cast(tf.math.less_equal(tf.shape(y), tf.constant([1])),
                       dtype=tf.int32)
        # ignore the last dim
        mask = tf.concat([mask[:-1], tf.constant([0])], axis=-1)
        zeros_to_ones = tf.math.subtract(
            tf.ones(tf.shape(mask), dtype=tf.int32), mask)
        # has ones where there is a one in the shape; now the 1s are set to
        # the length in x
        mask = tf.math.multiply(mask, tf.shape(x))
        # add ones to all other dimensions to preserve their shape
        mask = tf.math.add(zeros_to_ones, mask)
        # tile
        y = tf.tile(y, mask)
        return tf.concat((x, y), axis=-1)


class FlattenL2D(tf.Module):
    "Flattens the last two dimensions only"
    def __init__(self, name=None):
        super().__init__(name=name)

    def __call__(self, x):
        new_shape = tf.concat(
            (tf.shape(x)[:-2], [(tf.shape(x)[-1]) * (tf.shape(x)[-2])]),
            axis=-1)
        return tf.reshape(x, new_shape)


initializer = tf.keras.initializers.HeNormal

class SentimentAnalysisWithAttention(tf.Module):
    def __init__(self, name=None):
        super().__init__(name=name)
        # Structure and the idea behind it:
        # 1:   The input sequence is embedded and positionally encoded.
        # 2.1: An MLP block ('query') computes scores for the following
        #      attention layer for each entry in the sequence. I.e., it
        #      decides which words are worth a closer look.
        # 2.2: An attention layer selects n positionally encoded word
        #      embeddings from the input sequence based on the learned
        #      queries.
        # 3:   The result is flattened into a tensor of known shape and a
        #      number of dense layers compute the final classification.
        self.embedding = WordEmbedding(embedding.wv.vectors)
        self.batch_norm = tf.keras.layers.BatchNormalization(trainable=True)
        self.pos_enc = PositionalEncoding1D()
        self.query = MLP_Block([256, 128], initializer=initializer)
        self.attention = SequenceCollapseAttention(num_out_entries=9,
                                                   initializer=initializer)
        self.flatten = FlattenL2D()
        self.dense = MLP_Block([512, 256, 128, 64], initializer=initializer,
                               trainable_batch_norms=True)
        self.denseout = MLP_Block([1], initializer=initializer,
                                  activation=tf.nn.sigmoid,
                                  trainable_batch_norms=True)
        # Synthetic gradient modules for the various layers.
        self.sg_query = SyntheticGradient(relative_generator_hidden_shapes=[9])
        self.sg_attention = SyntheticGradient()
        self.sg_dense = SyntheticGradient()

    def __call__(self, x, y=tf.constant([]), training=False):
        x = self.embedding(x)
        x = self.pos_enc(x)
        x = self.batch_norm(x, training=training)
        q = self.query(x, training=training)
        # q = self.sg_query(q, y)  # SG
        x = self.attention(x, q)
        x = self.flatten(x)
        x = self.sg_attention(x, y)  # SG
        x = self.dense(x, training=training)
        x = self.sg_dense(x, y)  # SG
        output = self.denseout(x, training=training)
        return output


model = SentimentAnalysisWithAttention()


class BatchGenerator(tf.keras.utils.Sequence):
    '''
    Creates batches from the given data; specifically, it pads the
    sequences per batch only as much as necessary to make every sequence
    within a batch be of the same length.
    '''
    def __init__(self, inputs, labels, padding, batch_size):
        self.batch_size = batch_size
        self.labels = labels
        self.inputs = inputs
        self.padding = padding
        # self.on_epoch_end()

    def __len__(self):
        return int(np.floor(len(self.inputs) / self.batch_size))

    def __getitem__(self, index):
        max_length = 0
        start_index = index * self.batch_size
        end_index = start_index + self.batch_size
        for i in range(start_index, end_index):
            l = len(self.inputs[i])
            if l > max_length:
                max_length = l
        out_x = np.empty([self.batch_size, max_length], dtype='int32')
        out_y = np.empty([self.batch_size, 1], dtype='float32')
        for i in range(self.batch_size):
            out_y[i] = self.labels[start_index + i]
            tweet = self.inputs[start_index + i]
            l = min(len(tweet), max_length)
            for j in range(0, l):
                out_x[i][j] = tweet[j]
            for j in range(l, max_length):
                out_x[i][j] = self.padding
        return out_x, out_y
```
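`PositionalEncoding1D` above follows the sinusoidal scheme from the Attention Is All You Need paper, with base 1000 instead of the paper's 10000 and a learned mixing weight. The underlying formula can be checked with a minimal NumPy sketch (independent of the TF module; the `sinusoidal_encoding` name is illustrative):

```python
import numpy as np

def sinusoidal_encoding(T, d, base=1000.0):
    """Return a (T, d) matrix of sinusoidal position encodings.
    Even columns hold sin, odd columns cos, matching the interleaving
    the TF module produces via its concat-and-reshape."""
    pos = np.arange(1, T + 1)[:, None]              # positions 1..T, as in the module
    freqs = base ** (-2.0 * np.arange(d // 2) / d)  # one frequency per sin/cos pair
    angles = pos * freqs[None, :]                   # (T, d/2)
    enc = np.empty((T, d))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc

enc = sinusoidal_encoding(T=5, d=8)
print(enc.shape)  # (5, 8)
```

The first pair uses frequency `base**0 = 1`, so with positions starting at 1 the top-left entries are `sin(1)` and `cos(1)`; higher column pairs oscillate ever more slowly, which is what lets the model distinguish positions at multiple scales.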
Training the model
```python
def train_validation_loop(model_caller, data_generator, epochs, metrics=[]):
    batch_time = -1
    for epoch in range(epochs):
        start_e = time.time()
        start_p = time.time()
        num_batches = len(data_generator)
        predictions = [None] * num_batches
        for b in range(num_batches):
            start_b = time.time()
            x_batch, y_batch = data_generator[b]
            predictions[b] = model_caller(x_batch, y_batch, metrics=metrics)
            # progress output
            elapsed_t = time.time() - start_b
            if batch_time != -1:
                # exponential moving average of the per-batch time
                batch_time = 0.05 * elapsed_t + 0.95 * batch_time
            else:
                batch_time = elapsed_t
            if int(time.time() - start_p) >= 1 or b == (num_batches - 1):
                start_p = time.time()
                eta = int((num_batches - b) * batch_time)
                ela = int(time.time() - start_e)
                out_string = "\rEpoch %d/%d,\tbatch %d/%d,\telapsed: %d/%ds" % (
                    (epoch + 1), epochs, b + 1, num_batches, ela, ela + eta)
                for metric in metrics:
                    out_string += "\t %s: %f" % (metric.name,
                                                 float(metric.result()))
                sys.stdout.write(out_string)
                sys.stdout.flush()
        for metric in metrics:
            metric.reset_states()
        sys.stdout.write("\n")
    return np.concatenate(predictions)


def trainer(model, loss, optimizer):
    @tf.function(experimental_relax_shapes=True)
    def training_step(x_batch, y_batch, model=model, loss=loss,
                      optimizer=optimizer, metrics=[]):
        with tf.GradientTape() as tape:
            predictions = model(x_batch, y_batch, training=True)
            losses = loss(y_batch, predictions)
        grads = tape.gradient(losses, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        for metric in metrics:
            metric.update_state(y_batch, predictions)
        return predictions
    return training_step


# The model ends in a sigmoid, so it outputs probabilities, not logits:
# from_logits must be False here (True would apply the sigmoid twice).
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
metrics = (tf.keras.metrics.BinaryCrossentropy(from_logits=False),
           tf.keras.metrics.BinaryAccuracy())

batch_size = 512
epochs = 4
padding = embedding.wv.vocab["[&END&]"].index
training_generator = BatchGenerator(tokens_train, y_train, padding,
                                    batch_size=batch_size)

train_validation_loop(trainer(model, loss, optimizer), training_generator,
                      epochs, metrics)
```
Testing it on validation data
```python
def validator(model):
    @tf.function(experimental_relax_shapes=True)
    def validation_step(x_batch, y_batch, model=model, metrics=[]):
        predictions = model(x_batch, training=False)
        for metric in metrics:
            metric.update_state(y_batch, predictions)
        return predictions
    return validation_step

testing_generator = BatchGenerator(tokens_test, y_test, padding,
                                   batch_size=batch_size)
predictions = train_validation_loop(validator(model), testing_generator, 1,
                                    metrics)
```
Get some example results from the test data.
```python
most_evil_tweet = None
most_evil_evilness = 1
most_cool_tweet = None
most_cool_coolness = 1
most_angelic_tweet = None
most_angelic_angelicness = 0

y_pred = np.concatenate(predictions)
for i in range(0, len(y_pred)):
    judgement = y_pred[i]
    # distance from the neutral score 0.5, scaled to [0, 1]
    polarity = abs(judgement - 0.5) * 2
    if judgement >= most_angelic_angelicness:
        most_angelic_angelicness = judgement
        most_angelic_tweet = x_test[i]
    if judgement <= most_evil_evilness:
        most_evil_evilness = judgement
        most_evil_tweet = x_test[i]
    if polarity <= most_cool_coolness:
        most_cool_coolness = polarity
        most_cool_tweet = x_test[i]

print("The evilest tweet known to humankind:\n\t", most_evil_tweet)
print("Evilness: ", 1.0 - most_evil_evilness)
print("\n")
print("The most angelic tweet any mortal has ever laid eyes upon:\n\t",
      most_angelic_tweet)
print("Angelicness: ", most_angelic_angelicness)
print("\n")
print("This tweet is too cool for you, don't read:\n\t", most_cool_tweet)
print("Coolness: ", 1.0 - most_cool_coolness)
```
Imports
```python
import numpy as np
import xarray as xr
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
from matplotlib.patches import Polygon
from matplotlib import colors as mat_colors
import mpl_toolkits.axisartist as axisartist
from mpl_toolkits.axes_grid1 import Size, Divider
```
Source: pat-schmitt/supplementary_material_master_thesis, Fig.3.11_par_methods_overview/Par_method_comparision_overview.ipynb (BSD-3-Clause license)
Define functions: performance measurements
```python
def BIAS(a1, a2):
    return (a1 - a2).mean().item()

def RMSE(a1, a2):
    return np.sqrt(((a1 - a2)**2).mean()).item()

def DIFF(a1, a2):
    return np.max(np.abs(a1 - a2)).item()
```
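As a quick sanity check, the three measures behave as expected on assumed toy arrays (plain NumPy here instead of the thesis' xarray datasets, which support the same operations):

```python
import numpy as np

def BIAS(a1, a2):
    return (a1 - a2).mean().item()

def RMSE(a1, a2):
    return np.sqrt(((a1 - a2)**2).mean()).item()

def DIFF(a1, a2):
    return np.max(np.abs(a1 - a2)).item()

a1 = np.array([1.0, 2.0, 3.0])
a2 = np.array([1.0, 1.0, 5.0])

print(BIAS(a1, a2))  # mean of [0, 1, -2] -> -0.333...
print(RMSE(a1, a2))  # sqrt(mean([0, 1, 4])) = sqrt(5/3) -> 1.29...
print(DIFF(a1, a2))  # largest absolute difference -> 2.0
```

BIAS can hide compensating errors (positive and negative deviations cancel), which is why the heatmap below reports RMSE and DIFF alongside it.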
Helper function for the heatmap axis
```python
def setup_axes(fig, rect):
    ax = axisartist.Subplot(fig, rect)
    fig.add_subplot(ax)
    return ax
```
heatmap
```python
def heatmap(datasets,  # first_dataset, second_dataset,
            opti_var,
            annotation=None,
            annotation_x_position=0,
            annotation_y_position=1,
            fig=None,
            ax=None,
            cmap='vlag',
            cmap_levels=None,
            grid_color='grey',
            grid_linewidth=1.5,
            presentation=False,
            labels_pad=-360,
            xlim=None,  # here used to define max DIFF sfc_h
            nr_of_iterations=None):
    if not ax:
        ax = plt.gca()
    if not fig:
        fig = plt.gcf()

    if all(dataset is None for dataset in datasets):
        raise ValueError('All datasets are None!')

    # define variables for plotting
    guess_opti_var = []
    first_guess_diff = []
    true_opti_var = []
    BIAS_opti_var = []
    RMSE_opti_var = []
    DIFF_opti_var = []
    fct_opti_var = []
    times = []
    maxiters = []
    BIAS_sfc = []
    RMSE_sfc = []
    DIFF_sfc = []
    BIAS_w = []
    RMSE_w = []
    DIFF_w = []
    BIAS_fg = []
    RMSE_fg = []
    DIFF_fg = []
    BIAS_sfc_fg = []
    RMSE_sfc_fg = []
    DIFF_sfc_fg = []
    array_length = 0
    check_first_guess = None
    check_true_opti_var = None

    # create data and label variables
    for dataset in datasets:
        # check if the current dataset contains data or if the data was
        # not available
        if dataset is None:
            guess_opti_var.append(None)
            first_guess_diff.append(None)
            true_opti_var.append(None)
            BIAS_opti_var.append(None)
            RMSE_opti_var.append(None)
            DIFF_opti_var.append(None)
            fct_opti_var.append(None)
            times.append(None)
            maxiters.append(None)
            BIAS_sfc.append(None)
            RMSE_sfc.append(None)
            DIFF_sfc.append(None)
            BIAS_w.append(None)
            RMSE_w.append(None)
            DIFF_w.append(None)
        elif type(dataset) != xr.core.dataset.Dataset:
            # if no minimisation possible
            guess_opti_var.append('no_minimisation')
            first_guess_diff.append(None)
            true_opti_var.append(None)
            BIAS_opti_var.append(None)
            RMSE_opti_var.append(None)
            DIFF_opti_var.append(None)
            fct_opti_var.append(None)
            times.append(None)
            maxiters.append(None)
            BIAS_sfc.append(None)
            RMSE_sfc.append(None)
            DIFF_sfc.append(None)
            BIAS_w.append(None)
            RMSE_w.append(None)
            DIFF_w.append(None)
        else:
            # find index corresponding to max time
            max_index = len(dataset['computing_time'].values) - 1
            if nr_of_iterations is not None:
                max_index = nr_of_iterations - 1
            elif xlim is not None:
                # calculate all max diff surface_h
                all_DIFF_sfc_h = np.array(
                    [DIFF(dataset.true_surface_h.data,
                          dataset.surface_h.data[i - 1])
                     for i in dataset.coords['nr_of_iteration'].data])
                # only consider as many points until max DIFF is smaller
                # than xlim
                if all_DIFF_sfc_h[-1] < xlim:
                    max_index = np.argmax(all_DIFF_sfc_h < xlim)

            if opti_var == 'bed_h':
                guess_opti_var.append((dataset.guessed_bed_h[max_index] -
                                       dataset.true_bed_h).values)
                first_guess_diff.append((dataset.first_guessed_bed_h -
                                         dataset.true_bed_h).values)
                true_opti_var.append(dataset.true_bed_h.values)
                BIAS_opti_var.append(BIAS(dataset.guessed_bed_h[max_index],
                                          dataset.true_bed_h))
                RMSE_opti_var.append(RMSE(dataset.guessed_bed_h[max_index],
                                          dataset.true_bed_h))
                DIFF_opti_var.append(DIFF(dataset.guessed_bed_h[max_index],
                                          dataset.true_bed_h))
                if check_first_guess is None:
                    BIAS_fg = BIAS(dataset.first_guessed_bed_h,
                                   dataset.true_bed_h)
                    RMSE_fg = RMSE(dataset.first_guessed_bed_h,
                                   dataset.true_bed_h)
                    DIFF_fg = DIFF(dataset.first_guessed_bed_h,
                                   dataset.true_bed_h)
            elif opti_var == 'bed_shape':
                guess_opti_var.append((dataset.guessed_bed_shape[-1] -
                                       dataset.true_bed_shape).values)
                first_guess_diff.append((dataset.first_guessed_bed_shape -
                                         dataset.true_bed_shape).values)
                true_opti_var.append(dataset.true_bed_shape.values)
                BIAS_opti_var.append(BIAS(dataset.guessed_bed_shape[max_index],
                                          dataset.true_bed_shape))
                RMSE_opti_var.append(RMSE(dataset.guessed_bed_shape[max_index],
                                          dataset.true_bed_shape))
                DIFF_opti_var.append(DIFF(dataset.guessed_bed_shape[max_index],
                                          dataset.true_bed_shape))
                if check_first_guess is None:
                    BIAS_fg = BIAS(dataset.first_guessed_bed_shape,
                                   dataset.true_bed_shape)
                    RMSE_fg = RMSE(dataset.first_guessed_bed_shape,
                                   dataset.true_bed_shape)
                    DIFF_fg = DIFF(dataset.first_guessed_bed_shape,
                                   dataset.true_bed_shape)
            elif opti_var == 'w0':
                guess_opti_var.append((dataset.guessed_w0[-1] -
                                       dataset.true_w0).values)
                first_guess_diff.append((dataset.first_guessed_w0 -
                                         dataset.true_w0).values)
                true_opti_var.append(dataset.true_w0.values)
                BIAS_opti_var.append(BIAS(dataset.guessed_w0[max_index],
                                          dataset.true_w0))
                RMSE_opti_var.append(RMSE(dataset.guessed_w0[max_index],
                                          dataset.true_w0))
                DIFF_opti_var.append(DIFF(dataset.guessed_w0[max_index],
                                          dataset.true_w0))
                if check_first_guess is None:
                    BIAS_fg = BIAS(dataset.first_guessed_w0, dataset.true_w0)
                    RMSE_fg = RMSE(dataset.first_guessed_w0, dataset.true_w0)
                    DIFF_fg = DIFF(dataset.first_guessed_w0, dataset.true_w0)
            else:
                raise ValueError('Unknown opti var!')

            fct_opti_var.append(dataset.function_calls[max_index].values)
            times.append(dataset.computing_time[max_index].values)
            maxiters.append(dataset.attrs['maxiter_reached'])
            BIAS_sfc.append(BIAS(dataset.surface_h[max_index],
                                 dataset.true_surface_h))
            RMSE_sfc.append(RMSE(dataset.surface_h[max_index],
                                 dataset.true_surface_h))
            DIFF_sfc.append(DIFF(dataset.surface_h[max_index],
                                 dataset.true_surface_h))
            BIAS_w.append(BIAS(dataset.widths[max_index],
                               dataset.true_widths))
            RMSE_w.append(RMSE(dataset.widths[max_index],
                               dataset.true_widths))
            DIFF_w.append(DIFF(dataset.widths[max_index],
                               dataset.true_widths))

            # determine array length for empty lines
            if array_length == 0:
                array_length = dataset.points_with_ice[-1].values + 1
            # check that the arrays have the same number of points with ice
            elif array_length != dataset.points_with_ice[-1].values + 1:
                raise ValueError('Not the same length of points with ice!!!')

            # check if all experiments start with the same true values and
            # first guess; in the first round save the values
            if check_first_guess is None:
                check_first_guess = first_guess_diff[-1]
                check_true_opti_var = true_opti_var[-1]
                # not implemented yet
                BIAS_sfc_fg = BIAS(dataset.first_guess_surface_h,
                                   dataset.true_surface_h)
                RMSE_sfc_fg = RMSE(dataset.first_guess_surface_h,
                                   dataset.true_surface_h)
                DIFF_sfc_fg = DIFF(dataset.first_guess_surface_h,
                                   dataset.true_surface_h)
                BIAS_w_fg = BIAS(dataset.first_guess_widths,
                                 dataset.true_widths)
                RMSE_w_fg = RMSE(dataset.first_guess_widths,
                                 dataset.true_widths)
                DIFF_w_fg = DIFF(dataset.first_guess_widths,
                                 dataset.true_widths)
            # after the first round compare all values to the first ones to
            # make sure we are comparing the same start conditions
            else:
                if np.sum(check_true_opti_var - true_opti_var[-1]) != 0:
                    raise ValueError('Not the same true control variable!!!')
                if np.sum(check_first_guess - first_guess_diff[-1]) != 0:
                    raise ValueError('Not the same first guess!!!')

    # create variables for plotting (data and y label)
    data = []
    y_labels = []

    # first add heading
    data.append(np.empty((array_length)) * np.nan)
    if not presentation:
        if opti_var == 'bed_h':
            y_labels.append(r' RMSE_b, DIFF_b, RMSE_s, DIFF_s, fct, $T_{cpu}$')
        elif opti_var in ['bed_shape', 'w0']:
            y_labels.append(r' RMSE_Ps, DIFF_Ps, RMSE_w, DIFF_w, fct, $T_{cpu}$')
        else:
            raise ValueError('Unknown opti_var !')
        y_label_variable_format = '{:7.2f}, {: 7.2f}, {:7.2f}, {:7.2f}'
    else:
        if opti_var == 'bed_h':
            y_labels.append(' DIFF_b, fct, t')
        elif opti_var in ['bed_shape', 'w0']:
            y_labels.append(' DIFF DIFF_w fct time')
        else:
            raise ValueError('Unknown opti_var !')
        y_label_variable_format = '{: 6.2f}'  # ', {:6.2f}'

    # add first guess
    data.append(check_first_guess)
    if not presentation:
        if opti_var == 'bed_h':
            y_labels.append(('fg:' + y_label_variable_format).format(
                RMSE_fg, DIFF_fg, RMSE_sfc_fg, DIFF_sfc_fg))
        elif opti_var in ['bed_shape', 'w0']:
            y_labels.append(('fg:' + y_label_variable_format).format(
                RMSE_fg, DIFF_fg, RMSE_w_fg, DIFF_w_fg))
        else:
            raise ValueError('Unknown opti_var !')
    else:
        if opti_var == 'bed_h':
            y_labels.append(('fg:' + y_label_variable_format).format(DIFF_fg))
        elif opti_var in ['bed_shape', 'w0']:
            y_labels.append(('fg:' + y_label_variable_format).format(
                DIFF_fg, DIFF_w_fg))
        else:
            raise ValueError('Unknown opti_var !')

    # add two format placeholders for fct_calls and time
    y_label_variable_format += ', {:4d}, {:4.0f}s'

    # add all other data, with an empty line for None datasets
    for i, guess in enumerate(guess_opti_var):
        if guess is None:
            data.append(np.empty((array_length)) * np.nan)
            y_labels.append(' ' + chr(65 + i) + ': NO DATAFILE FOUND')
        elif type(guess) is str:
            data.append(np.empty((array_length)) * np.nan)
            y_labels.append(' ' + chr(65 + i) + ': NO Minimisation Possible')
        else:
            data.append(guess)
            y_label_text = ' ' + chr(65 + i) + ':' + y_label_variable_format
            if maxiters[i] == 'yes':
                y_label_text += '+'
            if opti_var == 'bed_h':
                if not presentation:
                    y_labels.append(y_label_text.format(
                        RMSE_opti_var[i], DIFF_opti_var[i], RMSE_sfc[i],
                        DIFF_sfc[i], fct_opti_var[i], times[i]))
                else:
                    y_labels.append(y_label_text.format(
                        DIFF_opti_var[i], fct_opti_var[i], times[i]))
            elif opti_var in ['bed_shape', 'w0']:
                if not presentation:
                    y_labels.append(y_label_text.format(
                        RMSE_opti_var[i], DIFF_opti_var[i], RMSE_w[i],
                        DIFF_w[i], fct_opti_var[i], times[i]))
                else:
                    y_labels.append(y_label_text.format(
                        DIFF_opti_var[i], DIFF_w[i], fct_opti_var[i],
                        times[i]))
            else:
                raise ValueError('Unknown opti_var !')

    # make data a numpy array
    data = np.array(data)

    # choose colormap
    if not cmap_levels:
        color_nr = 100
        if opti_var in ['bed_h', 'bed_shape', 'w0']:
            cmap_limit = np.max(np.abs(check_first_guess))
            # cmap_limit = np.max(np.array(
            #     [np.abs(np.floor(np.nanmin(np.array(data)))),
            #      np.abs(np.ceil(np.nanmax(np.array(data))))]))
        else:
            raise ValueError('Unknown opti var!!')
        # if (np.min(data) < 0) & (np.max(data) > 0):
        cmap_levels = np.linspace(-cmap_limit, cmap_limit, color_nr,
                                  endpoint=True)
        # elif (np.min(data) < 0) & (np.max(data) <= 0):
        #     (the notebook cell is truncated here)
```
cmap_levels = np.linspace(-cmap_limit, 0, color_nr, endpoint=True) #elif (np.min(data) >= 0) & (np.max(data) > 0) else: color_nr = len(cmap_levels) - 1 rel_color_steps = np.arange(color_nr)/color_nr if cmap == 'rainbow': colors = cm.rainbow(rel_color_steps) elif cmap == 'vlag': colors = sns.color_palette('vlag', color_nr) elif cmap == 'icefire': colors = sns.color_palette('icefire', color_nr) elif cmap == 'Spectral': colors = sns.color_palette('Spectral_r', color_nr) cmap = LinearSegmentedColormap.from_list('custom', colors, N=color_nr) cmap.set_bad(color='white') norm = mat_colors.BoundaryNorm(cmap_levels, cmap.N) # plot heatmap im = plt.imshow(data, aspect='auto', interpolation=None, cmap=cmap, norm=norm, alpha=1.) # Turn spines and ticks off and create white frame. for key, spine in ax.axis.items(): spine.major_ticks.set_visible(False) spine.minor_ticks.set_visible(False) spine.line.set_visible(False) # spine.line.set_color(grid_color) # spine.line.set_linewidth(0) #grid_linewidth) # set y ticks ax.set_yticks(np.arange(data.shape[0])) ax.set_yticklabels(y_labels) #for tick in ax.get_yticklabels(): # tick.set_fontname("Arial") # align yticklabels left ax.axis["left"].major_ticklabels.set_ha("left") # set pad to put labels over heatmap ax.axis["left"].major_ticklabels.set_pad(labels_pad) # set y minor grid ax.set_yticks(np.arange(data.shape[0]+1)-.5, minor=True) ax.grid(which="minor", axis='y', color=grid_color, linestyle='-', linewidth=grid_linewidth) # set x ticklabels off ax.set_xticklabels([]) # create colorbar cax = ax.inset_axes([1.01, 0.1, 0.03, 0.8]) #cax = fig.add_axes([0.5, 0, 0.01, 1]) cbar = fig.colorbar(im, cax=cax, boundaries=cmap_levels, spacing='proportional',) cbar.set_ticks([np.min(cmap_levels),0,np.max(cmap_levels)]) if opti_var == 'bed_h': cbar.set_ticklabels(['{:d}'.format(int(-cmap_limit)), '0' ,'{:d}'.format(int(cmap_limit))]) elif opti_var == 'bed_shape': cbar.set_ticklabels(['{:.1f}'.format(-cmap_limit), '0' ,'{:.1f}'.format(cmap_limit)]) 
elif opti_var == 'w0': cbar.set_ticklabels(['{:d}'.format(int(-cmap_limit)), '0' ,'{:d}'.format(int(cmap_limit))]) else: raise ValueError('Unknown opti var!!') #cbar.ax.set_ylabel(cbarlabel,) # set title #ax.set_title(title) if annotation is not None: # include text ax.text(annotation_x_position, annotation_y_position, annotation, horizontalalignment='left', verticalalignment='center', transform=ax.transAxes) return im
BSD-3-Clause
Fig.3.11_par_methods_overview/Par_method_comparision_overview.ipynb
pat-schmitt/supplementary_material_master_thesis
legend plot
def add_legend2(ax, title, fontsize, lw, ms, labels):
    ax.plot([], [], '-', lw=lw, ms=ms, c='none', label=labels[0])
    # plot for first gradient scaling
    ax.plot([], [], '.-', lw=lw, ms=ms, c=color_1, label=labels[1])
    # plot for second gradient scaling
    ax.plot([], [], '.-', lw=lw, ms=ms, c=color_2, zorder=5, label=labels[2])
    ax.plot([], [], '.-', lw=lw, ms=ms, c=color_3, zorder=5, label=labels[3])
    ax.plot([], [], '.-', lw=lw, ms=ms, c=color_4, zorder=5, label=labels[4])
    l = ax.legend(loc='center', fontsize=fontsize, title=title)
    plt.setp(l.get_title(), multialignment='center')
    ax.axis('off')


def add_legend(ax,  # title,
               fontsize, lw, ms, labels):
    ax.plot([], [], '-', lw=lw, ms=ms, c='none', label=labels[0])
    ax.plot([], [], '.-', lw=lw, ms=ms, c='none', label=labels[1])
    ax.plot([], [], '.-', lw=lw, ms=ms, c='none', zorder=5, label=labels[2])
    ax.plot([], [], '.-', lw=lw, ms=ms, c='none', zorder=5, label=labels[3])
    leg = ax.legend(loc='center', fontsize=fontsize,  # title=title,
                    handlelength=0, handletextpad=0, fancybox=True)
    for item in leg.legendHandles:
        item.set_visible(False)
    ax.axis('off')
performance plot
def performance_plot(ax, datasets, fig=None,
                     # options: 'bed_h RMSE', 'bed_h Diff', 'bed_h Bias',
                     # 'bed_shape RMSE', 'bed_shape Diff', 'bed_shape Bias',
                     # 'w0 RMSE', 'w0 Diff', 'w0 Bias',
                     # 'sfc_h RMSE', 'sfc_h Diff', 'sfc_h Bias',
                     # 'widths RMSE', 'widths Diff', 'widths Bias'
                     performance_measurement='bed_h RMSE',
                     xlim=5,
                     y_label='',
                     annotation=None,
                     annotation_x_position=-0.2,
                     annotation_y_position=1,
                     lw=2,
                     fontsize=25,
                     ms=10,
                     nr_of_iterations=None,
                     ax_xlim=None):
    if not fig:
        fig = plt.gcf()

    measure = performance_measurement
    all_x = []
    all_y = []
    for dataset in datasets:
        if dataset is not None:
            max_index = len(dataset['computing_time'].values) - 1
            if nr_of_iterations is not None:
                max_index = nr_of_iterations - 1
            elif xlim is not None:
                # calculate all max diff surface_h
                all_DIFF_sfc_h = np.array(
                    [DIFF(dataset.true_surface_h.data, dataset.surface_h.data[i-1])
                     for i in dataset.coords['nr_of_iteration'].data])
                # only consider as many points until max DIFF is smaller xlim
                if all_DIFF_sfc_h[-1] < xlim:
                    max_index = np.argmax(all_DIFF_sfc_h < xlim)

            # include time 0 for first guess
            tmp_x = [0]

            # add first guess values
            if measure == 'bed_h RMSE':
                tmp_y = [RMSE(dataset['first_guessed_bed_h'], dataset['true_bed_h'])]
            elif measure == 'bed_h Diff':
                tmp_y = [DIFF(dataset['first_guessed_bed_h'], dataset['true_bed_h'])]
            elif measure == 'bed_h Bias':
                tmp_y = [BIAS(dataset['first_guessed_bed_h'], dataset['true_bed_h'])]
            elif measure == 'bed_shape RMSE':
                tmp_y = [RMSE(dataset['first_guessed_bed_shape'], dataset['true_bed_shape'])]
            elif measure == 'bed_shape Diff':
                tmp_y = [DIFF(dataset['first_guessed_bed_shape'], dataset['true_bed_shape'])]
            elif measure == 'bed_shape Bias':
                tmp_y = [BIAS(dataset['first_guessed_bed_shape'], dataset['true_bed_shape'])]
            elif measure == 'w0 RMSE':
                tmp_y = [RMSE(dataset['first_guessed_w0'], dataset['true_w0'])]
            elif measure == 'w0 Diff':
                tmp_y = [DIFF(dataset['first_guessed_w0'], dataset['true_w0'])]
            elif measure == 'w0 Bias':
                tmp_y = [BIAS(dataset['first_guessed_w0'], dataset['true_w0'])]
            elif measure == 'sfc_h RMSE':
                tmp_y = [RMSE(dataset['first_guess_surface_h'], dataset['true_surface_h'])]
            elif measure == 'sfc_h Diff':
                tmp_y = [DIFF(dataset['first_guess_surface_h'], dataset['true_surface_h'])]
            elif measure == 'sfc_h Bias':
                tmp_y = [DIFF(dataset['first_guess_surface_h'], dataset['true_surface_h'])]
            elif measure == 'widths RMSE':
                tmp_y = [RMSE(dataset['first_guess_widths'], dataset['true_widths'])]
            elif measure == 'widths Diff':
                tmp_y = [DIFF(dataset['first_guess_widths'], dataset['true_widths'])]
            elif measure == 'widths Bias':
                tmp_y = [DIFF(dataset['first_guess_widths'], dataset['true_widths'])]
            else:
                raise ValueError('Unknown performance measurement!')

            for i in dataset.coords['nr_of_iteration'].values[:max_index + 1] - 1:
                tmp_x.append(dataset['computing_time'][i])
                if measure == 'bed_h RMSE':
                    tmp_y.append(RMSE(dataset['guessed_bed_h'][i], dataset['true_bed_h']))
                elif measure == 'bed_h Diff':
                    tmp_y.append(DIFF(dataset['guessed_bed_h'][i], dataset['true_bed_h']))
                elif measure == 'bed_h Bias':
                    tmp_y.append(BIAS(dataset['guessed_bed_h'][i], dataset['true_bed_h']))
                elif measure == 'bed_shape RMSE':
                    tmp_y.append(RMSE(dataset['guessed_bed_shape'][i], dataset['true_bed_shape']))
                elif measure == 'bed_shape Diff':
                    tmp_y.append(DIFF(dataset['guessed_bed_shape'][i], dataset['true_bed_shape']))
                elif measure == 'bed_shape Bias':
                    tmp_y.append(BIAS(dataset['guessed_bed_shape'][i], dataset['true_bed_shape']))
                elif measure == 'w0 RMSE':
                    tmp_y.append(RMSE(dataset['guessed_w0'][i], dataset['true_w0']))
                elif measure == 'w0 Diff':
                    tmp_y.append(DIFF(dataset['guessed_w0'][i], dataset['true_w0']))
                elif measure == 'w0 Bias':
                    tmp_y.append(BIAS(dataset['guessed_w0'][i], dataset['true_w0']))
                elif measure == 'sfc_h RMSE':
                    tmp_y.append(RMSE(dataset['surface_h'][i], dataset['true_surface_h']))
                elif measure == 'sfc_h Diff':
                    tmp_y.append(DIFF(dataset['surface_h'][i], dataset['true_surface_h']))
                elif measure == 'sfc_h Bias':
                    tmp_y.append(BIAS(dataset['surface_h'][i], dataset['true_surface_h']))
                elif measure == 'widths RMSE':
                    tmp_y.append(RMSE(dataset['widths'][i], dataset['true_widths']))
                elif measure == 'widths Diff':
                    tmp_y.append(DIFF(dataset['widths'][i], dataset['true_widths']))
                elif measure == 'widths Bias':
                    tmp_y.append(BIAS(dataset['widths'][i], dataset['true_widths']))
                else:
                    raise ValueError('Unknown performance measurement!')
        else:
            tmp_x = []
            tmp_y = []

        all_x.append(tmp_x)
        all_y.append(tmp_y)

    colors = [color_1, color_2, color_3, color_4]
    for i, (x, y) in enumerate(zip(all_x, all_y)):
        ax.plot(x, y, '.-', lw=lw, ms=ms, c=colors[i])

    # ax.legend((), (), title=measure, loc='best')
    # ax.axvline(60, alpha=0.5, c='gray', ls='--')
    # ax.axvline(20, alpha=0.5, c='gray', ls='--')
    # if xlim is not None:
    #     ax.set_xlim(xlim)

    ax.tick_params(axis='both', colors=axis_color, width=lw)
    ax.spines['bottom'].set_color(axis_color)
    ax.spines['bottom'].set_linewidth(lw)
    ax.spines['left'].set_color(axis_color)
    ax.spines['left'].set_linewidth(lw)
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.set_xlabel(r'$T_{cpu}$', fontsize=fontsize, c=axis_color)
    ax.set_ylabel(y_label, fontsize=fontsize, c=axis_color)

    if ax_xlim is not None:
        ax.set_xlim(ax_xlim)

    if annotation is not None:
        ax.text(annotation_x_position, annotation_y_position, annotation,
                horizontalalignment='left', verticalalignment='center',
                transform=ax.transAxes)
Define Colors
colors = sns.color_palette("colorblind")
colors

axis_color = list(colors[7]) + [1.]
color_1 = list(colors[3]) + [1.]
color_2 = list(colors[0]) + [1.]
color_3 = list(colors[4]) + [1.]
color_4 = list(colors[2]) + [1.]
glacier_color = list(colors[9]) + [.5]
Import Data
input_folder = 'plot_data/'
filename_scale_1 = 'par_clif_cons_ret_bed_h_and_bed_shape_at_once_scal1reg11.nc'
filename_scale_1e4 = 'par_clif_cons_ret_bed_h_and_bed_shape_at_once_scal1e4reg11.nc'
filename_separated = 'par_clif_cons_ret_bed_h_and_bed_shape_separatedreg11.nc'
filename_calculated = 'par_clif_cons_ret_bed_h_and_bed_shape_calculatedreg11.nc'

datasets = []
for filename in [filename_scale_1, filename_separated, filename_scale_1e4, filename_calculated]:
    with xr.open_dataset(input_folder + filename) as ds:
        datasets.append(ds)
Create figure with performance
# dataset, nr_of_iterations = None
nr_of_iterations = None  # needed below; None means use all iterations
facecolor = 'white'
labels_pad = -700
cmap = 'Spectral'
fontsize = 25
lw = 2
ms = 10
annotation_x_position_spatial = -0.15
annotation_y_position_spatial = 0.9
annotation_x_position_performance = -0.14
annotation_y_position_performance = 1.05
# index_start_first_profil_row = 0
# index_end_first_profil_row = 6
# index_start_second_profil_row = 65
# index_end_second_profil_row = 71
save_file = True
filename = 'par_methods_overview.pdf'

# plt.rcParams['font.family'] = 'monospace'
mpl.rcParams.update({'font.size': fontsize})

fig = plt.figure(figsize=(1, 1), facecolor='white')

# define grid
total_width = 10

# define fixed size of spatial subplot
spatial_height = 2.5
spatial_y_separation = 0.5

# define fixed size for performance plot
performance_height = 2.5
performance_width = 8
performance_separation_y = 1
separation_y_performance_spatial = 0.5

# define fixed size for legend
legend_height = 3.5
separation_x_legend_spatial = 0.5

# fixed size in inch
# along x axis, x-index for locator
horiz = [Size.Fixed(total_width),  # 0
         ]
# y-index for locator
vert = [Size.Fixed(performance_height),  # 0 performance row 2
        Size.Fixed(separation_y_performance_spatial),
        Size.Fixed(performance_height),  # 2 performance row 1
        Size.Fixed(separation_y_performance_spatial),
        Size.Fixed(spatial_height),  # 4 spatial row 2
        Size.Fixed(spatial_y_separation),
        Size.Fixed(spatial_height),  # 6 spatial row 1
        Size.Fixed(separation_x_legend_spatial),
        Size.Fixed(legend_height),  # 8 legend
        ]

# define indices for subplots for easier changes later
# spatial heatmap
spatial_nx = 0
spatial_nx1 = 1
spatial_ny_row_1 = 6
spatial_ny_row_2 = 4
spatial_annotation = ['(a)', '(b)']

# performance
performance_nx = 0
performance_ny_row_1 = 2
performance_ny_row_2 = 0

# legend
legend_nx = 0
legend_ny = 8

# Position of the grid in the figure
rect = (0., 0., 1., 1.)
# divide the axes rectangle into a grid whose size is specified by horiz * vert
divider = Divider(fig, rect, horiz, vert, aspect=False)

with plt.rc_context({'font.family': 'monospace'}):
    ax = setup_axes(fig, 111)
    im = heatmap(datasets, opti_var='bed_h', annotation=spatial_annotation[0],
                 annotation_x_position=annotation_x_position_spatial,
                 annotation_y_position=annotation_y_position_spatial,
                 fig=fig, ax=ax, cmap=cmap, grid_color=facecolor,
                 presentation=False, labels_pad=labels_pad, xlim=5,
                 nr_of_iterations=nr_of_iterations)
    ax.set_axes_locator(divider.new_locator(nx=spatial_nx, nx1=spatial_nx1,
                                            ny=spatial_ny_row_1))

    ax = setup_axes(fig, 111)
    im = heatmap(datasets, opti_var='bed_shape', annotation=spatial_annotation[1],
                 annotation_x_position=annotation_x_position_spatial,
                 annotation_y_position=annotation_y_position_spatial,
                 fig=fig, ax=ax, cmap='vlag', grid_color=facecolor,
                 presentation=False, labels_pad=labels_pad, xlim=5,
                 nr_of_iterations=nr_of_iterations)
    ax.set_axes_locator(divider.new_locator(nx=spatial_nx, nx1=spatial_nx1,
                                            ny=spatial_ny_row_2))

    # add performance plot bed_h RMSE
    ax = fig.subplots()
    performance_plot(ax, datasets, fig=None,
                     # options: 'bed_h RMSE', 'bed_h Diff', 'bed_h Bias',
                     # 'bed_shape RMSE', 'bed_shape Diff', 'bed_shape Bias',
                     # 'w0 RMSE', 'w0 Diff', 'w0 Bias',
                     # 'sfc_h RMSE', 'sfc_h Diff', 'sfc_h Bias',
                     # 'widths RMSE', 'widths Diff', 'widths Bias'
                     performance_measurement='bed_h RMSE', xlim=5,
                     y_label='RMSE_b', annotation='(c)',
                     annotation_x_position=annotation_x_position_performance,
                     annotation_y_position=annotation_y_position_performance,
                     lw=lw, fontsize=fontsize, ms=ms,
                     nr_of_iterations=nr_of_iterations,
                     ax_xlim=[0, 400])
    ax.set_axes_locator(divider.new_locator(nx=performance_nx, ny=performance_ny_row_1))

    # add performance plot bed_shape RMSE
    ax = fig.subplots()
    performance_plot(ax, datasets, fig=None,
                     performance_measurement='bed_shape RMSE', xlim=5,
                     y_label='RMSE_Ps', annotation='(d)',
                     annotation_x_position=annotation_x_position_performance,
                     annotation_y_position=annotation_y_position_performance,
                     lw=lw, fontsize=fontsize, ms=ms,
                     nr_of_iterations=nr_of_iterations,
                     ax_xlim=[0, 350])
    ax.set_axes_locator(divider.new_locator(nx=performance_nx, ny=performance_ny_row_2))

    # add legend
    ax = fig.subplots()
    add_legend2(ax=ax,
                title=(r'$\bf{cliff}$ with $\bf{constant}$ width and $\bf{parabolic}$ shape,' + '\n' +
                       r'$\bf{retreating}$ from an $\bf{initial~ glacier~ surface}$,' + '\n' +
                       r'regularisation parameters $\lambda_0$ = 1 and $\lambda_1$ = 100'),
                fontsize=fontsize, lw=lw, ms=ms,
                labels=['fg: first guess',
                        "and A: 'explicit' without scaling",
                        "and B: 'iterative'",
                        "and C: 'explicit' with scaling of 1e-4",
                        "and D: 'implicit' with no limits"])
    ax.set_axes_locator(divider.new_locator(nx=legend_nx, ny=legend_ny))

if save_file:
    fig.savefig(filename, format='pdf', bbox_inches='tight', dpi=300);
Prune DB
conn = sqlite3.connect('prices.db')

conn.executescript(SCHEMA)
conn.commit()
conn.close()
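The `SCHEMA` script executed above is not defined in this notebook. A minimal sketch of what it might look like, with table and column names inferred from the `INSERT` statement used later (the real schema may differ, e.g. in column types or indexes):

```python
import sqlite3

# Hypothetical schema, inferred from the INSERT statement used later in
# this notebook; the actual SCHEMA string is defined elsewhere.
SCHEMA = """
DROP TABLE IF EXISTS price;
CREATE TABLE price (
    exchange TEXT,
    symbol   TEXT,
    open     REAL,
    high     REAL,
    low      REAL,
    close    REAL,
    volume   REAL,
    day      TEXT
);
"""

conn = sqlite3.connect(':memory:')  # in-memory DB for demonstration
conn.executescript(SCHEMA)
# list the column names that were created
cols = [row[1] for row in conn.execute('PRAGMA table_info(price);')]
print(cols)
conn.close()
```

Because the script starts with `DROP TABLE IF EXISTS`, running `executescript(SCHEMA)` is what makes the "prune" step idempotent.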
MIT
crypto-examples/Insert in Database.ipynb
zzhengnan/pycon-concurrency-tutorial-2020
Insert Data
BASE_PATH = Path('crypto_data/')
files = list(BASE_PATH.glob('*.csv'))

INSERT_STATEMENT = """
INSERT INTO price (
    exchange, symbol, open, high, low, close, volume, day
) VALUES (?, ?, ?, ?, ?, ?, ?, ?);
"""

conn = sqlite3.connect('prices.db')
for file in files:
    exchange, symbol = file.name[:-4].split('_')
    df = pd.read_csv(str(file))
    df['exchange'] = exchange
    df['symbol'] = symbol
    values = df[['exchange', 'symbol', 'OpenPrice', 'HighPrice', 'LowPrice',
                 'ClosePrice', 'Volume', 'DateTime']].values
    conn.executemany(INSERT_STATEMENT, values)
conn.commit()
conn.close()
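The cell above inserts one whole CSV per `executemany` call: a single parameterised `INSERT` is bound against a sequence of row tuples. A self-contained sketch of the same pattern on a toy in-memory table (the table and values here are illustrative, not the real data):

```python
import sqlite3

# One parameterised INSERT, many rows, a single executemany call.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE price (exchange TEXT, symbol TEXT, close REAL);')

rows = [
    ('bitfinex', 'btc', 10500.0),
    ('bitstamp', 'btc', 10498.5),
    ('coinbase', 'eth', 230.1),
]
conn.executemany('INSERT INTO price VALUES (?, ?, ?);', rows)
conn.commit()

count = conn.execute('SELECT COUNT(*) FROM price;').fetchone()[0]
print(count)  # 3
```

`executemany` also keeps the values out of the SQL string, so quoting and injection issues never arise.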
Final Test
conn = sqlite3.connect('prices.db')
cursor = conn.cursor()

cursor.execute('SELECT COUNT(*) FROM price;')
cursor.fetchone()

cursor.execute('SELECT * FROM price LIMIT 5;')
cursor.fetchall()

conn.close()
Exchanges
conn = sqlite3.connect('prices.db')
cursor = conn.cursor()

cursor.execute('SELECT DISTINCT exchange FROM price;')
cursor.fetchall()

cursor.execute('SELECT DISTINCT symbol FROM price;')
cursor.fetchall()
Filtered query:
cursor.execute('SELECT * FROM price WHERE symbol = "btc" AND exchange = "bitfinex" AND day = "2019-07-20";')
cursor.fetchall()
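The filter above embeds the values directly in the SQL string. The same query can be written with `?` placeholders, which is the safer habit once the values come from user input. A sketch against a toy in-memory table standing in for `prices.db`:

```python
import sqlite3

# Same filter with bound parameters instead of inlined values.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE price (exchange TEXT, symbol TEXT, day TEXT);')
conn.execute('INSERT INTO price VALUES (?, ?, ?);',
             ('bitfinex', 'btc', '2019-07-20'))
conn.execute('INSERT INTO price VALUES (?, ?, ?);',
             ('bitstamp', 'btc', '2019-07-21'))

cursor = conn.execute(
    'SELECT * FROM price WHERE symbol = ? AND exchange = ? AND day = ?;',
    ('btc', 'bitfinex', '2019-07-20'))
result = cursor.fetchall()
print(result)
```

Double-quoted strings in SQLite are, strictly, identifiers that only fall back to string literals, so single quotes or placeholders are also more portable.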
Setup Data
# ----------Training data set
df_train0 = pd.read_json('../input/deepfake/metadata0.json')
df_train1 = pd.read_json('../input/deepfake/metadata1.json')
df_train2 = pd.read_json('../input/deepfake/metadata2.json')
df_train3 = pd.read_json('../input/deepfake/metadata3.json')
df_train4 = pd.read_json('../input/deepfake/metadata4.json')
df_train5 = pd.read_json('../input/deepfake/metadata5.json')
df_train6 = pd.read_json('../input/deepfake/metadata6.json')
df_train7 = pd.read_json('../input/deepfake/metadata7.json')
df_train8 = pd.read_json('../input/deepfake/metadata8.json')
df_train9 = pd.read_json('../input/deepfake/metadata9.json')
df_train10 = pd.read_json('../input/deepfake/metadata10.json')
df_train11 = pd.read_json('../input/deepfake/metadata11.json')
df_train12 = pd.read_json('../input/deepfake/metadata12.json')
df_train13 = pd.read_json('../input/deepfake/metadata13.json')
df_train14 = pd.read_json('../input/deepfake/metadata14.json')
df_train15 = pd.read_json('../input/deepfake/metadata15.json')
df_train16 = pd.read_json('../input/deepfake/metadata16.json')
df_train17 = pd.read_json('../input/deepfake/metadata17.json')
df_train18 = pd.read_json('../input/deepfake/metadata18.json')
df_train19 = pd.read_json('../input/deepfake/metadata19.json')
df_train20 = pd.read_json('../input/deepfake/metadata20.json')
df_train21 = pd.read_json('../input/deepfake/metadata21.json')
df_train22 = pd.read_json('../input/deepfake/metadata22.json')
df_train23 = pd.read_json('../input/deepfake/metadata23.json')
df_train24 = pd.read_json('../input/deepfake/metadata24.json')
df_train25 = pd.read_json('../input/deepfake/metadata25.json')
df_train26 = pd.read_json('../input/deepfake/metadata26.json')
df_train27 = pd.read_json('../input/deepfake/metadata27.json')
df_train28 = pd.read_json('../input/deepfake/metadata28.json')
df_train29 = pd.read_json('../input/deepfake/metadata29.json')
df_train30 = pd.read_json('../input/deepfake/metadata30.json')
df_train31 = pd.read_json('../input/deepfake/metadata31.json')
df_train32 = pd.read_json('../input/deepfake/metadata32.json')
df_train33 = pd.read_json('../input/deepfake/metadata33.json')
df_train34 = pd.read_json('../input/deepfake/metadata34.json')
df_train35 = pd.read_json('../input/deepfake/metadata35.json')
df_train36 = pd.read_json('../input/deepfake/metadata36.json')
df_train37 = pd.read_json('../input/deepfake/metadata37.json')
df_train38 = pd.read_json('../input/deepfake/metadata38.json')
df_train39 = pd.read_json('../input/deepfake/metadata39.json')
df_train40 = pd.read_json('../input/deepfake/metadata40.json')
df_train41 = pd.read_json('../input/deepfake/metadata41.json')
df_train42 = pd.read_json('../input/deepfake/metadata42.json')
df_train43 = pd.read_json('../input/deepfake/metadata43.json')
df_train44 = pd.read_json('../input/deepfake/metadata44.json')
df_train45 = pd.read_json('../input/deepfake/metadata45.json')
df_train46 = pd.read_json('../input/deepfake/metadata46.json')

df_train = [df_train0, df_train1, df_train2, df_train3, df_train4, df_train5,
            df_train6, df_train7, df_train8, df_train9, df_train10, df_train11,
            df_train12, df_train13, df_train14, df_train15, df_train16,
            df_train17, df_train18, df_train19, df_train20, df_train21,
            df_train22, df_train23, df_train24, df_train25, df_train26,
            df_train27, df_train28, df_train29, df_train30, df_train31,
            df_train32, df_train33, df_train34, df_train35, df_train36,
            df_train37, df_train38, df_train39, df_train40, df_train41,
            df_train42, df_train43, df_train44, df_train45, df_train46]
train_nums = ["%.2d" % i for i in range(len(df_train)+1)]

# --------------Validation data set
df_val1 = pd.read_json('../input/deepfake/metadata47.json')
df_val2 = pd.read_json('../input/deepfake/metadata48.json')
df_val3 = pd.read_json('../input/deepfake/metadata49.json')
df_val = [df_val1, df_val2, df_val3]
val_nums = ['47', '48', '49']

# def get_all_paths(df_list,suffixes_list):
#     LABELS = {'REAL':0,'FAKE':1}
#     paths = []
#     labels = []
#     for df,suffix in tqdm(zip(df_list,suffixes_list),total=len(df_list)):
#         images_names = list(df.columns.values)
#         for img_name in images_names:
#             try:
#                 paths.append(get_path(img_name,suffix))
#                 labels.append(LABELS[df[img_name]['label']])
#             except Exception as err:
#                 #print(err)
#                 pass
#     return paths,labels

def get_orig_fakes(df):
    orig_fakes = {}
    temp = df.T.groupby(['original', df.T.index, ]).count()
    for orig, fake in (list(temp.index)):
        fakes = []
        try:  # if key exists
            fakes = orig_fakes[orig]
            fakes.append(fake)
        except KeyError as e:
            fakes.append(fake)
        finally:
            orig_fakes[orig] = fakes
    return orig_fakes

def get_path(img_name, suffix):
    path = ('../input/deepfake/DeepFake' + suffix + '/DeepFake' + suffix + '/'
            + img_name.replace(".mp4", "") + '.jpg')
    if not os.path.exists(path):
        raise Exception
    return path

def get_all_paths(df_list, suffixes_list):
    paths = []
    labels = []
    count = 0
    for df in tqdm(df_list, total=len(df_list)):
        orig_fakes = get_orig_fakes(df)
        for suffix in suffixes_list:
            try:
                for orig, fakes in orig_fakes.items():
                    paths.append(get_path(orig, suffix))
                    labels.append(0)  # processing REAL image
                    for img_name in fakes:
                        paths.append(get_path(img_name, suffix))
                        labels.append(1)  # processing FAKE images
            except Exception as err:
                count += 1
                pass
    print("Exceptions:", count)
    return paths, labels

%%time
val_img_paths, val_img_labels = get_all_paths(df_val, val_nums)
train_img_paths, train_img_labels = get_all_paths(df_train, train_nums)

len(train_img_paths), len(val_img_paths)

# NOT IDEMPOTENT
val_img_labels = val_img_labels[:500]
val_img_paths = val_img_paths[:500]
len(val_img_paths)
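The 47 repeated `pd.read_json` calls above can be generated in a loop. This sketch only builds the path list so it runs without the metadata files present; swap in `pd.read_json(path)` when the Kaggle input directory is available:

```python
# Generate the metadata paths instead of writing 47 assignments by hand.
paths = [f'../input/deepfake/metadata{i}.json' for i in range(47)]
print(len(paths), paths[0], paths[-1])

# With the files available this becomes, e.g.:
# df_train = [pd.read_json(p) for p in paths]
```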
MIT
DFDC/Training Xception Classifier .ipynb
nizamphoenix/kaggle
Dataset
def read_img(path):
    return cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)

import random

def shuffle(X, y):
    new = []
    for m, n in zip(X, y):
        new.append([m, n])
    random.shuffle(new)
    X, y = [], []
    for path, label in new:
        X.append(path)
        y.append(label)
    return X, y
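The manual pair/unpair dance in `shuffle` can be written more compactly: zip the two lists into pairs, shuffle the pairs, then unzip. A sketch with toy data (the seeded `random.Random` is only there to make the example reproducible; the notebook's version uses the global RNG):

```python
import random

def shuffle_pairs(X, y, seed=None):
    # Keep sample/label pairs aligned while shuffling their order.
    pairs = list(zip(X, y))
    random.Random(seed).shuffle(pairs)
    X_shuf, y_shuf = map(list, zip(*pairs))
    return X_shuf, y_shuf

X, y = ['a', 'b', 'c', 'd'], [0, 1, 0, 1]
X2, y2 = shuffle_pairs(X, y, seed=42)
print(X2, y2)
```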
FAKE-->1 REAL-->0
def get_data(train_paths, train_y, val_paths, val_y):
    train_X = []
    for img in tqdm(train_paths):
        train_X.append(read_img(img))
    val_X = []
    for img in tqdm(val_paths):
        val_X.append(read_img(img))

    train_X, train_y = shuffle(train_X, train_y)
    val_X, val_y = shuffle(val_X, val_y)
    return train_X, val_X, train_y, val_y

'''
def get_random_sampling(paths, y, val_paths, val_y):
    real=[]
    fake=[]
    for path,label in zip(paths,y):
        if label==0:
            real.append(path)
        else:
            fake.append(path)
    # fake=random.sample(fake,len(real))
    paths,y=[],[]
    for x in real:
        paths.append(x)
        y.append(0)
    for x in fake:
        paths.append(x)
        y.append(1)

    real=[]
    fake=[]
    for m,n in zip(val_paths,val_y):
        if n==0:
            real.append(m)
        else:
            fake.append(m)
    # fake=random.sample(fake,len(real))
    val_paths,val_y=[],[]
    for x in real:
        val_paths.append(x)
        val_y.append(0)
    for x in fake:
        val_paths.append(x)
        val_y.append(1)

    #training dataset
    X=[]
    for img in tqdm(paths):
        X.append(read_img(img))
    #validation dataset
    val_X=[]
    for img in tqdm(val_paths):
        val_X.append(read_img(img))

    # Balance with ffhq dataset
    ffhq = os.listdir('../input/ffhq-face-data-set/thumbnails128x128')
    X_ = []  # used for train and val
    for file in tqdm(ffhq):
        im = read_img(f'../input/ffhq-face-data-set/thumbnails128x128/{file}')
        im = cv2.resize(im, (150,150))
        X_.append(im)
    random.shuffle(X_)

    # Appending REAL images from FFHQ dataset
    for i in range(64773 - 12130):
        X.append(X_[i])
        y.append(0)
    del X_[0:64773 - 12130]
    for i in range(6108 - 1258):
        val_X.append(X_[i])
        val_y.append(0)

    X, y = shuffle(X,y)
    val_X, val_y = shuffle(val_X,val_y)
    return X, val_X, y, val_y
'''

from torch.utils.data import Dataset, DataLoader

mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

class ImageDataset(Dataset):
    def __init__(self, X, y, training=True, transform=None):
        self.X = X
        self.y = y
        self.transform = transform
        self.training = training

    def __len__(self):
        return len(self.X)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        img = self.X[idx]
        if self.transform is not None:
            res = self.transform(image=img)
            img = res['image']
        img = np.rollaxis(img, 2, 0)
        # img = np.array(img).astype(np.float32) / 255.
        labels = self.y[idx]
        labels = np.array(labels).astype(np.float32)
        return [img, labels]
Model
!pip install pytorchcv --quiet

from pytorchcv.model_provider import get_model

model = get_model("xception", pretrained=True)
# model = get_model("resnet18", pretrained=True)
model = nn.Sequential(*list(model.children())[:-1])  # Remove original output layer
model[0].final_block.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1))
# model[0].final_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1))

class Head(torch.nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.f = nn.Flatten()
        self.l = nn.Linear(in_f, 512)
        self.d = nn.Dropout(0.30)
        self.o = nn.Linear(512, out_f)
        self.b1 = nn.BatchNorm1d(in_f)
        self.b2 = nn.BatchNorm1d(512)
        self.r = nn.ReLU()

    def forward(self, x):
        x = self.f(x)
        x = self.b1(x)
        x = self.d(x)
        x = self.l(x)
        x = self.r(x)
        x = self.b2(x)
        x = self.d(x)
        out = self.o(x)
        return out

class FCN(torch.nn.Module):
    def __init__(self, base, in_f):
        super().__init__()
        self.base = base
        self.h1 = Head(in_f, 1)

    def forward(self, x):
        x = self.base(x)
        return self.h1(x)

model = FCN(model, 2048)

PATH = './model1.pth'
model.load_state_dict(torch.load(PATH))
model.eval()
Train Functions
def calculate_loss(preds, targets):
    return F.binary_cross_entropy(F.sigmoid(preds), targets)

def train_model(epoch, optimizer, scheduler=None, history=None):
    model.train()
    total_loss = 0

    t = tqdm(train_loader)
    for i, (img_batch, y_batch) in enumerate(t):
        img_batch = img_batch.cuda().float()
        y_batch = y_batch.cuda().float()

        optimizer.zero_grad()  # to avoid accumulating gradients
        preds_batch = model(img_batch)
        loss = calculate_loss(preds_batch, y_batch)
        total_loss += loss
        t.set_description(f'Epoch {epoch+1}/{n_epochs}, LR: %6f, Loss: %.4f'
                          % (optimizer.state_dict()['param_groups'][0]['lr'],
                             total_loss/(i+1)))

        if history is not None:
            history.loc[epoch + i / len(X), 'train_loss'] = loss.data.cpu().numpy()
            history.loc[epoch + i / len(X), 'lr'] = optimizer.state_dict()['param_groups'][0]['lr']

        loss.backward()  # computing gradients
        optimizer.step()  # updating parameters

    if scheduler is not None:
        scheduler.step()

def evaluate_model(epoch, scheduler=None, history=None):
    model.eval()
    total_loss = 0.0
    pred = []
    target = []

    with torch.no_grad():
        for img_batch, y_batch in val_loader:
            img_batch = img_batch.cuda().float()
            y_batch = y_batch.cuda().float()

            preds_batch = model(img_batch)
            loss = calculate_loss(preds_batch, y_batch)
            total_loss += loss
            # extend (not overwrite), so the metrics below cover every
            # validation batch instead of only the last one
            pred += [*map(F.sigmoid, preds_batch)]
            target += [*map(lambda i: i.data.cpu(), y_batch)]

    pred = [p.data.cpu().numpy() for p in pred]
    pred2 = pred
    pred = [np.round(p) for p in pred]
    pred = np.array(pred)

    # calculating accuracy
    acc = sklearn.metrics.recall_score(target, pred, average='macro')

    target = [i.item() for i in target]
    pred2 = np.array(pred2).clip(0.1, 0.9)
    # calculating log-loss after clipping
    log_loss = sklearn.metrics.log_loss(target, pred2)

    total_loss /= len(val_loader)
    if history is not None:
        history.loc[epoch, 'dev_loss'] = total_loss.cpu().numpy()

    if scheduler is not None:
        scheduler.step(total_loss)

    print(f'Dev loss: %.4f, Acc: %.6f, log_loss: %.6f' % (total_loss, acc, log_loss))
    return total_loss
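A side note on `calculate_loss`: applying `sigmoid` first and `binary_cross_entropy` after (as above) is numerically fragile, which is why PyTorch also offers `BCEWithLogitsLoss`, combining the two in one stable operation. The underlying issue can be shown in plain Python: for very negative logits, the sigmoid underflows to 0 and its log blows up, while a reformulated log-sigmoid stays finite (pure-stdlib sketch, not the notebook's training code):

```python
import math

def naive_log_sigmoid(x):
    # breaks for very negative x: exp(-x) overflows / sigmoid underflows
    return math.log(1.0 / (1.0 + math.exp(-x)))

def stable_log_sigmoid(x):
    # log(sigmoid(x)) = -log(1 + exp(-x)); branch to keep exp's argument <= 0
    if x >= 0:
        return -math.log1p(math.exp(-x))
    return x - math.log1p(math.exp(x))

print(stable_log_sigmoid(-800.0))  # finite, approximately -800
try:
    naive_log_sigmoid(-800.0)
except (ValueError, OverflowError) as err:
    print('naive version fails:', type(err).__name__)
```

Swapping the model's loss to `nn.BCEWithLogitsLoss()` on raw logits would sidestep this without changing anything else in the training loop.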
_____no_output_____
MIT
DFDC/Training Xception Classifier .ipynb
nizamphoenix/kaggle
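A hedged aside on the loss above: computing `F.binary_cross_entropy(F.sigmoid(preds), targets)` in two steps is numerically less stable than PyTorch's fused `binary_cross_entropy_with_logits`, which handles the sigmoid and the log together. A minimal sketch with toy logits (not the notebook's data):

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([2.0, -1.5, 0.3])
targets = torch.tensor([1.0, 0.0, 1.0])

# Two-step version, as used in calculate_loss above.
loss_two_step = F.binary_cross_entropy(torch.sigmoid(logits), targets)

# Fused version: mathematically the same value, computed more stably.
loss_fused = F.binary_cross_entropy_with_logits(logits, targets)

print(torch.allclose(loss_two_step, loss_fused, atol=1e-6))  # True
```

For large-magnitude logits the two-step version saturates the sigmoid and loses precision, which is one reason the fused form is generally preferred when the model outputs raw logits.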
Dataloaders
X, val_X, y, val_y = get_data(train_img_paths, train_img_labels,val_img_paths, val_img_labels) print('There are '+str(y.count(1))+' fake train samples') print('There are '+str(y.count(0))+' real train samples') print('There are '+str(val_y.count(1))+' fake val samples') print('There are '+str(val_y.count(0))+' real val samples') import albumentations from albumentations.augmentations.transforms import ShiftScaleRotate, HorizontalFlip, Normalize, RandomBrightnessContrast, MotionBlur, Blur, GaussNoise, JpegCompression train_transform = albumentations.Compose([ ShiftScaleRotate(p=0.3, scale_limit=0.25, border_mode=1, rotate_limit=25), HorizontalFlip(p=0.2), RandomBrightnessContrast(p=0.3, brightness_limit=0.25, contrast_limit=0.5), MotionBlur(p=.2), GaussNoise(p=.2), JpegCompression(p=.2, quality_lower=50), Normalize() ]) val_transform = albumentations.Compose([Normalize()]) train_dataset = ImageDataset(X, y, transform=train_transform) val_dataset = ImageDataset(val_X, val_y, transform=val_transform) nrow, ncol = 5, 6 fig, axes = plt.subplots(nrow, ncol, figsize=(20, 8)) axes = axes.flatten() for i, ax in enumerate(axes): image, label = train_dataset[i] image = np.rollaxis(image, 0, 3) image = image*std + mean image = np.clip(image, 0., 1.) ax.imshow(image) ax.set_title(f'label: {label}')
_____no_output_____
MIT
DFDC/Training Xception Classifier .ipynb
nizamphoenix/kaggle
Train
import gc

history = pd.DataFrame()
history2 = pd.DataFrame()

torch.cuda.empty_cache()
gc.collect()

best = 1e10
n_epochs = 20
batch_size = 64  # batch size changed

train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, num_workers=4)
val_loader = DataLoader(dataset=val_dataset, batch_size=batch_size, shuffle=False, num_workers=0)

model = model.cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=5, mode='min', factor=0.7, verbose=True, min_lr=1e-5)

for epoch in range(n_epochs):
    torch.cuda.empty_cache()
    gc.collect()
    train_model(epoch, optimizer, scheduler=None, history=history)
    loss = evaluate_model(epoch, scheduler=scheduler, history=history2)
    if loss < best:
        best = loss
        print('Saving best model...')
        torch.save(model.state_dict(), 'model2.pth')

history2.plot()

# --- Unrelated scratch demo: gradients accumulate across repeated backward() calls,
# --- which is why the training loop above calls optimizer.zero_grad() every batch.
import torch
w = torch.rand(5)
w.requires_grad_()
print(w)
s = w.sum()
print(s)
s.backward()
print(w.grad)  # tensor([1., 1., 1., 1., 1.])
s.backward()
print(w.grad)  # tensor([2., 2., 2., 2., 2.])
s.backward()
print(w.grad)  # tensor([3., 3., 3., 3., 3.])
s.backward()
print(w.grad)  # tensor([4., 4., 4., 4., 4.])
_____no_output_____
MIT
DFDC/Training Xception Classifier .ipynb
nizamphoenix/kaggle
Face RecognitionIn this assignment, you will build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In lecture, we also talked about [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf). Face recognition problems commonly fall into two categories: - **Face Verification** - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem. - **Face Recognition** - "who is this person?". For example, the video lecture showed a [face recognition video](https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem. FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person. **In this assignment, you will:**- Implement the triplet loss function- Use a pretrained model to map face images into 128-dimensional encodings- Use these encodings to perform face verification and face recognition Channels-first notation* In this exercise, we will be using a pre-trained model which represents ConvNet activations using a **"channels first"** convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. * In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. * Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community. 
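As a quick illustration of the channels-first convention (a hypothetical zero-filled batch, not model data), converting between the two layouts is a single axis transpose:

```python
import numpy as np

# A batch in "channels last" layout: (m, n_H, n_W, n_C)
batch_nhwc = np.zeros((2, 96, 96, 3))

# Reorder axes to the "channels first" layout used by this model: (m, n_C, n_H, n_W)
batch_nchw = batch_nhwc.transpose(0, 3, 1, 2)

print(batch_nchw.shape)  # (2, 3, 96, 96)
```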
Updates If you were working on the notebook before this update...* The current notebook is version "3a".* You can find your original work saved in the notebook with the previous version name ("v3") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* `triplet_loss`: Additional Hints added.* `verify`: Hints added.* `who_is_it`: corrected hints given in the comments.* Spelling and formatting updates for easier reading. Load packagesLet's load the required packages.
from keras.models import Sequential from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate from keras.models import Model from keras.layers.normalization import BatchNormalization from keras.layers.pooling import MaxPooling2D, AveragePooling2D from keras.layers.merge import Concatenate from keras.layers.core import Lambda, Flatten, Dense from keras.initializers import glorot_uniform from keras.engine.topology import Layer from keras import backend as K K.set_image_data_format('channels_first') import cv2 import os import numpy as np from numpy import genfromtxt import pandas as pd import tensorflow as tf from fr_utils import * from inception_blocks_v2 import * %matplotlib inline %load_ext autoreload %autoreload 2 np.set_printoptions(threshold=np.nan)
Using TensorFlow backend.
MIT
4. Convolutional Neural Networks/Face_Recognition_v3a.ipynb
taktak-hi/Deep-Learning-Specialization
0 - Naive Face VerificationIn Face Verification, you're given two images and you have to determine if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images is less than a chosen threshold, it may be the same person! **Figure 1** * Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on. * You'll see that rather than using the raw image, you can learn an encoding, $f(img)$. * By using an encoding for each image, an element-wise comparison produces a more accurate judgement as to whether two pictures are of the same person. 1 - Encoding face images into a 128-dimensional vector 1.1 - Using a ConvNet to compute encodingsThe FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning, let's load weights that someone else has already trained. The network architecture follows the Inception model from [Szegedy *et al.*](https://arxiv.org/abs/1409.4842). We have provided an inception network implementation. You can look in the file `inception_blocks_v2.py` to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook. This opens the file directory that contains the '.py' file). The key things you need to know are:- This network uses 96x96 dimensional RGB images as its input. Specifically, inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$ - It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vectorRun the cell below to create the model for face images.
FRmodel = faceRecoModel(input_shape=(3, 96, 96)) print("Total Params:", FRmodel.count_params())
Total Params: 3743280
MIT
4. Convolutional Neural Networks/Face_Recognition_v3a.ipynb
taktak-hi/Deep-Learning-Specialization
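To see concretely why the raw-pixel comparison described in section 0 is brittle, here is a toy sketch (random arrays standing in for images; the 10% brightening is an arbitrary stand-in for a lighting change):

```python
import numpy as np

rng = np.random.default_rng(0)
face = rng.uniform(0.0, 1.0, size=(96, 96, 3))  # a fake "photo" of a face

# The same "face", just 10% brighter -- e.g. taken under different lighting.
brighter = np.clip(face * 1.1, 0.0, 1.0)

pixel_distance = np.linalg.norm(face - brighter)
print(pixel_distance)  # clearly nonzero, even though the identity is unchanged
```

A learned encoding $f(img)$ is trained to be largely invariant to such nuisance variation, which is why the distance is computed between encodings rather than raw pixels.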
** Expected Output **Total Params: 3743280 By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows: **Figure 2**: By computing the distance between two encodings and thresholding, you can determine if the two pictures represent the same personSo, an encoding is a good one if: - The encodings of two images of the same person are quite similar to each other. - The encodings of two images of different persons are very different.The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart. **Figure 3**: In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) 1.2 - The Triplet LossFor an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.<!--We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1).!-->Training will use triplets of images $(A, P, N)$: - A is an "Anchor" image--a picture of a person. - P is a "Positive" image--a picture of the same person as the Anchor image.- N is a "Negative" image--a picture of a different person than the Anchor image.These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example. 
You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$ by at least a margin $\alpha$:$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$You would thus like to minimize the following "triplet cost":$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$Here, we are using the notation "$[z]_+$" to denote $max(z,0)$. Notes:- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small. - The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet; you want this to be relatively large. It has a minus sign preceding it because minimizing the negative of the term is the same as maximizing that term.- $\alpha$ is called the margin. It is a hyperparameter that you pick manually. We will use $\alpha = 0.2$. Most implementations also rescale the encoding vectors to have an L2 norm equal to one (i.e., $\mid \mid f(img)\mid \mid_2$=1); you won't have to worry about that in this assignment.**Exercise**: Implement the triplet loss as defined by formula (3). Here are the 4 steps:1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$4. 
Compute the full formula by taking the max with zero and summing over the training examples:$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ] \small_+ \tag{3}$$ Hints* Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.maximum()`.* For steps 1 and 2, you will sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$. * For step 4 you will sum over the training examples. Additional Hints* Recall that the square of the L2 norm is the sum of the squared differences: $||x - y||_{2}^{2} = \sum_{i=1}^{N}(x_{i} - y_{i})^{2}$* Note that the `anchor`, `positive` and `negative` encodings are of shape `(m,128)`, where m is the number of training examples and 128 is the number of elements used to encode a single example.* For steps 1 and 2, you will maintain the number of `m` training examples and sum along the 128 values of each encoding. [tf.reduce_sum](https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum) has an `axis` parameter. This chooses along which axis the sums are applied. * Note that one way to choose the last axis in a tensor is to use negative indexing (`axis=-1`).* In step 4, when summing over training examples, the result will be a single scalar value.* For `tf.reduce_sum` to sum across all axes, keep the default value `axis=None`.
# GRADED FUNCTION: triplet_loss def triplet_loss(y_true, y_pred, alpha = 0.2): """ Implementation of the triplet loss as defined by formula (3) Arguments: y_true -- true labels, required when you define a loss in Keras, you don't need it in this function. y_pred -- python list containing three objects: anchor -- the encodings for the anchor images, of shape (None, 128) positive -- the encodings for the positive images, of shape (None, 128) negative -- the encodings for the negative images, of shape (None, 128) Returns: loss -- real number, value of the loss """ anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2] ### START CODE HERE ### (≈ 4 lines) # Step 1: Compute the (encoding) distance between the anchor and the positive pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis = -1) # Step 2: Compute the (encoding) distance between the anchor and the negative neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis = -1) # Step 3: subtract the two previous distances and add alpha. basic_loss = pos_dist - neg_dist + alpha # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples. loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0)) ### END CODE HERE ### return loss with tf.Session() as test: tf.set_random_seed(1) y_true = (None, None, None) y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1), tf.random_normal([3, 128], mean=1, stddev=1, seed = 1), tf.random_normal([3, 128], mean=3, stddev=4, seed = 1)) loss = triplet_loss(y_true, y_pred) print("loss = " + str(loss.eval()))
loss = 528.143
MIT
4. Convolutional Neural Networks/Face_Recognition_v3a.ipynb
taktak-hi/Deep-Learning-Specialization
**Expected Output**: **loss** 528.143 2 - Loading the pre-trained modelFaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy']) load_weights_from_FaceNet(FRmodel)
_____no_output_____
MIT
4. Convolutional Neural Networks/Face_Recognition_v3a.ipynb
taktak-hi/Deep-Learning-Specialization
Here are some examples of distances between the encodings of three individuals: **Figure 4**: Example of distance outputs between three individuals' encodingsLet's now use this model to perform face verification and face recognition! 3 - Applying the model You are building a system for an office building where the building manager would like to offer facial recognition to allow the employees to enter the building.You'd like to build a **Face verification** system that gives access to the list of people who live or work there. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the entrance. The face recognition system then checks that they are who they claim to be. 3.1 - Face VerificationLet's build a database containing one encoding vector for each person who is allowed to enter the office. To generate the encoding we use `img_to_encoding(image_path, model)`, which runs the forward propagation of the model on the specified image. Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
database = {} database["danielle"] = img_to_encoding("images/danielle.png", FRmodel) database["younes"] = img_to_encoding("images/younes.jpg", FRmodel) database["tian"] = img_to_encoding("images/tian.jpg", FRmodel) database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel) database["kian"] = img_to_encoding("images/kian.jpg", FRmodel) database["dan"] = img_to_encoding("images/dan.jpg", FRmodel) database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel) database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel) database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel) database["felix"] = img_to_encoding("images/felix.jpg", FRmodel) database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel) database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
_____no_output_____
MIT
4. Convolutional Neural Networks/Face_Recognition_v3a.ipynb
taktak-hi/Deep-Learning-Specialization
Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.**Exercise**: Implement the verify() function which checks if the front-door camera picture (`image_path`) is actually the person called "identity". You will have to go through the following steps:1. Compute the encoding of the image from `image_path`.2. Compute the distance between this encoding and the encoding of the identity image stored in the database.3. Open the door if the distance is less than 0.7, else do not open it.* As presented above, you should use the L2 distance [np.linalg.norm](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html). * (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.) Hints* `identity` is a string that is also a key in the `database` dictionary.* `img_to_encoding` has two parameters: the `image_path` and `model`.
# GRADED FUNCTION: verify def verify(image_path, identity, database, model): """ Function that verifies if the person on the "image_path" image is "identity". Arguments: image_path -- path to an image identity -- string, name of the person you'd like to verify the identity. Has to be an employee who works in the office. database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors). model -- your Inception model instance in Keras Returns: dist -- distance between the image_path and the image of "identity" in the database. door_open -- True, if the door should open. False otherwise. """ ### START CODE HERE ### # Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line) encoding = img_to_encoding(image_path, model) # Step 2: Compute distance with identity's image (≈ 1 line) dist = np.linalg.norm(encoding - database[identity]) # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines) if dist < 0.7: print("It's " + str(identity) + ", welcome in!") door_open = True else: print("It's not " + str(identity) + ", please go away") door_open = False ### END CODE HERE ### return dist, door_open
_____no_output_____
MIT
4. Convolutional Neural Networks/Face_Recognition_v3a.ipynb
taktak-hi/Deep-Learning-Specialization
Younes is trying to enter the office and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
verify("images/camera_0.jpg", "younes", database, FRmodel)
It's younes, welcome in!
MIT
4. Convolutional Neural Networks/Face_Recognition_v3a.ipynb
taktak-hi/Deep-Learning-Specialization
**Expected Output**: **It's younes, welcome in!** (0.65939283, True) Benoit, who does not work in the office, stole Kian's ID card and tried to enter the office. The camera took a picture of Benoit ("images/camera_2.jpg"). Let's run the verification algorithm to check if Benoit can enter.
verify("images/camera_2.jpg", "kian", database, FRmodel)
It's not kian, please go away
MIT
4. Convolutional Neural Networks/Face_Recognition_v3a.ipynb
taktak-hi/Deep-Learning-Specialization
**Expected Output**: **It's not kian, please go away** (0.86224014, False) 3.2 - Face RecognitionYour face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the office the next day he couldn't get in! To solve this, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the building, and the door will unlock for them! You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as one of the inputs. **Exercise**: Implement `who_is_it()`. You will have to go through the following steps:1. Compute the target encoding of the image from image_path2. Find the encoding from the database that has the smallest distance with the target encoding. - Initialize the `min_dist` variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding. - Loop over the database dictionary's names and encodings. To loop use `for (name, db_enc) in database.items()`. - Compute the L2 distance between the target "encoding" and the current "encoding" from the database. - If this distance is less than the min_dist, then set `min_dist` to `dist`, and `identity` to `name`.
# GRADED FUNCTION: who_is_it def who_is_it(image_path, database, model): """ Implements face recognition for the office by finding who is the person on the image_path image. Arguments: image_path -- path to an image database -- database containing image encodings along with the name of the person on the image model -- your Inception model instance in Keras Returns: min_dist -- the minimum distance between image_path encoding and the encodings from the database identity -- string, the name prediction for the person on image_path """ ### START CODE HERE ### ## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line) encoding = img_to_encoding(image_path, model) ## Step 2: Find the closest encoding ## # Initialize "min_dist" to a large value, say 100 (≈1 line) min_dist = 100 # Loop over the database dictionary's names and encodings. for (name, db_enc) in database.items(): # Compute L2 distance between the target "encoding" and the current db_enc from the database. (≈ 1 line) dist = np.linalg.norm(encoding - db_enc) # If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines) if dist < min_dist: min_dist = dist identity = name ### END CODE HERE ### if min_dist > 0.7: print("Not in the database.") else: print ("it's " + str(identity) + ", the distance is " + str(min_dist)) return min_dist, identity
_____no_output_____
MIT
4. Convolutional Neural Networks/Face_Recognition_v3a.ipynb
taktak-hi/Deep-Learning-Specialization
Younes is at the front-door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes.
who_is_it("images/camera_0.jpg", database, FRmodel)
it's younes, the distance is 0.659393
MIT
4. Convolutional Neural Networks/Face_Recognition_v3a.ipynb
taktak-hi/Deep-Learning-Specialization
Logistic Regression
from pyspark.mllib.classification import LogisticRegressionWithSGD

# trainingData and the HashingTF instance `tf` are assumed to come from earlier cells
model = LogisticRegressionWithSGD.train(trainingData)

posTest = tf.transform('O M G GET cheap stuff by sending money to ...'.split(' '))
negTest = tf.transform('Hi Dad, I started studying Spark the other ...'.split(' '))
print('Prediction for positive test example: %g' % model.predict(posTest))
print('Prediction for negative test example: %g' % model.predict(negTest))
Prediction for positive test example: 0 Prediction for negative test example: 0
Apache-2.0
a01_PySpark/e01_Resources/PySpark-and-MLlib/learning_spark_MLlib.ipynb
mindis/Big_Data_Analysis
Creating vectors
from numpy import array
from pyspark.mllib.linalg import Vectors

# Dense vectors store every component explicitly.
denseVec1 = array([1.0, 2.0, 3.0])           # NumPy arrays are accepted directly by MLlib
denseVec2 = Vectors.dense([1.0, 2.0, 3.0])

# Sparse vectors store the size plus only the nonzero entries.
sparseVec1 = Vectors.sparse(4, {0: 1.0, 2: 2.0})     # dict of index -> value
sparseVec2 = Vectors.sparse(4, [0, 2], [1.0, 2.0])   # parallel index/value lists
_____no_output_____
Apache-2.0
a01_PySpark/e01_Resources/PySpark-and-MLlib/learning_spark_MLlib.ipynb
mindis/Big_Data_Analysis
Using HashingTF
from pyspark.mllib.feature import HashingTF

sentence = 'hello hello world'
words = sentence.split()
tf = HashingTF(10000)  # hash features into a 10,000-dimensional vector space
tf.transform(words)

# Tuple-unpacking lambdas (lambda (name, text): ...) are Python 2 only;
# index into the (name, text) pair instead.
rdd = sc.wholeTextFiles("data").map(lambda name_text: name_text[1].split())
tfVectors = tf.transform(rdd)
_____no_output_____
Apache-2.0
a01_PySpark/e01_Resources/PySpark-and-MLlib/learning_spark_MLlib.ipynb
mindis/Big_Data_Analysis
Copyright 2019 Digital Advantage - Deep Insider.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Notebook for the series "Introduction to Machine Learning & Deep Learning (Data Structures Edition)" Read the article on Deep Insider Run in Google Colab View the source code on GitHub Note: run the cells in order from the top. Some cells reuse results from cells executed earlier, so code will raise errors unless everything above it has been run. To run all cells at once, click [Runtime] - [Run all] in the menu bar. This notebook also runs on "Python 2", but we generally recommend using "Python 3". To use Python 3, select [Runtime] - [Change runtime type] from the menu bar, set [Runtime type] to "Python 3" in the [Notebook settings] dialog that appears, and click the [Save] button at the bottom right.
# Compatibility with Python version 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import sys
print(sys.version_info.major)  # 3  # version (major)
print(sys.version_info.minor)  # 6  # version (minor)
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Data structures in the Python language Representing "one" piece of data in Python Listing 1-1: Code representing a "single" piece of data
height = 177.2
print(height)  # prints 177.2
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Listing 1-2: Writing just the variable name to output the object's evaluated value
height  # evaluates to 177.2
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Listing 1-3: The difference between an object's evaluation output and print() output
import numpy as np array2d = np.array([ [ 165.5, 58.4 ], [ 177.2, 67.8 ], [ 183.2, 83.7 ] ]) print(array2d) # [[165.5 58.4] # [177.2 67.8] # [183.2 83.7]] array2d # array([[165.5, 58.4], # [177.2, 67.8], # [183.2, 83.7]])
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Listing 2-1: Representing multiple values by writing "single" data items repeatedly
hana_height = 165.5 taro_height = 177.2 jiro_height = 183.2 hana_height, taro_height, jiro_height # (165.5, 177.2, 183.2)
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Listing 2-2: Code representing "multiple (one-dimensional)" data
heights = [ 165.5, 177.2, 183.2 ] heights # [165.5, 177.2, 183.2]
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Representing "multiple (two-dimensional)" data in Python Listing 3: Code representing "multiple (two-dimensional)" data
people = [
    [165.5, 58.4],
    [177.2, 67.8],
    [183.2, 83.7]
]
people
# [[165.5, 58.4], [177.2, 67.8], [183.2, 83.7]]
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Representing "multiple (multi-dimensional)" data in Python Listing 4: Code representing "multiple (three-dimensional)" data
list3d = [ [ [ 165.5, 58.4 ], [ 177.2, 67.8 ], [ 183.2, 83.7 ] ], [ [ 155.5, 48.4 ], [ 167.2, 57.8 ], [ 173.2, 73.7 ] ], [ [ 145.5, 38.4 ], [ 157.2, 47.8 ], [ 163.2, 63.7 ] ] ] list3d # [[[165.5, 58.4], [177.2, 67.8], [183.2, 83.7]], # [[155.5, 48.4], [167.2, 57.8], [173.2, 73.7]], # [[145.5, 38.4], [157.2, 47.8], [163.2, 63.7]]]
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Data structures in AI programs (basics) Installing NumPy Listing 5-1: Shell command to install the `numpy` package
!pip install numpy
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Importing the numpy module Listing 5-2: Example code importing the `numpy` module
import numpy as np
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Creating NumPy's "multidimensional array" data-type objects Listing 5-3: Example creating a multidimensional array with the `array` function (using literal values)
array2d = np.array([ [ 165.5, 58.4 ], [ 177.2, 67.8 ], [ 183.2, 83.7 ] ]) array2d # array([[165.5, 58.4], # [177.2, 67.8], # [183.2, 83.7]])
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Listing 5-4: Example creating a multidimensional array with the `array` function (using a variable)
array3d = np.array(list3d) array3d # array([[[165.5, 58.4], # [177.2, 67.8], # [183.2, 83.7]], # # [[155.5, 48.4], # [167.2, 57.8], # [173.2, 73.7]], # # [[145.5, 38.4], # [157.2, 47.8], # [163.2, 63.7]]])
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Listing 5-5: Example converting back to a multidimensional list with the `ndarray` class's `tolist()` method
tolist3d = array3d.tolist() tolist3d # [[[165.5, 58.4], [177.2, 67.8], [183.2, 83.7]], # [[155.5, 48.4], [167.2, 57.8], [173.2, 73.7]], # [[145.5, 38.4], [157.2, 47.8], [163.2, 63.7]]]
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Data structures in AI programs (applied) Installing Pandas Listing 6: Shell command to install the `pandas` package
!pip install pandas
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Figure 7-1: Example displaying NumPy data as a table with Pandas
import pandas as pd

df = pd.DataFrame(array2d, columns=['Height', 'Weight'])
df
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Computing with data in AI programs Why AI and deep learning use mathematics Listing 7-1: Example computing the average height of three people (using individual values)
# hana_height, taro_height, jiro_height = 165.5, 177.2, 183.2  # already declared in Listing 2-1 of Lesson 1
average_height = (hana_height + taro_height + jiro_height) / 3
print(average_height)  # 175.29999999999998
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Listing 7-2: Example computing the three people's average height and weight (using a multidimensional array)
import numpy as np array1d = np.array([ 165.5, 177.2, 183.2 ]) average_height = np.average(array1d) average_height # 175.29999999999998
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Computation with NumPy Listing 8-1: Example displaying various properties of a 3-row, 2-column matrix
array2d = np.array([ [ 165.5, 58.4 ], [ 177.2, 67.8 ], [ 183.2, 83.7 ] ]) print(array2d.shape) # (3, 2) print(array2d.ndim) # 2 print(array2d.size) # 6
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Listing 8-2: Matrix computation with NumPy
diet = np.array([
    [1.0, 0.0],
    [0.0, 0.9]
])

lose_weights = diet @ array2d.T  # Python 3.5+. On Python 2.x and other older versions, use the matmul function below instead
# lose_weights = np.matmul(diet, array2d.T)

print(lose_weights.T)
# [[165.5  52.56]
#  [177.2  61.02]
#  [183.2  75.33]]
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Listing 8-3: Computing the average over all elements (not per height/weight column)
averages = np.average(array2d) averages # 122.63333333333334
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Listing 8-4: Computing the averages per height/weight column
averages = np.average(array2d, axis=0) averages # array([175.3 , 69.96666667])
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
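As a quick sanity check of what `axis` selects, the following sketch (not from the original article) contrasts `axis=0`, which averages down the rows to give one value per column, with `axis=1`, which averages across each row:

```python
import numpy as np

array2d = np.array([[165.5, 58.4],
                    [177.2, 67.8],
                    [183.2, 83.7]])

per_column = np.average(array2d, axis=0)  # one mean per column: height, weight
per_row = np.average(array2d, axis=1)     # one mean per row: each person

print(per_column)  # [175.3        69.96666667]
print(per_row)
```

The `axis` argument names the dimension that is collapsed, which is why `axis=0` leaves one value per column.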
Listing 8-5: Computing per-group height/weight averages from a 3-D array
array3d = np.array([
    [[165.5, 58.4], [177.2, 67.8], [183.2, 83.7]],
    [[155.5, 48.4], [167.2, 57.8], [173.2, 73.7]],
    [[145.5, 38.4], [157.2, 47.8], [163.2, 63.7]]])

avr3d = np.average(array3d, axis=1)
print(avr3d)
# [[175.3  69.96666667]
#  [165.3  59.96666667]
#  [155.3  49.96666667]]
_____no_output_____
Apache-2.0
docs/articles/deeplearningdat.ipynb
DeepInsider/playground-data
Copyright 2019 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Time series forecasting

This tutorial is an introduction to time series forecasting using TensorFlow. It builds a few different styles of models, including Convolutional and Recurrent Neural Networks (CNNs and RNNs). This is covered in two main parts, with subsections:

* Forecast for a single time step:
  * A single feature.
  * All features.
* Forecast multiple steps:
  * Single-shot: Make the predictions all at once.
  * Autoregressive: Make one prediction at a time and feed the output back to the model.

Setup
import os
import datetime

import IPython
import IPython.display
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf

mpl.rcParams['figure.figsize'] = (8, 6)
mpl.rcParams['axes.grid'] = False
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
The weather datasetThis tutorial uses a weather time series dataset recorded by the Max Planck Institute for Biogeochemistry.This dataset contains 14 different features such as air temperature, atmospheric pressure, and humidity. These were collected every 10 minutes, beginning in 2003. For efficiency, you will use only the data collected between 2009 and 2016. This section of the dataset was prepared by François Chollet for his book [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python).
zip_path = tf.keras.utils.get_file(
    origin='https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip',
    fname='jena_climate_2009_2016.csv.zip',
    extract=True)
csv_path, _ = os.path.splitext(zip_path)
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
This tutorial will just deal with **hourly predictions**, so start by sub-sampling the data from 10 minute intervals to 1h:
df = pd.read_csv(csv_path)
# slice [start:stop:step], starting from index 5 take every 6th record.
df = df[5::6]

date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Let's take a glance at the data. Here are the first few rows:
df.head()
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Here is the evolution of a few features over time.
plot_cols = ['T (degC)', 'p (mbar)', 'rho (g/m**3)']
plot_features = df[plot_cols]
plot_features.index = date_time
_ = plot_features.plot(subplots=True)

plot_features = df[plot_cols][:480]
plot_features.index = date_time[:480]
_ = plot_features.plot(subplots=True)
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Inspect and cleanup

Next look at the statistics of the dataset:
df.describe().transpose()
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Wind velocity

One thing that should stand out is the `min` value of the wind velocity, `wv (m/s)` and `max. wv (m/s)` columns. This `-9999` is likely erroneous. There's a separate wind direction column, so the velocity should be `>=0`. Replace it with zeros:
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0

max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0

# The above inplace edits are reflected in the DataFrame
df['wv (m/s)'].min()
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Feature engineering

Before diving in to build a model, it's important to understand your data and be sure that you're passing the model appropriately formatted data.

Wind

The last column of the data, `wd (deg)`, gives the wind direction in units of degrees. Angles do not make good model inputs: 360° and 0° should be close to each other and wrap around smoothly. Direction shouldn't matter if the wind is not blowing. Right now the distribution of wind data looks like this:
plt.hist2d(df['wd (deg)'], df['wv (m/s)'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind Direction [deg]')
plt.ylabel('Wind Velocity [m/s]')
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
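The wrap-around problem described above can be made concrete with a small illustration (not part of the tutorial): in raw degrees, 359° and 1° look maximally far apart, but their `(cos, sin)` encodings are nearly identical.

```python
import numpy as np

a, b = np.deg2rad(359.0), np.deg2rad(1.0)

raw_gap = abs(359.0 - 1.0)  # naive distance in degrees: 358.0
# Euclidean distance between the unit-circle encodings of the two angles.
vec_gap = np.hypot(np.cos(a) - np.cos(b), np.sin(a) - np.sin(b))

print(raw_gap, vec_gap)  # vec_gap ≈ 0.035, i.e. almost identical
```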
But this will be easier for the model to interpret if you convert the wind direction and velocity columns to a wind **vector**:
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')

# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180

# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)

# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
The distribution of wind vectors is much simpler for the model to correctly interpret.
plt.hist2d(df['Wx'], df['Wy'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind X [m/s]')
plt.ylabel('Wind Y [m/s]')
ax = plt.gca()
ax.axis('tight')
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Time

Similarly, the `Date Time` column is very useful, but not in this string form. Start by converting it to seconds:
timestamp_s = date_time.map(datetime.datetime.timestamp)
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Similar to the wind direction, the time in seconds is not a useful model input. Being weather data, it has clear daily and yearly periodicity. There are many ways you could deal with periodicity. A simple approach to convert it to a usable signal is to use `sin` and `cos` to convert the time to clear "Time of day" and "Time of year" signals:
day = 24*60*60
year = (365.2425)*day

df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))

plt.plot(np.array(df['Day sin'])[:25])
plt.plot(np.array(df['Day cos'])[:25])
plt.xlabel('Time [h]')
plt.title('Time of day signal')
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
This gives the model access to the most important frequency features. In this case you knew ahead of time which frequencies were important. If you didn't know, you can determine which frequencies are important using an `fft`. To check our assumptions, here is the `tf.signal.rfft` of the temperature over time. Note the obvious peaks at frequencies near `1/year` and `1/day`:
fft = tf.signal.rfft(df['T (degC)'])
f_per_dataset = np.arange(0, len(fft))

n_samples_h = len(df['T (degC)'])
hours_per_year = 24*365.2524
years_per_dataset = n_samples_h/(hours_per_year)

f_per_year = f_per_dataset/years_per_dataset
plt.step(f_per_year, np.abs(fft))
plt.xscale('log')
plt.ylim(0, 400000)
plt.xlim([0.1, max(plt.xlim())])
plt.xticks([1, 365.2524], labels=['1/Year', '1/day'])
_ = plt.xlabel('Frequency (log scale)')
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Split the data

We'll use a `(70%, 20%, 10%)` split for the training, validation, and test sets. Note the data is **not** being randomly shuffled before splitting. This is for two reasons:

1. It ensures that chopping the data into windows of consecutive samples is still possible.
2. It ensures that the validation/test results are more realistic, being evaluated on data collected after the model was trained.
column_indices = {name: i for i, name in enumerate(df.columns)}

n = len(df)
train_df = df[0:int(n*0.7)]
val_df = df[int(n*0.7):int(n*0.9)]
test_df = df[int(n*0.9):]

num_features = df.shape[1]
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Normalize the data

It is important to scale features before training a neural network. Normalization is a common way of doing this scaling: subtract the mean and divide by the standard deviation of each feature. The mean and standard deviation should only be computed using the training data so that the models have no access to the values in the validation and test sets.

It's also arguable that the model shouldn't have access to future values in the training set when training, and that this normalization should be done using moving averages. That's not the focus of this tutorial, and the validation and test sets ensure that you get (somewhat) honest metrics. So in the interest of simplicity this tutorial uses a simple average.
train_mean = train_df.mean()
train_std = train_df.std()

train_df = (train_df - train_mean) / train_std
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
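For readers curious about the moving-average alternative mentioned above, here is a hedged sketch (not from the tutorial) of a trailing rolling normalization, where each time step is scaled only by statistics of its own past; the window length is an arbitrary choice for the demo:

```python
import numpy as np
import pandas as pd

def rolling_normalize(frame: pd.DataFrame, window: int) -> pd.DataFrame:
    # Trailing statistics only: at each row, mean/std are computed from
    # that row and the `window - 1` rows before it, never from the future.
    mean = frame.rolling(window, min_periods=1).mean()
    std = frame.rolling(window, min_periods=1).std().fillna(1.0).replace(0.0, 1.0)
    return (frame - mean) / std

# Tiny synthetic example: a linearly rising "temperature" series.
demo = pd.DataFrame({'T (degC)': np.linspace(0.0, 10.0, 100)})
normed = rolling_normalize(demo, window=10)
```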
Now peek at the distribution of the features. Some features do have long tails, but there are no obvious errors like the `-9999` wind velocity value.
df_std = (df - train_mean) / train_std
df_std = df_std.melt(var_name='Column', value_name='Normalized')
plt.figure(figsize=(12, 6))
ax = sns.violinplot(x='Column', y='Normalized', data=df_std)
_ = ax.set_xticklabels(df.keys(), rotation=90)
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Data windowing

The models in this tutorial will make a set of predictions based on a window of consecutive samples from the data. The main features of the input windows are:

* The width (number of time steps) of the input and label windows.
* The time offset between them.
* Which features are used as inputs, labels, or both.

This tutorial builds a variety of models (including Linear, DNN, CNN and RNN models), and uses them for both:

* *Single-output* and *multi-output* predictions.
* *Single-time-step* and *multi-time-step* predictions.

This section focuses on implementing the data windowing so that it can be reused for all of those models. Depending on the task and type of model you may want to generate a variety of data windows. Here are some examples:

1. To make a single prediction 24h into the future, given 24h of history, you might define a window like this: ![One prediction 24h into the future.](images/raw_window_24h.png)
2. A model that makes a prediction 1h into the future, given 6h of history, would need a window like this: ![One prediction 1h into the future.](images/raw_window_1h.png)

The rest of this section defines a `WindowGenerator` class. This class can:

1. Handle the indexes and offsets as shown in the diagrams above.
2. Split windows of features into `(features, labels)` pairs.
3. Plot the content of the resulting windows.
4. Efficiently generate batches of these windows from the training, evaluation, and test data, using `tf.data.Dataset`s.

1. Indexes and offsets

Start by creating the `WindowGenerator` class. The `__init__` method includes all the necessary logic for the input and label indices. It also takes the train, eval, and test DataFrames as input. These will be converted to `tf.data.Dataset`s of windows later.
class WindowGenerator():
  def __init__(self, input_width, label_width, shift,
               train_df=train_df, val_df=val_df, test_df=test_df,
               label_columns=None):
    # Store the raw data.
    self.train_df = train_df
    self.val_df = val_df
    self.test_df = test_df

    # Work out the label column indices.
    self.label_columns = label_columns
    if label_columns is not None:
      self.label_columns_indices = {name: i for i, name in
                                    enumerate(label_columns)}
    self.column_indices = {name: i for i, name in
                           enumerate(train_df.columns)}

    # Work out the window parameters.
    self.input_width = input_width
    self.label_width = label_width
    self.shift = shift

    self.total_window_size = input_width + shift

    self.input_slice = slice(0, input_width)
    self.input_indices = np.arange(self.total_window_size)[self.input_slice]

    self.label_start = self.total_window_size - self.label_width
    self.labels_slice = slice(self.label_start, None)
    self.label_indices = np.arange(self.total_window_size)[self.labels_slice]

  def __repr__(self):
    return '\n'.join([
        f'Total window size: {self.total_window_size}',
        f'Input indices: {self.input_indices}',
        f'Label indices: {self.label_indices}',
        f'Label column name(s): {self.label_columns}'])
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Here is code to create the 2 windows shown in the diagrams at the start of this section:
w1 = WindowGenerator(input_width=24, label_width=1, shift=24,
                     label_columns=['T (degC)'])
w1

w2 = WindowGenerator(input_width=6, label_width=1, shift=1,
                     label_columns=['T (degC)'])
w2
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
2. Split

Given a list of consecutive inputs, the `split_window` method will convert them to a window of inputs and a window of labels. The example `w2`, above, will be split like this: ![The initial window is all consecutive samples; this splits it into (inputs, labels) pairs](images/split_window.png) This diagram doesn't show the `features` axis of the data, but this `split_window` function also handles `label_columns`, so it can be used for both the single-output and multi-output examples.
def split_window(self, features):
  inputs = features[:, self.input_slice, :]
  labels = features[:, self.labels_slice, :]
  if self.label_columns is not None:
    labels = tf.stack(
        [labels[:, :, self.column_indices[name]] for name in self.label_columns],
        axis=-1)

  # Slicing doesn't preserve static shape information, so set the shapes
  # manually. This way the `tf.data.Datasets` are easier to inspect.
  inputs.set_shape([None, self.input_width, None])
  labels.set_shape([None, self.label_width, None])

  return inputs, labels

WindowGenerator.split_window = split_window
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Try it out:
# Stack three slices, the length of the total window:
example_window = tf.stack([np.array(train_df[:w2.total_window_size]),
                           np.array(train_df[100:100+w2.total_window_size]),
                           np.array(train_df[200:200+w2.total_window_size])])

example_inputs, example_labels = w2.split_window(example_window)

print('All shapes are: (batch, time, features)')
print(f'Window shape: {example_window.shape}')
print(f'Inputs shape: {example_inputs.shape}')
print(f'labels shape: {example_labels.shape}')
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Typically data in TensorFlow is packed into arrays where the outermost index is across examples (the "batch" dimension). The middle indices are the "time" or "space" (width, height) dimension(s). The innermost indices are the features. The code above took a batch of three 7-time-step windows, with 19 features at each time step. It split them into a batch of 6-time-step, 19-feature inputs and a 1-time-step, 1-feature label. The label only has one feature because the `WindowGenerator` was initialized with `label_columns=['T (degC)']`. Initially this tutorial will build models that predict single output labels.

3. Plot

Here is a plot method that allows a simple visualization of the split window:
w2.example = example_inputs, example_labels

def plot(self, model=None, plot_col='T (degC)', max_subplots=3):
  inputs, labels = self.example
  plt.figure(figsize=(12, 8))
  plot_col_index = self.column_indices[plot_col]
  max_n = min(max_subplots, len(inputs))
  for n in range(max_n):
    plt.subplot(3, 1, n+1)
    plt.ylabel(f'{plot_col} [normed]')
    plt.plot(self.input_indices, inputs[n, :, plot_col_index],
             label='Inputs', marker='.', zorder=-10)

    if self.label_columns:
      label_col_index = self.label_columns_indices.get(plot_col, None)
    else:
      label_col_index = plot_col_index

    if label_col_index is None:
      continue

    plt.scatter(self.label_indices, labels[n, :, label_col_index],
                edgecolors='k', label='Labels', c='#2ca02c', s=64)
    if model is not None:
      predictions = model(inputs)
      plt.scatter(self.label_indices, predictions[n, :, label_col_index],
                  marker='X', edgecolors='k', label='Predictions',
                  c='#ff7f0e', s=64)

    if n == 0:
      plt.legend()

  plt.xlabel('Time [h]')

WindowGenerator.plot = plot
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
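The `(batch, time, features)` convention described above can be illustrated with plain NumPy, independent of the tutorial's data (the shapes below are chosen to match the `split_window` example: a batch of 3 windows, 7 time steps, 19 features):

```python
import numpy as np

# A dummy batch with the same layout the tutorial uses.
batch = np.zeros((3, 7, 19))

inputs = batch[:, :6, :]   # first 6 time steps, all features
labels = batch[:, 6:, :1]  # last time step, first feature only

print(inputs.shape, labels.shape)  # (3, 6, 19) (3, 1, 1)
```

Slicing with `6:` and `:1` (rather than plain indexing) keeps the time and feature axes, which is why the label shape stays rank 3.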
This plot aligns inputs, labels, and (later) predictions based on the time that the item refers to:
w2.plot()
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
You can plot the other columns, but the example window `w2` configuration only has labels for the `T (degC)` column.
w2.plot(plot_col='p (mbar)')
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
4. Create `tf.data.Dataset`s

Finally, this `make_dataset` method will take a time-series `DataFrame` and convert it to a `tf.data.Dataset` of `(input_window, label_window)` pairs using the `preprocessing.timeseries_dataset_from_array` function.
def make_dataset(self, data):
  data = np.array(data, dtype=np.float32)
  ds = tf.keras.preprocessing.timeseries_dataset_from_array(
      data=data,
      targets=None,
      sequence_length=self.total_window_size,
      sequence_stride=1,
      shuffle=True,
      batch_size=32,)

  ds = ds.map(self.split_window)

  return ds

WindowGenerator.make_dataset = make_dataset
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
The `WindowGenerator` object holds training, validation and test data. Add properties for accessing them as `tf.data.Datasets` using the above `make_dataset` method. Also add a standard example batch for easy access and plotting:
@property
def train(self):
  return self.make_dataset(self.train_df)

@property
def val(self):
  return self.make_dataset(self.val_df)

@property
def test(self):
  return self.make_dataset(self.test_df)

@property
def example(self):
  """Get and cache an example batch of `inputs, labels` for plotting."""
  result = getattr(self, '_example', None)
  if result is None:
    # No example batch was found, so get one from the `.train` dataset
    result = next(iter(self.train))
    # And cache it for next time
    self._example = result
  return result

WindowGenerator.train = train
WindowGenerator.val = val
WindowGenerator.test = test
WindowGenerator.example = example
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Now the `WindowGenerator` object gives you access to the `tf.data.Dataset` objects, so you can easily iterate over the data. The `Dataset.element_spec` property tells you the structure, `dtypes`, and shapes of the dataset elements.
# Each element is an (inputs, label) pair w2.train.element_spec
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Iterating over a `Dataset` yields concrete batches:
for example_inputs, example_labels in w2.train.take(1):
  print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
  print(f'Labels shape (batch, time, features): {example_labels.shape}')
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Single step models

The simplest model you can build on this sort of data is one that predicts a single feature's value, 1 time step (1h) into the future, based only on the current conditions. So start by building models to predict the `T (degC)` value 1h into the future. ![Predict the next time step](images/narrow_window.png) Configure a `WindowGenerator` object to produce these single-step `(input, label)` pairs:
single_step_window = WindowGenerator(
    input_width=1, label_width=1, shift=1,
    label_columns=['T (degC)'])
single_step_window
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
The `window` object creates `tf.data.Datasets` from the training, validation, and test sets, allowing you to easily iterate over batches of data.
for example_inputs, example_labels in single_step_window.train.take(1):
  print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
  print(f'Labels shape (batch, time, features): {example_labels.shape}')
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
Baseline

Before building a trainable model it would be good to have a performance baseline as a point for comparison with the later, more complicated models. The first task is to predict temperature 1h into the future, given the current value of all features. The current values include the current temperature. So start with a model that just returns the current temperature as the prediction, predicting "no change". This is a reasonable baseline since temperature changes slowly. Of course, this baseline will work less well if you make a prediction further in the future. ![Send the input to the output](images/baseline.png)
class Baseline(tf.keras.Model):
  def __init__(self, label_index=None):
    super().__init__()
    self.label_index = label_index

  def call(self, inputs):
    if self.label_index is None:
      return inputs
    result = inputs[:, :, self.label_index]
    return result[:, :, tf.newaxis]
_____no_output_____
Apache-2.0
site/en/tutorials/structured_data/time_series.ipynb
jcjveraa/docs-2
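The `Baseline.call` logic above can be sanity-checked without TensorFlow by restating it in NumPy (a sketch, assuming the same `(batch, time, features)` layout; `baseline_predict` is a hypothetical helper, not part of the tutorial):

```python
import numpy as np

def baseline_predict(inputs: np.ndarray, label_index=None) -> np.ndarray:
    # Mirrors Baseline.call: echo the input, optionally keeping only the
    # label column while preserving the feature axis.
    if label_index is None:
        return inputs
    result = inputs[:, :, label_index]
    return result[:, :, np.newaxis]

# Batch of 2 windows, 3 time steps, 4 features.
x = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
pred = baseline_predict(x, label_index=1)
print(pred.shape)  # (2, 3, 1)
```

Selecting `label_index` drops the feature axis, so `np.newaxis` (like `tf.newaxis` above) is needed to restore the rank-3 shape the rest of the pipeline expects.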