Dataset columns: markdown, code, output, license, path, repo_name
Checkpoint callback options

The callback provides several options to give the resulting checkpoints unique names and to adjust the checkpointing frequency. Train a new model, and save uniquely named checkpoints once every 5 epochs:
# Include the epoch in the file name (uses `str.format`).
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(
    checkpoint_path, verbose=1, save_weights_only=True,
    # Save weights every 5 epochs.
    period=5)

model = create_model()
model.fit(train_images, train_labels,
          epochs=50, callbacks=[cp_callback],
          validation_data=(test_images, test_labels),
          verbose=0)
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
Now, look at the resulting checkpoints and choose the latest one:
! ls {checkpoint_dir}
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
Note: the default TensorFlow format only saves the 5 most recent checkpoints.

To test, reset the model and load the latest checkpoint:
model = create_model()
model.load_weights(latest)
loss, acc = model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
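As an aside, the checkpoint naming pattern used by the callback above is plain `str.format` templating. A minimal sketch of how the epoch number is rendered into the file name (no TensorFlow needed):

```python
# `{epoch:04d}` zero-pads the epoch number to four digits, so each
# checkpoint gets a unique, lexically sortable file name.
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"

for epoch in (5, 10, 50):
    print(checkpoint_path.format(epoch=epoch))
# training_2/cp-0005.ckpt
# training_2/cp-0010.ckpt
# training_2/cp-0050.ckpt
```

Because the names sort lexically, `tf.train.latest_checkpoint` can pick out the newest one.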
What are these files?

The above code stores the weights to a collection of [checkpoint](https://www.tensorflow.org/guide/saved_model#save_and_restore_variables)-formatted files that contain only the trained weights in a binary format. Checkpoints contain:

* One or more shards that contain your model's weights.
* An index file that indicates which weights are stored in which shard.

If you are only training a model on a single machine, you'll have one shard with the suffix `.data-00000-of-00001`.

Manually save weights

Above you saw how to load the weights into a model. Manually saving the weights is just as simple: use the `Model.save_weights` method.
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')

# Restore the weights
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')

loss, acc = model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
Save the entire model

The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration (depending on the setup). This allows you to checkpoint a model and resume training later—from the exact same state—without access to the original code.

Saving a fully-functional model is very useful—you can load it in TensorFlow.js ([HDF5](https://js.tensorflow.org/tutorials/import-keras.html), [Saved Model](https://js.tensorflow.org/tutorials/import-saved-model.html)) and then train and run it in web browsers, or convert it to run on mobile devices using TensorFlow Lite ([HDF5](https://www.tensorflow.org/lite/convert/python_api#exporting_a_tfkeras_file_), [Saved Model](https://www.tensorflow.org/lite/convert/python_api#exporting_a_savedmodel_)).

As an HDF5 file

Keras provides a basic save format using the [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) standard. For our purposes, the saved model can be treated as a single binary blob.
model = create_model()

# You need to use a keras.optimizer to restore the optimizer state from an HDF5 file.
model.compile(optimizer='adam',
              loss=tf.keras.losses.sparse_categorical_crossentropy,
              metrics=['accuracy'])

model.fit(train_images, train_labels, epochs=5)

# Save the entire model to an HDF5 file
model.save('my_model.h5')
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
Now recreate the model from that file:
# Recreate the exact same model, including weights and optimizer.
new_model = keras.models.load_model('my_model.h5')
new_model.summary()
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
Check its accuracy:
loss, acc = new_model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
_____no_output_____
Apache-2.0
site/en/2/tutorials/keras/save_and_restore_models.ipynb
allenlavoie/docs
Stream Analytics Tutorial

Overview

Welcome to the stream analytics tutorial for EpiData. In this tutorial we will perform near real-time stream analytics on sample weather data acquired from a simulated wireless sensor network.

Package and Module Imports

As a first step, we will import the packages and modules required for this tutorial. Since EpiData Context (ec) is required to use the application, it is implicitly imported. Sample functions for near real-time analytics are available in the EpiData Analytics package. Other packages and modules, such as datetime, pandas and matplotlib, can also be imported at this time.
#from epidata.context import ec
from epidata.analytics import *

%matplotlib inline
from datetime import datetime, timedelta
import pandas as pd
import time
import pylab as pl
from IPython import display
import json
_____no_output_____
Apache-2.0
ipython/home/tutorials/3. Stream Analytics Tutorial.ipynb
samarth-bhutani/epidata-community
Stream Analysis

Function Definition

EpiData supports development and deployment of custom algorithms via Jupyter Notebook. Below, we define Python functions for substituting extreme outliers and aggregating temperature measurements. These functions can be applied to near real-time and historic data. In this tutorial, we will apply the functions to near real-time data available from the Kafka 'measurements' and 'measurements_cleansed' topics.
import pandas as pd
import numpy as np
import math, numbers
import json

def substitute_demo(df, meas_names, method="rolling", size=3):
    """
    Substitute missing measurement values within a data frame, using the specified method.
    """
    df["meas_value"].replace(250, np.nan, inplace=True)
    for meas_name in meas_names:
        if (method == "rolling"):
            if ((size % 2 == 0) and (size != 0)):
                size += 1
            if df.loc[df["meas_name"] == meas_name].size > 0:
                indices = df.loc[df["meas_name"] == meas_name].index[
                    df.loc[df["meas_name"] == meas_name]["meas_value"].apply(
                        lambda x: not isinstance(x, basestring) and (x == None or np.isnan(x)))]
                substitutes = df.loc[df["meas_name"] == meas_name]["meas_value"].rolling(
                    window=size, min_periods=1, center=True).mean()
                df["meas_value"].fillna(substitutes, inplace=True)
                df.loc[indices, "meas_flag"] = "substituted"
                df.loc[indices, "meas_method"] = "rolling average"
        else:
            raise ValueError("Unsupported substitution method: ", repr(method))
    return df

def subgroup_statistics(row):
    row['start_time'] = np.min(row["ts"])
    row["stop_time"] = np.max(row["ts"])
    row["meas_summary_name"] = "statistics"
    row["meas_summary_value"] = json.dumps({
        'count': row["meas_value"].count(),
        'mean': row["meas_value"].mean(),
        'std': row["meas_value"].std(),
        'min': row["meas_value"].min(),
        'max': row["meas_value"].max()})
    row["meas_summary_description"] = "descriptive statistics"
    return row

def meas_statistics_demo(df, meas_names, method="standard"):
    """
    Compute statistics on measurement values within a data frame, using the specified method.
    """
    if (method == "standard"):
        df_grouped = df.loc[df["meas_name"].isin(meas_names)].groupby(
            ["company", "site", "station", "sensor"], as_index=False)
        df_summary = df_grouped.apply(subgroup_statistics).loc[:, [
            "company", "site", "station", "sensor", "start_time", "stop_time",
            "event", "meas_name", "meas_summary_name", "meas_summary_value",
            "meas_summary_description"]].drop_duplicates()
    else:
        raise ValueError("Unsupported summary method: ", repr(method))
    return df_summary
_____no_output_____
Apache-2.0
ipython/home/tutorials/3. Stream Analytics Tutorial.ipynb
samarth-bhutani/epidata-community
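The core of `substitute_demo` above is a centered rolling mean used to fill in missing values. A minimal, self-contained sketch of that idea with hypothetical toy data and plain pandas (no EpiData dependencies):

```python
import numpy as np
import pandas as pd

# Toy measurement series with one missing value (hypothetical data).
s = pd.Series([1.0, np.nan, 3.0])

# Centered rolling mean of width 3; min_periods=1 so edge windows and
# windows containing NaN still produce a value (NaNs are skipped).
substitutes = s.rolling(window=3, min_periods=1, center=True).mean()

# Only the missing entries are replaced by their window mean.
filled = s.fillna(substitutes)
print(filled.tolist())  # [1.0, 2.0, 3.0]
```

The missing middle value is replaced by the mean of its neighbors (2.0), which is exactly what the tutorial's function does per `meas_name`, before flagging the substituted rows.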
Transformations and Streams

The analytics algorithms are executed on near real-time data through transformations. A transformation specifies the function, its parameters and its destination. The destination can be one of the database tables, namely 'measurements_cleansed' or 'measurements_summary', or another Kafka topic. Once the transformations are defined, they are initiated via the ec.create_stream(transformations, data_source, batch_duration) function call.
# Stop current near real-time processing
ec.stop_streaming()

# Define transformations and stream operations
op1 = ec.create_transformation(substitute_demo, [["Temperature", "Wind_Speed"], "rolling", 3],
                               "measurements_substituted")
ec.create_stream([op1], "measurements")

op2 = ec.create_transformation(identity, [], "measurements_cleansed")
op3 = ec.create_transformation(meas_statistics, [["Temperature", "Wind_Speed"], "standard"],
                               "measurements_summary")
ec.create_stream([op2, op3], "measurements_substituted")

# Start near real-time processing
ec.start_streaming()
_____no_output_____
Apache-2.0
ipython/home/tutorials/3. Stream Analytics Tutorial.ipynb
samarth-bhutani/epidata-community
Data Ingestion

We can now start data ingestion from the simulated wireless sensor network. To do so, you can download and run the sensor_data_with_outliers.py example shown in the image below.

Data Query and Visualization

We query the original and processed data from the Kafka queue using a Kafka Consumer. The data obtained from the query is visualized using Bokeh charts.
from bokeh.io import push_notebook, show, output_notebook
from bokeh.layouts import row, column
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource
from kafka import KafkaConsumer
import json
from pandas.io.json import json_normalize

output_notebook()

plot1 = figure(plot_width=750, plot_height=200, x_axis_type='datetime', y_range=(30, 300))
plot2 = figure(plot_width=750, plot_height=200, x_axis_type='datetime', y_range=(30, 300))

df_kafka_init = pd.DataFrame(columns=["ts", "meas_value"])
test_data_1 = ColumnDataSource(data=df_kafka_init.to_dict(orient='list'))
test_data_2 = ColumnDataSource(data=df_kafka_init.to_dict(orient='list'))

meas_name = "Temperature"

plot1.circle("ts", "meas_value", source=test_data_1, legend=meas_name,
             line_color='orangered', line_width=1.5)
line1 = plot1.line("ts", "meas_value", source=test_data_1, legend=meas_name,
                   line_color='orangered', line_width=1.5)
plot1.legend.location = "top_right"

plot2.circle("ts", "meas_value", source=test_data_2, legend=meas_name,
             line_color='blue', line_width=1.5)
line2 = plot2.line("ts", "meas_value", source=test_data_2, legend=meas_name,
                   line_color='blue', line_width=1.5)
plot2.legend.location = "top_right"

consumer = KafkaConsumer()
consumer.subscribe(['measurements', 'measurements_substituted'])

delay = .1
handle = show(column(plot1, plot2), notebook_handle=True)

for message in consumer:
    topic = message.topic
    measurements = json.loads(message.value)
    df_kafka = json_normalize(measurements)
    df_kafka["meas_value"] = np.nan if "meas_value" not in measurements else measurements["meas_value"]
    df_kafka = df_kafka.loc[df_kafka["meas_name"] == meas_name]
    df_kafka = df_kafka[["ts", "meas_value"]]
    df_kafka["ts"] = df_kafka["ts"].apply(
        lambda x: pd.to_datetime(x, unit='ms').tz_localize('UTC').tz_convert('US/Pacific'))
    if (not df_kafka.empty):
        if (topic == 'measurements'):
            test_data_1.stream(df_kafka.to_dict(orient='list'))
        if (topic == 'measurements_substituted'):
            test_data_2.stream(df_kafka.to_dict(orient='list'))
    push_notebook(handle=handle)
    time.sleep(delay)
_____no_output_____
Apache-2.0
ipython/home/tutorials/3. Stream Analytics Tutorial.ipynb
samarth-bhutani/epidata-community
Another way to query and visualize processed data is using the ec.query_measurements_cleansed(..) and ec.query_measurements_summary(..) functions. For our example, we specify parameters that match the sample data set, and query the aggregated values using the ec.query_measurements_summary(..) function call.
# Query the measurements_cleansed table
primary_key = {"company": "EpiData", "site": "San_Jose", "station": "WSN-1",
               "sensor": ["Temperature_Probe", "RH_Probe", "Anemometer"]}
start_time = datetime.strptime('8/19/2017 00:00:00', '%m/%d/%Y %H:%M:%S')
stop_time = datetime.strptime('8/20/2017 00:00:00', '%m/%d/%Y %H:%M:%S')

df_cleansed = ec.query_measurements_cleansed(primary_key, start_time, stop_time)
print "Number of records:", df_cleansed.count()

df_cleansed_local = df_cleansed.toPandas()
df_cleansed_local[df_cleansed_local["meas_name"] == "Temperature"].tail(10).sort_values(by="ts", ascending=False)

# Query the measurements_summary table
primary_key = {"company": "EpiData", "site": "San_Jose", "station": "WSN-1",
               "sensor": ["Temperature_Probe"]}
start_time = datetime.strptime('8/19/2017 00:00:00', '%m/%d/%Y %H:%M:%S')
stop_time = datetime.strptime('8/20/2017 00:00:00', '%m/%d/%Y %H:%M:%S')

last_index = -1
summary_result = pd.DataFrame()

df_summary = ec.query_measurements_summary(primary_key, start_time, stop_time)
df_summary_local = df_summary.toPandas()
summary_keys = df_summary_local[["company", "site", "station", "sensor", "start_time",
                                 "stop_time", "meas_name", "meas_summary_name"]]
summary_result = df_summary_local["meas_summary_value"].apply(json.loads).apply(pd.Series)
summary_combined = pd.concat([summary_keys, summary_result], axis=1)
summary_combined.tail(5)
_____no_output_____
Apache-2.0
ipython/home/tutorials/3. Stream Analytics Tutorial.ipynb
samarth-bhutani/epidata-community
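The query cell above unpacks the JSON-encoded `meas_summary_value` column with `apply(json.loads).apply(pd.Series)`. A small sketch of that pattern on hypothetical data, independent of EpiData:

```python
import json
import pandas as pd

# Hypothetical summary table: statistics stored as a JSON string per row.
df = pd.DataFrame({
    "meas_name": ["Temperature"],
    "meas_summary_value": ['{"count": 3, "mean": 21.5, "max": 23.0}'],
})

# Parse each JSON string into a dict, then expand the dict keys into columns.
stats = df["meas_summary_value"].apply(json.loads).apply(pd.Series)
combined = pd.concat([df[["meas_name"]], stats], axis=1)
print(combined.columns.tolist())  # ['meas_name', 'count', 'mean', 'max']
```

This turns one opaque JSON column into regular numeric columns that can be filtered and plotted like any other DataFrame data.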
Stop Stream Analytics

The transformations can be stopped at any time via the ec.stop_streaming() function call.
# Stop current near real-time processing
ec.stop_streaming()
_____no_output_____
Apache-2.0
ipython/home/tutorials/3. Stream Analytics Tutorial.ipynb
samarth-bhutani/epidata-community
Update the geemap package

If you run into errors with this notebook, please uncomment the line below to update the [geemap](https://github.com/giswqs/geemap#installation) package to the latest version from GitHub. Restart the kernel (Menu -> Kernel -> Restart) for the update to take effect.
# geemap.update_package()
_____no_output_____
MIT
examples/notebooks/18_create_landsat_timelapse.ipynb
jitendra-kumar/geemap
Create an interactive map

Use the drawing tool to draw a rectangle on the map.
Map = geemap.Map()
Map
_____no_output_____
MIT
examples/notebooks/18_create_landsat_timelapse.ipynb
jitendra-kumar/geemap
Generate a Landsat timelapse animation
import os

out_dir = os.path.join(os.path.expanduser("~"), 'Downloads')
if not os.path.exists(out_dir):
    os.makedirs(out_dir)

label = 'Urban Growth in Las Vegas'
Map.add_landsat_ts_gif(label=label, start_year=1985,
                       bands=['Red', 'Green', 'Blue'], font_color='white',
                       frames_per_second=10, progress_bar_color='blue')
_____no_output_____
MIT
examples/notebooks/18_create_landsat_timelapse.ipynb
jitendra-kumar/geemap
Create Landsat timeseries
import os
import ee
import geemap

Map = geemap.Map()
Map
_____no_output_____
MIT
examples/notebooks/18_create_landsat_timelapse.ipynb
jitendra-kumar/geemap
You can define an ROI or draw a rectangle on the map.
roi = ee.Geometry.Polygon(
    [[[-115.471773, 35.892718],
      [-115.471773, 36.409454],
      [-114.271283, 36.409454],
      [-114.271283, 35.892718],
      [-115.471773, 35.892718]]], None, False)
# roi = Map.draw_last_feature

collection = geemap.landsat_timeseries(roi=roi, start_year=1985, end_year=2019,
                                       start_date='06-10', end_date='09-20')
print(collection.size().getInfo())

first_image = collection.first()

vis = {
    'bands': ['NIR', 'Red', 'Green'],
    'min': 0,
    'max': 4000,
    'gamma': [1, 1, 1]
}

Map.addLayer(first_image, vis, 'First image')
_____no_output_____
MIT
examples/notebooks/18_create_landsat_timelapse.ipynb
jitendra-kumar/geemap
Download ImageCollection as a GIF
# Define arguments for animation function parameters.
video_args = {
    'dimensions': 768,
    'region': roi,
    'framesPerSecond': 10,
    'bands': ['NIR', 'Red', 'Green'],
    'min': 0,
    'max': 4000,
    'gamma': [1, 1, 1]
}

work_dir = os.path.join(os.path.expanduser("~"), 'Downloads')
if not os.path.exists(work_dir):
    os.makedirs(work_dir)
out_gif = os.path.join(work_dir, "landsat_ts.gif")

geemap.download_ee_video(collection, video_args, out_gif)
_____no_output_____
MIT
examples/notebooks/18_create_landsat_timelapse.ipynb
jitendra-kumar/geemap
Add animated text to GIF
geemap.show_image(out_gif)

texted_gif = os.path.join(work_dir, "landsat_ts_text.gif")
geemap.add_text_to_gif(out_gif, texted_gif, xy=('3%', '5%'),
                       text_sequence=1985, font_size=30, font_color='#ffffff',
                       add_progress_bar=False)

label = 'Urban Growth in Las Vegas'
geemap.add_text_to_gif(texted_gif, texted_gif, xy=('2%', '88%'),
                       text_sequence=label, font_size=30, font_color='#ffffff',
                       progress_bar_color='cyan')

geemap.show_image(texted_gif)
_____no_output_____
MIT
examples/notebooks/18_create_landsat_timelapse.ipynb
jitendra-kumar/geemap
Copyright 2019 The TensorFlow Hub Authors.

Licensed under the Apache License, Version 2.0 (the "License");
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
_____no_output_____
Apache-2.0
site/en-snapshot/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa.ipynb
ilyaspiridonov/docs-l10n
Multilingual Universal Sentence Encoder Q&A Retrieval

This is a demo for using the [Universal Encoder Multilingual Q&A model](https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3) for question-answer retrieval of text, illustrating the use of the model's **question_encoder** and **response_encoder**. We use sentences from [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) paragraphs as the demo dataset; each sentence and its context (the text surrounding the sentence) is encoded into a high-dimensional embedding with the **response_encoder**. These embeddings are stored in an index built using the [simpleneighbors](https://pypi.org/project/simpleneighbors/) library for question-answer retrieval.

On retrieval, a random question is selected from the [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) dataset and encoded into a high-dimensional embedding with the **question_encoder**, which is then used to query the simpleneighbors index, returning a list of approximate nearest neighbors in semantic space.

Setup
%%capture
#@title Setup Environment
# Install the latest Tensorflow version.
!pip install -q tensorflow_text
!pip install -q simpleneighbors[annoy]
!pip install -q nltk
!pip install -q tqdm

#@title Setup common imports and functions
import json
import nltk
import os
import pprint
import random
import simpleneighbors
import urllib

from IPython.display import HTML, display
from tqdm.notebook import tqdm

import tensorflow.compat.v2 as tf
import tensorflow_hub as hub
from tensorflow_text import SentencepieceTokenizer

nltk.download('punkt')


def download_squad(url):
  return json.load(urllib.request.urlopen(url))

def extract_sentences_from_squad_json(squad):
  all_sentences = []
  for data in squad['data']:
    for paragraph in data['paragraphs']:
      sentences = nltk.tokenize.sent_tokenize(paragraph['context'])
      all_sentences.extend(zip(sentences, [paragraph['context']] * len(sentences)))
  return list(set(all_sentences))  # remove duplicates

def extract_questions_from_squad_json(squad):
  questions = []
  for data in squad['data']:
    for paragraph in data['paragraphs']:
      for qas in paragraph['qas']:
        if qas['answers']:
          questions.append((qas['question'], qas['answers'][0]['text']))
  return list(set(questions))

def output_with_highlight(text, highlight):
  output = "<li> "
  i = text.find(highlight)
  while True:
    if i == -1:
      output += text
      break
    output += text[0:i]
    output += '<b>' + text[i:i+len(highlight)] + '</b>'
    text = text[i+len(highlight):]
    i = text.find(highlight)
  return output + "</li>\n"

def display_nearest_neighbors(query_text, answer_text=None):
  query_embedding = model.signatures['question_encoder'](tf.constant([query_text]))['outputs'][0]
  search_results = index.nearest(query_embedding, n=num_results)

  if answer_text:
    result_md = '''
    <p>Random Question from SQuAD:</p>
    <p>&nbsp;&nbsp;<b>%s</b></p>
    <p>Answer:</p>
    <p>&nbsp;&nbsp;<b>%s</b></p>
    ''' % (query_text, answer_text)
  else:
    result_md = '''
    <p>Question:</p>
    <p>&nbsp;&nbsp;<b>%s</b></p>
    ''' % query_text

  result_md += '''
    <p>Retrieved sentences :
    <ol>
  '''

  if answer_text:
    for s in search_results:
      result_md += output_with_highlight(s, answer_text)
  else:
    for s in search_results:
      result_md += '<li>' + s + '</li>\n'

  result_md += "</ol>"
  display(HTML(result_md))
_____no_output_____
Apache-2.0
site/en-snapshot/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa.ipynb
ilyaspiridonov/docs-l10n
Run the following code block to download and extract the SQuAD dataset into:

* **sentences**: a list of (text, context) tuples - each paragraph from the SQuAD dataset is split into sentences using the nltk library, and the sentence and paragraph text form the (text, context) tuple.
* **questions**: a list of (question, answer) tuples.

Note: You can use this demo to index the SQuAD train dataset or the smaller dev dataset (1.1 or 2.0) by selecting the **squad_url** below.
#@title Download and extract SQuAD data
squad_url = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json' #@param ["https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json", "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json", "https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json", "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json"]

squad_json = download_squad(squad_url)
sentences = extract_sentences_from_squad_json(squad_json)
questions = extract_questions_from_squad_json(squad_json)
print("%s sentences, %s questions extracted from SQuAD %s" % (len(sentences), len(questions), squad_url))

print("\nExample sentence and context:\n")
sentence = random.choice(sentences)
print("sentence:\n")
pprint.pprint(sentence[0])
print("\ncontext:\n")
pprint.pprint(sentence[1])
print()
_____no_output_____
Apache-2.0
site/en-snapshot/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa.ipynb
ilyaspiridonov/docs-l10n
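The extraction helpers above walk the SQuAD JSON's `data → paragraphs → qas` hierarchy. A self-contained sketch of the same traversal on a tiny hand-made SQuAD-shaped dict (a naive period split stands in for nltk's sentence tokenizer):

```python
# Minimal SQuAD-shaped dict (hypothetical example data).
squad = {"data": [{"paragraphs": [{
    "context": "Paris is the capital of France. It lies on the Seine.",
    "qas": [{"question": "What is the capital of France?",
             "answers": [{"text": "Paris"}]}],
}]}]}

sentences = []
for data in squad["data"]:
    for paragraph in data["paragraphs"]:
        # Naive stand-in for nltk.tokenize.sent_tokenize.
        for sent in paragraph["context"].split(". "):
            sentences.append((sent, paragraph["context"]))

questions = [(qas["question"], qas["answers"][0]["text"])
             for data in squad["data"]
             for paragraph in data["paragraphs"]
             for qas in paragraph["qas"] if qas["answers"]]

print(len(sentences), len(questions))  # 2 1
```

Each sentence is paired with its full paragraph context, which is exactly the (text, context) shape the response encoder consumes.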
The following code block loads the [Universal Encoder Multilingual Q&A model](https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3), which provides the **question_encoder** and **response_encoder** signatures.
#@title Load model from tensorflow hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3" #@param ["https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3", "https://tfhub.dev/google/universal-sentence-encoder-qa/3"]
model = hub.load(module_url)
_____no_output_____
Apache-2.0
site/en-snapshot/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa.ipynb
ilyaspiridonov/docs-l10n
The following code block computes the embeddings for all the (text, context) tuples using the **response_encoder** and stores them in a [simpleneighbors](https://pypi.org/project/simpleneighbors/) index.
#@title Compute embeddings and build simpleneighbors index
batch_size = 100

encodings = model.signatures['response_encoder'](
    input=tf.constant([sentences[0][0]]),
    context=tf.constant([sentences[0][1]]))
index = simpleneighbors.SimpleNeighbors(
    len(encodings['outputs'][0]), metric='angular')

print('Computing embeddings for %s sentences' % len(sentences))
slices = zip(*(iter(sentences),) * batch_size)
num_batches = int(len(sentences) / batch_size)
for s in tqdm(slices, total=num_batches):
  response_batch = list([r for r, c in s])
  context_batch = list([c for r, c in s])
  encodings = model.signatures['response_encoder'](
      input=tf.constant(response_batch),
      context=tf.constant(context_batch))
  for batch_index, batch in enumerate(response_batch):
    index.add_one(batch, encodings['outputs'][batch_index])

index.build()
print('simpleneighbors index for %s sentences built.' % len(sentences))
_____no_output_____
Apache-2.0
site/en-snapshot/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa.ipynb
ilyaspiridonov/docs-l10n
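One subtle detail of the indexing loop above: `zip(*(iter(sentences),) * batch_size)` groups the list into fixed-size batches, and silently drops any trailing remainder that does not fill a whole batch. A quick demonstration of the idiom:

```python
items = list(range(7))
batch_size = 2

# A single iterator repeated batch_size times; zip pulls one element
# from the same iterator for each slot, yielding consecutive tuples.
batches = list(zip(*(iter(items),) * batch_size))
print(batches)  # [(0, 1), (2, 3), (4, 5)] -- the trailing 6 is dropped
```

So with `batch_size = 100`, up to 99 sentences at the end of the dataset are never added to the index; that is a known trade-off of this batching trick.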
On retrieval, the question is encoded using the **question_encoder** and the question embedding is used to query the simpleneighbors index.
#@title Retrieve nearest neighbors for a random question from SQuAD
num_results = 25 #@param {type:"slider", min:5, max:40, step:1}

query = random.choice(questions)
display_nearest_neighbors(query[0], query[1])
_____no_output_____
Apache-2.0
site/en-snapshot/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa.ipynb
ilyaspiridonov/docs-l10n
COPYRIGHT © 2018 Kiran Arun

Setup
# install dependencies
!rm -r Neural_Networks-101-demo
!git clone -b explanations https://github.com/KiranArun/Neural_Networks-101-demo.git
!python3 /content/Neural_Networks-101-demo/scripts/setup.py helper_funcs tensorboard

# run tensorboard
get_ipython().system_raw('tensorboard --logdir=/content/logdir/ --host=0.0.0.0 --port=6006 &')
get_ipython().system_raw('./ngrok http 6006 &')
! curl -s http://localhost:4040/api/tunnels | python3 -c "import sys, json; print('Tensorboard Link:', json.load(sys.stdin)['tunnels'][0]['public_url'])"
Tensorboard Link: http://bdb495ac.ngrok.io
MIT
tb_models/Basic_MNIST-tb.ipynb
KiranArun/Neural_Networks-demo
MNIST Handwritten Digits Classifier
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import os
from math import ceil, floor

import helper_funcs as helper

# this is the directory where we will keep any external files, eg. data, logs
model_root_dir = '/content/'

# get data
mnist = helper.MNIST_data(model_root_dir+'MNIST_data/', shuffle=False)

image_dims = (28, 28)
input_size = 28**2
num_classes = 10

batch_size = 100
learning_rate = 0.1
epochs = 2
iterations = ceil(mnist.number_train_samples/batch_size)

hidden_size = 256
embedding_size = 10

model_logdir = model_root_dir+'logdir/'
LABELS = os.path.join(os.getcwd(), model_logdir+"labels_1024.tsv")
SPRITES = os.path.join(os.getcwd(), model_logdir+"sprite_1024.png")

hparam_str = 'fc2,lr_%f' % (learning_rate)

previous_runs = list(f for f in os.listdir(model_logdir) if f.startswith('run'))
if len(previous_runs) == 0:
    run_number = 1
else:
    run_number = max([int(s[4:6]) for s in previous_runs]) + 1
LOGDIR = '%srun_%02d,' % (model_logdir, run_number)+hparam_str

tf.reset_default_graph()

with tf.name_scope('input'):
    X_placeholder = tf.placeholder(shape=[None, input_size], dtype=tf.float32, name='X_placeholder')
    Y_placeholder = tf.placeholder(shape=[None, num_classes], dtype=tf.int64, name='Y_placeholder')

with tf.name_scope('input_reshaped'):
    X_image = tf.reshape(X_placeholder, shape=[-1, *image_dims, 1])
    tf.summary.image('input', X_image, 3)

def variable_summaries(var):
    with tf.name_scope('summaries'):
        mean = tf.reduce_mean(var)
        tf.summary.scalar('mean', mean)
        stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.summary.scalar('stddev', stddev)
        tf.summary.scalar('max', tf.reduce_max(var))
        tf.summary.scalar('min', tf.reduce_min(var))
        tf.summary.histogram('histogram', var)

with tf.name_scope('hidden_layer'):
    with tf.name_scope('Weights'):
        W1 = tf.Variable(tf.truncated_normal(shape=[input_size, hidden_size]), dtype=tf.float32, name='W1')
        variable_summaries(W1)
    with tf.name_scope('biases'):
        b1 = tf.Variable(tf.constant(0.1, shape=[hidden_size]), dtype=tf.float32, name='b1')
        variable_summaries(b1)
    with tf.name_scope('output'):
        hidden_output = tf.nn.relu(tf.matmul(X_placeholder, W1) + b1)

with tf.name_scope('output_layer'):
    with tf.name_scope('Weights'):
        W2 = tf.Variable(tf.truncated_normal(shape=[hidden_size, num_classes]), dtype=tf.float32, name='W2')
        variable_summaries(W2)
    with tf.name_scope('biases'):
        b2 = tf.Variable(tf.constant(0.1, shape=[num_classes]), dtype=tf.float32, name='b2')
        variable_summaries(b2)
    with tf.name_scope('output'):
        Y_predictions = tf.matmul(hidden_output, W2) + b2
        embedding_input = Y_predictions

with tf.name_scope('loss'):
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y_placeholder, logits=Y_predictions, name='cross_entropy')
    loss = tf.reduce_mean(cross_entropy)
    tf.summary.scalar('loss', loss)

with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

with tf.name_scope('accuracy'):
    with tf.name_scope('correct_predictions'):
        correct_prediction = tf.equal(tf.argmax(Y_predictions, 1), tf.argmax(Y_placeholder, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    tf.summary.scalar('accuracy', accuracy)

sess = tf.InteractiveSession()
summ = tf.summary.merge_all()

embedding = tf.Variable(tf.zeros([1024, embedding_size]), name="embedding")
assignment = embedding.assign(embedding_input)
saver = tf.train.Saver()

sess.run(tf.global_variables_initializer())

writer = tf.summary.FileWriter(LOGDIR)
writer.add_graph(sess.graph)

config = tf.contrib.tensorboard.plugins.projector.ProjectorConfig()
embedding_config = config.embeddings.add()
embedding_config.tensor_name = embedding.name
embedding_config.sprite.image_path = SPRITES
embedding_config.metadata_path = LABELS
embedding_config.sprite.single_image_dim.extend([*image_dims])
tf.contrib.tensorboard.plugins.projector.visualize_embeddings(writer, config)

losses = np.array([])

for epoch in range(epochs):
    print('New epoch', str(epoch+1)+'/'+str(epochs))
    for iteration in range(iterations):
        batch_xs, batch_ys = mnist.get_batch(iteration, batch_size)
        _, _loss, _summary = sess.run([train_step, loss, summ],
                                      feed_dict={X_placeholder: batch_xs,
                                                 Y_placeholder: batch_ys})
        if (iteration+1) % (iterations/5) == 0:
            _accuracy = sess.run(accuracy, feed_dict={X_placeholder: mnist.validation_images,
                                                      Y_placeholder: mnist.validation_labels})
            print('step', str(iteration+1)+'/'+str(iterations),
                  'loss', _loss, 'accuracy', str(round(100*_accuracy, 2))+'%')
        if iteration % 10 == 0:
            writer.add_summary(_summary, (epoch*iterations)+iteration)
            losses = np.append(losses, _loss)
    sess.run(assignment, feed_dict={X_placeholder: mnist.test_images[:1024],
                                    Y_placeholder: mnist.test_labels[:1024]})
    saver.save(sess, os.path.join(LOGDIR, "model.ckpt"), (epoch*iterations)+iteration)

fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(losses)
ax.grid(True)

_accuracy = sess.run(accuracy, feed_dict={X_placeholder: mnist.test_images,
                                          Y_placeholder: mnist.test_labels})
print(str(round(100*_accuracy, 2))+'%')
_____no_output_____
MIT
tb_models/Basic_MNIST-tb.ipynb
KiranArun/Neural_Networks-demo
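The loss in the graph above is softmax cross-entropy over the logits. A minimal NumPy sketch of the same computation, on toy logits rather than the model's (no TensorFlow required):

```python
import numpy as np

def softmax_cross_entropy(labels, logits):
    # Numerically stabilized softmax followed by cross-entropy
    # against one-hot labels, one loss value per row.
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return -(labels * np.log(probs)).sum(axis=1)

labels = np.array([[0.0, 1.0, 0.0]])      # true class is 1
confident = np.array([[0.0, 10.0, 0.0]])  # logits favouring class 1
wrong = np.array([[10.0, 0.0, 0.0]])      # logits favouring class 0

print(softmax_cross_entropy(labels, confident))  # small loss
print(softmax_cross_entropy(labels, wrong))      # large loss
```

Confident, correct logits give a near-zero loss, while confidently wrong logits are penalized heavily, which is what drives the gradient descent step in the training loop.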
**Chapter 16 – Reinforcement Learning**

This notebook contains all the sample code and solutions to the exercises in chapter 16.

Setup

First, let's make sure this notebook works well in both Python 2 and 3, import a few common modules, ensure matplotlib plots figures inline, and prepare a function to save the figures:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals

# Common imports
import numpy as np
import numpy.random as rnd
import os
import sys

# to make this notebook's output stable across runs
rnd.seed(42)

# To plot pretty figures and animations
%matplotlib nbagg
import matplotlib
import matplotlib.animation as animation
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12

# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "rl"

def save_fig(fig_id, tight_layout=True):
    path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format='png', dpi=300)
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Introduction to OpenAI gym In this notebook we will be using [OpenAI gym](https://gym.openai.com/), a great toolkit for developing and comparing Reinforcement Learning algorithms. It provides many environments for your learning *agents* to interact with. Let's start by importing `gym`:
import gym
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Next we will load the MsPacman environment, version 0.
env = gym.make('MsPacman-v0')
[2017-02-17 10:57:41,836] Making new env: MsPacman-v0
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Let's initialize the environment by calling its `reset()` method. This returns an observation:
obs = env.reset()
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Observations vary depending on the environment. In this case it is an RGB image represented as a 3D NumPy array of shape [height, width, channels] (with 3 channels: Red, Green and Blue). In other environments it may return different objects, as we will see later.
obs.shape
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
An environment can be visualized by calling its `render()` method, and you can pick the rendering mode (the rendering options depend on the environment). In this example we will set `mode="rgb_array"` to get an image of the environment as a NumPy array:
img = env.render(mode="rgb_array")
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Let's plot this image:
plt.figure(figsize=(5,4)) plt.imshow(img) plt.axis("off") save_fig("MsPacman") plt.show()
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Welcome back to the 1980s! :) In this environment, the rendered image is simply equal to the observation (but in many environments this is not the case):
(img == obs).all()
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Let's create a little helper function to plot an environment:
def plot_environment(env, figsize=(5,4)): plt.close() # or else nbagg sometimes plots in the previous cell plt.figure(figsize=figsize) img = env.render(mode="rgb_array") plt.imshow(img) plt.axis("off") plt.show()
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Let's see how to interact with an environment. Your agent will need to select an action from an "action space" (the set of possible actions). Let's see what this environment's action space looks like:
env.action_space
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
`Discrete(9)` means that the possible actions are integers 0 through 8, which represent the 9 possible positions of the joystick (0=center, 1=up, 2=right, 3=left, 4=down, 5=upper-right, 6=upper-left, 7=lower-right, 8=lower-left). Next we need to tell the environment which action to play, and it will compute the next step of the game. Let's go left for 110 steps, then lower left for 40 steps:
env.reset() for step in range(110): env.step(3) #left for step in range(40): env.step(8) #lower-left
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Where are we now?
plot_environment(env)
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
The `step()` function actually returns several important objects:
obs, reward, done, info = env.step(0)
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
The observation tells the agent what the environment looks like, as discussed earlier. This is a 210x160 RGB image:
obs.shape
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
The environment also tells the agent how much reward it got during the last step:
reward
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
When the game is over, the environment returns `done=True`:
done
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Finally, `info` is an environment-specific dictionary that can provide some extra information about the internal state of the environment. This is useful for debugging, but your agent should not use this information for learning (it would be cheating).
info
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Let's play one full game (with 3 lives), by moving in random directions for 10 steps at a time, recording each frame:
frames = [] n_max_steps = 1000 n_change_steps = 10 obs = env.reset() for step in range(n_max_steps): img = env.render(mode="rgb_array") frames.append(img) if step % n_change_steps == 0: action = env.action_space.sample() # play randomly obs, reward, done, info = env.step(action) if done: break
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Now show the animation (it's a bit jittery within Jupyter):
def update_scene(num, frames, patch): patch.set_data(frames[num]) return patch, def plot_animation(frames, repeat=False, interval=40): plt.close() # or else nbagg sometimes plots in the previous cell fig = plt.figure() patch = plt.imshow(frames[0]) plt.axis('off') return animation.FuncAnimation(fig, update_scene, fargs=(frames, patch), frames=len(frames), repeat=repeat, interval=interval) video = plot_animation(frames) plt.show()
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Once you have finished playing with an environment, you should close it to free up resources:
env.close()
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
To code our first learning agent, we will be using a simpler environment: the Cart-Pole. A simple environment: the Cart-Pole The Cart-Pole is a very simple environment composed of a cart that can move left or right, and a pole placed vertically on top of it. The agent must move the cart left or right to keep the pole upright.
env = gym.make("CartPole-v0") obs = env.reset() obs
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
The observation is a 1D NumPy array composed of 4 floats: they represent the cart's horizontal position, its velocity, the angle of the pole (0 = vertical), and the angular velocity. Let's render the environment... unfortunately we need to fix an annoying rendering issue first. Fixing the rendering issue Some environments (including the Cart-Pole) require access to your display, which opens up a separate window, even if you specify the `rgb_array` mode. In general you can safely ignore that window. However, if Jupyter is running on a headless server (i.e. without a screen) it will raise an exception. One way to avoid this is to install a fake X server like Xvfb. You can start Jupyter using the `xvfb-run` command: $ xvfb-run -s "-screen 0 1400x900x24" jupyter notebook If Jupyter is running on a headless server but you don't want to worry about Xvfb, then you can just use the following rendering function for the Cart-Pole:
from PIL import Image, ImageDraw try: from pyglet.gl import gl_info openai_cart_pole_rendering = True # no problem, let's use OpenAI gym's rendering function except Exception: openai_cart_pole_rendering = False # probably no X server available, let's use our own rendering function def render_cart_pole(env, obs): if openai_cart_pole_rendering: # use OpenAI gym's rendering function return env.render(mode="rgb_array") else: # rendering for the cart pole environment (in case OpenAI gym can't do it) img_w = 600 img_h = 400 cart_w = img_w // 12 cart_h = img_h // 15 pole_len = img_h // 3.5 pole_w = img_w // 80 + 1 x_width = 2 max_ang = 0.2 bg_col = (255, 255, 255) cart_col = 0x000000 # Blue Green Red pole_col = 0x669acc # Blue Green Red pos, vel, ang, ang_vel = obs img = Image.new('RGB', (img_w, img_h), bg_col) draw = ImageDraw.Draw(img) cart_x = pos * img_w // x_width + img_w // x_width cart_y = img_h * 95 // 100 top_pole_x = cart_x + pole_len * np.sin(ang) top_pole_y = cart_y - cart_h // 2 - pole_len * np.cos(ang) draw.line((0, cart_y, img_w, cart_y), fill=0) draw.rectangle((cart_x - cart_w // 2, cart_y - cart_h // 2, cart_x + cart_w // 2, cart_y + cart_h // 2), fill=cart_col) # draw cart draw.line((cart_x, cart_y - cart_h // 2, top_pole_x, top_pole_y), fill=pole_col, width=pole_w) # draw pole return np.array(img) def plot_cart_pole(env, obs): plt.close() # or else nbagg sometimes plots in the previous cell img = render_cart_pole(env, obs) plt.imshow(img) plt.axis("off") plt.show() plot_cart_pole(env, obs)
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Now let's look at the action space:
env.action_space
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Yep, just two possible actions: accelerate towards the left or towards the right. Let's push the cart left until the pole falls:
obs = env.reset() while True: obs, reward, done, info = env.step(0) if done: break plt.close() # or else nbagg sometimes plots in the previous cell img = render_cart_pole(env, obs) plt.imshow(img) plt.axis("off") save_fig("cart_pole_plot")
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Notice that the game is over when the pole tilts too much, not when it actually falls. Now let's reset the environment and push the cart to the right instead:
obs = env.reset() while True: obs, reward, done, info = env.step(1) if done: break plot_cart_pole(env, obs)
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Looks like it's doing what we're telling it to do. Now how can we make the pole remain upright? We will need to define a _policy_ for that. This is the strategy that the agent will use to select an action at each step. It can use all the past actions and observations to decide what to do. A simple hard-coded policy Let's hard code a simple strategy: if the pole is tilting to the left, then push the cart to the left, and _vice versa_. Let's see if that works:
frames = [] n_max_steps = 1000 n_change_steps = 10 obs = env.reset() for step in range(n_max_steps): img = render_cart_pole(env, obs) frames.append(img) # hard-coded policy position, velocity, angle, angular_velocity = obs if angle < 0: action = 0 else: action = 1 obs, reward, done, info = env.step(action) if done: break video = plot_animation(frames) plt.show()
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Nope, the system is unstable and after just a few wobbles, the pole ends up too tilted: game over. We will need to be smarter than that! Neural Network Policies Let's create a neural network that will take observations as inputs, and output the action to take for each observation. To choose an action, the network will first estimate a probability for each action, then select an action randomly according to the estimated probabilities. In the case of the Cart-Pole environment, there are just two possible actions (left or right), so we only need one output neuron: it will output the probability `p` of the action 0 (left), and of course the probability of action 1 (right) will be `1 - p`.
import tensorflow as tf from tensorflow.contrib.layers import fully_connected # 1. Specify the network architecture n_inputs = 4 # == env.observation_space.shape[0] n_hidden = 4 # it's a simple task, we don't need more than this n_outputs = 1 # only outputs the probability of accelerating left initializer = tf.contrib.layers.variance_scaling_initializer() # 2. Build the neural network X = tf.placeholder(tf.float32, shape=[None, n_inputs]) hidden = fully_connected(X, n_hidden, activation_fn=tf.nn.elu, weights_initializer=initializer) outputs = fully_connected(hidden, n_outputs, activation_fn=tf.nn.sigmoid, weights_initializer=initializer) # 3. Select a random action based on the estimated probabilities p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs]) action = tf.multinomial(tf.log(p_left_and_right), num_samples=1) init = tf.global_variables_initializer()
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
In this particular environment, the past actions and observations can safely be ignored, since each observation contains the environment's full state. If there were some hidden state then you may need to consider past actions and observations in order to try to infer the hidden state of the environment. For example, if the environment only revealed the position of the cart but not its velocity, you would have to consider not only the current observation but also the previous observation in order to estimate the current velocity. Another example is if the observations are noisy: you may want to use the past few observations to estimate the most likely current state. Our problem is thus as simple as can be: the current observation is noise-free and contains the environment's full state. You may wonder why we are picking a random action based on the probability given by the policy network, rather than just picking the action with the highest probability. This approach lets the agent find the right balance between _exploring_ new actions and _exploiting_ the actions that are known to work well. Here's an analogy: suppose you go to a restaurant for the first time, and all the dishes look equally appealing so you randomly pick one. If it turns out to be good, you can increase the probability to order it next time, but you shouldn't increase that probability to 100%, or else you will never try out the other dishes, some of which may be even better than the one you tried. Let's randomly initialize this policy neural network and use it to play one game:
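To make the exploration idea concrete, here is a minimal standalone sketch (not part of the notebook's TensorFlow graph) of what sampling from the policy's output amounts to in the two-action case: pick action 0 with probability `p`, action 1 otherwise. The seed and sample count are arbitrary choices for illustration:

```python
import numpy as np

np.random.seed(42)  # seed fixed only for reproducibility

def sample_action(p_left):
    # action 0 (left) with probability p_left, else action 1 (right)
    return 0 if np.random.rand() < p_left else 1

# over many samples, roughly p_left of the chosen actions should be 0
actions = [sample_action(0.7) for _ in range(10000)]
print(sum(a == 0 for a in actions) / len(actions))
```

This stochastic choice is what lets the agent keep exploring: even a currently low-probability action still gets picked occasionally.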
n_max_steps = 1000 frames = [] with tf.Session() as sess: init.run() obs = env.reset() for step in range(n_max_steps): img = render_cart_pole(env, obs) frames.append(img) action_val = action.eval(feed_dict={X: obs.reshape(1, n_inputs)}) obs, reward, done, info = env.step(action_val[0][0]) if done: break env.close()
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Now let's look at how well this randomly initialized policy network performed:
video = plot_animation(frames) plt.show()
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Yeah... pretty bad. The neural network will have to learn to do better. First let's see if it is capable of learning the basic policy we used earlier: go left if the pole is tilting left, and go right if it is tilting right. The following code defines the same neural network but we add the target probabilities `y`, and the training operations (`cross_entropy`, `optimizer` and `training_op`):
import tensorflow as tf from tensorflow.contrib.layers import fully_connected tf.reset_default_graph() n_inputs = 4 n_hidden = 4 n_outputs = 1 learning_rate = 0.01 initializer = tf.contrib.layers.variance_scaling_initializer() X = tf.placeholder(tf.float32, shape=[None, n_inputs]) y = tf.placeholder(tf.float32, shape=[None, n_outputs]) hidden = fully_connected(X, n_hidden, activation_fn=tf.nn.elu, weights_initializer=initializer) logits = fully_connected(hidden, n_outputs, activation_fn=None) outputs = tf.nn.sigmoid(logits) # probability of action 0 (left) p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs]) action = tf.multinomial(tf.log(p_left_and_right), num_samples=1) cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits) optimizer = tf.train.AdamOptimizer(learning_rate) training_op = optimizer.minimize(cross_entropy) init = tf.global_variables_initializer() saver = tf.train.Saver()
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
We can make the same net play in 10 different environments in parallel, and train for 1000 iterations. We also reset environments when they are done.
n_environments = 10 n_iterations = 1000 envs = [gym.make("CartPole-v0") for _ in range(n_environments)] observations = [env.reset() for env in envs] with tf.Session() as sess: init.run() for iteration in range(n_iterations): target_probas = np.array([([1.] if obs[2] < 0 else [0.]) for obs in observations]) # if angle<0 we want proba(left)=1., or else proba(left)=0. action_val, _ = sess.run([action, training_op], feed_dict={X: np.array(observations), y: target_probas}) for env_index, env in enumerate(envs): obs, reward, done, info = env.step(action_val[env_index][0]) observations[env_index] = obs if not done else env.reset() saver.save(sess, "./my_policy_net_basic.ckpt") for env in envs: env.close() def render_policy_net(model_path, action, X, n_max_steps = 1000): frames = [] env = gym.make("CartPole-v0") obs = env.reset() with tf.Session() as sess: saver.restore(sess, model_path) for step in range(n_max_steps): img = render_cart_pole(env, obs) frames.append(img) action_val = action.eval(feed_dict={X: obs.reshape(1, n_inputs)}) obs, reward, done, info = env.step(action_val[0][0]) if done: break env.close() return frames frames = render_policy_net("./my_policy_net_basic.ckpt", action, X) video = plot_animation(frames) plt.show()
[2017-02-17 10:58:55,704] Making new env: CartPole-v0
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Looks like it learned the policy correctly. Now let's see if it can learn a better policy on its own. Policy Gradients To train this neural network we will need to define the target probabilities `y`. If an action is good we should increase its probability, and conversely if it is bad we should reduce it. But how do we know whether an action is good or bad? The problem is that most actions have delayed effects, so when you win or lose points in a game, it is not clear which actions contributed to this result: was it just the last action? Or the last 10? Or just one action 50 steps earlier? This is called the _credit assignment problem_.The _Policy Gradients_ algorithm tackles this problem by first playing multiple games, then making the actions in good games slightly more likely, while actions in bad games are made slightly less likely. First we play, then we go back and think about what we did.
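As a quick standalone sketch of the discounting step (this mirrors the `discount_rewards` function defined in the next cell): for rewards [10, 0, -50] and a discount rate of 0.8, working backwards gives -50 at the last step, 0 + 0.8 × (-50) = -40 at the middle step, and 10 + 0.8 × (-40) = -22 at the first step:

```python
import numpy as np

def discount_rewards(rewards, discount_rate):
    # work backwards: each step accumulates the discounted future rewards
    discounted_rewards = np.zeros(len(rewards))
    cumulative_rewards = 0
    for step in reversed(range(len(rewards))):
        cumulative_rewards = rewards[step] + cumulative_rewards * discount_rate
        discounted_rewards[step] = cumulative_rewards
    return discounted_rewards

print(discount_rewards([10, 0, -50], discount_rate=0.8))  # [-22. -40. -50.]
```

Note how the large negative final reward drags down the scores of the earlier steps: that is how credit (or blame) flows backwards through an episode.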
import tensorflow as tf from tensorflow.contrib.layers import fully_connected tf.reset_default_graph() n_inputs = 4 n_hidden = 4 n_outputs = 1 learning_rate = 0.01 initializer = tf.contrib.layers.variance_scaling_initializer() X = tf.placeholder(tf.float32, shape=[None, n_inputs]) hidden = fully_connected(X, n_hidden, activation_fn=tf.nn.elu, weights_initializer=initializer) logits = fully_connected(hidden, n_outputs, activation_fn=None) outputs = tf.nn.sigmoid(logits) # probability of action 0 (left) p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs]) action = tf.multinomial(tf.log(p_left_and_right), num_samples=1) y = 1. - tf.to_float(action) cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits) optimizer = tf.train.AdamOptimizer(learning_rate) grads_and_vars = optimizer.compute_gradients(cross_entropy) gradients = [grad for grad, variable in grads_and_vars] gradient_placeholders = [] grads_and_vars_feed = [] for grad, variable in grads_and_vars: gradient_placeholder = tf.placeholder(tf.float32, shape=grad.get_shape()) gradient_placeholders.append(gradient_placeholder) grads_and_vars_feed.append((gradient_placeholder, variable)) training_op = optimizer.apply_gradients(grads_and_vars_feed) init = tf.global_variables_initializer() saver = tf.train.Saver() def discount_rewards(rewards, discount_rate): discounted_rewards = np.zeros(len(rewards)) cumulative_rewards = 0 for step in reversed(range(len(rewards))): cumulative_rewards = rewards[step] + cumulative_rewards * discount_rate discounted_rewards[step] = cumulative_rewards return discounted_rewards def discount_and_normalize_rewards(all_rewards, discount_rate): all_discounted_rewards = [discount_rewards(rewards, discount_rate) for rewards in all_rewards] flat_rewards = np.concatenate(all_discounted_rewards) reward_mean = flat_rewards.mean() reward_std = flat_rewards.std() return [(discounted_rewards - reward_mean)/reward_std for discounted_rewards in 
all_discounted_rewards] discount_rewards([10, 0, -50], discount_rate=0.8) discount_and_normalize_rewards([[10, 0, -50], [10, 20]], discount_rate=0.8) env = gym.make("CartPole-v0") n_games_per_update = 10 n_max_steps = 1000 n_iterations = 250 save_iterations = 10 discount_rate = 0.95 with tf.Session() as sess: init.run() for iteration in range(n_iterations): print("\rIteration: {}".format(iteration), end="") all_rewards = [] all_gradients = [] for game in range(n_games_per_update): current_rewards = [] current_gradients = [] obs = env.reset() for step in range(n_max_steps): action_val, gradients_val = sess.run([action, gradients], feed_dict={X: obs.reshape(1, n_inputs)}) obs, reward, done, info = env.step(action_val[0][0]) current_rewards.append(reward) current_gradients.append(gradients_val) if done: break all_rewards.append(current_rewards) all_gradients.append(current_gradients) all_rewards = discount_and_normalize_rewards(all_rewards, discount_rate=discount_rate) feed_dict = {} for var_index, gradient_placeholder in enumerate(gradient_placeholders): mean_gradients = np.mean([reward * all_gradients[game_index][step][var_index] for game_index, rewards in enumerate(all_rewards) for step, reward in enumerate(rewards)], axis=0) feed_dict[gradient_placeholder] = mean_gradients sess.run(training_op, feed_dict=feed_dict) if iteration % save_iterations == 0: saver.save(sess, "./my_policy_net_pg.ckpt") env.close() frames = render_policy_net("./my_policy_net_pg.ckpt", action, X, n_max_steps=1000) video = plot_animation(frames) plt.show()
[2017-02-17 11:06:16,047] Making new env: CartPole-v0
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Markov Chains
transition_probabilities = [ [0.7, 0.2, 0.0, 0.1], # from s0 to s0, s1, s2, s3 [0.0, 0.0, 0.9, 0.1], # from s1 to ... [0.0, 1.0, 0.0, 0.0], # from s2 to ... [0.0, 0.0, 0.0, 1.0], # from s3 to ... ] n_max_steps = 50 def print_sequence(start_state=0): current_state = start_state print("States:", end=" ") for step in range(n_max_steps): print(current_state, end=" ") if current_state == 3: break current_state = rnd.choice(range(4), p=transition_probabilities[current_state]) else: print("...", end="") print() for _ in range(10): print_sequence()
States: 0 0 3 States: 0 1 2 1 2 1 2 1 2 1 3 States: 0 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 3 States: 0 3 States: 0 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 3 States: 0 1 3 States: 0 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 ... States: 0 0 3 States: 0 0 0 1 2 1 2 1 3 States: 0 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 3
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Markov Decision Process
transition_probabilities = [ [[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]], # in s0, if action a0 then proba 0.7 to state s0 and 0.3 to state s1, etc. [[0.0, 1.0, 0.0], None, [0.0, 0.0, 1.0]], [None, [0.8, 0.1, 0.1], None], ] rewards = [ [[+10, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, -50]], [[0, 0, 0], [+40, 0, 0], [0, 0, 0]], ] possible_actions = [[0, 1, 2], [0, 2], [1]] def policy_fire(state): return [0, 2, 1][state] def policy_random(state): return rnd.choice(possible_actions[state]) def policy_safe(state): return [0, 0, 1][state] class MDPEnvironment(object): def __init__(self, start_state=0): self.start_state=start_state self.reset() def reset(self): self.total_rewards = 0 self.state = self.start_state def step(self, action): next_state = rnd.choice(range(3), p=transition_probabilities[self.state][action]) reward = rewards[self.state][action][next_state] self.state = next_state self.total_rewards += reward return self.state, reward def run_episode(policy, n_steps, start_state=0, display=True): env = MDPEnvironment() if display: print("States (+rewards):", end=" ") for step in range(n_steps): if display: if step == 10: print("...", end=" ") elif step < 10: print(env.state, end=" ") action = policy(env.state) state, reward = env.step(action) if display and step < 10: if reward: print("({})".format(reward), end=" ") if display: print("Total rewards =", env.total_rewards) return env.total_rewards for policy in (policy_fire, policy_random, policy_safe): all_totals = [] print(policy.__name__) for episode in range(1000): all_totals.append(run_episode(policy, n_steps=100, display=(episode<5))) print("Summary: mean={:.1f}, std={:1f}, min={}, max={}".format(np.mean(all_totals), np.std(all_totals), np.min(all_totals), np.max(all_totals))) print()
policy_fire States (+rewards): 0 (10) 0 (10) 0 1 (-50) 2 2 2 (40) 0 (10) 0 (10) 0 (10) ... Total rewards = 210 States (+rewards): 0 1 (-50) 2 (40) 0 (10) 0 (10) 0 1 (-50) 2 2 (40) 0 (10) ... Total rewards = 70 States (+rewards): 0 (10) 0 1 (-50) 2 (40) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) ... Total rewards = 70 States (+rewards): 0 1 (-50) 2 1 (-50) 2 (40) 0 (10) 0 1 (-50) 2 (40) 0 ... Total rewards = -10 States (+rewards): 0 1 (-50) 2 (40) 0 (10) 0 (10) 0 1 (-50) 2 (40) 0 (10) 0 (10) ... Total rewards = 290 Summary: mean=121.1, std=129.333766, min=-330, max=470 policy_random States (+rewards): 0 1 (-50) 2 1 (-50) 2 (40) 0 1 (-50) 2 2 (40) 0 ... Total rewards = -60 States (+rewards): 0 (10) 0 0 0 0 0 (10) 0 0 0 (10) 0 ... Total rewards = -30 States (+rewards): 0 1 1 (-50) 2 (40) 0 0 1 1 1 1 ... Total rewards = 10 States (+rewards): 0 (10) 0 (10) 0 0 0 0 1 (-50) 2 (40) 0 0 ... Total rewards = 0 States (+rewards): 0 0 (10) 0 1 (-50) 2 (40) 0 0 0 0 (10) 0 (10) ... Total rewards = 40 Summary: mean=-22.1, std=88.152740, min=-380, max=200 policy_safe States (+rewards): 0 1 1 1 1 1 1 1 1 1 ... Total rewards = 0 States (+rewards): 0 1 1 1 1 1 1 1 1 1 ... Total rewards = 0 States (+rewards): 0 (10) 0 (10) 0 (10) 0 1 1 1 1 1 1 ... Total rewards = 30 States (+rewards): 0 (10) 0 1 1 1 1 1 1 1 1 ... Total rewards = 10 States (+rewards): 0 1 1 1 1 1 1 1 1 1 ... Total rewards = 0 Summary: mean=22.3, std=26.244312, min=0, max=170
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Q-Learning Q-Learning will learn the optimal policy by watching the random policy play.
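The core of the algorithm is the Q-value update rule, Q(s, a) ← (1 − α) Q(s, a) + α (r + γ maxₐ′ Q(s′, a′)). Here is a minimal standalone sketch of a single update, using the same α and γ values as the training cell below (the state/action/reward numbers are made up for illustration):

```python
import numpy as np

alpha = 0.01  # learning rate
gamma = 0.99  # discount rate

def q_update(q_values, state, action, reward, next_state):
    # Q(s,a) <- (1-alpha)*Q(s,a) + alpha*(reward + gamma * max_a' Q(s',a'))
    target = reward + gamma * np.max(q_values[next_state])
    q_values[state, action] = (1 - alpha) * q_values[state, action] + alpha * target

q = np.zeros((3, 3))
q_update(q, state=0, action=0, reward=10, next_state=0)
print(q[0, 0])  # 0.01 * 10 = 0.1
```

Each observed transition nudges one Q-value a small step (size α) toward the reward plus the discounted value of the best next action; repeated over many transitions this converges toward the optimal Q-values.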
n_states = 3 n_actions = 3 n_steps = 20000 alpha = 0.01 gamma = 0.99 exploration_policy = policy_random q_values = np.full((n_states, n_actions), -np.inf) for state, actions in enumerate(possible_actions): q_values[state][actions]=0 env = MDPEnvironment() for step in range(n_steps): action = exploration_policy(env.state) state = env.state next_state, reward = env.step(action) next_value = np.max(q_values[next_state]) # greedy policy q_values[state, action] = (1-alpha)*q_values[state, action] + alpha*(reward + gamma * next_value) def optimal_policy(state): return np.argmax(q_values[state]) q_values all_totals = [] for episode in range(1000): all_totals.append(run_episode(optimal_policy, n_steps=100, display=(episode<5))) print("Summary: mean={:.1f}, std={:1f}, min={}, max={}".format(np.mean(all_totals), np.std(all_totals), np.min(all_totals), np.max(all_totals))) print()
States (+rewards): 0 (10) 0 (10) 0 1 (-50) 2 (40) 0 (10) 0 1 (-50) 2 (40) 0 (10) ... Total rewards = 230 States (+rewards): 0 (10) 0 (10) 0 (10) 0 1 (-50) 2 2 1 (-50) 2 (40) 0 (10) ... Total rewards = 90 States (+rewards): 0 1 (-50) 2 (40) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) ... Total rewards = 170 States (+rewards): 0 1 (-50) 2 (40) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) 0 (10) ... Total rewards = 220 States (+rewards): 0 1 (-50) 2 (40) 0 (10) 0 1 (-50) 2 (40) 0 (10) 0 (10) 0 (10) ... Total rewards = -50 Summary: mean=125.6, std=127.363464, min=-290, max=500
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Learning to play MsPacman using Deep Q-Learning
env = gym.make("MsPacman-v0") obs = env.reset() obs.shape env.action_space
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Preprocessing Preprocessing the images is optional but greatly speeds up training.
mspacman_color = np.array([210, 164, 74]).mean() def preprocess_observation(obs): img = obs[1:176:2, ::2] # crop and downsize img = img.mean(axis=2) # to greyscale img[img==mspacman_color] = 0 # Improve contrast img = (img - 128) / 128 # normalize from -1. to 1. return img.reshape(88, 80, 1) img = preprocess_observation(obs) plt.figure(figsize=(11, 7)) plt.subplot(121) plt.title("Original observation (160×210 RGB)") plt.imshow(obs) plt.axis("off") plt.subplot(122) plt.title("Preprocessed observation (88×80 greyscale)") plt.imshow(img.reshape(88, 80), interpolation="nearest", cmap="gray") plt.axis("off") save_fig("preprocessing_plot") plt.show()
_____no_output_____
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
Build DQN
tf.reset_default_graph() from tensorflow.contrib.layers import convolution2d, fully_connected input_height = 88 input_width = 80 input_channels = 1 conv_n_maps = [32, 64, 64] conv_kernel_sizes = [(8,8), (4,4), (3,3)] conv_strides = [4, 2, 1] conv_paddings = ["SAME"]*3 conv_activation = [tf.nn.relu]*3 n_hidden_inputs = 64 * 11 * 10 # conv3 has 64 maps of 11x10 each n_hidden = 512 hidden_activation = tf.nn.relu n_outputs = env.action_space.n initializer = tf.contrib.layers.variance_scaling_initializer() learning_rate = 0.01 def q_network(X_state, scope): prev_layer = X_state conv_layers = [] with tf.variable_scope(scope) as scope: for n_maps, kernel_size, stride, padding, activation in zip(conv_n_maps, conv_kernel_sizes, conv_strides, conv_paddings, conv_activation): prev_layer = convolution2d(prev_layer, num_outputs=n_maps, kernel_size=kernel_size, stride=stride, padding=padding, activation_fn=activation, weights_initializer=initializer) conv_layers.append(prev_layer) last_conv_layer_flat = tf.reshape(prev_layer, shape=[-1, n_hidden_inputs]) hidden = fully_connected(last_conv_layer_flat, n_hidden, activation_fn=hidden_activation, weights_initializer=initializer) outputs = fully_connected(hidden, n_outputs, activation_fn=None) trainable_vars = {var.name[len(scope.name):]: var for var in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope.name)} return outputs, trainable_vars X_state = tf.placeholder(tf.float32, shape=[None, input_height, input_width, input_channels]) actor_q_values, actor_vars = q_network(X_state, scope="q_networks/actor") # acts critic_q_values, critic_vars = q_network(X_state, scope="q_networks/critic") # learns copy_ops = [actor_var.assign(critic_vars[var_name]) for var_name, actor_var in actor_vars.items()] copy_critic_to_actor = tf.group(*copy_ops) with tf.variable_scope("train"): X_action = tf.placeholder(tf.int32, shape=[None]) y = tf.placeholder(tf.float32, shape=[None, 1]) q_value = tf.reduce_sum(critic_q_values * 
tf.one_hot(X_action, n_outputs), axis=1, keep_dims=True) cost = tf.reduce_mean(tf.square(y - q_value)) global_step = tf.Variable(0, trainable=False, name='global_step') optimizer = tf.train.AdamOptimizer(learning_rate) training_op = optimizer.minimize(cost, global_step=global_step) init = tf.global_variables_initializer() saver = tf.train.Saver() actor_vars from collections import deque replay_memory_size = 10000 replay_memory = deque([], maxlen=replay_memory_size) def sample_memories(batch_size): indices = rnd.permutation(len(replay_memory))[:batch_size] cols = [[], [], [], [], []] # state, action, reward, next_state, continue for idx in indices: memory = replay_memory[idx] for col, value in zip(cols, memory): col.append(value) cols = [np.array(col) for col in cols] return cols[0], cols[1], cols[2].reshape(-1, 1), cols[3], cols[4].reshape(-1, 1) eps_min = 0.05 eps_max = 1.0 eps_decay_steps = 50000 import sys def epsilon_greedy(q_values, step): epsilon = max(eps_min, eps_max - (eps_max-eps_min) * step/eps_decay_steps) if rnd.rand() < epsilon: return rnd.randint(n_outputs) # random action else: return np.argmax(q_values) # optimal action n_steps = 100000 # total number of training steps training_start = 1000 # start training after 1,000 game iterations training_interval = 3 # run a training step every 3 game iterations save_steps = 50 # save the model every 50 training steps copy_steps = 25 # copy the critic to the actor every 25 training steps discount_rate = 0.95 skip_start = 90 # Skip the start of every game (it's just waiting time). 
batch_size = 50 iteration = 0 # game iterations checkpoint_path = "./my_dqn.ckpt" done = True # env needs to be reset with tf.Session() as sess: if os.path.isfile(checkpoint_path): saver.restore(sess, checkpoint_path) else: init.run() while True: step = global_step.eval() if step >= n_steps: break iteration += 1 print("\rIteration {}\tTraining step {}/{} ({:.1f}%)".format(iteration, step, n_steps, step * 100 / n_steps), end="") if done: # game over, start again obs = env.reset() for skip in range(skip_start): # skip boring game iterations at the start of each game obs, reward, done, info = env.step(0) state = preprocess_observation(obs) # Actor evaluates what to do q_values = actor_q_values.eval(feed_dict={X_state: [state]}) action = epsilon_greedy(q_values, step) # Actor plays obs, reward, done, info = env.step(action) next_state = preprocess_observation(obs) # Let's memorize what happened replay_memory.append((state, action, reward, next_state, 1.0 - done)) state = next_state if iteration < training_start or iteration % training_interval != 0: continue # Critic learns X_state_val, X_action_val, rewards, X_next_state_val, continues = sample_memories(batch_size) next_q_values = actor_q_values.eval(feed_dict={X_state: X_next_state_val}) y_val = rewards + continues * discount_rate * np.max(next_q_values, axis=1, keepdims=True) training_op.run(feed_dict={X_state: X_state_val, X_action: X_action_val, y: y_val}) # Regularly copy critic to actor if step % copy_steps == 0: copy_critic_to_actor.run() # And save regularly if step % save_steps == 0: saver.save(sess, checkpoint_path)
Iteration 328653 Training step 100000/100000 (100.0%)
Apache-2.0
16_reinforcement_learning.ipynb
fsv20/Sergey
[![AnalyticsDojo](https://github.com/rpi-techfundamentals/spring2019-materials/blob/master/fig/final-logo.png?raw=1)](http://rpi.analyticsdojo.com)

Basic Text Feature Creation in Python (rpi.analyticsdojo.com)
!wget https://raw.githubusercontent.com/rpi-techfundamentals/spring2019-materials/master/input/train.csv
!wget https://raw.githubusercontent.com/rpi-techfundamentals/spring2019-materials/master/input/test.csv
import numpy as np
import pandas as pd
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
#Print to standard output, and see the results in the "log" section below after running your script
train.head()
#Print to standard output, and see the results in the "log" section below after running your script
train.describe()
train.dtypes
#Let's look at the age field. We can see "NaN" (which indicates missing values).
train["Age"]
#Now let's recode.
medianAge = train["Age"].median()
print("The Median age is:", medianAge, " years old.")
train["Age"] = train["Age"].fillna(medianAge)
#Option 2 all in one shot!
train["Age"] = train["Age"].fillna(train["Age"].median())
train["Age"]
#For Recoding Data, we can use what we know of selecting rows and columns
train["Embarked"] = train["Embarked"].fillna("S")
train.loc[train["Embarked"] == "S", "EmbarkedRecode"] = 0
train.loc[train["Embarked"] == "C", "EmbarkedRecode"] = 1
train.loc[train["Embarked"] == "Q", "EmbarkedRecode"] = 2
# We can also use something called a lambda function
# You can read more about the lambda function here.
#http://www.python-course.eu/lambda.php
gender_fn = lambda x: 0 if x == 'male' else 1
train['Gender'] = train['Sex'].map(gender_fn)
#or we can do in one shot
train['NameLength'] = train['Name'].map(lambda x: len(x))
train['Age2'] = train['Age'].map(lambda x: x*x)
train
#We can start to create little small functions that will find a string.
def has_title(name):
    for s in ['Mr.', 'Mrs.', 'Miss.', 'Dr.', 'Sir.']:
        if name.find(s) >= 0:
            return True
    return False
#Now we are using that separate function in another function.
title_fn = lambda x: 1 if has_title(x) else 0
#Finally, we call the function for name
train['Title'] = train['Name'].map(title_fn)
test['Title'] = test['Name'].map(title_fn)
test
#Writing to File
#Note: test.csv has no 'Survived' column, which triggers the FutureWarning below
submission = pd.DataFrame(test.loc[:, ['PassengerId', 'Survived']])
#Any files you save will be available in the output tab below
submission.to_csv('submission.csv', index=False)
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:1: FutureWarning: Passing list-likes to .loc or [] with any missing label will raise KeyError in the future, you can use .reindex() as an alternative. See the documentation here: http://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike """Entry point for launching an IPython kernel. /usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py:1367: FutureWarning: Passing list-likes to .loc or [] with any missing label will raise KeyError in the future, you can use .reindex() as an alternative. See the documentation here: http://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike return self._getitem_tuple(key)
MIT
site/_build/jupyter_execute/notebooks/08-intro-nlp/01-titanic-features.ipynb
rpi-techfundamentals/spring2020_website
Warding Tied to Objectives

Repeat the previous exercise, but add the status of towers at the time of each ward. Code here is incorporated into a class to make things smoother.
import pandas as pd import numpy as np import matplotlib.pyplot as plt import json import matplotlib.patches as patches import os data_path = os.path.join('data_obj', 'data_April_09_to_May_01.json') df = pd.read_json(data_path) df.head() os.listdir('data_obj') df.shape df['match_id'].nunique()
_____no_output_____
MIT
04_Extract_Organize_Objectives.ipynb
NadimKawwa/DOTAWardFinder
We have more entries than unique match IDs. We can reduce work and avoid redundancy by parsing through unique match IDs to extract the objectives.
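The deduplication idea can be sketched on a toy frame (hypothetical values, same `drop_duplicates` call used later in this notebook):

```python
import pandas as pd

# toy frame with duplicated match ids (hypothetical values)
df_toy = pd.DataFrame({'match_id': [10, 10, 11, 12, 12],
                       'objectives': [[], [], [], [], []]})

# keep one row per match id so each log is parsed only once
df_unique = df_toy.drop_duplicates(subset='match_id', keep='first')
print(df_unique['match_id'].tolist())  # [10, 11, 12]
```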
df['match_id'][52]
#look inside an objectives log column
df['objectives'][52]
for item in df['objectives'][52]:
    print(item['type'])
for item in df['objectives'][0]:
    if item['type'] == 'building_kill':
        print(item['key'])
arr = [{'bot1': 999, 'bot2': 55}, {'bot2': 100, 'bot3': 300}]
pd.DataFrame(arr)

def getObjectiveTimes(match_id, log):
    """
    Reads the objectives log associated with a match and returns a dict
    mapping each objective (building kill or Roshan kill) to its time.
    """
    #empty dict
    d = {}
    #keep track of how many Roshan kills
    i = 0
    #store the match id
    d['match_id'] = match_id
    #Extract buildings
    for item in log:
        #check if objective tied to buildings
        if item['type'] == 'building_kill':
            d[item['key']] = item['time']
        #check for ROSHAN time killed
        elif item['type'] == 'CHAT_MESSAGE_ROSHAN_KILL':
            #has Roshan been killed before?
            if 'ROSHAN_0' not in d:
                d['ROSHAN_0'] = item['time']
            else:
                i += 1
                name = 'ROSHAN_' + str(i)
                d[name] = item['time']
    return d

#test it out
getObjectiveTimes(123,  #arbitrary match ID
                  df['objectives'][52])

def getObjectiveDataframe(df):
    #filter by keeping unique match ids
    df_match = df.drop_duplicates(subset='match_id', keep='first')
    obj_arr = []
    for row in df_match.itertuples(index=False):
        d = getObjectiveTimes(row.match_id, row.objectives)
        obj_arr.append(d)
    return pd.DataFrame(obj_arr)

a = getObjectiveDataframe(df)
a.head()
a.shape
_____no_output_____
MIT
04_Extract_Organize_Objectives.ipynb
NadimKawwa/DOTAWardFinder
Neuromatch Academy: Week 1, Day 5, Tutorial 3: Dimensionality Reduction and Reconstruction

__Content creators:__ Alex Cayco Gajic, John Murray

__Content reviewers:__ Roozbeh Farhoudi, Matt Krause, Spiros Chavlis, Richard Gao, Michael Waskom

--- Tutorial Objectives

In this notebook we'll learn to apply PCA for dimensionality reduction, using a classic dataset that is often used to benchmark machine learning algorithms: MNIST. We'll also learn how to use PCA for reconstruction and denoising.

Overview:
- Perform PCA on MNIST
- Calculate the variance explained
- Reconstruct data with different numbers of PCs
- (Bonus) Examine denoising using PCA

You can learn more about the MNIST dataset [here](https://en.wikipedia.org/wiki/MNIST_database).
# @title Video 1: PCA for dimensionality reduction from IPython.display import YouTubeVideo video = YouTubeVideo(id="oO0bbInoO_0", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video
Video available at https://youtube.com/watch?v=oO0bbInoO_0
CC-BY-4.0
tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial3.ipynb
simpleParadox/course-content
--- Setup

Run these cells to get the tutorial started.
# Imports import numpy as np import matplotlib.pyplot as plt # @title Figure Settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # @title Helper Functions def plot_variance_explained(variance_explained): """ Plots eigenvalues. Args: variance_explained (numpy array of floats) : Vector of variance explained for each PC Returns: Nothing. """ plt.figure() plt.plot(np.arange(1, len(variance_explained) + 1), variance_explained, '--k') plt.xlabel('Number of components') plt.ylabel('Variance explained') plt.show() def plot_MNIST_reconstruction(X, X_reconstructed): """ Plots 9 images in the MNIST dataset side-by-side with the reconstructed images. Args: X (numpy array of floats) : Data matrix each column corresponds to a different random variable X_reconstructed (numpy array of floats) : Data matrix each column corresponds to a different random variable Returns: Nothing. """ plt.figure() ax = plt.subplot(121) k = 0 for k1 in range(3): for k2 in range(3): k = k + 1 plt.imshow(np.reshape(X[k, :], (28, 28)), extent=[(k1 + 1) * 28, k1 * 28, (k2 + 1) * 28, k2 * 28], vmin=0, vmax=255) plt.xlim((3 * 28, 0)) plt.ylim((3 * 28, 0)) plt.tick_params(axis='both', which='both', bottom=False, top=False, labelbottom=False) ax.set_xticks([]) ax.set_yticks([]) plt.title('Data') plt.clim([0, 250]) ax = plt.subplot(122) k = 0 for k1 in range(3): for k2 in range(3): k = k + 1 plt.imshow(np.reshape(np.real(X_reconstructed[k, :]), (28, 28)), extent=[(k1 + 1) * 28, k1 * 28, (k2 + 1) * 28, k2 * 28], vmin=0, vmax=255) plt.xlim((3 * 28, 0)) plt.ylim((3 * 28, 0)) plt.tick_params(axis='both', which='both', bottom=False, top=False, labelbottom=False) ax.set_xticks([]) ax.set_yticks([]) plt.clim([0, 250]) plt.title('Reconstructed') plt.tight_layout() def plot_MNIST_sample(X): """ Plots 9 images in the MNIST dataset. 
Args: X (numpy array of floats) : Data matrix each column corresponds to a different random variable Returns: Nothing. """ fig, ax = plt.subplots() k = 0 for k1 in range(3): for k2 in range(3): k = k + 1 plt.imshow(np.reshape(X[k, :], (28, 28)), extent=[(k1 + 1) * 28, k1 * 28, (k2+1) * 28, k2 * 28], vmin=0, vmax=255) plt.xlim((3 * 28, 0)) plt.ylim((3 * 28, 0)) plt.tick_params(axis='both', which='both', bottom=False, top=False, labelbottom=False) plt.clim([0, 250]) ax.set_xticks([]) ax.set_yticks([]) plt.show() def plot_MNIST_weights(weights): """ Visualize PCA basis vector weights for MNIST. Red = positive weights, blue = negative weights, white = zero weight. Args: weights (numpy array of floats) : PCA basis vector Returns: Nothing. """ fig, ax = plt.subplots() cmap = plt.cm.get_cmap('seismic') plt.imshow(np.real(np.reshape(weights, (28, 28))), cmap=cmap) plt.tick_params(axis='both', which='both', bottom=False, top=False, labelbottom=False) plt.clim(-.15, .15) plt.colorbar(ticks=[-.15, -.1, -.05, 0, .05, .1, .15]) ax.set_xticks([]) ax.set_yticks([]) plt.show() def add_noise(X, frac_noisy_pixels): """ Randomly corrupts a fraction of the pixels by setting them to random values. Args: X (numpy array of floats) : Data matrix frac_noisy_pixels (scalar) : Fraction of noisy pixels Returns: (numpy array of floats) : Data matrix + noise """ X_noisy = np.reshape(X, (X.shape[0] * X.shape[1])) N_noise_ixs = int(X_noisy.shape[0] * frac_noisy_pixels) noise_ixs = np.random.choice(X_noisy.shape[0], size=N_noise_ixs, replace=False) X_noisy[noise_ixs] = np.random.uniform(0, 255, noise_ixs.shape) X_noisy = np.reshape(X_noisy, (X.shape[0], X.shape[1])) return X_noisy def change_of_basis(X, W): """ Projects data onto a new basis. 
Args: X (numpy array of floats) : Data matrix each column corresponding to a different random variable W (numpy array of floats) : new orthonormal basis columns correspond to basis vectors Returns: (numpy array of floats) : Data matrix expressed in new basis """ Y = np.matmul(X, W) return Y def get_sample_cov_matrix(X): """ Returns the sample covariance matrix of data X. Args: X (numpy array of floats) : Data matrix each column corresponds to a different random variable Returns: (numpy array of floats) : Covariance matrix """ X = X - np.mean(X, 0) cov_matrix = 1 / X.shape[0] * np.matmul(X.T, X) return cov_matrix def sort_evals_descending(evals, evectors): """ Sorts eigenvalues and eigenvectors in decreasing order. Also aligns first two eigenvectors to be in first two quadrants (if 2D). Args: evals (numpy array of floats) : Vector of eigenvalues evectors (numpy array of floats) : Corresponding matrix of eigenvectors each column corresponds to a different eigenvalue Returns: (numpy array of floats) : Vector of eigenvalues after sorting (numpy array of floats) : Matrix of eigenvectors after sorting """ index = np.flip(np.argsort(evals)) evals = evals[index] evectors = evectors[:, index] if evals.shape[0] == 2: if np.arccos(np.matmul(evectors[:, 0], 1 / np.sqrt(2) * np.array([1, 1]))) > np.pi / 2: evectors[:, 0] = -evectors[:, 0] if np.arccos(np.matmul(evectors[:, 1], 1 / np.sqrt(2)*np.array([-1, 1]))) > np.pi / 2: evectors[:, 1] = -evectors[:, 1] return evals, evectors def pca(X): """ Performs PCA on multivariate data. 
Eigenvalues are sorted in decreasing order Args: X (numpy array of floats) : Data matrix each column corresponds to a different random variable Returns: (numpy array of floats) : Data projected onto the new basis (numpy array of floats) : Vector of eigenvalues (numpy array of floats) : Corresponding matrix of eigenvectors """ X = X - np.mean(X, 0) cov_matrix = get_sample_cov_matrix(X) evals, evectors = np.linalg.eigh(cov_matrix) evals, evectors = sort_evals_descending(evals, evectors) score = change_of_basis(X, evectors) return score, evectors, evals def plot_eigenvalues(evals, limit=True): """ Plots eigenvalues. Args: (numpy array of floats) : Vector of eigenvalues Returns: Nothing. """ plt.figure() plt.plot(np.arange(1, len(evals) + 1), evals, 'o-k') plt.xlabel('Component') plt.ylabel('Eigenvalue') plt.title('Scree plot') if limit: plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial3.ipynb
simpleParadox/course-content
--- Section 1: Perform PCA on MNIST

The MNIST dataset consists of 70,000 images of individual handwritten digits. Each image is a 28x28 pixel grayscale image. For convenience, each 28x28 pixel image is often unravelled into a single 784 (=28*28) element vector, so that the whole dataset is represented as a 70,000 x 784 matrix. Each row represents a different image, and each column represents a different pixel. Run the following cell to load the MNIST dataset and plot the first nine images.
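The 28x28 to 784 unravelling described above is lossless; a minimal sketch on a made-up image:

```python
import numpy as np

# a fake 28x28 "image" (just a ramp of pixel values, for illustration)
img = np.arange(28 * 28, dtype=float).reshape(28, 28)

# unravel to a 784-element row vector, as in the MNIST data matrix
flat = img.reshape(-1)
print(flat.shape)  # (784,)

# reshaping back recovers the original image exactly
assert np.array_equal(flat.reshape(28, 28), img)
```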
from sklearn.datasets import fetch_openml mnist = fetch_openml(name='mnist_784') X = mnist.data plot_MNIST_sample(X)
_____no_output_____
CC-BY-4.0
tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial3.ipynb
simpleParadox/course-content
The MNIST dataset has an extrinsic dimensionality of 784, much higher than the 2-dimensional examples used in the previous tutorials! To make sense of this data, we'll use dimensionality reduction. But first, we need to determine the intrinsic dimensionality $K$ of the data. One way to do this is to look for an "elbow" in the scree plot, to determine which eigenvalues are significant.

Exercise 1: Scree plot of MNIST

In this exercise you will examine the scree plot of the MNIST dataset.

**Steps:**
- Perform PCA on the dataset and examine the scree plot.
- When do the eigenvalues appear (by eye) to reach zero? (**Hint:** use `plt.xlim` to zoom into a section of the plot).
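The "elbow" behaviour can be sketched on synthetic data with a known low intrinsic dimensionality (all sizes and noise levels here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# 10-d data generated from only 2 latent directions, plus small isotropic noise
W = rng.normal(size=(10, 2))
X = rng.normal(size=(5000, 2)) @ W.T + 0.1 * rng.normal(size=(5000, 10))

# eigenvalues of the sample covariance, sorted in decreasing order
evals = np.sort(np.linalg.eigvalsh(np.cov(X.T)))[::-1]

# the scree plot "elbow": two large eigenvalues, the rest near the noise floor
assert evals[1] > 10 * evals[2]
```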
help(pca) help(plot_eigenvalues) ################################################# ## TO DO for students: perform PCA and plot the eigenvalues ################################################# # perform PCA # score, evectors, evals = ... # plot the eigenvalues # plot_eigenvalues(evals, limit=False) # plt.xlim(...) # limit x-axis up to 100 for zooming # to_remove solution # perform PCA score, evectors, evals = pca(X) # plot the eigenvalues with plt.xkcd(): plot_eigenvalues(evals, limit=False) plt.xlim([0, 100]) # limit x-axis up to 100 for zooming
_____no_output_____
CC-BY-4.0
tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial3.ipynb
simpleParadox/course-content
--- Section 2: Calculate the variance explained

The scree plot suggests that most of the eigenvalues are near zero, with fewer than 100 having large values. Another common way to determine the intrinsic dimensionality is by considering the variance explained. This can be examined with a cumulative plot of the fraction of the total variance explained by the top $K$ components, i.e.,

\begin{equation}
\text{var explained} = \frac{\sum_{i=1}^K \lambda_i}{\sum_{i=1}^N \lambda_i}
\end{equation}

The intrinsic dimensionality is often quantified by the $K$ necessary to explain a large proportion of the total variance of the data (often a defined threshold, e.g., 90%).

Exercise 2: Plot the explained variance

In this exercise you will plot the explained variance.

**Steps:**
- Fill in the function below to calculate the fraction of variance explained as a function of the number of principal components. **Hint:** use `np.cumsum`.
- Plot the variance explained using `plot_variance_explained`.

**Questions:**
- How many principal components are required to explain 90% of the variance?
- How does the intrinsic dimensionality of this dataset compare to its extrinsic dimensionality?
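The cumulative-variance formula above can be checked on a small made-up eigenvalue vector:

```python
import numpy as np

# hypothetical eigenvalues, already sorted in decreasing order
evals = np.array([4.0, 3.0, 2.0, 1.0])

# fraction of total variance explained by the top K components, for each K
variance_explained = np.cumsum(evals) / np.sum(evals)
print(variance_explained)  # 0.4, 0.7, 0.9, 1.0

# the top 3 of 4 components reach the common 90% threshold here
assert np.isclose(variance_explained[2], 0.9)
```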
help(plot_variance_explained)

def get_variance_explained(evals):
  """
  Calculates variance explained from the eigenvalues.

  Args:
    evals (numpy array of floats) : Vector of eigenvalues

  Returns:
    (numpy array of floats)       : Vector of variance explained

  """

  #################################################
  ## TO DO for students: calculate the explained variance using the equation
  ## from Section 2.
  # Comment once you've filled in the function
  raise NotImplementedError("Student exercise: calculate explained variance!")
  #################################################

  # cumulatively sum the eigenvalues
  csum = ...
  # normalize by the sum of eigenvalues
  variance_explained = ...

  return variance_explained


#################################################
## TO DO for students: call the function and plot the variance explained
#################################################

# calculate the variance explained
variance_explained = ...

# Uncomment to plot the variance explained
# plot_variance_explained(variance_explained)

# to_remove solution

def get_variance_explained(evals):
  """
  Calculates variance explained from the eigenvalues.

  Args:
    evals (numpy array of floats) : Vector of eigenvalues

  Returns:
    (numpy array of floats)       : Vector of variance explained

  """

  # cumulatively sum the eigenvalues
  csum = np.cumsum(evals)
  # normalize by the sum of eigenvalues
  variance_explained = csum / np.sum(evals)

  return variance_explained


# calculate the variance explained
variance_explained = get_variance_explained(evals)

with plt.xkcd():
  plot_variance_explained(variance_explained)
_____no_output_____
CC-BY-4.0
tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial3.ipynb
simpleParadox/course-content
--- Section 3: Reconstruct data with different numbers of PCs

Now we have seen that the top 100 or so principal components of the data can explain most of the variance. We can use this fact to perform *dimensionality reduction*, i.e., by storing the data using only 100 components rather than the samples of all 784 pixels. Remarkably, we will be able to reconstruct much of the structure of the data using only the top 100 components. To see this, recall that to perform PCA we projected the data $\bf X$ onto the eigenvectors of the covariance matrix:

\begin{equation}
\bf S = X W
\end{equation}

Since $\bf W$ is an orthogonal matrix, ${\bf W}^{-1} = {\bf W}^T$. So by multiplying by ${\bf W}^T$ on each side we can rewrite this equation as

\begin{equation}
{\bf X = S W}^T.
\end{equation}

This now gives us a way to reconstruct the data matrix from the scores and loadings. To reconstruct the data from a low-dimensional approximation, we just have to truncate these matrices. Let ${\bf S}_{1:K}$ and ${\bf W}_{1:K}$ denote the matrices keeping only the first $K$ columns. Then our reconstruction is:

\begin{equation}
{\bf \hat X = S}_{1:K} ({\bf W}_{1:K})^T.
\end{equation}

Exercise 3: Data reconstruction

Fill in the function below to reconstruct the data using different numbers of principal components.

**Steps:**
* Fill in the following function to reconstruct the data based on the weights and scores. Don't forget to add the mean!
* Make sure your function works by reconstructing the data with all $K=784$ components. The two images should look identical.
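The identity ${\bf X = S W}^T$ with all components kept can be verified numerically on small random data (a sketch, independent of MNIST):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Xc = X - X.mean(axis=0)               # centered data

# eigenvectors of the sample covariance form an orthonormal basis W
_, W = np.linalg.eigh(np.cov(Xc.T))
S = Xc @ W                            # scores

# with all K=5 components, S W^T reconstructs the centered data exactly
assert np.allclose(S @ W.T, Xc)

# truncating to fewer components gives an approximation with some error
S2, W2 = S[:, -2:], W[:, -2:]         # eigh returns ascending order; take top 2
err = np.mean((S2 @ W2.T - Xc) ** 2)
assert err > 0
```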
help(plot_MNIST_reconstruction)

def reconstruct_data(score, evectors, X_mean, K):
  """
  Reconstruct the data based on the top K components.

  Args:
    score (numpy array of floats)    : Score matrix
    evectors (numpy array of floats) : Matrix of eigenvectors
    X_mean (numpy array of floats)   : Vector corresponding to data mean
    K (scalar)                       : Number of components to include

  Returns:
    (numpy array of floats)          : Matrix of reconstructed data

  """

  #################################################
  ## TO DO for students: Reconstruct the original data in X_reconstructed
  # Comment once you've filled in the function
  raise NotImplementedError("Student exercise: reconstruct the data!")
  #################################################

  # Reconstruct the data from the score and eigenvectors
  # Don't forget to add the mean!!
  X_reconstructed = ...

  return X_reconstructed


K = 784

#################################################
## TO DO for students: Calculate the mean and call the function, then plot
## the original and the reconstructed data
#################################################

# Reconstruct the data based on all components
X_mean = ...
X_reconstructed = ...

# Plot the data and reconstruction
# plot_MNIST_reconstruction(X, X_reconstructed)

# to_remove solution

def reconstruct_data(score, evectors, X_mean, K):
  """
  Reconstruct the data based on the top K components.

  Args:
    score (numpy array of floats)    : Score matrix
    evectors (numpy array of floats) : Matrix of eigenvectors
    X_mean (numpy array of floats)   : Vector corresponding to data mean
    K (scalar)                       : Number of components to include

  Returns:
    (numpy array of floats)          : Matrix of reconstructed data

  """

  # Reconstruct the data from the score and eigenvectors
  # Don't forget to add the mean!!
  X_reconstructed = np.matmul(score[:, :K], evectors[:, :K].T) + X_mean

  return X_reconstructed


K = 784

# Reconstruct the data based on all components
X_mean = np.mean(X, 0)
X_reconstructed = reconstruct_data(score, evectors, X_mean, K)

# Plot the data and reconstruction
with plt.xkcd():
  plot_MNIST_reconstruction(X, X_reconstructed)
_____no_output_____
CC-BY-4.0
tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial3.ipynb
simpleParadox/course-content
Interactive Demo: Reconstruct the data matrix using different numbers of PCs

Now run the code below and experiment with the slider to reconstruct the data matrix using different numbers of principal components.

**Steps:**
* How many principal components are necessary to reconstruct the numbers (by eye)? How does this relate to the intrinsic dimensionality of the data?
* Do you see any information in the data with only a single principal component?
# @title # @markdown Make sure you execute this cell to enable the widget! def refresh(K=100): X_reconstructed = reconstruct_data(score, evectors, X_mean, K) plot_MNIST_reconstruction(X, X_reconstructed) plt.title('Reconstructed, K={}'.format(K)) _ = widgets.interact(refresh, K=(1, 784, 10))
_____no_output_____
CC-BY-4.0
tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial3.ipynb
simpleParadox/course-content
Exercise 4: Visualization of the weights

Next, let's take a closer look at the first principal component by visualizing its corresponding weights.

**Steps:**
* Use `plot_MNIST_weights` to visualize the weights of the first basis vector.
* What structure do you see? Which pixels have a strong positive weighting? Which have a strong negative weighting? What kinds of images would this basis vector differentiate?
* Try visualizing the second and third basis vectors. Do you see any structure? What about the 100th basis vector? 500th? 700th?
help(plot_MNIST_weights) ################################################# ## TO DO for students: plot the weights calling the plot_MNIST_weights function ################################################# # Plot the weights of the first principal component # plot_MNIST_weights(...) # to_remove solution # Plot the weights of the first principal component with plt.xkcd(): plot_MNIST_weights(evectors[:, 0])
_____no_output_____
CC-BY-4.0
tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial3.ipynb
simpleParadox/course-content
--- Summary

* In this tutorial, we learned how to use PCA for dimensionality reduction by selecting the top principal components. This can be useful as the intrinsic dimensionality ($K$) is often less than the extrinsic dimensionality ($N$) in neural data. $K$ can be inferred by choosing the number of eigenvalues necessary to capture some fraction of the variance.
* We also learned how to reconstruct an approximation of the original data using the top $K$ principal components. In fact, an alternate formulation of PCA is to find the $K$-dimensional space that minimizes the reconstruction error.
* Noise tends to inflate the apparent intrinsic dimensionality; however, the higher components reflect noise rather than new structure in the data. PCA can be used for denoising data by removing noisy higher components.
* In MNIST, the weights corresponding to the first principal component appear to discriminate between a 0 and 1. We will discuss the implications of this for data visualization in the following tutorial.

--- Bonus: Examine denoising using PCA

In this lecture, we saw that PCA finds an optimal low-dimensional basis to minimize the reconstruction error. Because of this property, PCA can be useful for denoising corrupted samples of the data.

Exercise 5: Add noise to the data

In this exercise you will add salt-and-pepper noise to the original data and see how that affects the eigenvalues.

**Steps:**
- Use the function `add_noise` to add noise to 20% of the pixels.
- Then, perform PCA and plot the variance explained. How many principal components are required to explain 90% of the variance? How does this compare to the original data?
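The kind of corruption `add_noise` applies can be sketched on a tiny stand-in array (sizes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.zeros((4, 9))                      # tiny stand-in for the data matrix
flat = X.reshape(-1)                      # a view: writes go through to X

# corrupt 20% of the pixels with uniform random values, as add_noise does
n_noise = int(flat.size * 0.2)
ix = rng.choice(flat.size, size=n_noise, replace=False)
flat[ix] = rng.uniform(0, 255, size=n_noise)

# exactly n_noise distinct entries were touched
assert (X != 0).sum() == n_noise
```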
help(add_noise) ################################################################### # Insert your code here to: # Add noise to the data # Plot noise-corrupted data # Perform PCA on the noisy data # Calculate and plot the variance explained ################################################################### np.random.seed(2020) # set random seed X_noisy = ... # score_noisy, evectors_noisy, evals_noisy = ... # variance_explained_noisy = ... # plot_MNIST_sample(X_noisy) # plot_variance_explained(variance_explained_noisy) # to_remove solution np.random.seed(2020) # set random seed X_noisy = add_noise(X, .2) score_noisy, evectors_noisy, evals_noisy = pca(X_noisy) variance_explained_noisy = get_variance_explained(evals_noisy) with plt.xkcd(): plot_MNIST_sample(X_noisy) plot_variance_explained(variance_explained_noisy)
_____no_output_____
CC-BY-4.0
tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial3.ipynb
simpleParadox/course-content
Exercise 6: Denoising

Next, use PCA to perform denoising by projecting the noise-corrupted data onto the basis vectors found from the original dataset. By taking the top $K$ components of this projection, we can reduce noise in dimensions orthogonal to the $K$-dimensional latent space.

**Steps:**
- Subtract the mean of the noise-corrupted data.
- Project the data onto the basis found with the original dataset (`evectors`, not `evectors_noisy`) and take the top $K$ components.
- Reconstruct the data as normal, using the top 50 components.
- Play around with the amount of noise and K to build intuition.
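The projection step described above can be sketched with a known basis (synthetic 5-d data whose single latent direction is assumed known, unlike the MNIST case where it is estimated):

```python
import numpy as np

rng = np.random.default_rng(2)
# 1-d signal embedded in 5-d space, plus isotropic noise
basis = np.eye(5)
signal = rng.normal(size=(200, 1)) @ basis[:1]
noisy = signal + 0.3 * rng.normal(size=signal.shape)

# denoise by projecting onto the top-1 direction and back
denoised = (noisy @ basis[:1].T) @ basis[:1]

# the projection removes noise in the 4 orthogonal dimensions
err_noisy = np.mean((noisy - signal) ** 2)
err_denoised = np.mean((denoised - signal) ** 2)
assert err_denoised < err_noisy
```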
################################################################### # Insert your code here to: # Subtract the mean of the noise-corrupted data # Project onto the original basis vectors evectors # Reconstruct the data using the top 50 components # Plot the result ################################################################### X_noisy_mean = ... projX_noisy = ... X_reconstructed = ... # plot_MNIST_reconstruction(X_noisy, X_reconstructed) # to_remove solution X_noisy_mean = np.mean(X_noisy, 0) projX_noisy = np.matmul(X_noisy - X_noisy_mean, evectors) X_reconstructed = reconstruct_data(projX_noisy, evectors, X_noisy_mean, 50) with plt.xkcd(): plot_MNIST_reconstruction(X_noisy, X_reconstructed)
_____no_output_____
CC-BY-4.0
tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial3.ipynb
simpleParadox/course-content
Quick Start

**A tutorial on Renormalized Mutual Information**

We describe in detail the implementation of RMI estimation in the very simple case of a Gaussian distribution. Of course, in this case the optimal feature is given by Principal Component Analysis.
import numpy as np # parameters of the Gaussian distribution mu = [0,0] sigma = [[1, 0.5],[0.5,2]] # extract the samples N_samples = 100000 samples = np.random.multivariate_normal(mu, sigma, N_samples )
_____no_output_____
MIT
quick-start.ipynb
XudongWang97/rmi
Visualize the distribution with a 2D histogram
import matplotlib.pyplot as plt plt.figure() plt.hist2d(*samples.T, bins=100, cmap=plt.cm.binary) plt.gca().set_aspect("equal") plt.xlabel("$x_1$") plt.ylabel("$x_2$") plt.title("$P_x(x)$") plt.show()
_____no_output_____
MIT
quick-start.ipynb
XudongWang97/rmi
Estimate Renormalized Mutual Information of a feature

Now we would like to find a one-dimensional function $f(x_1,x_2)$ to describe this 2d distribution.

Simplest feature

For example, we could consider ignoring one of the variables:
def f(x): # feature # shape [N_samples, N_features=1] return x[:,0][...,None] def grad_f(x): # gradient # shape [N_samples, N_features=1, N_x=2] grad_f = np.zeros([len(x),1,2]) grad_f[...,0] = 1 return grad_f def feat_and_grad(x): return f(x), grad_f(x)
_____no_output_____
MIT
quick-start.ipynb
XudongWang97/rmi
Let's plot it on top of the distribution
# Range of the plot xmin = -4 xmax = 4 # Number of points in the grid N = 100 # We evaluate the feature on a grid x_linspace = np.linspace(xmin, xmax, N) x1_grid, x2_grid = np.meshgrid(x_linspace, x_linspace, indexing='ij') x_points = np.array([x1_grid.flatten(), x2_grid.flatten()]).T feature = f(x_points) gradient = grad_f(x_points) plt.figure() plt.title("Feature contours") plt.xlabel(r"$x_1$") plt.ylabel(r"$x_2$") plt.gca().set_aspect('equal') # Draw the input distribution on the background plt.hist2d(*samples.T, bins=100, cmap=plt.cm.binary) # Draw the contour lines of the extracted feature plt.xlim([-4,4]) plt.ylim([-4,4]) plt.contour(x1_grid, x2_grid, feature.reshape([N,N]),15, linewidths=4, cmap=plt.cm.Blues) plt.colorbar() plt.show()
_____no_output_____
MIT
quick-start.ipynb
XudongWang97/rmi
$f(x)=x_1$ is clearly a linear function that ignores $x_2$ and increases in the $x_1$ direction.

**How much information does it give us on $x$?**

If we used common mutual information, it would be $\infty$, because $f$ is a deterministic function, and $H(y|x) = -\log \delta(0)$. Let's estimate the renormalized mutual information instead.
import rmi.estimation as inf samples = np.random.multivariate_normal(mu, sigma, N_samples ) feature = f(samples) gradient = grad_f(samples) RMI = inf.RenormalizedMutualInformation(feature, gradient) print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI)
Renormalized Mutual Information (x,f(x)): 1.42
MIT
quick-start.ipynb
XudongWang97/rmi
Please note that the plot above evaluates the feature on a uniform grid, but to estimate RMI the feature should be calculated on $x$ **sampled** from the $x$ distribution. In particular, we have
p_y, delta_y = inf.produce_P(feature)
entropy = inf.Entropy(p_y, delta_y)
fterm = inf.RegTerm(gradient)
print("Entropy\t %2.2f" % entropy)
print("Fterm\t %2.2f" % fterm)
print("Renormalized Mutual Information (x,f(x)): %2.2f" % (entropy + fterm))
Entropy  1.42
Fterm  -0.00
Renormalized Mutual Information (x,f(x)): 1.42
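Why sampling matters can be shown directly. The sketch below is an illustration under an assumption (a 2D standard normal input, and a plain histogram entropy estimator rather than the `rmi` helpers): evaluating $f(x)=x_1$ on a uniform grid makes the feature values look uniform, so the entropy estimate comes out as $\log 8 \approx 2.08$ instead of the correct $\tfrac{1}{2}\log(2\pi e) \approx 1.42$.

```python
import numpy as np

rng = np.random.default_rng(0)

def hist_entropy(y, bins=50):
    # Plug-in differential entropy estimate from a uniform histogram.
    counts, edges = np.histogram(y, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) + np.log(edges[1] - edges[0])

# f(x) = x1 evaluated on a uniform grid over [-4, 4]^2: the feature values
# are uniformly distributed, so the estimate is log(8) ~ 2.08, which is wrong.
grid = np.linspace(-4, 4, 100)
x1g, x2g = np.meshgrid(grid, grid, indexing='ij')
h_grid = hist_entropy(x1g.flatten())

# Evaluated on points sampled from the (assumed standard normal) distribution,
# the estimate approaches 0.5*log(2*pi*e) ~ 1.42, the correct value.
samples = rng.standard_normal((100_000, 2))
h_samples = hist_entropy(samples[:, 0])
print(h_grid, h_samples)
```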
Renormalized Mutual Information is the sum of two terms:
- Entropy
- RegTerm

Reparametrization invariance

Do we gain information if we increase the variance of the feature? For example, let's rescale our feature. Clearly, the information on $x$ should remain the same:
scale_factor = 4
feature *= scale_factor
gradient *= scale_factor

RMI = inf.RenormalizedMutualInformation(feature, gradient)
print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI)

p_y, delta_y = inf.produce_P(feature)
entropy = inf.Entropy(p_y, delta_y)
fterm = inf.RegTerm(gradient)
print("Entropy\t %2.2f" % entropy)
print("Fterm\t %2.2f" % fterm)
Renormalized Mutual Information (x,f(x)): 1.42
Entropy  2.80
Fterm  -1.39
Let's now try a non-linear transformation. As long as it is invertible, we will get the same RMI:
# For example
y_lin = np.linspace(-4, 4, 100)
f_lin = y_lin**3 + 5*y_lin

plt.figure()
plt.title("Reparametrization function")
plt.plot(y_lin, f_lin)
plt.show()

feature_new = feature**3 + 5*feature
gradient_new = 3*feature[...,None]**2*gradient + 5*gradient  # chain rule...

RMI = inf.RenormalizedMutualInformation(feature_new, gradient_new, 2000)
print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI)

p_y, delta_y = inf.produce_P(feature_new)
entropy = inf.Entropy(p_y, delta_y)
fterm = inf.RegTerm(gradient_new)
print("Entropy\t %2.2f" % entropy)
print("Fterm\t %2.2f" % fterm)
Renormalized Mutual Information (x,f(x)): 1.42
Entropy  6.24
Fterm  -4.70
In this case, we have to increase the number of bins to calculate the entropy with reasonable accuracy. The reason is that the feature now spans a much larger range, yet changes very rapidly within the few bins around zero (while we use uniform binning when estimating the entropy).
plt.hist(feature_new, 1000)
plt.show()
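The bin sensitivity is easy to reproduce from scratch. The sketch below is an illustration under assumptions (a standard normal base variable $z$ with the same cubic reparametrization $y = z^3 + 5z$, and a plain histogram plug-in estimator rather than the `rmi` helpers): it compares entropy estimates at several bin counts against the change-of-variables reference $H(y) = H(z) + \mathbb{E}[\log|dy/dz|]$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed base variable: standard normal z, reparametrized as y = z**3 + 5*z.
z = rng.standard_normal(200_000)
y = z**3 + 5*z

def hist_entropy(y, bins):
    # Plug-in differential entropy estimate from a uniform histogram.
    counts, edges = np.histogram(y, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) + np.log(edges[1] - edges[0])

# Reference via the change-of-variables formula:
# H(y) = H(z) + E[log|dy/dz|], with H(z) = 0.5*log(2*pi*e) and dy/dz = 3z^2+5.
ref = 0.5*np.log(2*np.pi*np.e) + np.mean(np.log(3*z**2 + 5))

for bins in [100, 500, 2000, 8000]:
    print(bins, hist_entropy(y, bins))
print("reference:", ref)
```

With few bins the sharp central peak is smoothed out; only with enough bins does the estimate settle near the reference value.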
What if we instead applied a **non-invertible** transformation? The consequence is clear: we **lose information**. Consider, for example:
feature_new = feature**2
gradient_new = 2*feature[...,None]*gradient  # chain rule...

RMI_2 = inf.RenormalizedMutualInformation(feature_new, gradient_new, 2000)
print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI_2)

p_y, delta_y = inf.produce_P(feature_new)
entropy = inf.Entropy(p_y, delta_y)
fterm = inf.RegTerm(gradient_new)
print("Entropy\t %2.2f" % entropy)
print("Fterm\t %2.2f" % fterm)

plt.hist(feature_new, 1000)
plt.show()
A careful observer can guess how much information we lost in this case: our feature is centered at zero and we squared it. We lose the sign, and on average half of the samples have one sign and half the other. One bit of information is lost, so the difference is $\log 2$!
deltaRMI = RMI - RMI_2
print("delta RMI %2.3f" % deltaRMI)
print("log 2 = %2.3f" % np.log(2))
delta RMI 0.672
log 2 = 0.693
Another feature

Let's take another linear feature, for example, this time in the other direction:
def f(x):
    # feature
    # shape [N_samples, N_features=1]
    return x[:,1][...,None]

def grad_f(x):
    # gradient
    # shape [N_samples, N_features=1, N_x=2]
    grad_f = np.zeros([len(x),1,2])
    grad_f[...,1] = 1
    return grad_f

def feat_and_grad(x):
    return f(x), grad_f(x)

feature = f(x_points)
gradient = grad_f(x_points)

plt.figure()
plt.title("Feature contours")
plt.xlabel(r"$x_1$")
plt.ylabel(r"$x_2$")
plt.gca().set_aspect('equal')
# Draw the input distribution on the background
samples = np.random.multivariate_normal(mu, sigma, N_samples)
plt.hist2d(*samples.T, bins=100, cmap=plt.cm.binary)
# Draw the contour lines of the extracted feature
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.contour(x1_grid, x2_grid, feature.reshape([N,N]), 15, linewidths=4, cmap=plt.cm.Blues)
plt.colorbar()
plt.show()

feature = f(samples)
gradient = grad_f(samples)
RMI = inf.RenormalizedMutualInformation(feature, gradient)
print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI)
This feature seems to describe our input better. This is reasonable: it lies closer to the direction of largest fluctuation of the distribution. What is the best linear feature that we can take?
# Let's define a linear feature
def linear(x, th):
    """
    Linear feature increasing in the direction given by angle th.

    Args:
        x (array_like): [N_samples, 2] array of samples
        th (float): direction of the feature in which it increases

    Returns:
        feature (array_like): [N_samples, 1] feature
        grad_feature (array_like): [N_samples, N_x] gradient of the feature
    """
    Feature = x[:, 0]*np.cos(th) + x[:, 1]*np.sin(th)
    Grad1 = np.full(np.shape(x)[0], np.cos(th))
    Grad2 = np.full(np.shape(x)[0], np.sin(th))
    return Feature, np.array([Grad1, Grad2]).T

samples = np.random.multivariate_normal(mu, sigma, N_samples)

th_lin = np.linspace(0, np.pi, 30)
rmis = []
for th in th_lin:
    feature, grad = linear(samples, th)
    rmi = inf.RenormalizedMutualInformation(feature, grad)
    rmis.append([th, rmi])
rmis = np.array(rmis)

plt.figure()
plt.title("Best linear feature")
plt.xlabel(r"$\theta$")
plt.ylabel(r"$RMI(x,f_\theta(x))$")
plt.plot(rmis[:,0], rmis[:,1])
plt.show()

best_theta = th_lin[np.argmax(rmis[:,1])]
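For a Gaussian input, this numerical scan also has a closed form: a unit-norm linear feature has zero gradient term, so RMI($\theta$) reduces to the entropy of a 1D Gaussian, $\tfrac{1}{2}\log(2\pi e\,\mathrm{var}(\theta))$ with $\mathrm{var}(\theta)=u(\theta)^\top\Sigma\,u(\theta)$, maximized along the leading eigenvector of $\Sigma$. A sketch of this check (the covariance matrix below is an assumption for illustration, not the notebook's `sigma`):

```python
import numpy as np

# Assumed covariance for illustration (not the notebook's sigma).
Sigma = np.array([[1.0, 0.8],
                  [0.8, 2.0]])

th = np.linspace(0, np.pi, 1000)
u = np.stack([np.cos(th), np.sin(th)])              # shape (2, n_angles)
var = np.einsum('it,ij,jt->t', u, Sigma, u)         # u(th)^T Sigma u(th)
rmi_theta = 0.5 * np.log(2 * np.pi * np.e * var)    # RMI of a unit-norm linear feature

best = th[np.argmax(rmi_theta)]
w, V = np.linalg.eigh(Sigma)                        # eigenvalues in ascending order
pca_angle = np.arctan2(V[1, -1], V[0, -1]) % np.pi  # angle of the leading eigenvector
print("best angle %.3f, PCA angle %.3f" % (best, pca_angle))
```

The angle maximizing RMI coincides with the leading-eigenvector direction, anticipating the PCA comparison below in closed form.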
Let's plot the feature with the parameter that gives the largest Renormalized Mutual Information
feature, gradient = linear(x_points, best_theta)

plt.figure()
plt.title("Feature contours")
plt.xlabel(r"$x_1$")
plt.ylabel(r"$x_2$")
plt.gca().set_aspect('equal')
# Draw the input distribution on the background
samples = np.random.multivariate_normal(mu, sigma, N_samples)
plt.hist2d(*samples.T, bins=100, cmap=plt.cm.binary)
# Draw the contour lines of the extracted feature
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.contour(x1_grid, x2_grid, feature.reshape([N,N]), 15, linewidths=4, cmap=plt.cm.Blues)
plt.colorbar()
plt.show()

feature, gradient = linear(samples, best_theta)
RMI = inf.RenormalizedMutualInformation(feature, gradient)
print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI)
This is the same feature that we would get by taking the first principal component of PCA. This is the only case in which the two approaches coincide: PCA can only extract linear features, and since it only takes into account the covariance matrix of the distribution, it can provide the best feature only for a Gaussian (which is fully identified by its mean and covariance matrix).
import rmi.pca

samples = np.random.multivariate_normal(mu, sigma, N_samples)
g_pca = rmi.pca.pca(samples, 1)
eigenv = g_pca.w[0]
angle_pca = np.arctan(eigenv[1]/eigenv[0])

feature, gradient = linear(samples, angle_pca)
RMI = inf.RenormalizedMutualInformation(feature, gradient)
print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI)
print("best found angle %2.2f" % best_theta)
print("pca direction %2.2f" % angle_pca)
best found angle 1.19
pca direction 1.18
We recall that in this very special case, as long as the proposed feature is only rotated (without changing its scale), simply maximizing the feature entropy would have given the same result. Again, this only holds for linear features, and in particular for those whose gradient vector is not affected by a change of parameters. As soon as we use a non-linear feature, looking only at the entropy of the feature is no longer enough - entropy is not reparametrization invariant. Moreover, given an arbitrary deterministic feature function, RMI is the only quantity that allows us to estimate its dependence on its arguments.

Feature Optimization

Let's now try to optimize a neural network to extract a feature. In this case, as we already discussed, we will still get a linear feature.
import rmi.neuralnets as nn

# Define the layout of the neural network.
# The cost function is implicit when choosing the model RMIOptimizer
rmi_optimizer = nn.RMIOptimizer(
    layers=[
        nn.K.layers.Dense(30, activation="relu", input_shape=(2,)),
        nn.K.layers.Dense(1)
    ])

# Compile the network, i.e. choose the optimizer to use during the training
rmi_optimizer.compile(optimizer=nn.tf.optimizers.Adam(1e-3))

# Print the table with the structure of the network
rmi_optimizer.summary()

# Define an object that handles the training
rmi_net = nn.Net(rmi_optimizer)

# Perform the training of the neural network
batchsize = 1000
N_train = 5000

def get_batch():
    return np.random.multivariate_normal(mu, sigma, batchsize)

rmi_net.fit_generator(get_batch, N_train)

# Plot the training history (value of RMI)
# The large fluctuations can be reduced by increasing the batchsize
rmi_net.plot_history()
Calculate the feature on the input points: just apply the object `rmi_net`!
feature = rmi_net(x_points)

plt.figure()
plt.title("Feature contours")
plt.xlabel(r"$x_1$")
plt.ylabel(r"$x_2$")
plt.gca().set_aspect('equal')
# Draw the input distribution on the background
samples = np.random.multivariate_normal(mu, sigma, N_samples)
plt.hist2d(*samples.T, bins=100, cmap=plt.cm.binary)
# Draw the contour lines of the extracted feature
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.contour(x1_grid, x2_grid, feature.reshape([N,N]), 15, linewidths=4, cmap=plt.cm.Blues)
plt.colorbar()
plt.show()
To also calculate the gradient of the feature, one can use the function `get_feature_and_grad`:
feature, gradient = rmi_net.get_feature_and_grad(samples)
RMI = inf.RenormalizedMutualInformation(feature, gradient)
print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI)
Renormalized Mutual Information (x,f(x)): 1.81
Tradeoff between simplicity and compression

When optimizing renormalized mutual information to obtain a **meaningful feature** (in the sense of representation learning), one should avoid employing overly powerful networks. A good feature strikes a convenient tradeoff between its **"simplicity"** (i.e. the number of parameters, or how "smooth" the feature is) and its **information content** (i.e. how much the input space is compressed into a smaller dimension). In other words, useful representations should be "well-behaved", even at the price of a lower renormalized mutual information. We can show this idea with a straightforward example.
# Let's define a linear feature with a sharp step added on top
def cheating_feature(x):
    Feature = x[:, 0]*np.cos(best_theta) + x[:, 1]*np.sin(best_theta)
    step_size = 3
    step_width = 1/12
    step_argument = x[:, 0]*np.cos(best_theta+np.pi/2) + x[:, 1]*np.sin(best_theta+np.pi/2)
    Feature += step_size*np.tanh(step_argument/step_width)
    Grad1 = np.full(x.shape[0], np.cos(best_theta))
    Grad2 = np.full(x.shape[0], np.sin(best_theta))
    Grad1 += step_size/step_width*np.cos(best_theta+np.pi/2)/np.cosh(step_argument/step_width)**2
    Grad2 += step_size/step_width*np.sin(best_theta+np.pi/2)/np.cosh(step_argument/step_width)**2
    return Feature, np.array([Grad1, Grad2]).T

samples = np.random.multivariate_normal(mu, sigma, N_samples)
feature, gradient = cheating_feature(x_points)

plt.figure()
plt.title("Feature contours")
plt.xlabel(r"$x_1$")
plt.ylabel(r"$x_2$")
plt.gca().set_aspect('equal')
# Draw the input distribution on the background
samples = np.random.multivariate_normal(mu, sigma, N_samples)
plt.hist2d(*samples.T, bins=100, cmap=plt.cm.binary)
# Draw the contour lines of the extracted feature
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.contour(x1_grid, x2_grid, feature.reshape([N,N]), 15, linewidths=4, cmap=plt.cm.Blues)
plt.colorbar()
plt.show()

feature, gradient = cheating_feature(samples)
RMI = inf.RenormalizedMutualInformation(feature, gradient)
print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI)

p_y, delta_y = inf.produce_P(feature)
entropy = inf.Entropy(p_y, delta_y)
fterm = inf.RegTerm(gradient)
print("Entropy\t %2.2f" % entropy)
print("Fterm\t %2.2f" % fterm)
This feature has a larger renormalized mutual information than the linear one. It still increases in the direction of largest variance of $x$. However, it contains a _jump_ in the orthogonal direction. This jump encodes a "bit" of additional information (about the orthogonal coordinate), making it possible to unambiguously distinguish whether $x$ was drawn from the left or right side of the Gaussian.

In principle, one can add an arbitrary number of jumps until the missing coordinate is identified with arbitrary precision. Such a feature would have an arbitrarily high renormalized mutual information (as it should, since it contains more information on $x$). However, such a non-smooth feature is definitely not useful for feature extraction! One can avoid these extremely compressed representations by encouraging simpler features (for example smooth ones, or a neural network with a limited number of layers).
# Histogram of the feature
# The continuous value of x encodes one coordinate,
# the two peaks of the distribution provide additional information
# on the second coordinate!
plt.hist(feature, 1000)
plt.show()
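The "arbitrarily many jumps" argument can be sketched with an idealized, infinitely sharp step (an assumption: a 2D standard normal input and a plain histogram entropy estimator, not the notebook's `cheating_feature`). Shifting the feature by a large constant per equally likely slab of the orthogonal coordinate leaves the gradient with respect to $x_1$ equal to 1 almost everywhere (so the gradient term stays zero), while the entropy of the feature grows by $\log(k+1)$ for $k$ jumps.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((200_000, 2))  # assumed input distribution

def hist_entropy(y, bins=2000):
    # Plug-in differential entropy estimate from a uniform histogram.
    counts, edges = np.histogram(y, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) + np.log(edges[1] - edges[0])

base = hist_entropy(x[:, 0])  # entropy of the plain linear feature
gains = []
for k in [1, 3, 7]:
    # k jumps = k+1 equally likely slabs of the orthogonal coordinate x2;
    # shift the feature far apart per slab (an idealized, infinitely sharp step,
    # so the gradient with respect to x1 is unchanged almost everywhere).
    cuts = np.quantile(x[:, 1], np.linspace(0, 1, k + 2)[1:-1])
    level = np.digitize(x[:, 1], cuts)
    y = x[:, 0] + 20.0 * level
    gains.append(hist_entropy(y) - base)
    print(k, gains[-1], np.log(k + 1))
```

Each extra well-separated branch of the feature distribution contributes $\log(k+1)$ of additional (but practically useless) information about the orthogonal coordinate.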
UPDATE

This notebook is no longer being used. Please look at the most recent version, NLSS_V2, found in the same directory.

My project looks at the Northernlion Live Super Show, a thrice-weekly Twitch stream that has been running since 2013. Unlike a video service such as YouTube, the live nature of Twitch allows a more conversational 'live comment stream' to accompany a video. My goal is to gather statistics about the episodes and pair them with a list of all the comments corresponding to each video. Using this, I will attempt to recognize patterns in Twitch comments based on the video statistics.
# Every returned Out[] is displayed, not just the last one.
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"

import nltk
import pandas as pd
import numpy as np
MIT
OldFiles/NLSS.ipynb
AndrewRyan95/Twitch_Sentiment_Analysis