# Processing IoT data

## Summary

This notebook explains how to process telemetry data coming from IoT devices that arrives through a gateway-enabled edgeHub.

## Description

The purpose of this notebook is to explain and guide the reader on how to process telemetry data generated from IoT devices within the DSVM IoT extension.

## Requirements

* A gateway-enabled Edge Runtime. See 'Setting up IoT Edge'
* A sniffer architecture deployed. See 'Obtaining IoT Telemetry'
* A device sending telemetry to your gateway. For this notebook we chose a scenario where a device is sending temperature telemetry.

## Documentation

* https://tutorials-raspberrypi.com/raspberry-pi-measure-humidity-temperature-dht11-dht22/
* http://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/

## Step 1: Reading generated data

During this step we are going to load the data generated by the IoT devices. The sniffer module mounts a Docker volume in order to share data between the module and the host. To retrieve the path where the module is storing its data, run the following:

```
%%bash
# Listing the volumes
sudo docker volume ls
```

```
%%bash
# Getting the volume path
sudo docker inspect <volume id>
```

Copy the path obtained as the result of the last command. Since the file location is protected, we are going to make a directory and copy the file over there. The file generated by the module is called data.json. Path: volume_path/data.json

```
%%bash
# Making a directory and copying the data file into it
mkdir -p "/home/$USER/IoT/Data"
sudo cp <file path> "/home/$USER/IoT/Data/data.json"
```

Next, we are going to extract the data using Python.
```
import json
import numpy as np

## Reading the data from the file
## Note: change <user> to your user name
path = "/home/<user>/IoT/Data"
file = path + "/data.json"

data = {}
with open(file) as f:
    for line in f.readlines():
        sample = json.loads(line)
        for key in sample.keys():
            if key not in data:
                data[key] = []
            data[key].append([len(data[key]), sample[key]])

temperature = np.array(data['temperature'])
```

## Step 2: Using a low pass filter in order to detect anomalies

"The simplest approach to identifying irregularities in data is to flag the data points that deviate from common statistical properties of a distribution, including mean, median, mode, and quantiles. Let's say the definition of an anomalous data point is one that deviates by a certain standard deviation from the mean. Traversing mean over time-series data isn't exactly trivial, as it's not static. You would need a rolling window to compute the average across the data points. Technically, this is called a rolling average or a moving average, and it's intended to smooth short-term fluctuations and highlight long-term ones. Mathematically, an n-period simple moving average can also be defined as a 'low pass filter.'"

In this next step we are going to build a low pass filter (moving average) using discrete linear convolution to detect anomalies in our telemetry data. Check the documentation for a more detailed explanation of the theory.
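Before the full implementation, the moving-average idea can be sketched in a few lines of NumPy. This is a standalone sketch on synthetic data (not the telemetry loaded above): smooth the signal with a uniform window, then flag points whose residual exceeds three standard deviations.

```python
import numpy as np

# Synthetic signal: a steady ~21 degree reading with one injected spike
y = np.full(100, 21.0)
y[50] = 40.0  # anomaly

window_size = 10
window = np.ones(window_size) / window_size
# mode='same' keeps the output aligned with the input length
y_smooth = np.convolve(y, window, mode='same')

residual = y - y_smooth
std = residual.std()
# Flag points deviating by more than 3 standard deviations from the moving average.
# The spike at index 50 is flagged; note that zero-padding at the array edges
# may flag a few boundary points as well.
anomalies = np.where(np.abs(residual) > 3 * std)[0]
print(anomalies)
```

The `explain_anomalies` function below follows the same recipe, parameterized by `window_size` and `sigma`.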
```
from __future__ import division
from itertools import count
import collections

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib import style

style.use('fivethirtyeight')
%matplotlib inline

print(temperature)

## Adding some noise
#temperature[50][1] = 50.0
#temperature[100][1] = 75.0
#temperature[150][1] = 50.0
#temperature[200][1] = 75.0

data_as_frame = pd.DataFrame(temperature, columns=['index', 'temperature'])
data_as_frame.head()

# Computes the moving average using discrete linear convolution of two one-dimensional sequences.
def moving_average(data, window_size):
    window = np.ones(int(window_size)) / float(window_size)
    return np.convolve(data, window, 'same')

# Helps in exploring the anomalies using a stationary standard deviation
def explain_anomalies(y, window_size, sigma=1.0):
    avg = moving_average(y, window_size).tolist()
    residual = y - avg
    # Calculate the variation in the distribution of the residual
    std = np.std(residual)
    return {'standard_deviation': round(std, 3),
            'anomalies_dict': collections.OrderedDict(
                [(index, y_i) for index, y_i, avg_i in zip(count(), y, avg)
                 if (y_i > avg_i + (sigma * std)) | (y_i < avg_i - (sigma * std))])}

# Helps in exploring the anomalies using a rolling standard deviation
def explain_anomalies_rolling_std(y, window_size, sigma=1.0):
    avg = moving_average(y, window_size)
    avg_list = avg.tolist()
    residual = y - avg
    # Calculate the variation in the distribution of the residual
    # (the Series.rolling API; fill the leading NaNs with the first computed value)
    testing_std = pd.Series(residual).rolling(window_size).std()
    rolling_std = testing_std.fillna(testing_std.iloc[window_size - 1]).round(3).tolist()
    std = np.std(residual)
    return {'stationary standard_deviation': round(std, 3),
            'anomalies_dict': collections.OrderedDict(
                [(index, y_i) for index, y_i, avg_i, rs_i in zip(count(), y, avg_list, rolling_std)
                 if (y_i > avg_i + (sigma * rs_i)) | (y_i < avg_i - (sigma * rs_i))])}

# This function is responsible for displaying how the function performs on the given dataset.
def plot_results(x, y, window_size, sigma_value=1,
                 text_xlabel="X Axis", text_ylabel="Y Axis",
                 applying_rolling_std=False):
    plt.figure(figsize=(15, 8))
    plt.plot(x, y, "k.")
    y_av = moving_average(y, window_size)
    plt.plot(x, y_av, color='green')
    plt.xlim(0, 1000)
    plt.xlabel(text_xlabel)
    plt.ylabel(text_ylabel)

    # Query for the anomalies and plot them
    if applying_rolling_std:
        events = explain_anomalies_rolling_std(y, window_size=window_size, sigma=sigma_value)
    else:
        events = explain_anomalies(y, window_size=window_size, sigma=sigma_value)

    x_anomaly = np.fromiter(events['anomalies_dict'].keys(), dtype=int,
                            count=len(events['anomalies_dict']))
    y_anomaly = np.fromiter(events['anomalies_dict'].values(), dtype=float,
                            count=len(events['anomalies_dict']))
    plt.plot(x_anomaly, y_anomaly, "r*", markersize=12)

    # Add a grid and display the plot
    plt.grid(True)
    plt.show()

x = data_as_frame['index']
Y = data_as_frame['temperature']

# Plot the results
plot_results(x, y=Y, window_size=10, text_xlabel="Moment",
             sigma_value=3, text_ylabel="Temperature")

events = explain_anomalies(Y, window_size=5, sigma=3)

# Display the anomaly dict
print("Information about the anomalies model: {}".format(events))
```
# DALI expressions and arithmetic operators

In this example, we will show simple examples of how to use binary arithmetic operators in a DALI Pipeline, which allow for element-wise operations on tensors inside a pipeline. We will show the available operators and examples of using constant and scalar inputs.

## Supported operators

DALI currently supports unary arithmetic operators: `+`, `-`; binary arithmetic operators: `+`, `-`, `*`, `/`, and `//`; comparison operators: `==`, `!=`, `<`, `<=`, `>`, `>=`; and bitwise binary operators: `&`, `|`, `^`.

Binary operators can be used as an operation between two tensors, between a tensor and a scalar, or between a tensor and a constant. By tensor we mean the output of DALI operators (either regular ones or other arithmetic operators). Unary operators work only with tensor inputs.

We will focus on binary arithmetic operators with Tensor, Constant and Scalar operands. The detailed type promotion rules for comparison and bitwise operators are covered in the **Supported operations** section of the documentation, as well as in other examples.

### Prepare the test pipeline

First, we will prepare the helper code so we can easily manipulate the types and values that will appear as tensors in the DALI pipeline. We will be using NumPy as the source of the custom provided data, and we also need to import several things from DALI needed to create a Pipeline and use the ExternalSource operator.

```
import numpy as np
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
from nvidia.dali.types import Constant
```

### Defining the data

As we are dealing with binary operators, we need two inputs. We will create a simple helper function that returns two batches of hardcoded data, stored as `np.int32`. In an actual scenario the data processed by DALI arithmetic operators would be tensors produced by another operator, containing images, video sequences or other data.
You can experiment by changing those values or adjusting the `get_data()` function to use different input data. Keep in mind that the shapes of both inputs need to match, as these will be element-wise operations.

```
left_magic_values = [
    [[42, 7, 0], [0, 0, 0]],
    [[5, 10, 15], [10, 100, 1000]]
]

right_magic_values = [
    [[3, 3, 3], [1, 3, 5]],
    [[1, 5, 5], [1, 1, 1]]
]

batch_size = len(left_magic_values)

def convert_batch(batch):
    return [np.int32(tensor) for tensor in batch]

def get_data():
    return (convert_batch(left_magic_values), convert_batch(right_magic_values))
```

## Operating on tensors

### Defining the pipeline

The next step is to define our pipeline. The data will be obtained from the `get_data` function and made available to the pipeline through `ExternalSource`. Note that we do not need to instantiate any additional operators; we can use regular Python arithmetic expressions on the results of other operators in the `define_graph` step. Let's manipulate the source data by adding, multiplying and dividing it. `define_graph` will return both our data inputs and the result of applying arithmetic operations to them.
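Before wiring this into DALI, the element-wise semantics can be previewed in plain NumPy on the first sample of the magic values above. This is a sketch for intuition only — in the pipeline, DALI performs these operations itself:

```python
import numpy as np

# First sample of the left and right batches defined above
l = np.int32([[42, 7, 0], [0, 0, 0]])
r = np.int32([[3, 3, 3], [1, 3, 5]])

# Each result element is computed from the corresponding input elements
print(l + r)   # element-wise sum
print(l * r)   # element-wise product
print(l // r)  # element-wise floor division
```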
```
class ArithmeticPipeline(Pipeline):
    def __init__(self, batch_size, num_threads, device_id):
        super(ArithmeticPipeline, self).__init__(batch_size, num_threads, device_id)
        self.source = ops.ExternalSource(get_data, num_outputs=2)

    def define_graph(self):
        l, r = self.source()
        sum_result = l + r
        mul_result = l * r
        div_result = l // r
        return l, r, sum_result, mul_result, div_result
```

### Running the pipeline

Let's build and run our pipeline:

```
pipe = ArithmeticPipeline(batch_size=batch_size, num_threads=2, device_id=0)
pipe.build()
out = pipe.run()
```

Now it's time to display the results:

```
def examine_output(pipe_out):
    l = pipe_out[0].as_array()
    r = pipe_out[1].as_array()
    sum_out = pipe_out[2].as_array()
    mul_out = pipe_out[3].as_array()
    div_out = pipe_out[4].as_array()
    print("{}\n+\n{}\n=\n{}\n\n".format(l, r, sum_out))
    print("{}\n*\n{}\n=\n{}\n\n".format(l, r, mul_out))
    print("{}\n//\n{}\n=\n{}\n\n".format(l, r, div_out))

examine_output(out)
```

As we can see, each resulting tensor is obtained by applying the arithmetic operation between corresponding elements of its inputs. The shapes of the arguments to arithmetic operators should match (with an exception for scalar tensor inputs, described in the next section); otherwise we will get an error.

## Constant and scalar operands

Until now we considered only tensor inputs of matching shapes for arithmetic operators. DALI also allows one of the operands to be a constant or a batch of scalars. They can appear on both sides of binary expressions.

## Constants

In the `define_graph` step, a constant operand for an arithmetic operator can be: a value of Python's `int` or `float` type used directly, or such a value wrapped in `nvidia.dali.types.Constant`. An operation between a tensor and a constant results in the constant being broadcast to all elements of the tensor.
*Note: Currently all values of integral constants are passed internally to DALI as int32 and all values of floating point constants are passed to DALI as float32.*

Python `int` values will be treated as `int32` and `float` values as `float32` with regard to type promotions. The DALI `Constant` can be used to indicate other types. It accepts `DALIDataType` enum values as a second argument and has convenience member functions like `.uint8()` or `.float32()` that can be used for conversions.

### Using the Constants

Let's adjust the pipeline to use constants first.

```
class ArithmeticConstantsPipeline(Pipeline):
    def __init__(self, batch_size, num_threads, device_id):
        super(ArithmeticConstantsPipeline, self).__init__(batch_size, num_threads, device_id)
        self.source = ops.ExternalSource(get_data, num_outputs=2)

    def define_graph(self):
        l, r = self.source()
        add_200 = l + 200
        mul_075 = l * 0.75
        sub_15 = Constant(15).float32() - r
        return l, r, add_200, mul_075, sub_15

pipe = ArithmeticConstantsPipeline(batch_size=batch_size, num_threads=2, device_id=0)
pipe.build()
out = pipe.run()
```

Now it's time to display the results:

```
def examine_output(pipe_out):
    l = pipe_out[0].as_array()
    r = pipe_out[1].as_array()
    add_200 = pipe_out[2].as_array()
    mul_075 = pipe_out[3].as_array()
    sub_15 = pipe_out[4].as_array()
    print("{}\n+ 200 =\n{}\n\n".format(l, add_200))
    print("{}\n* 0.75 =\n{}\n\n".format(l, mul_075))
    print("15 -\n{}\n=\n{}\n\n".format(r, sub_15))

examine_output(out)
```

As we can see, the constant value is used with all elements of all tensors in the batch.

## Dynamic scalars

It is sometimes useful to evaluate an expression with one argument being a tensor and the other being a scalar. If the scalar value is constant throughout the execution of the pipeline, `types.Constant` can be used. When dynamic scalar values are needed, they can be constructed as 0D tensors (with an empty shape).
If DALI encounters such a tensor, it will broadcast it to match the shape of the tensor argument. Note that DALI operates on batches, and as such the scalars are also supplied as a batch, with each scalar operand being used with the other operands at the same index in the batch.

### Using scalar tensors

We will use an `ExternalSource` to generate a sequence of numbers which will then be added to the tensor operands.

```
class ArithmeticScalarsPipeline(Pipeline):
    def __init__(self, batch_size, num_threads, device_id):
        super(ArithmeticScalarsPipeline, self).__init__(batch_size, num_threads, device_id)
        # We only need one tensor input
        self.tensor_source = ops.ExternalSource(lambda: get_data()[0])
        # A batch of scalars from 1 to batch_size
        scalars = np.arange(1, batch_size + 1)
        self.scalar_source = ops.ExternalSource(lambda: scalars)

    def define_graph(self):
        tensors = self.tensor_source()
        scalars = self.scalar_source()
        return tensors, scalars, tensors + scalars
```

Now it's time to build and run the pipeline, which adds a batch of scalars to our tensor input:

```
pipe = ArithmeticScalarsPipeline(batch_size=batch_size, num_threads=2, device_id=0)
pipe.build()
out = pipe.run()

def examine_output(pipe_out):
    t = pipe_out[0].as_array()
    scalars = pipe_out[1].as_array()
    summed = pipe_out[2].as_array()
    print("{}\n+\n{}\n=\n{}".format(t, scalars, summed))

examine_output(out)
```

Notice how the first scalar in the batch (1) is added to all elements of the first tensor and the second scalar (2) to the second tensor.
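The per-sample scalar broadcast described above mirrors what a plain loop over the batch would do in NumPy. This is a sketch for intuition, not DALI itself, reusing the magic values from earlier:

```python
import numpy as np

# The left batch of magic values, one tensor per sample
batch = [np.int32([[42, 7, 0], [0, 0, 0]]),
         np.int32([[5, 10, 15], [10, 100, 1000]])]
scalars = np.arange(1, len(batch) + 1)  # one scalar per sample: 1, 2

# Each scalar is broadcast over the whole tensor at the same batch index
result = [tensor + scalar for tensor, scalar in zip(batch, scalars)]
print(result[0])  # every element of the first sample increased by 1
print(result[1])  # every element of the second sample increased by 2
```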
## Face and Facial Keypoint detection

After you've trained a neural network to detect facial keypoints, you can then apply this network to *any* image that includes faces. The neural network expects a Tensor of a certain size as input, so, to detect any face, you'll first have to do some pre-processing.

1. Detect all the faces in an image using a face detector (we'll be using a Haar Cascade detector in this notebook).
2. Pre-process those face images so that they are grayscale, and transformed to a Tensor of the input size that your net expects. This step will be similar to the `data_transform` you created and applied in Notebook 2, whose job was to rescale, normalize, and turn any image into a Tensor to be accepted as input to your CNN.
3. Use your trained model to detect facial keypoints on the image.

---

In the next python cell we load in the required libraries for this section of the project.

```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
```

#### Select an image

Select an image to perform facial keypoint detection on; you can select any image of faces in the `images/` directory.

```
import cv2

# load in color image for face detection
image = cv2.imread('images/obamas.jpg')

# switch red and blue color channels
# --> by default OpenCV assumes BLUE comes first, not RED as in many images
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# plot the image
fig = plt.figure(figsize=(9,9))
plt.imshow(image)
```

## Detect all faces in an image

Next, you'll use one of OpenCV's pre-trained Haar Cascade classifiers, all of which can be found in the `detector_architectures/` directory, to find any faces in your selected image. In the code below, we loop over each face in the original image and draw a red square on each face (in a copy of the original image, so as not to modify the original).
You can even [add eye detections](https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html) as an *optional* exercise in using Haar detectors. An example of face detection on a variety of images is shown below.

<img src='images/haar_cascade_ex.png' width=80% height=80%/>

```
# load in a haar cascade classifier for detecting frontal faces
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# run the detector
# the output here is an array of detections; the corners of each detection box
# if necessary, modify these parameters until you successfully identify every face in a given image
faces = face_cascade.detectMultiScale(image, 1.2, 2)

# make a copy of the original image to plot detections on
image_with_detections = image.copy()

# loop over the detected faces, mark the image where each face is found
for (x,y,w,h) in faces:
    # draw a rectangle around each detected face
    # you may also need to change the width of the rectangle drawn depending on image resolution
    cv2.rectangle(image_with_detections,(x,y),(x+w,y+h),(255,0,0),3)

fig = plt.figure(figsize=(9,9))
plt.imshow(image_with_detections)
```

## Loading in a trained model

Once you have an image to work with (and, again, you can select any image of faces in the `images/` directory), the next step is to pre-process that image and feed it into your CNN facial keypoint detector. First, load your best model by its filename.

```
import torch
from models import Net

net = Net()

## TODO: load the best saved model parameters (by your path name)
## You'll need to un-comment the line below and add the correct name for *your* saved model
net.load_state_dict(torch.load('saved_models/keypoints_model_1.pt'))

## print out your net and prepare it for testing (uncomment the line below)
net.eval()
```

## Keypoint detection

Now, we'll loop over each detected face in an image (again!), only this time you'll transform those faces into Tensors that your CNN can accept as input images.
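The loop will index each face with its `(x, y, w, h)` detection box, and cropping a face region is plain array slicing — with rows coming from `y`/`h` and columns from `x`/`w`, a detail that's easy to get backwards. A minimal sketch with a synthetic image:

```python
import numpy as np

# synthetic 100x100 RGB "image"
image = np.zeros((100, 100, 3), dtype=np.uint8)

# pretend the detector returned one box: x, y, width, height
faces = [(30, 20, 40, 50)]

for (x, y, w, h) in faces:
    # rows are indexed by y (height), columns by x (width)
    roi = image[y:y+h, x:x+w]
    print(roi.shape)  # h rows, w columns, 3 channels
```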
### TODO: Transform each detected face into an input Tensor

You'll need to perform the following steps for each detected face:
1. Convert the face from RGB to grayscale
2. Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
3. Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
4. Reshape the numpy image into a torch image.

You may find it useful to consult the transformation code in `data_load.py` to help you perform these processing steps.

### TODO: Detect and display the predicted keypoints

After each face has been appropriately converted into an input Tensor for your network to see as input, you'll wrap that Tensor in a Variable() and can apply your `net` to each face. The output should be the predicted facial keypoints. These keypoints will need to be "un-normalized" for display, and you may find it helpful to write a helper function like `show_keypoints`. You should end up with an image like the following, with facial keypoints that closely match the facial features on each individual face:

<img src='images/michelle_detected.png' width=30% height=30%/>

```
image_copy = np.copy(image)

# loop over the detected faces from your haar cascade
all_transformed_images = []
for (x,y,w,h) in faces:
    transformed_images = []

    # 0: Select the region of interest that is the face in the image
    roi = image_copy.copy()[y:y+h, x:x+w]
    transformed_images.append(roi)

    # 1: Convert the face region from RGB to grayscale
    # (the image was converted to RGB above, so use COLOR_RGB2GRAY)
    gray = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)
    transformed_images.append(gray)

    # 2: Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
    normalized = gray / 255.0
    transformed_images.append(normalized)

    # 3: Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
    scaled = cv2.resize(normalized, dsize=(224, 224), interpolation=cv2.INTER_CUBIC)
    transformed_images.append(scaled)

    # 4: Reshape the numpy image shape (H x W x C) into a torch image shape (C x H x W)
    reshaped = scaled.reshape(1, scaled.shape[0], scaled.shape[1])
    transformed_images.append(reshaped.reshape(224, 224))

    # 5: Make facial keypoint predictions using your loaded, trained network
    # perform a forward pass to get the predicted facial keypoints
    predicted = net(torch.Tensor(reshaped).reshape(1, 1, 224, 224))
    predicted = predicted.transpose(1, 0).reshape(68, 2) * 50 + 100
    transformed_images.append(predicted.detach().numpy())

    all_transformed_images.append(transformed_images)

# Display each detected face and the corresponding keypoints
plt.figure(figsize=(20, 10))
for i, transformed_images in enumerate(all_transformed_images):
    for j, img in enumerate(transformed_images):
        ax = plt.subplot(len(all_transformed_images), len(transformed_images),
                         len(transformed_images) * i + j + 1)
        if img.shape[0] == 68:
            # keypoint array: plot it over the scaled face image
            ax.imshow(transformed_images[4], cmap='gray')
            plt.scatter(img[:, 0], img[:, 1], s=5, marker='.', c='m')
        else:
            ax.imshow(img, cmap='gray')
plt.show()
```
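The `* 50 + 100` step above is the un-normalization: it assumes the training transform in `data_load.py` scaled keypoints as `(raw - 100) / 50` (your constants may differ — check your own transform). Inverting it is a one-liner, sketched here on made-up keypoint values:

```python
import numpy as np

# hypothetical network output: normalized keypoints roughly in [-2, 2]
normalized = np.array([[-1.0, 0.5],
                       [ 0.0, 0.0],
                       [ 1.2, -0.4]])

# invert keypoints = (raw - 100) / 50  =>  raw = keypoints * 50 + 100
unnormalized = normalized * 50 + 100
print(unnormalized)
```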
# Detailed execution time for cadCAD models

*Danilo Lessa Bernardineli*

---

This notebook shows how you can use metadata on PSUBs in order to do pre-processing on the simulations. We use two keys for flagging them: `ignore`, which indicates which PSUBs we want to skip, and `debug`, which tells us which ones we want to monitor for policy execution time.

```
from time import time
import logging
from functools import wraps

logging.basicConfig(level=logging.DEBUG)

def print_time(f):
    """Log the wrapped policy's output and execution time."""
    @wraps(f)
    def wrapper(*args, **kwargs):
        # Current timestep
        t = len(args[2])
        t1 = time()
        f_out = f(*args, **kwargs)
        t2 = time()
        text = f"{t}|{f.__name__} output (exec time: {t2 - t1:.2f}s): {f_out}"
        logging.debug(text)
        return f_out
    return wrapper
```

## Dependencies

```
%%capture
!pip install cadCAD

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

from cadCAD.configuration import Experiment
from cadCAD.configuration.utils import config_sim
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
```

## Definitions

### Initial conditions and parameters

```
initial_conditions = {
    'prey_population': 100,
    'predator_population': 15
}

params = {
    "prey_birth_rate": [1.0],
    "predator_birth_rate": [0.01],
    "predator_death_const": [1.0],
    "prey_death_const": [0.03],
    "dt": [0.1]  # Precision of the simulation. Lower is more accurate / slower
}

simulation_parameters = {
    'N': 1,
    'T': range(30),
    'M': params
}
```

### Policies

```
def p_predator_births(params, step, sL, s):
    dt = params['dt']
    predator_population = s['predator_population']
    prey_population = s['prey_population']
    birth_fraction = params['predator_birth_rate'] + np.random.random() * 0.0002
    births = birth_fraction * prey_population * predator_population * dt
    return {'add_to_predator_population': births}

def p_prey_births(params, step, sL, s):
    dt = params['dt']
    population = s['prey_population']
    birth_fraction = params['prey_birth_rate'] + np.random.random() * 0.1
    births = birth_fraction * population * dt
    return {'add_to_prey_population': births}

def p_predator_deaths(params, step, sL, s):
    dt = params['dt']
    population = s['predator_population']
    death_rate = params['predator_death_const'] + np.random.random() * 0.005
    deaths = death_rate * population * dt
    return {'add_to_predator_population': -1.0 * deaths}

def p_prey_deaths(params, step, sL, s):
    dt = params['dt']
    death_rate = params['prey_death_const'] + np.random.random() * 0.1
    prey_population = s['prey_population']
    predator_population = s['predator_population']
    deaths = death_rate * prey_population * predator_population * dt
    return {'add_to_prey_population': -1.0 * deaths}
```

### State update functions

```
def s_prey_population(params, step, sL, s, _input):
    y = 'prey_population'
    x = s['prey_population'] + _input['add_to_prey_population']
    return (y, x)

def s_predator_population(params, step, sL, s, _input):
    y = 'predator_population'
    x = s['predator_population'] + _input['add_to_predator_population']
    return (y, x)
```

### State update blocks

```
partial_state_update_blocks = [
    {
        'label': 'Predator dynamics',
        'ignore': False,
        'debug': True,
        'policies': {
            'predator_births': p_predator_births,
            'predator_deaths': p_predator_deaths
        },
        'variables': {
            'predator_population': s_predator_population
        }
    },
    {
        'label': 'Prey dynamics',
        'ignore': True,
        'debug': True,
        'policies': {
            'prey_births': p_prey_births,
            'prey_deaths': p_prey_deaths
        },
        'variables': {
            'prey_population': s_prey_population
        }
    }
]

# Maintain only the PSUBs that don't have the ignore flag
partial_state_update_blocks = [psub for psub in partial_state_update_blocks
                               if psub.get('ignore', False) == False]

# Only check the execution time for the PSUBs with the debug flag
for psub in partial_state_update_blocks:
    if psub.get('debug', False):
        psub['policies'] = {label: print_time(f)
                            for label, f in psub['policies'].items()}
```

### Configuration and Execution

```
sim_config = config_sim(simulation_parameters)

exp = Experiment()
exp.append_configs(sim_configs=sim_config,
                   initial_state=initial_conditions,
                   partial_state_update_blocks=partial_state_update_blocks)

from cadCAD import configs

exec_mode = ExecutionMode()
exec_context = ExecutionContext(exec_mode.local_mode)
executor = Executor(exec_context=exec_context, configs=configs)

(records, tensor_field, _) = executor.execute()
```

### Results

```
import plotly.express as px

df = pd.DataFrame(records)

fig = px.line(df,
              x=df.prey_population,
              y=df.predator_population,
              color=df.run.astype(str))
fig.show()
```
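The `print_time` decorator from the top of the notebook can also be exercised outside cadCAD. The sketch below repeats the decorator so the cell is standalone, and applies it to a dummy policy (`p_dummy` and its arguments are made up for illustration — the third positional argument plays the role of the state history `sL`, whose length is the current timestep):

```python
import logging
from functools import wraps
from time import time

logging.basicConfig(level=logging.DEBUG)

def print_time(f):
    """Log the wrapped policy's output and execution time, tagged with the timestep."""
    @wraps(f)
    def wrapper(*args, **kwargs):
        t = len(args[2])  # third positional argument is the state history sL
        t1 = time()
        f_out = f(*args, **kwargs)
        t2 = time()
        logging.debug(f"{t}|{f.__name__} output (exec time: {t2 - t1:.2f}s): {f_out}")
        return f_out
    return wrapper

@print_time
def p_dummy(params, step, sL, s):
    return {'signal': s['x'] * params['gain']}

# a history of two past states -> the log line is tagged with timestep 2
out = p_dummy({'gain': 3}, 0, [{'x': 0}, {'x': 1}], {'x': 2})
print(out)
```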
### Ch6 Figure1

```
# Think about your running shoe website. A data analyst should have little trouble finding
# websites that referred customers to the store. Let's say that most of your customers came
# from Twitter, Google and Facebook. There were also quite a few customers that came from
# running shoe websites. A good data analyst easily creates a report of the top 50 websites.
# These are websites that people visited just before buying. Trying to find out where people
# are coming from is a good analytics question. It's about gathering up the data, counting it
# and displaying it in a nice report.

import random as rd
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

referrals = ['facebook', 'google', 'amazon', 'twitter', 'slickdeals', 'instagram',
             'pinterest', 'ebates', 'fitness magazine', 'discovery', 'youtube', 'messenger']
referral_type = ['organic', 'paid']

# random_date() is a helper (defined elsewhere in these notebooks) returning a random timestamp
data = []
for i in range(1000):
    data.append([i, random_date(), referrals[rd.randint(0, len(referrals)-1)],
                 referral_type[rd.randint(0, len(referral_type)-1)]])

df = pd.DataFrame(data, columns=['id', 'timestamp', 'referral-site', 'referral-type'])
# df.to_csv('csv_output/ch6_fig1.csv', index=False)

df = pd.read_csv('csv_output/ch6_fig1.csv')
df.head()

df = pd.read_csv('csv_output/ch6_fig1.csv')
site_count = df.groupby('referral-site').id.count().reset_index()
site_type_count = df.groupby(['referral-site', 'referral-type']).id.count().reset_index()

%matplotlib inline
sns.set_style("whitegrid")

f, ax = plt.subplots(1, 2, figsize=(8, 6))
sns.barplot(x='id', y='referral-site', data=site_count, ax=ax[0], color='cornflowerblue')
ax[0].set_title('referral-site total visits')
ax[0].set_xlabel('')
sns.barplot(x='id', y='referral-site', hue='referral-type', data=site_type_count, ax=ax[1])
ax[1].legend(loc='center right', bbox_to_anchor=(1.5, .9))
ax[1].set_title('referral-site by referral-type')
ax[1].set_xlabel('')
f.tight_layout()
f.savefig('svg_output/ch6_fig1.svg', format='svg', bbox_inches='tight')
```

Facebook, twitter and instagram seem to bring great traffic. In terms of paid versus organic traffic, however, while pinterest drives a comparable amount of traffic to the other sites, about half of it comes from paid advertisement.

```
%load_ext rpy2.ipython
```

```
%%R -w 600 -h 300 -u px
require(dplyr)

df = read.csv('csv_output/ch6_fig1.csv')
head(df)

df$timestamp.formated = strptime(df$timestamp, "%Y-%m-%d %H:%M:%S")
df$timestamp.day = as.Date(format(df$timestamp.formated, "%Y-%m-%d"), "%Y-%m-%d")
df = df %>% select(-timestamp.formated)

by_day = df %>% group_by(timestamp.day, referral.type)
by_day_by_type = summarise(by_day, count=n())

require(ggplot2)
ggplot(by_day_by_type, aes(x=timestamp.day, y=count, group=referral.type, colour=referral.type)) +
    geom_line(size=1) + geom_point(size=3) +
    scale_x_date(date_labels = "%b %d") +
    ggtitle('referral click over time by referral type') +
    theme_bw() + theme(axis.text.x = element_text(angle = 30, hjust = 1))

# ggsave("svg_output/ch6_fig1_R.svg")
```
# Exercise: FPGA and the DevCloud

Now that we've walked through the process of requesting an edge node with a CPU and Intel® Arria 10 FPGA on Intel's DevCloud and loading a model on the Intel® Arria 10 FPGA, you will have the opportunity to do this yourself, with the addition of running inference on an image.

In this exercise, you will do the following:
1. Write a Python script to load a model and run inference 10 times on a device on Intel's DevCloud.
    * Calculate the time it takes to load the model.
    * Calculate the time it takes to run inference 10 times.
2. Write a shell script to submit a job to Intel's DevCloud.
3. Submit a job using `qsub` on an **IEI Tank-870** edge node with an **Intel® Arria 10 FPGA**.
4. Run `liveQStat` to view the status of your submitted jobs.
5. Retrieve the results from your job.
6. View the results.

Click the **Exercise Overview** button below for a demonstration.

<span class="graffiti-highlight graffiti-id_vskulnq-id_oudamc9"><i></i><button>Exercise Overview</button></span>

#### IMPORTANT: Set up paths so we can run Dev Cloud utilities

You *must* run this every time you enter a Workspace session.

```
%env PATH=/opt/conda/bin:/opt/spark-2.4.3-bin-hadoop2.7/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/intel_devcloud_support
import os
import sys
sys.path.insert(0, os.path.abspath('/opt/intel_devcloud_support'))
sys.path.insert(0, os.path.abspath('/opt/intel'))
```

## The Model

We will be using the `vehicle-license-plate-detection-barrier-0106` model for this exercise. Remember that to run a model on the FPGA, we need to use `FP16` as the model precision.

The model has already been downloaded for you in the `/data/models/intel` directory on Intel's DevCloud. We will be running inference on an image of a car. The path to the image is `/data/resources/car.png`.

# Step 1: Creating a Python Script

The first step is to create a Python script that you can use to load the model and perform inference.
We'll use the `%%writefile` magic to create a Python file called `inference_on_device.py`. In the next cell, you will need to complete the `TODO` items for this Python script.

`TODO` items:
1. Load the model
2. Get the name of the input node
3. Prepare the model for inference (create an input dictionary)
4. Run inference 10 times in a loop

If you get stuck, you can click on the **Show Solution** button below for a walkthrough with the solution code.

```
%%writefile inference_on_device.py

import time
import argparse
import numpy as np
import cv2
from openvino.inference_engine import IENetwork
from openvino.inference_engine import IECore

def main(args):
    model = args.model_path
    model_weights = model + '.bin'
    model_structure = model + '.xml'

    start = time.time()

    # TODO: Load the model
    model = IENetwork(model_structure, model_weights)
    core = IECore()
    net = core.load_network(network=model, device_name=args.device, num_requests=1)

    print(f"Time taken to load model = {time.time()-start} seconds")

    # TODO: Get the name of the input node
    input_name = next(iter(model.inputs))

    # Reading and Preprocessing Image
    input_img = cv2.imread('/data/resources/car.png')
    input_img = cv2.resize(input_img, (300, 300), interpolation=cv2.INTER_AREA)
    input_img = np.moveaxis(input_img, -1, 0)

    # TODO: Prepare the model for inference (create input dict etc.)
    input_dict = {input_name: input_img}

    start = time.time()
    for _ in range(10):
        # TODO: Run Inference in a Loop
        net.infer(input_dict)

    print(f"Time taken to run 10 inferences on the FPGA = {time.time()-start} seconds")

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--model_path', required=True)
    parser.add_argument('--device', default=None)

    args = parser.parse_args()
    main(args)
```

<span class="graffiti-highlight graffiti-id_f28ff2h-id_4psdryf"><i></i><button>Show Solution</button></span>

## Step 2: Creating a Job Submission Script

To submit a job to the DevCloud, you'll need to create a shell script.
Similar to the Python script above, we'll use the `%%writefile` magic command to create a shell script called `inference_fpga_model_job.sh`. In the next cell, you will need to complete the `TODO` items for this shell script. `TODO` items: 1. Create two variables: * `DEVICE` - Assign the value as the first argument passed into the shell script. * `MODELPATH` - Assign the value as the second argument passed into the shell script. 2. Call the Python script using the two variable values as the command line arguments. If you get stuck, you can click on the **Show Solution** button below for a walkthrough with the solution code. ``` %%writefile inference_fpga_model_job.sh #!/bin/bash exec 1>/output/stdout.log 2>/output/stderr.log mkdir -p /output # TODO: Create DEVICE variable # TODO: Create MODELPATH variable DEVICE=$1 MODELPATH=$2 export AOCL_BOARD_PACKAGE_ROOT=/opt/intel/openvino/bitstreams/a10_vision_design_sg2_bitstreams/BSP/a10_1150_sg2 source /opt/altera/aocl-pro-rte/aclrte-linux64/init_opencl.sh aocl program acl0 /opt/intel/openvino/bitstreams/a10_vision_design_sg2_bitstreams/2020-2_PL2_FP16_MobileNet_Clamp.aocx export CL_CONTEXT_COMPILER_MODE_INTELFPGA=3 # TODO: Call the Python script python3 inference_on_device.py --model_path ${MODELPATH} --device ${DEVICE} cd /output tar zcvf output.tgz * # compresses all files in the current directory (output) ``` <span class="graffiti-highlight graffiti-id_5e0vxvt-id_5zk2mzh"><i></i><button>Show Solution</button></span> ## Step 3: Submitting a Job to Intel's DevCloud In the next cell, you will write your `!qsub` command to load your model and run inference on the **IEI Tank-870** edge node with an **Intel Core i5** CPU and an **Intel® Arria 10 FPGA**. Your `!qsub` command should take the following flags and arguments: 1. The first argument should be the shell script filename 2. `-d` flag - This argument should be `.` 3. `-l` flag - This argument should request an edge node with an **IEI Tank-870**.
The default quantity is 1, so the **1** after `nodes` is optional. * **Intel Core i5 6500TE** for your `CPU`. * **Intel® Arria 10** for your `FPGA`. To get the queue labels for these devices, you can go to [this link](https://devcloud.intel.com/edge/get_started/devcloud/) 4. `-F` flag - This argument should contain the two values to assign to the variables of the shell script: * **DEVICE** - Device type for the job: `FPGA`. Remember that we need to use the **Heterogeneous plugin** (HETERO) to run inference on the FPGA. * **MODELPATH** - Full path to the model for the job. As a reminder, the model is located in `/data/models/intel`, and the FPGA requires the `FP16` precision of the model. **Note**: There is an optional flag, `-N`, you may see in a few exercises. This is an argument that only works on Intel's DevCloud that allows you to name your job submission. This argument doesn't work in Udacity's workspace integration with Intel's DevCloud. ``` job_id_core = !qsub inference_fpga_model_job.sh -d . -l nodes=1:tank-870:i5-6500te:iei-mustang-f100-a10 -F "HETERO:FPGA,CPU /data/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106" -N store_core print(job_id_core[0]) ``` <span class="graffiti-highlight graffiti-id_yr40vov-id_cvo0xg6"><i></i><button>Show Solution</button></span> ## Step 4: Running liveQStat Running the `liveQStat` function, we can see the live status of our job. Running this function will lock the cell and poll the job status 10 times. The cell is locked until this finishes polling 10 times or you can interrupt the kernel to stop it by pressing the stop button at the top: ![stop button](assets/interrupt_kernel.png) * `Q` status means our job is currently awaiting an available node * `R` status means our job is currently running on the requested node **Note**: In the demonstration, it is pointed out that `W` status means your job is done. This is no longer accurate.
Once a job has finished running, it will no longer show in the list when running the `liveQStat` function. Click the **Running liveQStat** button below for a demonstration. <span class="graffiti-highlight graffiti-id_ecvm8yr-id_nnpaoep"><i></i><button>Running liveQStat</button></span> ``` import liveQStat liveQStat.liveQStat() ``` ## Step 5: Retrieving Output Files In this step, we'll be using the `getResults` function to retrieve our job's results. This function takes a few arguments. 1. `job id` - This value is stored in the `job_id_core` variable we created during **Step 3**. Remember that this value is an array with a single string, so we access the string value using `job_id_core[0]`. 2. `filename` - This value should match the filename of the compressed file we have in our `inference_fpga_model_job.sh` shell script. 3. `blocking` - This is an optional argument and is set to `False` by default. If this is set to `True`, the cell is locked while waiting for the results to come back. There is a status indicator showing the cell is waiting on results. **Note**: The `getResults` function is unique to Udacity's workspace integration with Intel's DevCloud. When working on Intel's DevCloud environment, your job's results are automatically retrieved and placed in your working directory. Click the **Retrieving Output Files** button below for a demonstration. <span class="graffiti-highlight graffiti-id_s7wimuv-id_xm8qs9p"><i></i><button>Retrieving Output Files</button></span> ``` import get_results get_results.getResults(job_id_core[0], filename="output.tgz", blocking=True) !tar zxf output.tgz !cat stdout.log !cat stderr.log ```
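Once `output.tgz` is unpacked, the timing lines can also be pulled out of the log programmatically rather than read by eye. A minimal sketch, assuming the `print` format used in `inference_on_device.py` above — the log text below is illustrative, not real DevCloud output:

```python
import re

# Illustrative log contents, mirroring the f-strings in inference_on_device.py
log = (
    "Time taken to load model = 2.31 seconds\n"
    "Time Taken to run 10 Inference on FPGA is = 0.84 seconds\n"
)

# Capture every "<label> = <number> seconds" pair from the log
timings = {label.strip(): float(value)
           for label, value in re.findall(r"(.+?)\s*=\s*([\d.]+) seconds", log)}

for label, seconds in timings.items():
    print(f"{label}: {seconds:.2f} s")
```

The same dictionary-building pattern works on the real `stdout.log` once it has been read from disk, which makes it easy to compare load and inference times across device types.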
``` import csv import argparse import json from collections import defaultdict, Counter import re from annotation_tool_1 import MAX_WORDS def process_repeat_dict(d): if d["loop"] == "ntimes": repeat_dict = {"repeat_key": "FOR"} processed_d = process_dict(with_prefix(d, "loop.ntimes.")) if 'repeat_for' in processed_d: repeat_dict["repeat_count"] = processed_d["repeat_for"] if 'repeat_dir' in processed_d: repeat_dict['repeat_dir'] = processed_d['repeat_dir'] return repeat_dict if d["loop"] == "repeat_all": repeat_dict = {"repeat_key": "ALL"} processed_d = process_dict(with_prefix(d, "loop.repeat_all.")) if 'repeat_dir' in processed_d: repeat_dict['repeat_dir'] = processed_d['repeat_dir'] return repeat_dict if d["loop"] == "forever": return {"stop_condition": {"condition_type": "NEVER"}} if d['loop'] == 'repeat_until': stripped_d = with_prefix(d, 'loop.repeat_until.') processed_d = process_dict(stripped_d) if 'adjacent_to_block_type' in processed_d: return {"stop_condition" : { "condition_type" : 'ADJACENT_TO_BLOCK_TYPE', 'block_type': processed_d['adjacent_to_block_type']} } else: return {"stop_condition" : { "condition_type" : 'ADJACENT_TO_BLOCK_TYPE',} } raise NotImplementedError("Bad repeat dict option: {}".format(d["loop"])) def process_get_memory_dict(d): filters_val = d['filters'] out_dict = {'filters': {}} parent_dict = {} if filters_val.startswith('type.'): parts = remove_prefix(filters_val, 'type.').split('.') type_val = parts[0] if type_val in ['ACTION', 'AGENT']: out_dict['filters']['temporal'] = 'CURRENT' tag_val = parts[1] out_dict['answer_type'] = 'TAG' out_dict['tag_name'] = parts[1] # the name of tag is here if type_val == 'ACTION': x = with_prefix(d, 'filters.'+filters_val+'.') out_dict['filters'].update(x) elif type_val in ['REFERENCE_OBJECT']: d.pop('filters') ref_obj_dict = remove_key_prefixes(d, ['filters.type.']) ref_dict = process_dict(ref_obj_dict) if 'answer_type' in ref_dict['reference_object']: out_dict['answer_type'] = 
ref_dict['reference_object']['answer_type'] ref_dict['reference_object'].pop('answer_type') if 'tag_name' in ref_dict['reference_object']: out_dict['tag_name'] = ref_dict['reference_object']['tag_name'] ref_dict['reference_object'].pop('tag_name') out_dict['filters'].update(ref_dict) out_dict['filters']['type'] = type_val return out_dict def remove_prefix(text, prefix): if text.startswith(prefix): return text[len(prefix):] def handle_get_memory(d): out_d = {'dialogue_type': 'GET_MEMORY'} child_d = process_get_memory_dict(with_prefix(d, "action_type.ANSWER.")) out_d.update(child_d) return out_d # convert s to snake case def snake_case(s): return re.sub("([a-z])([A-Z])", "\\1_\\2", s).lower() '''this function splits the key that starts with a given prefix and only for values that are not None and makes the key be the thing after prefix ''' def with_prefix(d, prefix): return { k.split(prefix)[1]: v for k, v in d.items() if k.startswith(prefix) and v not in ("", None, "None") } ''' this function removes certain prefixes from keys and renames the key to be: key with text following the prefix in the dict''' def remove_key_prefixes(d, ps): for p in ps: d = d.copy() rm_keys = [] add_items = [] # print(p, d) for k, v in d.items(): if k.startswith(p): rm_keys.append(k) add_items.append((k[len(p) :], v)) for k in rm_keys: del d[k] for k, v in add_items: d[k] = v return d def fix_spans_due_to_empty_words(action_dict, words): """Return modified (action_dict, words)""" def reduce_span_vals_gte(d, i): for k, v in d.items(): if type(v) == dict: reduce_span_vals_gte(v, i) continue try: a, b = v if a >= i: a -= 1 if b >= i: b -= 1 d[k] = [[a, b]] except ValueError: pass except TypeError: pass # remove trailing empty strings while words[-1] == "": del words[-1] # fix span i = 0 while i < len(words): if words[i] == "": reduce_span_vals_gte(action_dict, i) del words[i] else: i += 1 return action_dict, words def process_dict(d): r = {} # print(d) # print("----------------") d = 
remove_key_prefixes(d, ["COPY.yes.", "COPY.no.", 'FREEBUILD.BUILD.', 'answer_type.TAG.', 'FREEBUILD.FREEBUILD.', 'coref_resolve_check.yes.', 'coref_resolve_check.no.']) # print(d) # print("----------------new------------------") if "location" in d: r["location"] = {"location_type": d["location"]} if r['location']['location_type'] == 'coref_resolve_check': del r['location']['location_type'] elif r["location"]["location_type"] == "REFERENCE_OBJECT": r["location"]["location_type"] = "REFERENCE_OBJECT" r["location"]["relative_direction"] = d.get( "location.REFERENCE_OBJECT.relative_direction" ) # no key for EXACT if r["location"]["relative_direction"] in ("EXACT", "Other"): del r["location"]["relative_direction"] d["location.REFERENCE_OBJECT.relative_direction"] = None r["location"].update(process_dict(with_prefix(d, "location."))) for k, v in d.items(): if ( k == "location" or k in ['COPY', 'coref_resolve_check'] or (k == "relative_direction" and v in ("EXACT", "NEAR", "Other")) ): continue # handle span if re.match("[^.]+.span#[0-9]+", k): prefix, rest = k.split(".", 1) idx = int(rest.split("#")[-1]) if prefix in r: r[prefix].append([idx, idx]) # a, b = r[prefix] # r[prefix] = [min(a, idx), max(b, idx)] # expand span to include idx else: r[prefix] = [[idx, idx]] # handle nested dict elif "." 
in k: prefix, rest = k.split(".", 1) prefix_snake = snake_case(prefix) r[prefix_snake] = r.get(prefix_snake, {}) r[prefix_snake].update(process_dict(with_prefix(d, prefix + "."))) # handle const value else: r[k] = v return r def handle_put_memory(d): return {} def handle_commands(d): output = {} action_name = d["action_type"] formatted_dict = with_prefix(d, "action_type.{}.".format(action_name)) child_d = process_dict(with_prefix(d, "action_type.{}.".format(action_name))) # Fix Build/Freebuild mismatch if child_d.get("FREEBUILD") == "FREEBUILD": action_name = 'FREEBUILD' child_d.pop("FREEBUILD", None) if formatted_dict.get('COPY', 'no') == 'yes': action_name = 'COPY' formatted_dict.pop('COPY') # add action type info output['action_type'] = ['yes', action_name.lower()] # add dialogue type info if output['action_type'][1] == 'tag': output['dialogue_type'] = ['yes', 'PUT_MEMORY'] else: output['dialogue_type'] = ['yes', 'HUMAN_GIVE_COMMAND'] for k, v in child_d.items(): if k =='target_action_type': output[k] = ['yes', v] elif type(v)==list: output[k]= ['no', v] else: output[k] = ['yes', v] return output def process_result(full_d, index): worker_id = full_d["WorkerId"] d = with_prefix(full_d, "Answer.root.{}.".format(index)) if not d: return worker_id, {}, full_d['Input.command_{}'.format(index)].split() try: action = d["action_type"] except KeyError: return worker_id, {}, full_d['Input.command_{}'.format(index)].split() action_dict = handle_commands(d) ############## # repeat dict ############## #NOTE: this can probably loop over or hold indices of which specific action ? if action_dict.get('dialogue_type', [None, None])[1] == 'HUMAN_GIVE_COMMAND': if d.get("loop") not in [None, "Other"]: repeat_dict = process_repeat_dict(d) # Some turkers annotate a repeat dict for a repeat_count of 1. 
# Don't include the repeat dict if that's the case if repeat_dict.get('repeat_dir', None) == 'Other': repeat_dict.pop('repeat_dir') if repeat_dict.get("repeat_count"): a, b = repeat_dict["repeat_count"][0] repeat_count_str = " ".join( [full_d["Input.word{}{}".format(index, x)] for x in range(a, b + 1)] ) if repeat_count_str not in ("a", "an", "one", "1"): action_dict['repeat'] = ['yes', repeat_dict] # action_val = list(action_dict.values())[0] # check what this is # if action_specific_dict.get("schematic"): # action_specific_dict["schematic"]["repeat"] = repeat_dict # elif action_specific_dict.get("reference_object"): # action_specific_dict["reference_object"]["repeat"] = repeat_dict # else: # action_specific_dict["repeat"] = repeat_dict else: action_dict['repeat'] = ['yes', repeat_dict] ################## # post-processing ################## # Fix empty words messing up spans words = [full_d["Input.word{}{}".format(index, x)] for x in range(MAX_WORDS)] action_dict, words = fix_spans_due_to_empty_words(action_dict, words) return worker_id, action_dict, words def fix_cnt_in_schematic(words, action_dict): if 'repeat' not in action_dict: return action_dict repeat = action_dict['repeat'] val = [] if 'repeat_count' in repeat[1]: val = repeat[1]['repeat_count'] elif 'repeat_key' in repeat[1] and repeat[1]['repeat_key'] == 'ALL': if any(x in ['all', 'every', 'each'] for x in words): all_val = words.index('all') val = [[all_val, all_val]] else: return action_dict for k, v in action_dict.items(): if k in ['schematic', 'reference_object']: for i, meh in enumerate(v[1]): # print(words, val) if meh in val: v[1].pop(i) action_dict[k] = [v[0], v[1]] return action_dict from pprint import pprint unique_keys = [] with open('/Users/kavyasrinet/Downloads/test_q.csv', "r") as f, open('/Users/kavyasrinet/Downloads/test_q.txt', 'w') as f2: r = csv.DictReader(f) all_data = {} for d in r: worker_id = d["WorkerId"] all_data[worker_id] = {} for i in range(1, 4): sentence = 
d['Input.command_{}'.format(i)] _, action_dict, words = process_result(d, i) a_dict = fix_cnt_in_schematic(words, action_dict) unique_keys.extend(list(a_dict.keys())) all_data[worker_id][sentence] = a_dict for k, v in all_data.items(): f2.write(k+"\t"+str(v)+"\n") print(len(all_data.keys())) a = set(unique_keys) print(a) # 500 qual test: '/Users/kavyasrinet/Downloads/500_qual_test.csv' # first round: '//Users/kavyasrinet/Downloads/14_qual_test.csv' # test from sandbox: 'data/test.csv' ```
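To make the key-manipulation helpers concrete, here are `with_prefix` and `snake_case` (reproduced from the cell above) applied to a toy annotation dict:

```python
import re

# Reproduced from the notebook cell above
def with_prefix(d, prefix):
    return {
        k.split(prefix)[1]: v
        for k, v in d.items()
        if k.startswith(prefix) and v not in ("", None, "None")
    }

def snake_case(s):
    return re.sub("([a-z])([A-Z])", "\\1_\\2", s).lower()

d = {
    "location.REFERENCE_OBJECT.relative_direction": "LEFT",
    "location.REFERENCE_OBJECT.has_colour": None,  # dropped: empty value
    "action_type": "BUILD",                        # dropped: different prefix
}
print(with_prefix(d, "location.REFERENCE_OBJECT."))  # {'relative_direction': 'LEFT'}
print(snake_case("repeatCount"))                     # repeat_count
```

`with_prefix` both filters by prefix and strips it, which is why `process_dict` can recurse on nested keys like `location.REFERENCE_OBJECT.*` one level at a time.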
``` %matplotlib inline ``` # `scikit-learn` - Machine Learning in Python [scikit-learn](http://scikit-learn.org) is a simple and efficient tool for data mining and data analysis. It is built on [NumPy](http://www.numpy.org), [SciPy](https://www.scipy.org/), and [matplotlib](https://matplotlib.org/). The following examples show some of `scikit-learn`'s power. For a complete list, go to the official homepage under [examples](http://scikit-learn.org/stable/auto_examples/index.html) or [tutorials](http://scikit-learn.org/stable/tutorial/index.html). ## Blind source separation using FastICA This example of estimating sources from noisy data is adapted from [`plot_ica_blind_source_separation`](http://scikit-learn.org/stable/auto_examples/decomposition/plot_ica_blind_source_separation.html). ``` import numpy as np import matplotlib.pyplot as plt from scipy import signal from sklearn.decomposition import FastICA, PCA # Generate sample data n_samples = 2000 time = np.linspace(0, 8, n_samples) s1 = np.sin(2 * time) # Signal 1: sinusoidal signal s2 = np.sign(np.sin(3 * time)) # Signal 2: square signal s3 = signal.sawtooth(2 * np.pi * time) # Signal 3: sawtooth signal S = np.c_[s1, s2, s3] S += 0.2 * np.random.normal(size=S.shape) # Add noise S /= S.std(axis=0) # Standardize data # Mix data A = np.array([[1, 1, 1], [0.5, 2, 1.0], [1.5, 1.0, 2.0]]) # Mixing matrix X = np.dot(S, A.T) # Generate observations # Compute ICA ica = FastICA(n_components=3) S_ = ica.fit_transform(X) # Reconstruct signals A_ = ica.mixing_ # Get estimated mixing matrix # For comparison, compute PCA pca = PCA(n_components=3) H = pca.fit_transform(X) # Reconstruct signals based on orthogonal components # Plot results plt.figure(figsize=(12, 4)) models = [X, S, S_, H] names = ['Observations (mixed signal)', 'True Sources', 'ICA recovered signals', 'PCA recovered signals'] colors = ['red', 'steelblue', 'orange'] for ii, (model, name) in enumerate(zip(models, names), 1): plt.subplot(2, 2, ii) plt.title(name) for 
sig, color in zip(model.T, colors): plt.plot(sig, color=color) plt.subplots_adjust(0.09, 0.04, 0.94, 0.94, 0.26, 0.46) plt.show() ``` # Anomaly detection with Local Outlier Factor (LOF) This example presents the Local Outlier Factor (LOF) estimator. The LOF algorithm is an unsupervised outlier detection method which computes the local density deviation of a given data point with respect to its neighbors. It considers as outliers the samples that have a substantially lower density than their neighbors. This example is adapted from [`plot_lof`](http://scikit-learn.org/stable/auto_examples/neighbors/plot_lof.html). ``` import numpy as np import matplotlib.pyplot as plt from sklearn.neighbors import LocalOutlierFactor # Generate train data X = 0.3 * np.random.randn(100, 2) # Generate some abnormal novel observations X_outliers = np.random.uniform(low=-4, high=4, size=(20, 2)) X = np.r_[X + 2, X - 2, X_outliers] # fit the model clf = LocalOutlierFactor(n_neighbors=20) y_pred = clf.fit_predict(X) y_pred_outliers = y_pred[200:] # Plot the level sets of the decision function xx, yy = np.meshgrid(np.linspace(-5, 5, 50), np.linspace(-5, 5, 50)) Z = clf._decision_function(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) plt.title("Local Outlier Factor (LOF)") plt.contourf(xx, yy, Z, cmap=plt.cm.Blues_r) a = plt.scatter(X[:200, 0], X[:200, 1], c='white', edgecolor='k', s=20) b = plt.scatter(X[200:, 0], X[200:, 1], c='red', edgecolor='k', s=20) plt.axis('tight') plt.xlim((-5, 5)) plt.ylim((-5, 5)) plt.legend([a, b], ["normal observations", "abnormal observations"], loc="upper left") plt.show() ``` # SVM: Maximum margin separating hyperplane Plot the maximum margin separating hyperplane within a two-class separable dataset using a Support Vector Machine classifier with a linear kernel. This example is adapted from [`plot_separating_hyperplane`](http://scikit-learn.org/stable/auto_examples/svm/plot_separating_hyperplane.html). 
``` import numpy as np import matplotlib.pyplot as plt from sklearn import svm from sklearn.datasets import make_blobs # we create 40 separable points X, y = make_blobs(n_samples=40, centers=2, random_state=6) # fit the model, don't regularize for illustration purposes clf = svm.SVC(kernel='linear', C=1000) clf.fit(X, y) plt.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=plt.cm.Paired) # plot the decision function ax = plt.gca() xlim = ax.get_xlim() ylim = ax.get_ylim() # create grid to evaluate model xx = np.linspace(xlim[0], xlim[1], 30) yy = np.linspace(ylim[0], ylim[1], 30) YY, XX = np.meshgrid(yy, xx) xy = np.vstack([XX.ravel(), YY.ravel()]).T Z = clf.decision_function(xy).reshape(XX.shape) # plot decision boundary and margins ax.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--']) # plot support vectors ax.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=100, linewidth=1, facecolors='none') plt.show() ``` # `Scikit-Image` - Image processing in Python [scikit-image](http://scikit-image.org/) is a collection of algorithms for image processing, built on [NumPy](http://www.numpy.org) and [SciPy](https://www.scipy.org/). The following examples show some of `scikit-image`'s power. For a complete list, go to the official homepage under [examples](http://scikit-image.org/docs/stable/auto_examples/). ## Sliding window histogram Histogram matching can be used for object detection in images. This example extracts a single coin from the `skimage.data.coins` image and uses histogram matching to attempt to locate it within the original image. This example is adapted from [`plot_windowed_histogram`](http://scikit-image.org/docs/stable/auto_examples/features_detection/plot_windowed_histogram.html). 
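Before the full example, the measure at its core — the chi-squared distance between histograms, `0.5 * sum((X - Y)**2 / (X + Y))`, turned into a similarity by taking the reciprocal — can be sketched on toy data:

```python
import numpy as np

def chi2_similarity(x, y, eps=1.0e-4):
    # Chi-squared distance between two histograms, mapped to a similarity:
    # low distance -> high similarity; eps avoids division by zero.
    num = (x - y) ** 2
    denom = x + y
    denom[denom == 0] = np.inf   # bins empty in both histograms contribute zero
    chi_sqr = 0.5 * np.sum(num / denom)
    return 1.0 / (chi_sqr + eps)

h1 = np.array([0.5, 0.3, 0.2, 0.0])   # toy normalized histograms
h2 = np.array([0.0, 0.2, 0.3, 0.5])
print(chi2_similarity(h1, h1) > chi2_similarity(h1, h2))  # True
```

The windowed-histogram example below applies exactly this per-pixel, broadcasting a reference histogram against a histogram computed in a sliding window around every pixel.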
``` from __future__ import division import numpy as np import matplotlib import matplotlib.pyplot as plt from skimage import data, transform from skimage.util import img_as_ubyte from skimage.morphology import disk from skimage.filters import rank def windowed_histogram_similarity(image, selem, reference_hist, n_bins): # Compute normalized windowed histogram feature vector for each pixel px_histograms = rank.windowed_histogram(image, selem, n_bins=n_bins) # Reshape coin histogram to (1,1,N) for broadcast when we want to use it in # arithmetic operations with the windowed histograms from the image reference_hist = reference_hist.reshape((1, 1) + reference_hist.shape) # Compute Chi squared distance metric: sum((X-Y)^2 / (X+Y)); # a measure of distance between histograms X = px_histograms Y = reference_hist num = (X - Y) ** 2 denom = X + Y denom[denom == 0] = np.infty frac = num / denom chi_sqr = 0.5 * np.sum(frac, axis=2) # Generate a similarity measure. It needs to be low when distance is high # and high when distance is low; taking the reciprocal will do this. # Chi squared will always be >= 0, add small value to prevent divide by 0. similarity = 1 / (chi_sqr + 1.0e-4) return similarity # Load the `skimage.data.coins` image img = img_as_ubyte(data.coins()) # Quantize to 16 levels of greyscale; this way the output image will have a # 16-dimensional feature vector per pixel quantized_img = img // 16 # Select the coin from the 4th column, second row. # Co-ordinate ordering: [x1,y1,x2,y2] coin_coords = [184, 100, 228, 148] # 44 x 44 region coin = quantized_img[coin_coords[1]:coin_coords[3], coin_coords[0]:coin_coords[2]] # Compute coin histogram and normalize coin_hist, _ = np.histogram(coin.flatten(), bins=16, range=(0, 16)) coin_hist = coin_hist.astype(float) / np.sum(coin_hist) # Compute a disk shaped mask that will define the shape of our sliding window # Example coin is ~44px across, so make a disk 61px wide (2 * rad + 1) to be # big enough for other coins too. 
selem = disk(30) # Compute the similarity across the complete image similarity = windowed_histogram_similarity(quantized_img, selem, coin_hist, coin_hist.shape[0]) fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(12, 4)) axes[0].imshow(quantized_img, cmap='gray') axes[0].set_title('Quantized image') axes[0].axis('off') axes[1].imshow(coin, cmap='gray') axes[1].set_title('Coin from 2nd row, 4th column') axes[1].axis('off') axes[2].imshow(img, cmap='gray') axes[2].imshow(similarity, cmap='hot', alpha=0.5) axes[2].set_title('Original image with overlaid similarity') axes[2].axis('off') plt.tight_layout() plt.show() ``` ## Local Thresholding If the image background is relatively uniform, then you can use a global threshold value as presented above. However, if there is large variation in the background intensity, adaptive thresholding (a.k.a. local or dynamic thresholding) may produce better results. This example is adapted from [`plot_thresholding`](http://scikit-image.org/docs/dev/auto_examples/xx_applications/plot_thresholding.html#local-thresholding). ``` from skimage.filters import threshold_otsu, threshold_local image = data.page() global_thresh = threshold_otsu(image) binary_global = image > global_thresh block_size = 35 adaptive_thresh = threshold_local(image, block_size, offset=10) binary_adaptive = image > adaptive_thresh fig, axes = plt.subplots(ncols=3, figsize=(16, 6)) ax = axes.ravel() plt.gray() ax[0].imshow(image) ax[0].set_title('Original') ax[1].imshow(binary_global) ax[1].set_title('Global thresholding') ax[2].imshow(binary_adaptive) ax[2].set_title('Adaptive thresholding') for a in ax: a.axis('off') plt.show() ``` ## Finding local maxima The peak_local_max function returns the coordinates of local peaks (maxima) in an image. A maximum filter is used for finding local maxima. This operation dilates the original image and merges neighboring local maxima closer than the size of the dilation. 
Locations where the original image equals the dilated image are returned as local maxima. This example is adapted from [`plot_peak_local_max`](http://scikit-image.org/docs/stable/auto_examples/segmentation/plot_peak_local_max.html). ``` from scipy import ndimage as ndi import matplotlib.pyplot as plt from skimage.feature import peak_local_max from skimage import data, img_as_float im = img_as_float(data.coins()) # image_max is the dilation of im with a 20*20 structuring element # It is used within peak_local_max function image_max = ndi.maximum_filter(im, size=20, mode='constant') # Comparison between image_max and im to find the coordinates of local maxima coordinates = peak_local_max(im, min_distance=20) # display results fig, axes = plt.subplots(1, 3, figsize=(12, 5), sharex=True, sharey=True, subplot_kw={'adjustable': 'box'}) ax = axes.ravel() ax[0].imshow(im, cmap=plt.cm.gray) ax[0].axis('off') ax[0].set_title('Original') ax[1].imshow(image_max, cmap=plt.cm.gray) ax[1].axis('off') ax[1].set_title('Maximum filter') ax[2].imshow(im, cmap=plt.cm.gray) ax[2].autoscale(False) ax[2].plot(coordinates[:, 1], coordinates[:, 0], 'r.') ax[2].axis('off') ax[2].set_title('Peak local max') fig.tight_layout() plt.show() ``` ## Label image region This example shows how to segment an image with image labeling. The following steps are applied: 1. Thresholding with automatic Otsu method 2. Close small holes with binary closing 3. Remove artifacts touching image border 4. Measure image regions to filter small objects This example is adapted from [`plot_label`](http://scikit-image.org/docs/stable/auto_examples/segmentation/plot_label.html). 
``` import matplotlib.pyplot as plt import matplotlib.patches as mpatches from skimage import data from skimage.filters import threshold_otsu from skimage.segmentation import clear_border from skimage.measure import label, regionprops from skimage.morphology import closing, square from skimage.color import label2rgb image = data.coins()[50:-50, 50:-50] # apply threshold thresh = threshold_otsu(image) bw = closing(image > thresh, square(3)) # remove artifacts connected to image border cleared = clear_border(bw) # label image regions label_image = label(cleared) image_label_overlay = label2rgb(label_image, image=image) fig, ax = plt.subplots(figsize=(10, 6)) ax.imshow(image_label_overlay) for region in regionprops(label_image): # take regions with large enough areas if region.area >= 100: # draw rectangle around segmented coins minr, minc, maxr, maxc = region.bbox rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr, fill=False, edgecolor='red', linewidth=2) ax.add_patch(rect) ax.set_axis_off() plt.tight_layout() plt.show() ```
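Both thresholding examples above rely on `threshold_otsu`, which picks the threshold that maximizes the between-class variance of the two pixel populations it creates. A numpy-only sketch of that idea on a toy image — an illustration, not scikit-image's actual implementation:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the threshold maximizing between-class variance (Otsu's idea)."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    best_thresh, best_var = centers[0], -1.0
    for i in range(1, nbins):                 # split: bins [0, i) vs [i, nbins)
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:                # one class empty -> skip
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0   # class means
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_thresh, best_var = centers[i - 1], var_between
    return best_thresh

# Toy image: dark background with a bright square
img = np.zeros((20, 20))
img[5:15, 5:15] = 200.0
t = otsu_threshold(img)
print((img > t).sum())   # 100 -- the bright square is separated cleanly
```

The local variant used in `threshold_local` replaces this single global statistic with one computed per neighborhood, which is why it copes with uneven illumination.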
<a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a> $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ $ \newcommand{\dot}[2]{ #1 \cdot #2} $ $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ $ \newcommand{\mypar}[1]{\left( #1 \right)} $ $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ $ \newcommand{\onehalf}{\frac{1}{2}} $ $ \newcommand{\donehalf}{\dfrac{1}{2}} $ $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ $ \newcommand{\vzero}{\myvector{1\\0}} $ $ \newcommand{\vone}{\myvector{0\\1}} $ $ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $ $ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ $ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $ $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $ $ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $ $ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $ $ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $ 
$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $ $ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $ <font style="font-size:28px;" align="left"><b><font color="blue"> Solutions for </font>Grover's Search: Implementation </b></font> <br> _prepared by Maksim Dimitrijev and Özlem Salehi_ <br><br> <a id="task2"></a> <h3>Task 2</h3> Let $N=4$. Implement the query phase and check the unitary matrix for the query operator. Note that we are interested in the top-left $4 \times 4$ part of the matrix since the remaining parts are due to the ancilla qubit. You are given a function $f$ and its corresponding quantum operator $U_f$. First run the following cell to load operator $U_f$. Then you can make queries to $f$ by applying the operator $U_f$ via the following command: <pre>Uf(circuit,qreg)</pre> ``` %run quantum.py ``` Now use phase kickback to flip the sign of the marked element: <ul> <li>Set output qubit (qreg[2]) to $\ket{-}$ by applying X and H.</li> <li>Apply operator $U_f$.</li> <li>Set output qubit (qreg[2]) back.</li> </ul> (Can you guess the marked element by looking at the unitary matrix?) <h3>Solution</h3> ``` from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer qreg = QuantumRegister(3) #No need to define classical register as we are not measuring mycircuit = QuantumCircuit(qreg) #set ancilla mycircuit.x(qreg[2]) mycircuit.h(qreg[2]) Uf(mycircuit,qreg) #set ancilla back mycircuit.h(qreg[2]) mycircuit.x(qreg[2]) job = execute(mycircuit,Aer.get_backend('unitary_simulator')) u=job.result().get_unitary(mycircuit,decimals=3) #We are interested in the top-left 4x4 part for i in range(4): s="" for j in range(4): val = str(u[i][j].real) while(len(val)<5): val = " "+val s = s + val print(s) mycircuit.draw(output='mpl') ``` <a id="task3"></a> <h3>Task 3</h3> Let $N=4$. 
Implement the inversion operator and check whether you obtain the following matrix: $\mymatrix{cccc}{-0.5 & 0.5 & 0.5 & 0.5 \\ 0.5 & -0.5 & 0.5 & 0.5 \\ 0.5 & 0.5 & -0.5 & 0.5 \\ 0.5 & 0.5 & 0.5 & -0.5}$. <h3>Solution</h3> ``` def inversion(circuit,quantum_reg): #step 1 circuit.h(quantum_reg[1]) circuit.h(quantum_reg[0]) #step 2 circuit.x(quantum_reg[1]) circuit.x(quantum_reg[0]) #step 3 circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[2]) #step 4 circuit.x(quantum_reg[1]) circuit.x(quantum_reg[0]) #step 5 circuit.x(quantum_reg[2]) #step 6 circuit.h(quantum_reg[1]) circuit.h(quantum_reg[0]) ``` Below you can check the matrix of your inversion operator and what the circuit looks like. We are interested in the top-left $4 \times 4$ part of the matrix; the remaining parts arise from the ancilla qubit. ``` from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer qreg1 = QuantumRegister(3) mycircuit1 = QuantumCircuit(qreg1) #set ancilla qubit mycircuit1.x(qreg1[2]) mycircuit1.h(qreg1[2]) inversion(mycircuit1,qreg1) #set ancilla qubit back mycircuit1.h(qreg1[2]) mycircuit1.x(qreg1[2]) job = execute(mycircuit1,Aer.get_backend('unitary_simulator')) u=job.result().get_unitary(mycircuit1,decimals=3) for i in range(4): s="" for j in range(4): val = str(u[i][j].real) while(len(val)<5): val = " "+val s = s + val print(s) mycircuit1.draw(output='mpl') ``` <a id="task4"></a> <h3>Task 4: Testing Grover's search</h3> Now we are ready to test our operations and run Grover's search. Suppose that there are 4 elements in the list and try to find the marked element. You are given the operator $U_f$. First run the following cell to load it. You can access it via <pre>Uf(circuit,qreg)</pre> qreg[2] is the ancilla qubit and it is shared by the query and the inversion operators. Which state do you observe the most? 
```
%run quantum.py
```

<h3>Solution</h3>

```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer

qreg = QuantumRegister(3)
creg = ClassicalRegister(2)
mycircuit = QuantumCircuit(qreg,creg)

# Grover
# initial step - equal superposition
for i in range(2):
    mycircuit.h(qreg[i])

# set the ancilla
mycircuit.x(qreg[2])
mycircuit.h(qreg[2])
mycircuit.barrier()

# change the number of iterations
iterations = 1

# Grover's iterations
for i in range(iterations):
    # query
    Uf(mycircuit,qreg)
    mycircuit.barrier()
    # inversion
    inversion(mycircuit,qreg)
    mycircuit.barrier()

# set the ancilla back
mycircuit.h(qreg[2])
mycircuit.x(qreg[2])

mycircuit.measure(qreg[0],creg[0])
mycircuit.measure(qreg[1],creg[1])

job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
# print the outcome
for outcome in counts:
    print(outcome,"is observed",counts[outcome],"times")

mycircuit.draw(output='mpl')
```

<a id="task5"></a>
<h3>Task 5 (Optional, challenging)</h3>

Implement the inversion operation for $n=3$ ($N=8$). This time you will need 5 qubits: 3 for the operation, 1 for the ancilla, and one more qubit to implement the NOT gate controlled by three qubits. In the implementation, the ancilla qubit will be qubit 3, the control qubits are 0, 1 and 2, and qubit 4 is used for the multiply-controlled operation.

As a result you should obtain the following values in the top-left $8 \times 8$ entries:

$\mymatrix{cccccccc}{-0.75 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & -0.75 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & -0.75 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & -0.75 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 & -0.75 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & -0.75 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & -0.75 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & -0.75}$.
<h3>Solution</h3>

```
def big_inversion(circuit,quantum_reg):
    for i in range(3):
        circuit.h(quantum_reg[i])
        circuit.x(quantum_reg[i])

    circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[4])
    circuit.ccx(quantum_reg[2],quantum_reg[4],quantum_reg[3])
    circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[4])

    for i in range(3):
        circuit.x(quantum_reg[i])
        circuit.h(quantum_reg[i])

    circuit.x(quantum_reg[3])
```

Below you can check the matrix of your inversion operator. We are interested in the top-left $8 \times 8$ part of the matrix; the remaining parts are there because of the additional qubits.

```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer

big_qreg2 = QuantumRegister(5)
big_mycircuit2 = QuantumCircuit(big_qreg2)

# set the ancilla
big_mycircuit2.x(big_qreg2[3])
big_mycircuit2.h(big_qreg2[3])

big_inversion(big_mycircuit2,big_qreg2)

# set the ancilla back
big_mycircuit2.h(big_qreg2[3])
big_mycircuit2.x(big_qreg2[3])

job = execute(big_mycircuit2,Aer.get_backend('unitary_simulator'))
u = job.result().get_unitary(big_mycircuit2,decimals=3)
for i in range(8):
    s = ""
    for j in range(8):
        val = str(u[i][j].real)
        while(len(val)<6):
            val = " "+val
        s = s + val
    print(s)
```

<a id="task6"></a>
<h3>Task 6: Testing Grover's search for 8 elements (Optional, challenging)</h3>

Now we will test Grover's search on 8 elements.

You are given the operator $U_{f_8}$. First run the following cell to load it. You can access it via:

<pre>Uf_8(circuit,qreg)</pre>

Which state do you observe the most?
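For $N=8$ the same classical formula (a sketch, not part of the notebook) explains the iteration counts tried in the solution below: the success probability peaks around two iterations, and overshooting rotates the state past the marked element.

```python
import math

def grover_success_probability(N, iterations):
    # standard Grover angle: theta = arcsin(1/sqrt(N))
    theta = math.asin(1 / math.sqrt(N))
    return math.sin((2 * iterations + 1) * theta) ** 2

# Success probability for N = 8 after 1, 2, 6 and 12 iterations:
# about 0.781, 0.945, ~1.0 and 0.145 respectively.
for k in (1, 2, 6, 12):
    print(k, round(grover_success_probability(8, k), 3))
```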
```
%run quantum.py
```

<h3>Solution</h3>

```
def big_inversion(circuit,quantum_reg):
    for i in range(3):
        circuit.h(quantum_reg[i])
        circuit.x(quantum_reg[i])

    circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[4])
    circuit.ccx(quantum_reg[2],quantum_reg[4],quantum_reg[3])
    circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[4])

    for i in range(3):
        circuit.x(quantum_reg[i])
        circuit.h(quantum_reg[i])

    circuit.x(quantum_reg[3])

from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer

qreg8 = QuantumRegister(5)
creg8 = ClassicalRegister(3)
mycircuit8 = QuantumCircuit(qreg8,creg8)

# set the ancilla
mycircuit8.x(qreg8[3])
mycircuit8.h(qreg8[3])

# Grover
for i in range(3):
    mycircuit8.h(qreg8[i])
mycircuit8.barrier()

# Try 1, 2, 6 and 12 iterations of Grover
for i in range(2):
    Uf_8(mycircuit8,qreg8)
    mycircuit8.barrier()
    big_inversion(mycircuit8,qreg8)
    mycircuit8.barrier()

# set the ancilla back
mycircuit8.h(qreg8[3])
mycircuit8.x(qreg8[3])

for i in range(3):
    mycircuit8.measure(qreg8[i],creg8[i])

job = execute(mycircuit8,Aer.get_backend('qasm_simulator'),shots=10000)
counts8 = job.result().get_counts(mycircuit8)
# print the outcomes
for outcome in counts8:
    print(outcome,"is observed",counts8[outcome],"times")

mycircuit8.draw(output='mpl')
```

<a id="task8"></a>
<h3>Task 8</h3>

Implement an oracle function which marks the element 00. Run Grover's search with the oracle you have implemented.
```
def oracle_00(circuit,qreg):
```

<h3>Solution</h3>

```
def oracle_00(circuit,qreg):
    circuit.x(qreg[0])
    circuit.x(qreg[1])
    circuit.ccx(qreg[0],qreg[1],qreg[2])
    circuit.x(qreg[0])
    circuit.x(qreg[1])

from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer

qreg = QuantumRegister(3)
creg = ClassicalRegister(2)
mycircuit = QuantumCircuit(qreg,creg)

# Grover
# initial step - equal superposition
for i in range(2):
    mycircuit.h(qreg[i])

# set the ancilla
mycircuit.x(qreg[2])
mycircuit.h(qreg[2])
mycircuit.barrier()

# change the number of iterations
iterations = 1

# Grover's iterations
for i in range(iterations):
    # query
    oracle_00(mycircuit,qreg)
    mycircuit.barrier()
    # inversion
    inversion(mycircuit,qreg)
    mycircuit.barrier()

# set the ancilla back
mycircuit.h(qreg[2])
mycircuit.x(qreg[2])

mycircuit.measure(qreg[0],creg[0])
mycircuit.measure(qreg[1],creg[1])

job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
# print the reverse of the outcome
for outcome in counts:
    reverse_outcome = ''
    for i in outcome:
        reverse_outcome = i + reverse_outcome
    print(reverse_outcome,"is observed",counts[outcome],"times")

mycircuit.draw(output='mpl')
```
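The inversion (diffusion) matrices checked in Tasks 3 and 5 can also be reproduced classically: inversion about the mean is $D = \frac{2}{N}J - I$, where $J$ is the all-ones matrix. A quick NumPy sketch (not part of the notebook) showing the $-0.5/0.5$ and $-0.75/0.25$ patterns:

```python
import numpy as np

def diffusion_matrix(N):
    # inversion about the mean: D = (2/N) * J - I
    return (2.0 / N) * np.ones((N, N)) - np.eye(N)

print(diffusion_matrix(4))  # -0.5 on the diagonal, 0.5 elsewhere
print(diffusion_matrix(8))  # -0.75 on the diagonal, 0.25 elsewhere
```

Note that $D$ is its own inverse, as expected for a reflection: $D^2 = I$.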
# Neural Networks

## Fully Connected Layers

### Tensor-level implementation

```
import tensorflow as tf
from matplotlib import pyplot as plt

plt.rcParams['font.size'] = 16
plt.rcParams['font.family'] = ['STKaiti']
plt.rcParams['axes.unicode_minus'] = False

# create the W, b tensors
x = tf.random.normal([2,784])
w1 = tf.Variable(tf.random.truncated_normal([784, 256], stddev=0.1))
b1 = tf.Variable(tf.zeros([256]))
# linear transformation
o1 = tf.matmul(x,w1) + b1
# activation function
o1 = tf.nn.relu(o1)
o1
```

### Layer-level implementation

```
x = tf.random.normal([4,28*28])
# import the layers module
from tensorflow.keras import layers
# create a fully connected layer, specifying the number of output nodes and the activation function
fc = layers.Dense(512, activation=tf.nn.relu)
# run one fully connected forward pass through the fc instance, returning the output tensor
h1 = fc(x)
h1
```

The single line of code above creates a fully connected layer fc with 512 output nodes. The number of input nodes is obtained automatically when fc(x) is computed, at which point the internal weight tensor $W$ and bias tensor $\mathbf{b}$ are created. We can access the weight tensor $W$ and bias tensor $\mathbf{b}$ through the member names kernel and bias:

```
# get the weight matrix of the Dense layer
fc.kernel
# get the bias vector of the Dense layer
fc.bias
# list of parameters to be optimized
fc.trainable_variables
# list of all parameters
fc.variables
```

## Neural Networks

### Tensor-level implementation

```
# hidden layer 1 tensors
w1 = tf.Variable(tf.random.truncated_normal([784, 256], stddev=0.1))
b1 = tf.Variable(tf.zeros([256]))
# hidden layer 2 tensors
w2 = tf.Variable(tf.random.truncated_normal([256, 128], stddev=0.1))
b2 = tf.Variable(tf.zeros([128]))
# hidden layer 3 tensors
w3 = tf.Variable(tf.random.truncated_normal([128, 64], stddev=0.1))
b3 = tf.Variable(tf.zeros([64]))
# output layer tensors
w4 = tf.Variable(tf.random.truncated_normal([64, 10], stddev=0.1))
b4 = tf.Variable(tf.zeros([10]))

with tf.GradientTape() as tape:  # gradient recorder
    # x: [b, 28*28]
    # hidden layer 1 forward pass, [b, 28*28] => [b, 256]
    h1 = x@w1 + tf.broadcast_to(b1, [x.shape[0], 256])
    h1 = tf.nn.relu(h1)
    # hidden layer 2 forward pass, [b, 256] => [b, 128]
    h2 = h1@w2 + b2
    h2 = tf.nn.relu(h2)
    # hidden layer 3 forward pass, [b, 128] => [b, 64]
    h3 = h2@w3 + b3
    h3 = tf.nn.relu(h3)
    # output layer forward pass, [b, 64] => [b, 10]
    h4 = h3@w4 + b4
```

### Layer-level implementation

```
# import the commonly used network layers
from tensorflow.keras import layers,Sequential
# hidden layer 1
fc1 = layers.Dense(256, activation=tf.nn.relu)
# hidden layer 2
fc2 = layers.Dense(128, activation=tf.nn.relu)
# hidden layer 3
fc3 = layers.Dense(64, activation=tf.nn.relu)
# output layer
fc4 = layers.Dense(10, activation=None)

x = tf.random.normal([4,28*28])
# output of hidden layer 1
h1 = fc1(x)
# output of hidden layer 2
h2 = fc2(h1)
# output of hidden layer 3
h3 = fc3(h2)
# network output from the output layer
h4 = fc4(h3)
```

For a network like this, where data propagates forward layer by layer, we can also wrap the layers in a Sequential container: a single call to the resulting model runs the forward pass through all layers, which is more convenient.

```
# import the Sequential container
from tensorflow.keras import layers,Sequential
# wrap the layers into one network object with the Sequential container
model = Sequential([
    layers.Dense(256, activation=tf.nn.relu),  # hidden layer 1
    layers.Dense(128, activation=tf.nn.relu),  # hidden layer 2
    layers.Dense(64, activation=tf.nn.relu),   # hidden layer 3
    layers.Dense(10, activation=None),         # output layer
])
out = model(x)  # forward pass to get the output
```

## Activation Functions

### Sigmoid

$$\text{Sigmoid}(x) \triangleq \frac{1}{1 + e^{-x}}$$

```
# build an input vector from -6 to 6
x = tf.linspace(-6.,6.,10)
x
# apply the Sigmoid function
sigmoid_y = tf.nn.sigmoid(x)
sigmoid_y

def set_plt_ax():
    # get the current axis object
    ax = plt.gca()
    # hide the right and top spines
    ax.spines['right'].set_color('none')
    ax.spines['top'].set_color('none')
    # use the bottom spine as the x axis and the left spine as the y axis
    ax.xaxis.set_ticks_position('bottom')
    ax.yaxis.set_ticks_position('left')
    # anchor the bottom spine (the x axis) at y = 0, and the left spine at x = 0
    ax.spines['bottom'].set_position(('data', 0))
    ax.spines['left'].set_position(('data', 0))

set_plt_ax()
plt.plot(x, sigmoid_y, color='C4', label='Sigmoid')
plt.xlim(-6, 6)
plt.ylim(0, 1)
plt.legend(loc=2)
plt.show()
```

### ReLU

$$\text{ReLU}(x) \triangleq \max(0, x)$$

```
# apply the ReLU activation function
relu_y = tf.nn.relu(x)
relu_y

set_plt_ax()
plt.plot(x, relu_y, color='C4', label='ReLU')
plt.xlim(-6, 6)
plt.ylim(0, 6)
plt.legend(loc=2)
plt.show()
```

### LeakyReLU

$$\text{LeakyReLU}(x) \triangleq \left\{ \begin{array}{cc} x \quad x \geqslant 0 \\ px \quad x < 0 \end{array} \right.$$

```
# apply the LeakyReLU activation function
leakyrelu_y = tf.nn.leaky_relu(x, alpha=0.1)
leakyrelu_y

set_plt_ax()
plt.plot(x, leakyrelu_y, color='C4', label='LeakyReLU')
plt.xlim(-6, 6)
plt.ylim(-1, 6)
plt.legend(loc=2)
plt.show()
```

### Tanh

$$\tanh(x)=\frac{e^x-e^{-x}}{e^x + e^{-x}}= 2 \cdot \text{sigmoid}(2x) - 1$$

```
# apply the tanh activation function
tanh_y = tf.nn.tanh(x)
tanh_y

set_plt_ax()
plt.plot(x, tanh_y, color='C4', label='Tanh')
plt.xlim(-6, 6)
plt.ylim(-1.5, 1.5)
plt.legend(loc=2)
plt.show()
```

## Output Layer Design

### Values in [0,1] that sum to 1

$$Softmax(z_i) \triangleq \frac{e^{z_i}}{\sum_{j=1}^{d_{out}} e^{z_j}}$$

```
z = tf.constant([2.,1.,0.1])
# apply the Softmax function
tf.nn.softmax(z)

# construct the output of the output layer
z = tf.random.normal([2,10])
# construct the ground-truth labels
y_onehot = tf.constant([1,3])
# one-hot encoding
y_onehot = tf.one_hot(y_onehot, depth=10)
# The output layer did not apply a Softmax function, so from_logits is set to True;
# this way categorical_crossentropy applies Softmax internally before computing the loss
loss = tf.keras.losses.categorical_crossentropy(y_onehot,z,from_logits=True)
loss = tf.reduce_mean(loss)  # average cross-entropy loss
loss

# create the combined Softmax + cross-entropy loss class; the output z did not go through softmax
criteon = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
loss = criteon(y_onehot,z)  # compute the loss
loss
```

### Values in [-1, 1]

```
x = tf.linspace(-6.,6.,10)
# tanh activation function
tf.tanh(x)
```

## Loss Computation

### Mean squared error

$$\text{MSE}(y, o) \triangleq \frac{1}{d_{out}} \sum_{i=1}^{d_{out}}(y_i-o_i)^2$$

The MSE loss is always greater than or equal to 0. When it reaches its minimum of 0, the output equals the true labels, and the network parameters are in their optimal state.

```
# construct the network output
o = tf.random.normal([2,10])
# construct the ground-truth labels
y_onehot = tf.constant([1,3])
y_onehot = tf.one_hot(y_onehot, depth=10)
# compute the mean squared error
loss = tf.keras.losses.MSE(y_onehot, o)
loss
# average over the batch
loss = tf.reduce_mean(loss)
loss
# create an MSE loss class
criteon = tf.keras.losses.MeanSquaredError()
# compute the batch mean squared error
loss = criteon(y_onehot,o)
loss
```

### Cross-entropy loss

For a one-hot label $y$ with the 1 at index $i$:

$$ \begin{aligned} H(p \| q) &=D_{K L}(p \| q) \\ &=\sum_{j} y_{j} \log \left(\frac{y_j}{o_j}\right) \\ &= 1 \cdot \log \frac{1}{o_i}+ \sum_{j \neq i} 0 \cdot \log \left(\frac{0}{o_j}\right) \\ & =-\log o_{i} \end{aligned} $$

## Hands-on: Car Fuel Consumption Prediction

```
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, losses


def load_dataset():
    # download the Auto MPG dataset online
    dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
    # fuel efficiency (miles per gallon), cylinders, displacement, horsepower, weight,
    # acceleration, model year, origin
    column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration', 'Model Year', 'Origin']
    raw_dataset = pd.read_csv(dataset_path, names=column_names, na_values="?", comment='\t', sep=" ", skipinitialspace=True)
    dataset = raw_dataset.copy()
    return dataset


dataset = load_dataset()
# look at a few rows
dataset.head()


def preprocess_dataset(dataset):
    dataset = dataset.copy()
    # count and drop rows with missing values
    dataset = dataset.dropna()
    # handle the categorical feature: the Origin column encodes the origin
    # as 1, 2, 3, standing for USA, Europe and Japan respectively;
    # pop this column
    origin = dataset.pop('Origin')
    # write new indicator columns based on the origin column
    dataset['USA'] = (origin == 1) * 1.0
    dataset['Europe'] = (origin == 2) * 1.0
    dataset['Japan'] = (origin == 3) * 1.0
    # split into training and test sets
    train_dataset = dataset.sample(frac=0.8, random_state=0)
    test_dataset = dataset.drop(train_dataset.index)
    return train_dataset, test_dataset


train_dataset, test_dataset = preprocess_dataset(dataset)

# plot some statistics
sns_plot = sns.pairplot(train_dataset[["Cylinders", "Displacement", "Weight", "MPG"]], diag_kind="kde")

# look at the statistics of the training inputs X
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats


def norm(x, train_stats):
    """
    Standardize the data.
    :param x:
    :param train_stats: statistics of the training set
    :return:
    """
    return (x - train_stats['mean']) / train_stats['std']


# move the MPG fuel-efficiency column out as the ground-truth label Y
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')

# standardize
normed_train_data = norm(train_dataset, train_stats)
normed_test_data = norm(test_dataset, train_stats)

print(normed_train_data.shape, train_labels.shape)
print(normed_test_data.shape, test_labels.shape)


class Network(keras.Model):
    # regression network
    def __init__(self):
        super(Network, self).__init__()
        # create 3 fully connected layers
        self.fc1 = layers.Dense(64, activation='relu')
        self.fc2 = layers.Dense(64, activation='relu')
        self.fc3 = layers.Dense(1)

    def call(self, inputs):
        # pass through the 3 fully connected layers in turn
        x = self.fc1(inputs)
        x = self.fc2(x)
        x = self.fc3(x)
        return x


def build_model():
    # create the network
    model = Network()
    model.build(input_shape=(4, 9))
    model.summary()
    return model


model = build_model()
optimizer = tf.keras.optimizers.RMSprop(0.001)
train_db = tf.data.Dataset.from_tensor_slices((normed_train_data.values, train_labels.values))
train_db = train_db.shuffle(100).batch(32)


def train(model, train_db, optimizer, normed_test_data, test_labels):
    train_mae_losses = []
    test_mae_losses = []
    for epoch in range(200):
        for step, (x, y) in enumerate(train_db):
            with tf.GradientTape() as tape:
                out = model(x)
                loss = tf.reduce_mean(losses.MSE(y, out))
                mae_loss = tf.reduce_mean(losses.MAE(y, out))
            if step % 10 == 0:
                print(epoch, step, float(loss))
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
        train_mae_losses.append(float(mae_loss))
        out = model(tf.constant(normed_test_data.values))
        test_mae_losses.append(tf.reduce_mean(losses.MAE(test_labels, out)))
    return train_mae_losses, test_mae_losses


def plot(train_mae_losses, test_mae_losses):
    plt.figure()
    plt.xlabel('Epoch')
    plt.ylabel('MAE')
    plt.plot(train_mae_losses, label='Train')
    plt.plot(test_mae_losses, label='Test')
    # plt.ylim([0,10])
    plt.legend()
    plt.show()


train_mae_losses, test_mae_losses = train(model, train_db, optimizer, normed_test_data, test_labels)
plot(train_mae_losses, test_mae_losses)
```
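The Softmax and cross-entropy formulas above can be checked with a few lines of plain NumPy (an illustrative sketch, independent of TensorFlow):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; the result lies in [0,1] and sums to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(y_onehot, o):
    # for a one-hot label with the 1 at index i, H(p||q) = -log o_i
    return -np.log(o[np.argmax(y_onehot)])

z = np.array([2., 1., 0.1])
p = softmax(z)
print(p, p.sum())  # probabilities summing to 1
print(cross_entropy(np.array([1., 0., 0.]), p))
```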
```
%matplotlib inline
```

# OTDA unsupervised vs semi-supervised setting

This example introduces semi-supervised domain adaptation in a 2D setting. It explains the problem of semi-supervised domain adaptation and introduces some optimal transport approaches to solve it.

Quantities such as optimal couplings, greater coupling coefficients and transported samples are represented in order to give a visual understanding of what the transport methods are doing.

```
# Authors: Remi Flamary <remi.flamary@unice.fr>
#          Stanislas Chambon <stan.chambon@gmail.com>
#
# License: MIT License
# sphinx_gallery_thumbnail_number = 3

import matplotlib.pylab as pl
import ot
```

## Generate data

```
n_samples_source = 150
n_samples_target = 150

Xs, ys = ot.datasets.make_data_classif('3gauss', n_samples_source)
Xt, yt = ot.datasets.make_data_classif('3gauss2', n_samples_target)
```

## Transport source samples onto target samples

```
# unsupervised domain adaptation
ot_sinkhorn_un = ot.da.SinkhornTransport(reg_e=1e-1)
ot_sinkhorn_un.fit(Xs=Xs, Xt=Xt)
transp_Xs_sinkhorn_un = ot_sinkhorn_un.transform(Xs=Xs)

# semi-supervised domain adaptation
ot_sinkhorn_semi = ot.da.SinkhornTransport(reg_e=1e-1)
ot_sinkhorn_semi.fit(Xs=Xs, Xt=Xt, ys=ys, yt=yt)
transp_Xs_sinkhorn_semi = ot_sinkhorn_semi.transform(Xs=Xs)

# semi-supervised DA uses the available labeled target samples to modify the
# cost matrix involved in the OT problem. The cost of transporting a source
# sample of class A onto a target sample of class B != A is set to infinite,
# or a very large value

# note that in the present case we consider that all the target samples are
# labeled. For daily applications, some target samples might not have labels;
# in this case the elements of yt corresponding to these samples should be
# filled with -1.
# Warning: we recall that -1 cannot be used as a class label
```

## Fig 1 : plots source and target samples + matrix of pairwise distance

```
pl.figure(1, figsize=(10, 10))

pl.subplot(2, 2, 1)
pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples')
pl.xticks([])
pl.yticks([])
pl.legend(loc=0)
pl.title('Source samples')

pl.subplot(2, 2, 2)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples')
pl.xticks([])
pl.yticks([])
pl.legend(loc=0)
pl.title('Target samples')

pl.subplot(2, 2, 3)
pl.imshow(ot_sinkhorn_un.cost_, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Cost matrix - unsupervised DA')

pl.subplot(2, 2, 4)
pl.imshow(ot_sinkhorn_semi.cost_, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Cost matrix - semisupervised DA')

pl.tight_layout()

# the optimal coupling in the semi-supervised DA case will exhibit a shape
# similar to the cost matrix (a block-diagonal matrix)
```

## Fig 2 : plots optimal couplings for the different methods

```
pl.figure(2, figsize=(8, 4))

pl.subplot(1, 2, 1)
pl.imshow(ot_sinkhorn_un.coupling_, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nUnsupervised DA')

pl.subplot(1, 2, 2)
pl.imshow(ot_sinkhorn_semi.coupling_, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nSemi-supervised DA')

pl.tight_layout()
```

## Fig 3 : plot transported samples

```
# display transported samples
pl.figure(4, figsize=(8, 4))
pl.subplot(1, 2, 1)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
           label='Target samples', alpha=0.5)
pl.scatter(transp_Xs_sinkhorn_un[:, 0], transp_Xs_sinkhorn_un[:, 1], c=ys,
           marker='+', label='Transp samples', s=30)
pl.title('Transported samples\nSinkhornTransport (unsupervised)')
pl.legend(loc=0)
pl.xticks([])
pl.yticks([])

pl.subplot(1, 2, 2)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
           label='Target samples', alpha=0.5)
pl.scatter(transp_Xs_sinkhorn_semi[:, 0], transp_Xs_sinkhorn_semi[:, 1], c=ys,
           marker='+', label='Transp samples', s=30)
pl.title('Transported samples\nSinkhornTransport (semi-supervised)')
pl.xticks([])
pl.yticks([])

pl.tight_layout()
pl.show()
```
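The cost-matrix modification described above can be sketched in plain NumPy. This is a hypothetical helper for illustration, not POT's actual implementation: entries pairing a source sample with a labeled target sample of a different class get a very large cost, so the coupling cannot move mass between classes, while unlabeled targets (label -1) are left untouched.

```python
import numpy as np

def label_aware_cost(C, ys, yt, big=1e10):
    # copy the pairwise cost matrix and forbid cross-class transport
    C = C.copy()
    for j, label in enumerate(yt):
        if label == -1:  # unlabeled target sample: leave its column as-is
            continue
        C[ys != label, j] = big
    return C

C = np.zeros((4, 3))
ys = np.array([0, 0, 1, 1])
yt = np.array([0, 1, -1])  # third target sample is unlabeled
Cm = label_aware_cost(C, ys, yt)
print(Cm)
```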
```
import numpy as np
import pandas as pd
import pickle
import math
from pandas import HDFStore
import argparse

###################################################################################
# location
node_ids_filename = 'data/node_locate.txt'

with open(node_ids_filename) as f:
    _node_ids = f.read().strip()
_node_ids = _node_ids.replace('\n', ' ')
_node_ids = _node_ids.split(' ')

node_id = []
node_ids = []
node_name = []
node_loc = []

for i in range(len(_node_ids)):
    if(_node_ids[i] == ''):
        continue
    node_id.append(_node_ids[i])

for i in range(len(node_id)):
    if(i <= 4):
        continue
    if(node_id[i] == 'C'):
        continue
    node_ids.append(node_id[i])

for i in range(len(node_ids)):
    if(i % 4 == 0):
        line = []
        node_name.append(node_ids[i])
        line.append(node_ids[i+1])
        line.append(node_ids[i+2])
        line.append(node_ids[i+3])
        node_loc.append(line)

node_loc = np.array(node_loc).astype('float32')
print("node_loc", node_loc.shape)

###################################################################################
# save to npy files
# Note: _node_signal is built in a separate signal-parsing cell not shown here;
# the save below only works once it is defined.
# np.save('data/node_signal', _node_signal)
np.save('data/node_loc', node_loc)

import numpy as np
import pandas as pd
import pickle
import math
from pandas import HDFStore
import argparse
import matplotlib.pyplot as plt

####################################################################################
node_loc = np.load('data/node_loc.npy')
n_nodes = node_loc.shape[0]
node_id = [str(i) for i in range(1, 1+n_nodes)]
node_id_to_ind = {}
for i, node in enumerate(node_id, 1):
    node_id_to_ind[node] = i
num_node = len(node_id)

# element 1 ################
node_link_filename = 'data/element1.txt'
link_mx = np.zeros((num_node, num_node), dtype=np.float32)
with open(node_link_filename) as f:
    _node_link = f.read().strip()
_node_link = _node_link.replace('\n', ' ')
_node_link = _node_link.split(' ')
node_link = []
for i in range(len(_node_link)):
    if(_node_link[i] == ''):
        continue
    node_link.append(_node_link[i])
element_id = []
node_link = np.reshape(node_link, (-1, 6))
node_link = node_link.astype('int32')
link_mx = np.zeros((num_node, num_node), dtype=np.float32)
for i in range(39):
    link_mx[node_link[i][2]-1][node_link[i][3]-1] = node_link[i][1]
    link_mx[node_link[i][3]-1][node_link[i][2]-1] = node_link[i][1]
link_mx1_1 = link_mx
link_mx = np.zeros((num_node, num_node), dtype=np.float32)
for i in range(39, node_link.shape[0]):
    link_mx[node_link[i][2]-1][node_link[i][3]-1] = node_link[i][1]
    link_mx[node_link[i][3]-1][node_link[i][2]-1] = node_link[i][1]
link_mx1_2 = link_mx

# element 2 ################
node_link_filename = 'data/element2.txt'
link_mx = np.zeros((num_node, num_node), dtype=np.float32)
with open(node_link_filename) as f:
    _node_link = f.read().strip()
_node_link = _node_link.replace('\n', ' ')
_node_link = _node_link.split(' ')
node_link = []
for i in range(len(_node_link)):
    if(_node_link[i] == ''):
        continue
    node_link.append(_node_link[i])
element_id = []
node_link = [i for i in node_link if i.isdigit()]
node_link = np.reshape(node_link, (-1, 5))
node_link = node_link.astype('int32')
for i in range(node_link.shape[0]):
    link_mx[node_link[i][1]-1][node_link[i][2]-1] = node_link[i][1]
    link_mx[node_link[i][2]-1][node_link[i][1]-1] = node_link[i][1]
link_mx2 = link_mx

# element 3 ################
node_link_filename = 'data/element3.txt'
link_mx = np.zeros((num_node, num_node), dtype=np.float32)
with open(node_link_filename) as f:
    _node_link = f.read().strip()
_node_link = _node_link.replace('\n', ' ')
_node_link = _node_link.split(' ')
node_link = []
for i in range(len(_node_link)):
    if(_node_link[i] == ''):
        continue
    node_link.append(_node_link[i])
element_id = []
node_link = np.reshape(node_link, (-1, 6))
node_link = node_link.astype('int32')
for i in range(node_link.shape[0]):
    link_mx[node_link[i][2]-1][node_link[i][3]-1] = node_link[i][1]
    link_mx[node_link[i][3]-1][node_link[i][2]-1] = node_link[i][1]
link_mx3 = link_mx

####################################################################################
link_mx = link_mx1_1 + link_mx1_2 + link_mx2 + link_mx3
# np.save('data/link_mx', link_mx)
np.sum(link_mx1_1 != 0), np.sum(link_mx1_2 != 0), np.sum(link_mx2 != 0), np.sum(link_mx3 != 0)

####################################################################################
# adjacency matrix
# link_mx = np.load('data/link_mx.npy')
link_mx0 = link_mx1_1 + link_mx1_2 + link_mx2 + link_mx3
link_mx0 = link_mx0 + np.eye(106)
link_mx0[link_mx0 != 0] = 1

link_mx1_1 = link_mx1_1 + np.eye(106)  # cross beam
link_mx1_1[link_mx1_1 != 0] = 1
link_mx1_2 = link_mx1_2 + np.eye(106)  # girder
link_mx1_2[link_mx1_2 != 0] = 1
link_mx2 = link_mx2 + np.eye(106)  # cable
link_mx2[link_mx2 != 0] = 1
link_mx3 = link_mx3 + np.eye(106)  # tower
link_mx3[link_mx3 != 0] = 1

####################################################################################
# build the distance matrix
node_loc = np.load('data/node_loc.npy')
num_node = len(node_loc)
n_nodes = node_loc.shape[0]
dist_mx = np.zeros((num_node, num_node), dtype=np.float32)
dist_mx[:] = np.inf
for i in range(num_node):
    for j in range(num_node):
        x = float(node_loc[i][0]) - float(node_loc[j][0])
        y = float(node_loc[i][1]) - float(node_loc[j][1])
        z = float(node_loc[i][2]) - float(node_loc[j][2])
        x = math.pow(x, 2)
        y = math.pow(y, 2)
        z = math.pow(z, 2)
        dis = math.sqrt(x + y + z)
        dist_mx[i, j] = dis

distances = dist_mx[~np.isinf(dist_mx)].flatten()
std = distances.std()
adj_mx = np.exp(-np.square(dist_mx / std))
adj_mx0 = adj_mx * link_mx0
print(adj_mx)

################################################
import collections
print(collections.Counter(adj_mx0.flatten()))

# cross beam - girder - cable - tower
type_e = np.array([200000, 200000, 15800, 200000])  # type_e = np.array([210000,180000,140000,200000])
type_e = (type_e) / np.max(type_e)
print(type_e, type_e.std(), adj_mx0.std())

link_mx1_1_e = link_mx1_1 - np.eye(106)
link_mx1_1_e = link_mx1_1_e * type_e[0]  # cross beam
link_mx1_2_e = link_mx1_2 - np.eye(106)
link_mx1_2_e = link_mx1_2_e * type_e[1]  # girder
link_mx2_e = link_mx2 - np.eye(106)
link_mx2_e = link_mx2_e * type_e[2]  # cable
link_mx3_e = link_mx3 - np.eye(106)
link_mx3_e = link_mx3_e * type_e[3]  # tower

adj_mx_e = link_mx1_1_e + link_mx1_2_e + link_mx2_e + link_mx3_e
print(adj_mx_e)
print(collections.Counter(adj_mx_e.flatten()))

import collections
print(collections.Counter(link_mx0.flatten()))
np.save('data_sage/sensor_graph/adj_wself', link_mx0)

adj_mx = [adj_mx0, adj_mx_e]
print(np.sum(adj_mx0 == 0) / (106*106))
print(np.sum(adj_mx_e == 0) / (106*106))

########################################
output_pkl_filename = 'data/sensor_graph/adj_mx_type_e.pkl'
########################################
with open(output_pkl_filename, 'wb') as f:
    pickle.dump([[], [], adj_mx], f, protocol=2)

##### cross-sectional areas
# cross beam
w = 165.10; d = 525.78; wt = 8.89; ft = 11.43
cb_a = (w*d) - (w-wt)*(d-2*ft)
print(cb_a)  # 8245.14480000001

# girder
b = 1000; h = 1000; t = 50
g_a = (b*h) - ((b-t)*(h-t))
print(g_a)  # 97500

# cable
c_a = 3848.45

# tower
b = 1000; h = 2000; t = 50
t_a = (b*h) - ((b-t)*(h-t))
print(t_a)  # 147500

# cross beam - girder - cable - tower
type_a = np.array([cb_a, g_a, c_a, t_a])
type_a = (type_a) / np.max(type_a)
print(type_a, type_a.std())

link_mx1_1_a = link_mx1_1 - np.eye(106)
link_mx1_1_a = link_mx1_1_a * type_a[0]  # cross beam
link_mx1_2_a = link_mx1_2 - np.eye(106)
link_mx1_2_a = link_mx1_2_a * type_a[1]  # girder
link_mx2_a = link_mx2 - np.eye(106)
link_mx2_a = link_mx2_a * type_a[2]  # cable
link_mx3_a = link_mx3 - np.eye(106)
link_mx3_a = link_mx3_a * type_a[3]  # tower

adj_mx_a = link_mx1_1_a + link_mx1_2_a + link_mx2_a + link_mx3_a
print(collections.Counter(adj_mx_a.flatten()))

with open('data/sensor_graph/adj_mx_type_a.pkl', 'wb') as f:
    pickle.dump(adj_mx_a, f, protocol=2)
```
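The distance-to-weight step above is the usual Gaussian (RBF) kernel used for sensor graphs: $w_{ij} = \exp(-(d_{ij}/\sigma)^2)$, with $\sigma$ the standard deviation of the pairwise distances. A minimal self-contained sketch (illustrative coordinates, not the bridge data):

```python
import numpy as np

def gaussian_adjacency(coords, mask=None):
    # pairwise Euclidean distances between node coordinates
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    # Gaussian kernel with sigma = std of all pairwise distances
    sigma = dist.std()
    adj = np.exp(-np.square(dist / sigma))
    # optionally keep only physically connected node pairs
    return adj if mask is None else adj * mask

coords = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.]])
A = gaussian_adjacency(coords)
print(A)  # ones on the diagonal, smaller weights for more distant pairs
```

Multiplying by a 0/1 connectivity mask, as in `adj_mx * link_mx0` above, then restricts the kernel weights to the structural links of the bridge.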
# Tutorial 10:

## Extreme Gradient Boosting Classification

Extreme Gradient Boosting, most popularly known as XGBoost, is a gradient boosting algorithm that is used for both classification and regression problems. XGBoost is a star among hackathons as a winning algorithm. XGBoost provides parallel tree boosting that solves many data science problems in a fast and accurate way.

XGBoost is one of the most powerful and popular algorithms in competitive machine learning. It uses the concept of parallel processing to train multiple boosted trees. The term Boosting has been around for some time now, and the main difference with XGBoost is that it learns the trees using a different learning paradigm.

Gradient boosting is an approach where new models are created that predict the residuals or errors of prior models and are then added together to make the final prediction. It is called gradient boosting because it uses a gradient descent algorithm to minimize the loss when adding new models.

##### Let’s understand some of the key concepts of XGBoost,

- Boosting

##### Boosting

Boosting is an ensemble technique where new models are added to correct the errors made by existing models. Models are added sequentially until no further improvements can be made. A popular example is the AdaBoost algorithm, which weights data points that are hard to predict.

##### For a deeper understanding of XGB Classification, use the following resources:

- [**XGBoost: A Scalable Tree Boosting System**](https://www.kdd.org/kdd2016/papers/files/rfp0697-chenAemb.pdf)
- [**XGBoost Documentation**](https://xgboost.readthedocs.io/en/latest/)
- [**Original Paper – XGBoost**](https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf)

### In this practice session, we will learn to code Extreme Gradient Boosting (XGBoost) Classification.

#### We will perform the following steps to build a simple classifier using the popular Iris dataset.

- **Data Preprocessing**
  - Importing the libraries.
  - Importing the dataset (Dataset Link: https://archive.ics.uci.edu/ml/datasets/iris).
  - Dealing with the categorical variable.
  - Classifying dependent and independent variables.
  - Splitting the data into a training set and test set.
  - Feature scaling.
- **XgBoost Classification**
  - Create an XgBoost classifier.
  - Feed the training data to the classifier.
  - Predicting the species for the test set.
  - Using the confusion matrix to find accuracy.

## Load the Dependencies

```
import ipywidgets as widgets
from IPython.display import display
style = {'description_width': 'initial'}

#1 Importing essential libraries
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
```

## Load the Dataset

```
from sklearn.datasets import load_iris
iris = load_iris()

# np.c_ is the numpy concatenate function
# which is used to concat iris['data'] and iris['target'] arrays
# for pandas column argument: concat iris['feature_names'] list
# and string list (in this case one string); you can make this anything you'd like..
# the original dataset would probably call this ['Species']
dataset = pd.DataFrame(data= np.c_[iris['data'], iris['target']], columns= iris['feature_names'] + ['target'])
dataset.head()

print(f"Dataset has {dataset.shape[0]} rows and {dataset.shape[1]} columns.")

# Widget for choosing a feature column to plot against the target species
wig_col = widgets.Dropdown(
    options=[col for col in dataset.columns.tolist() if col.startswith(('sepal', 'petal'))],
    description='Choose a Column to Plot vs. Attributes',
    disabled=False,
    layout=widgets.Layout(width='40%', height='40px'),
    style=style)
```

## Plot Variables

```
display(wig_col)

sns.catplot(x="target", y=wig_col.value, kind="boxen", data=dataset, height=8.27, aspect=11.7/8.27);

g = sns.catplot(x="target", y=wig_col.value, kind="violin", inner=None, data=dataset, height=8.27, aspect=11.7/8.27)
sns.swarmplot(x="target", y=wig_col.value, color="k", size=3, data=dataset, ax=g.ax);

display(wig_col)

#3 classify dependent and independent variables
X = dataset.iloc[:,:-1].values  # independent variables: sepal and petal measurements
y = dataset.iloc[:,-1].values   # dependent variable: species (target)

print("\nIndependent Variable (Sepal and Petal Attributes):\n\n", X[:5])
print("\nDependent Variable (Species):\n\n", y[:5])
```

## Encode Classes

```
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
dataset['target'] = labelencoder.fit_transform(dataset['target'])
dataset['target'].unique()
```

## Create Train and Test Sets

```
#4 Creating training set and testing set
from sklearn.model_selection import train_test_split

test_size = widgets.FloatSlider(min=0.01, max=0.6, value=0.2, description="Test Size :", tooltips=['Usually 20-30%'])
display(test_size)
```

## Divide the dataset into Train and Test sets

```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size.value, random_state = 0)

print("Training Set :\n----------------\n")
print("X = \n", X_train[:5])
print("y = \n", y_train[:5])
print("\n\nTest Set :\n----------------\n")
print("X = \n", X_test[:5])
print("y = \n", y_test[:5])

print(f"Shape of Training set is {X_train.shape}")
print(f"Shape of Testing set is {X_test.shape}")
```

## Normalise Features

As the features are not in the range 0-1, let's normalize them using StandardScaler (z-score) normalization; the class names have already been label encoded above.
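As a quick sanity check of z-score normalization (an illustrative sketch, not part of the tutorial pipeline), each standardized column ends up with mean 0 and standard deviation 1:

```python
import numpy as np

def zscore(X):
    # standardize each column: subtract the column mean, divide by the column std
    return (X - X.mean(axis=0)) / X.std(axis=0)

X = np.array([[1., 10.],
              [2., 20.],
              [3., 30.]])
Xn = zscore(X)
print(Xn.mean(axis=0))  # ~[0, 0]
print(Xn.std(axis=0))   # ~[1, 1]
```

This is what `StandardScaler` does below, with the important detail that the scaler is fit on the training set only and then reused to transform the test set.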
``` #Feature scaling from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) print("\n-------------------------\nDataset after Scaling:\n-------------------------\n", ) print("\nX_train :\n", X_train[:5]) print("-------------------------") print("\nX_test :\n", X_test[:5]) ``` ## XgBoost Classifier ``` # Note: scikit-learn's GradientBoostingClassifier is used here as the gradient-boosting implementation; the standalone xgboost package would provide XGBClassifier instead from sklearn.ensemble import GradientBoostingClassifier # configure params for the model. learning_wig = widgets.ToggleButtons(options=[1e-1, 1e-2, 1e-3, 1e-4, 1e-5], description='Learning Rate :', disabled=False, style=style) display(learning_wig) max_depth_wig = widgets.Dropdown(options=[10, 20, 30, 50], description='The maximum depth of the Tree :', style=style) display(max_depth_wig) min_split_wig = widgets.Dropdown(options=[100, 200, 300, 500], description='Minimum samples required to split a node :', style=style) display(min_split_wig) ``` ## Predict and Evaluate the Model ``` classifier = GradientBoostingClassifier(learning_rate=learning_wig.value, max_depth=max_depth_wig.value, min_samples_split=min_split_wig.value ) #Feed the training data to the classifier classifier.fit(X_train,y_train) #Predicting the species for test set y_pred = classifier.predict(X_test) print("\n---------------------------\n") print("Predicted Values for Test Set :\n",y_pred) print("\n---------------------------\n") print("Actual Values for Test Set :\n",y_test) #8 Calculating the Accuracy of the predictions from sklearn import metrics print("Prediction Accuracy = ", metrics.accuracy_score(y_test, y_pred)) #9 Comparing actual and predicted species for the test set print("\nActual vs Predicted Species \n------------------------------\n") error_df = pd.DataFrame({"Actual" : y_test, "Predicted" : y_pred}) error_df ``` ## Actual vs. 
Predicted ``` #Using confusion matrix to find the accuracy from sklearn.metrics import confusion_matrix, classification_report cm = confusion_matrix(y_test,y_pred) accuracy = cm.diagonal().sum()/cm.sum() print("\n---------------------------\n") print("Accuracy of Predictions = ",accuracy) print("\n---------------------------\n") print(classification_report(y_test, y_pred)) ```
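The accuracy derived above from the confusion matrix (`cm.diagonal().sum()/cm.sum()`) is just the fraction of predictions that land on the diagonal; a minimal sketch with hypothetical labels makes this explicit:

```python
import numpy as np

def confusion_matrix_accuracy(y_true, y_pred, n_classes):
    # Count (true, predicted) pairs into an n_classes x n_classes matrix,
    # then take the diagonal-sum ratio, mirroring cm.diagonal().sum()/cm.sum().
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm, cm.diagonal().sum() / cm.sum()

# hypothetical labels: 4 of the 6 predictions are correct
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm, acc = confusion_matrix_accuracy(y_true, y_pred, 3)
# acc == 4/6
```

This is the same quantity `metrics.accuracy_score` returns, computed by hand.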
## Imports ``` import pandas as pd import sys sys.path.insert(0,'../satori') from postprocess import * ``` ## Interaction data processing ``` # For SATORI based interactions df = pd.read_csv('../results/Arabidopsis_GenomeWide_Analysis_euclidean_v8_fixed/Interactions_SATORI/interactions_summary_attnLimit-0.12.txt', sep='\t') ##df = pd.read_csv('../../Arabidopsis_GenomeWide_Analysis_euclidean_v8/Interactions_Results_v9_run2_5000/interactions_summary_attnLimit-0.12.txt', sep='\t') # For FIS based interactions #df = pd.read_csv('../results/Arabidopsis_GenomeWide_Analysis_euclidean_v8_fixed/Interactions_FIS/interactions_summary_attnLimit-10.0.txt', sep='\t') ##df = pd.read_csv('../../DFIM_Arabidopsis_experiment_v10/Interactions/interactions_summary_attnLimit-0.txt', sep='\t') ``` ### Filter based on interaction and motif hit p-values, and keep the most significant interactions ``` df = filter_data_on_thresholds(df, motifA_pval_cutoff=0.05, motifB_pval_cutoff=0.05) df.shape df.head() ``` ### Annotate the interacting motifs ``` # use replace rather than strip: str.strip('.tnt') would remove any of the characters '.', 't', 'n' from both ends of the name df['TF1'] = df['motif1'].apply(lambda x: x.split('_')[1].replace('.tnt', '')) df['TF2'] = df['motif2'].apply(lambda x: x.split('_')[1].replace('.tnt', '')) df.head() df['TF_Interaction'] = df.apply(lambda x: x['TF1']+r'$\longleftrightarrow$'+x['TF2'], axis=1) ``` ### Drop same motif interactions ``` df = df[df['TF1']!=df['TF2']] df.shape df = df.reset_index(drop=True) ``` ### Fix redundant interaction pairs ``` df = process_for_redundant_interactions(df, intr_type='TF') df.head() df.shape ``` ## Most Frequent TF Family Interactions ``` df['TF1_Family'] = df['motif1'].apply(lambda x: x.split('_')[0]) df['TF2_Family'] = df['motif2'].apply(lambda x: x.split('_')[0]) df['Family_Interaction'] = df.apply(lambda x: x['TF1_Family']+r'$\longleftrightarrow$'+x['TF2_Family'],axis=1) df = process_for_redundant_interactions(df, intr_type='Family') df.head() df['filter_interaction'] = df['filter_interaction'].apply(lambda x: x.replace('<-->',r'$\longleftrightarrow$')) 
df.head() df.shape df.to_csv('output/Arabidopsis_interactions.csv') ``` ### Distribution of individual TF or TF family interactions ``` plot_frequent_interactions(df, intr_level='Family_Interaction', first_n=15) ``` ### Plot interaction distance distribution ``` plot_interaction_distance_distribution(df, nbins=50, fig_size=(8,6)) df['mean_distance'].mean(), df['mean_distance'].median() ``` ### Most frequent interactions and their respective interaction distances ``` plot_interactions_and_distances_boxplot(df, first_n=15, sort_distances=False, add_sub_caption=True, show_median_dist=True, dist_color='slateblue', cap_pos=[0.5, -0.89], store_pdf_path='output/arabidopsis_main_distance_boxplot.pdf') plot_interactions_and_distances_histogram(df, first_n=15, dist_nbins=25, add_sub_caption=True, show_median_dist=True, dist_colors=['slateblue', 'darkkhaki'], cap_pos=[0.5, -0.89], store_pdf_path='output/arabidopsis_main_distance_histogram.pdf') ```
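One pitfall worth flagging when trimming the `.tnt` suffix from motif names: `str.strip('.tnt')` removes any of the *characters* `.`, `t`, `n` from both ends rather than the literal suffix, so lowercase names beginning with one of those letters get mangled. A small sketch with hypothetical motif names:

```python
def drop_suffix(name, suffix='.tnt'):
    # Slice the literal suffix off (str.removesuffix does the same on Python 3.9+).
    return name[:-len(suffix)] if name.endswith(suffix) else name

# str.strip treats '.tnt' as a character set and also eats the leading 'n':
assert 'nac055.tnt'.strip('.tnt') == 'ac055'
# explicit suffix removal keeps the name intact:
assert drop_suffix('nac055.tnt') == 'nac055'
assert drop_suffix('ABF1.tnt') == 'ABF1'
```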
# "Namentliche Abstimmungen" in the Bundestag > Parse and inspect "Namentliche Abstimmungen" (roll call votes) in the Bundestag (the federal German parliament) [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/eschmidt42/bundestag/HEAD) ## Context The German Parliament is so friendly to put all votes of all members into readable XLSX / XLS files (and PDFs ¯\\\_(ツ)\_/¯ ). Those files can be found here: https://www.bundestag.de/parlament/plenum/abstimmung/liste. Furthermore, the organisation [abgeordnetenwatch](https://www.abgeordnetenwatch.de/) offers a great platform to get to know the individual politicians and their behavior as well as an [open API](https://www.abgeordnetenwatch.de/api) to request data. ## Purpose of this repo The purpose of this repo is to help collect roll call votes from the parliament's site directly or via abgeordnetenwatch's API and make them available for analysis / modelling. This may be particularly interesting for the upcoming election in 2021. E.g., if you want to see what your local member of the parliament has been up to in terms of public roll call votes relative to the parties, or how individual parties agree in their votes, this dataset may be interesting for you. Since the files on the bundestag website are stored in a way making it tricky to automatically crawl them, a bit of manual work is required to generate that dataset. But don't fret! Quite a few recent roll call votes (as of the publishing of this repo) are already prepared for you. But if older or more recent roll call votes are missing, convenience tools to reduce your manual effort are demonstrated below. An alternative route to get the same and more data (on politicians and local parliaments as well) is via the abgeordnetenwatch route. 
For your inspiration, I have also included an analysis of how similarly parties voted / how similarly individual MdBs voted relative to the parties, and a small machine learning model which predicts the individual votes of members of parliament. Teaser: the "Fraktionszwang" seems to exist but is not absolute, as the data shows 😁. ## How to install `pip install bundestag` ## How to use For detailed explanations see: - parse data from bundestag.de $\rightarrow$ `nbs/00_html_parsing.ipynb` - parse data from abgeordnetenwatch.de $\rightarrow$ `nbs/03_abgeordnetenwatch.ipynb` - analyze party / abgeordneten similarity $\rightarrow$ `nbs/01_similarities.ipynb` - cluster polls $\rightarrow$ `nbs/04_poll_clustering.ipynb` - predict politician votes $\rightarrow$ `nbs/05_predicting_votes.ipynb` For a short overview of the highlights see below. ### Setup ``` %load_ext autoreload %autoreload 2 from bundestag import html_parsing as hp from bundestag import similarity as sim from bundestag.gui import MdBGUI, PartyGUI from bundestag import abgeordnetenwatch as aw from bundestag import poll_clustering as pc from bundestag import vote_prediction as vp from pathlib import Path import pandas as pd from fastai.tabular.all import * ``` ### Part 1 - Party/Party similarities and Politician/Party similarities using bundestag.de data **Loading the data** If you have cloned the repo you should already have a `bundestag.de_votes.parquet` file in the root directory of the repo. If not feel free to download that file directly. If you want to have a closer look at the preprocessing please check out `nbs/00_html_parsing.ipynb`. 
``` df = pd.read_parquet(path='bundestag.de_votes.parquet') df.head(3).T ``` Votes by party ``` %%time party_votes = sim.get_votes_by_party(df) sim.test_party_votes(party_votes) ``` Re-arranging `party_votes` ``` %%time party_votes_pivoted = sim.pivot_party_votes_df(party_votes) sim.test_party_votes_pivoted(party_votes_pivoted) party_votes_pivoted.head() ``` **Similarity of a single politician with the parties** Collecting the politicians votes ``` %%time mdb = 'Peter Altmaier' mdb_votes = sim.prepare_votes_of_mdb(df, mdb) sim.test_votes_of_mdb(mdb_votes) mdb_votes.head() ``` Comparing the politician against the parties ``` %%time mdb_vs_parties = (sim.align_mdb_with_parties(mdb_votes, party_votes_pivoted) .pipe(sim.compute_similarity, lsuffix='mdb', rsuffix='party')) sim.test_mdb_vs_parties(mdb_vs_parties) mdb_vs_parties.head(3).T ``` Plotting ``` sim.plot(mdb_vs_parties, title_overall=f'Overall similarity of {mdb} with all parties', title_over_time=f'{mdb} vs time') plt.tight_layout() plt.show() ``` ![mdb similarity](./README_files/mdb_similarity_vs_time.png) **Comparing one specific party against all others** Collecting party votes ``` %%time party = 'SPD' partyA_vs_rest = (sim.align_party_with_all_parties(party_votes_pivoted, party) .pipe(sim.compute_similarity, lsuffix='a', rsuffix='b')) sim.test_partyA_vs_partyB(partyA_vs_rest) partyA_vs_rest.head(3).T ``` Plotting ``` sim.plot(partyA_vs_rest, title_overall=f'Overall similarity of {party} with all parties', title_over_time=f'{party} vs time', party_col='Fraktion/Gruppe_b') plt.tight_layout() plt.show() ``` ![party similarity](./README_files/party_similarity_vs_time.png) **GUI to inspect similarities** To make the above exploration more interactive, the class `MdBGUI` and `PartyGUI` was implemented to quickly go through the different parties and politicians ``` mdb = MdBGUI(df) mdb.render() party = PartyGUI(df) party.render() ``` ### Part 2 - predicting politician votes using abgeordnetenwatch data The data 
used below was processed using `nbs/03_abgeordnetenwatch.ipynb`. ``` path = Path('./abgeordnetenwatch_data') ``` #### Clustering polls using Latent Dirichlet Allocation (LDA) ``` %%time source_col = 'poll_title' nlp_col = f'{source_col}_nlp_processed' num_topics = 5 # number of topics / clusters to identify st = pc.SpacyTransformer() # load data and prepare text for modelling df_polls_lda = (pd.read_parquet(path=path/'df_polls.parquet') .assign(**{nlp_col: lambda x: st.clean_text(x, col=source_col)})) # modelling clusters st.fit(df_polls_lda[nlp_col].values, mode='lda', num_topics=num_topics) # creating text features using fitted model df_polls_lda, nlp_feature_cols = df_polls_lda.pipe(st.transform, col=nlp_col, return_new_cols=True) # inspecting clusters display(df_polls_lda.head(3).T) pc.pca_plot_lda_topics(df_polls_lda, st, source_col, nlp_feature_cols) ``` #### Predicting votes Loading data ``` df_all_votes = pd.read_parquet(path=path / 'df_all_votes.parquet') df_mandates = pd.read_parquet(path=path / 'df_mandates.parquet') df_polls = pd.read_parquet(path=path / 'df_polls.parquet') ``` Splitting data set into training and validation set. Splitting randomly here because it leads to an interesting result, albeit not very realistic for production. 
``` splits = RandomSplitter(valid_pct=.2)(df_all_votes) y_col = 'vote' ``` Training a neural net to predict `vote` based on embeddings for `poll_id` and `politician name` ``` %%time to = TabularPandas(df_all_votes, cat_names=['politician name', 'poll_id'], # columns in `df_all_votes` to treat as categorical y_names=[y_col], # column to use as a target for the model in `learn` procs=[Categorify], # processing of features y_block=CategoryBlock, # how to treat `y_names`, here as categories splits=splits) # how to split the data dls = to.dataloaders(bs=512) learn = tabular_learner(dls) # fastai function to set up a neural net for tabular data lrs = learn.lr_find() # searches the learning rate learn.fit_one_cycle(5, lrs.valley) # performs training using one-cycle hyperparameter schedule ``` **Predictions over unseen data** Inspecting the predictions of the neural net over the validation set. ``` vp.plot_predictions(learn, df_all_votes, df_mandates, df_polls, splits, n_worst_politicians=5) ``` Splitting our dataset randomly leads to a surprisingly good accuracy of ~88% over the validation set. The most reasonable explanation is that the model encountered polls and how most politicians voted for them already during training. This can be interpreted as, if it is known how most politicians will vote during a poll, then the vote of the remaining politicians is highly predictable. Splitting the data set by `poll_id`, as can be done using `vp.poll_splitter` leads to random chance predictions. Anything else would be surprising as well since the only available information provided to the model is who is voting. **Visualising learned embeddings** Besides the actual prediction it also is interesting to inspect what the model actually learned. This can sometimes lead to [surprises](https://github.com/entron/entity-embedding-rossmann). 
So let's look at the learned embeddings ``` embeddings = vp.get_embeddings(learn) ``` To make sense of the embeddings for `poll_id` as well as `politician name` we apply Principal Component Analysis (so one still kind of understands what distances mean) and project down to 2d. Using the information on which party supported a poll most strongly (% of their votes being "yes"), i.e. its strongest proponent, we color code the individual polls. ``` vp.plot_poll_embeddings(df_all_votes, df_polls, embeddings, df_mandates=df_mandates) ``` ![poll embeddings](./README_files/poll_embeddings.png) The politician embeddings are color coded using the politician's party membership ``` vp.plot_politician_embeddings(df_all_votes, df_mandates, embeddings) ``` ![mandate embeddings](./README_files/mandate_embeddings.png) The politician embeddings may be the most surprising finding in their clarity. For both polls and politicians we seem to find 2-3 clusters; for politicians there is a significant grouping of mandates associated with the government coalition: one cluster for the government parties and one for the opposition. ## To dos / contributing Any contributions welcome. In the notebooks in `./nbs/` I've listed, here and there, to dos and things which could be done. **General to dos**: - Check for discrepancies between bundestag.de and abgeordnetenwatch based data - Make the clustering of polls and politicians interactive - Extend the vote prediction model: currently, if the data is split by poll (which would be the realistic case when trying to predict votes of a new poll), the model is hardly better than chance. It would be interesting to see which information would help improve beyond chance. - Extend the data processed from the stored json responses from abgeordnetenwatch (currently only using the bare minimum)
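The metric inside Part 1's `sim.compute_similarity` is not spelled out in this README; purely as an illustration, here is one plausible choice, cosine similarity between vote-share vectors, with made-up numbers (the actual implementation in `bundestag.similarity` may differ):

```python
import math

def cosine_similarity(u, v):
    # cosine of the angle between two vote-share vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# made-up vote shares over (yes, no, abstention) for one poll
mdb_vote = (1.0, 0.0, 0.0)    # the MdB voted "yes"
party_a = (0.9, 0.05, 0.05)   # party A voted mostly "yes"
party_b = (0.1, 0.85, 0.05)   # party B voted mostly "no"

assert cosine_similarity(mdb_vote, party_a) > cosine_similarity(mdb_vote, party_b)
```

Averaging such per-poll similarities over time would give the kind of "similarity vs time" curves plotted above.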
# Getter example An example of simple Getter class usage and even simpler analysis of the received data. ## Preparations Import instabot from sources ``` import sys sys.path.append('../../') from instabot import User, Getter ``` Login users to be used in instabot. I suggest adding as many users as you have, because all get requests will be parallelized between them to distribute the load on Instagram's servers. ``` _ = User("user_for_scrapping1", "password") _ = User("user_for_scrapping2", "password") _ = User("user_for_scrapping3", "password") ``` Init the Getter class without any parameters. It will use all of the available and successfully logged in users to parallelize the get requests to Instagram's servers. ``` get = Getter() ``` ## Usage cases ### Users who posted with geotag Almost all Getter methods return generators to iterate over medias or users. But some of them such as __get.geo_id__ or __get.user_info__ return a single value: a number or a json-like dictionary. ``` location_name = "МФТИ" location_id = get.geo_id(location_name) print ("The id of %s is %d." % (location_name, location_id)) ``` For example, say you want to know who posts with a specific geotag. You can iterate over medias and take the author's username. Get an iterator over geo medias ``` geo_medias = get.geo_medias(location_id, total=10) print ("Users who post with %s geotag:" % location_name) for media in geo_medias: print (media["user"]["username"]) ``` All the values that are in response media's json: ``` media.keys() ``` ### User's mean likes Another use case: compute the mean and std of the likes received by a specific user. ``` username = "ohld" user_info = get.user_info(username) user_id = user_info["pk"] print ("The id of '%s' is %d." % (username, user_id)) mean = lambda l: 0 if l == [] else sum(l) * 1. / len(l) like_counts = [media["like_count"] for media in get.user_feed(user_id, total=20)] print ("Amount of likes received by %s" % username) print (like_counts) print ("Mean: %.2f. 
Total: %d" % (mean(like_counts), sum(like_counts))) ``` ### Mean likes of every follower So let's test the Getter module with __hard task__: calculate mean likes of every follower and make some analysis. ``` from tqdm import tqdm_notebook # to see the progress of scrapping mean_likes = {} for user in tqdm_notebook(get.user_followers(user_id), total=user_info["follower_count"]): like_counts = [media['like_count'] for media in get.user_feed(user['pk'], total=5)] mean_likes[user["username"]] = mean(like_counts) import matplotlib.pyplot as plt %matplotlib inline plt.figure(figsize=(15, 5)) plt.hist([i for i in list(mean_likes.values()) if i > 0], bins=500) plt.title("Mean likes of %s followers" % username) plt.xlabel("Mean likes") plt.ylabel("Frequency") plt.show() filtered_likes = [item for item in mean_likes.values() if 0 < item < 300] plt.figure(figsize=(15, 5)) plt.hist(filtered_likes, bins=100) plt.title("Mean likes of %s followers" % username) plt.xlabel("Mean likes") plt.ylabel("Frequency") plt.show() ``` Let's take a look at the greatest mean likes owner ``` print ("%s has the highest value of mean likes in %s followers." % (max(mean_likes, key=mean_likes.get), username)) ```
# Pushing an image along a Hilbert curve This started out as a discussion with my son about enumerating $\mathbb{N} \times \mathbb{N}$. Cantor found a bijection $\mathbb{N} \rightarrow \mathbb{N} \times \mathbb{N}$. Hilbert found a better bijection $\mathbb{N} \rightarrow \mathbb{N} \times \mathbb{N}$ which can be "rescaled" to give a continuous surjection $[0,1] \rightarrow [0,1] \times [0,1].$ Note that the limit map is not injective. http://www4.ncsu.edu/~njrose/pdfFiles/HilbertCurve.pdf ### The Hölder exponent *Space-Filling Functions and Davenport Series*, Stéphane Jaffard and Samuel Nicolay, Recent Developments in Fractals and Related Fields, 19 Applied and Numerical Harmonic Analysis A function $f: \Omega \subset \mathbb{R}^d \rightarrow \mathbb{R}^k$ satisfies a Hölder condition, or is Hölder continuous, when there are nonnegative real constants $C$, $\alpha$, such that $$ |f(x)-f(y)|\leq C\|x-y\|^{\alpha }$$ for all $x, y \in \Omega$. - Peano curves from $[0, 1]$ onto the square $ [0, 1]^2$ can be constructed to be 1/2–Hölder continuous. - It can be proved that when $ \alpha >{\tfrac {1}{2}}$ the image of an $\alpha$–Hölder continuous function from the unit interval to the square cannot fill the square. 
https://blog.zen.ly/geospatial-indexing-on-hilbert-curves-2379b929addc older but better http://blog.notdot.net/2009/11/Damn-Cool-Algorithms-Spatial-indexing-with-Quadtrees-and-Hilbert-Curves related https://en.wikipedia.org/wiki/Hilbert_R-tree and finally https://www.americanscientist.org/article/crinkly-curves ``` import matplotlib.pyplot as plt import numpy as np motif = np.array([[0,1,1,0],[0,0,1,1]]) upper_left = np.flip(motif, axis = 0) + np.array([[0],[2]]) left_half = np.concatenate((motif,upper_left), axis =1) right_half = np.array([[-1],[1]])*left_half + np.array([[3],[0]]) motif = np.concatenate((left_half , np.flip(right_half, axis = 1) ), axis =1) motif = np.flip(motif, axis=0) ``` Let's take a look at the first portion of the curve that goes through the 16 integer points in a 3x3 closed square ``` plt.plot(motif[0,:],motif[1,:]) ``` This is a nicer version with a different axis order ``` def hilbert_curve_pts(depth=3): motif = np.array([[0,1,1,0],[0,0,1,1]]) motif = motif.transpose() y_diff = np.array([0,1]) x_diff = np.array([1,0]) scale = 2 #this is actually a recursion for _ in range(depth): #bit verbose but this is how it is made #flip(_, axis=1)) is (x,y) -> (y,x) #flip(_, axis=0)) is a list.reverse() top_left_quad = np.flip( motif, axis=1) + scale*y_diff left_half = np.concatenate(( motif, top_left_quad), axis=0) right_half = np.array([-1,1])*left_half + (2*scale - 1)*x_diff motif = np.concatenate(( left_half, np.flip(right_half, axis=0) ), axis=0 ) motif = np.flip( motif, axis=1) scale *= 2 return motif motif = hilbert_curve_pts(depth=3) plt.plot(motif[:,0], motif[:,1]) import scipy as sp import imageio im = imageio.imread('zhu.jpg') from skimage.transform import resize rr = resize(im,(256,256)) plt.imshow(rr) flat_indices = np.dot(hilbert_curve_pts(depth=7), np.array([1,256]) ) def curve_shift(src_im, dx=-8): mapper = np.stack((flat_indices, np.roll(flat_indices, dx)), axis = 1 ) mapper.view('i8,i8').sort( order=['f0'], axis=0) index_mapper = 
mapper[:,1] new_im = np.ones_like(src_im) for k, col in enumerate(src_im.transpose()): col = col.flatten(order='F') new_im[:,:,k] = np.reshape( col[index_mapper], (256,256)) return new_im f = plt.figure(frameon=False, figsize=(5, 5), dpi=100) canvas_width, canvas_height = f.canvas.get_width_height() ax = f.add_axes([0, 0, 1, 1]) ax.axis('off') rt = plt.imshow(curve_shift(rr, dx= 1 )) ``` ## Animation This was trickier than it should have been; I suppose that is because the Jupyter "platform" has been going through a lot of change. The code in the end is less than 20 lines. 1. Import and set up for animation using %matplotlib notebook 1. Set up the canvas element. 1. Write the callbacks for the animation loop 1. Instantiate the loop using animation.FuncAnimation() ``` from matplotlib import animation, rc from IPython.display import HTML # tip found on YouTube: with %matplotlib notebook the images are no longer static %matplotlib notebook # First set up the figure, the axis, and the plot element we want to animate # see also https://matplotlib.org/examples/animation/dynamic_image.html f = plt.figure(frameon=False, figsize=(3, 3), dpi=100) canvas_width, canvas_height = f.canvas.get_width_height() ax = f.add_axes([0, 0, 1, 1]) ax.axis('off') rt = plt.imshow( curve_shift(rr, dx= 0 ), animated=True) # initialization function: plot the background of each frame def init(): pass # animation function. This is called sequentially def updatefig(frame_no): rt.set_data(curve_shift(rr, dx= frame_no)) return rt, anim = animation.FuncAnimation(f, updatefig, frames=512, interval=2, blit=True) ```
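As a cross-check of the recursive motif construction, the classic bit-twiddling index-to-coordinate conversion (the standard `d2xy` routine from the Wikipedia Hilbert curve article; its orientation may differ from the curve generated above) makes the two defining properties easy to verify: every cell of the grid is visited exactly once, and consecutive indices land on neighbouring cells.

```python
def hilbert_d2xy(order, d):
    # Map index d along a Hilbert curve over a 2**order x 2**order grid to (x, y).
    x = y = 0
    t = d
    s = 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/reflect the quadrant
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

pts = [hilbert_d2xy(3, d) for d in range(64)]
assert len(set(pts)) == 64  # every cell visited exactly once
assert all(abs(x1 - x0) + abs(y1 - y0) == 1
           for (x0, y0), (x1, y1) in zip(pts, pts[1:]))  # unit steps only
```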
``` %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd from pandas import get_dummies data = pd.read_csv("A0201.csv", sep=",") data.head() indexes = data['sequence'][data['length'] == 9].index #indexes = data.index selected_X = data['sequence'][indexes] selected_y = pd.DataFrame(data['meas'][indexes]) selected_y['netmhc'] = pd.DataFrame(data['netmhc'][indexes]) selected_y['netmhcpan'] = pd.DataFrame(data['netmhcpan'][indexes]) selected_y['smmpmbec_cpp'] = pd.DataFrame(data['smmpmbec_cpp'][indexes]) letters_X = selected_X.apply(list) selected_X = pd.get_dummies(pd.DataFrame(list(letters_X))) #plt.figure(figsize = (16, 9)) plt.title("Distribution of measured binding affinities") hh = plt.hist(selected_y['meas'], 50, color = 'blue', alpha = 0.6) from sklearn.model_selection import train_test_split random_number = 53 X_train, X_test, y_train, y_test = train_test_split(selected_X, selected_y, test_size = 0.33, random_state = random_number) test_index = y_test.index rss_netmhc = sum((y_test['netmhc'] - y_test['meas'])**2) rss_netmhcpan = sum((y_test['netmhcpan'] - y_test['meas'])**2) rss_smmpmbec_cpp = sum((y_test['smmpmbec_cpp'] - y_test['meas'])**2) # SVR / kernel ridge model selection from sklearn.svm import SVR import numpy as np from sklearn.model_selection import GridSearchCV from sklearn.model_selection import learning_curve from sklearn.kernel_ridge import KernelRidge svr = GridSearchCV(SVR(kernel='rbf', gamma=0.1), cv=5, param_grid={"C": [1e0, 1e1, 1e2, 1e3], "gamma": np.logspace(-2, 2, 5)}) svr.fit(selected_X, selected_y["meas"]) TunedSVR = SVR(kernel='rbf', C=1000, gamma=0.10000000000000001) svr.best_params_ TunedSVR.fit(X_train, y_train["meas"]) t_rss = sum((TunedSVR.predict(X_test) - y_test['meas'])**2) rss_netmhc = sum((y_test['netmhc'] - y_test['meas'])**2) rss_netmhcpan = sum((y_test['netmhcpan'] - y_test['meas'])**2) rss_smmpmbec_cpp = sum((y_test['smmpmbec_cpp'] - y_test['meas'])**2) print("SVR RSS:", t_rss) print("netmhc result", rss_netmhc) print("netmhcpan result", rss_netmhcpan) 
print("smmpmbec_cpp result", rss_smmpmbec_cpp) import Bio # the Biopython package imports as 'Bio' from __future__ import print_function import sys from Bio.SeqUtils import ProtParamData # Local from Bio.SeqUtils import IsoelectricPoint # Local from Bio.Seq import Seq from Bio.Alphabet import IUPAC from Bio.Data import IUPACData from Bio.SeqUtils import molecular_weight from Bio import SeqIO from Bio.SeqUtils import ProtParam data seq = "QRSDSSLV" X = ProtParam.ProteinAnalysis(seq) print(X.molecular_weight()) print(X.aromaticity()) print(X.instability_index()) print(X.flexibility()) print(X.isoelectric_point()) print(X.secondary_structure_fraction()) mol_weight = [] for i in data["sequence"]: X = ProtParam.ProteinAnalysis(i) mol_weight.append(X.molecular_weight()) data["molecular_weight"] = mol_weight aromaticity = [] for i in data["sequence"]: X = ProtParam.ProteinAnalysis(i) aromaticity.append(X.aromaticity()) data["aromaticity"] = aromaticity instability_index = [] for i in data["sequence"]: X = ProtParam.ProteinAnalysis(i) instability_index.append(X.instability_index()) data["instability_index"] = instability_index isoelectric_point = [] for i in data["sequence"]: X = ProtParam.ProteinAnalysis(i) isoelectric_point.append(X.isoelectric_point()) data["isoelectric_point"] = isoelectric_point target = data["meas"] other_targets = data[["netmhc", "netmhcpan", "smmpmbec_cpp"]] data.drop(["netmhc", "netmhcpan", "smmpmbec_cpp"], inplace=True,axis=1) data.to_csv("ExtraData.csv") data ```
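One caveat about the comparison above: `t_rss` and the `rss_*` quantities are residual sums of squares (RSS), not mean squared errors; dividing the RSS by the number of samples gives the MSE. A minimal sketch with made-up predictions:

```python
def rss(y_true, y_pred):
    # Residual sum of squares: total squared error.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred))

def mse(y_true, y_pred):
    # Mean squared error: RSS averaged over the sample count.
    return rss(y_true, y_pred) / len(y_true)

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.5, 2.0, 2.0, 5.0]
# residuals: -0.5, 0, 1, -1  ->  RSS = 2.25, MSE = 0.5625
```

RSS is fine for comparing methods on the same test set (as done here), since all the totals share the same sample count.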
``` import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt from sklearn.linear_model import LogisticRegressionCV import sklearn.metrics as metrics from sklearn.preprocessing import PolynomialFeatures from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import cross_val_score from sklearn.tree import export_graphviz from IPython.display import Image from IPython.display import display %matplotlib inline import seaborn as sns import warnings warnings.filterwarnings("ignore") from sklearn import discriminant_analysis from sklearn.tree import export_graphviz from sklearn.model_selection import train_test_split from sklearn.metrics import r2_score import statsmodels.api as sm from statsmodels.api import OLS from sklearn import linear_model, datasets from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import Ridge from sklearn.linear_model import Lasso from sklearn.linear_model import RidgeCV from sklearn.linear_model import LassoCV import seaborn.apionly as sns from scipy import stats from sklearn.metrics import mean_squared_error from sklearn.linear_model import LogisticRegression import pandas as pd df = pd.read_csv('ADNIMERGE_full.csv') pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) df.head() df.shape[0] df_dict = pd.read_csv('ADNIMERGE_DICT.csv') df_dict.head() ``` # 1. Project Scope: We looked at the contribution of genetic factors in predicting a person's Alzheimer's status at baseline. To that end we worked with two datasets: - ADNIMERGE_full - ADNI_Gene_Expression_Profile We focused on expanding the ADNIMERGE dataset and did not take into account the longitudinal aspect of the data. 
We chose the ADNI2 cohort because the gene expression data was collected only for ADNI2 and ADNIGO, thus ADNI1 is a no-go. Since it would be complicated to find variables common to both ADNI2 and ADNIGO, we dropped one of them without creating any additional bias in the dataset. Our final decision is to work on the ADNI2 dataset specifically. # 2. Description of the Datasets The data encompasses the baseline cognitive status of a subject and follows his/her progress over a period of multiple visits. This is the dependent variable 'DX'. The data includes a broad range of predictor categories, namely: genetic markers; neuropsychological, everyday cognitive and functional tests; MRI to measure changes in brain volume; and PET data which looks into brain glucose metabolism. ADNIMERGE_full is the composite data set that pulls important predictors from all these categories into one snapshot. There are extensive databases for each of these different categories by the different cohorts (ADNI 1, ADNI 2 and ADNIGO), some of which are beyond the compute power of our laptops (which was an important consideration in deciding which dataset we wanted to expand). # 2.1 Cleaning up the dataset (for EDA) ### ADNIMERGE_full: We chose only baseline visit data because: - We need some criterion to serve as a unique key for each observation. The problem is that a single patient (PTID) has multiple visits, so either we combine multiple data points into a longitudinal factor or keep only one visit. We chose to ignore the longitudinal aspect since we are focusing on extending the dataset. - We want to extend the dataset based on genetic expression data, which does not have a longitudinal aspect (since a person's genes are set at birth), hence keeping only the baseline data can be justified. Additionally, we got rid of administrative data, endline observations, and the predictor called 'CDR', which is one of the advanced symptoms of Alzheimer's and is highly predictive of an 'AD' status for a subject. 
### ADNI Gene Expression Profile: This dataset contains the gene expression profiles of subjects, with information on Gene LocusLinks which contains the information on the gene symbols. In order to clean the data, we have taken the following steps: - Since we are working with only ADNI2, we filtered out ADNIGO observations. - Using the probe set with suffix '-at' we narrowed down unique Locus Links and then unique Gene Symbols (details were provided in our MileStone 3 and are provided in the Appendix). This resulted in a Gene Expression profile for individual patients. ### Merging the datasets The two datasets were merged by the unique PID resulting in **372 observations and 7455 columns.** ### Combining the dependent variable into three categories There were five categories in the DX, but we combined them into three to improve our prediction accuracy and then converted to a numerical variable. - CN = SMC + CN = 1 - MCI = LMCI + EMCI = 2 - AD = AD = 3 ``` # Choosing only baseline in ADNIMERGE_full df_bl = df[df["VISCODE"]=='bl'] df_bl_adni2=df_bl[df_bl["COLPROT"]=='ADNI2'] df_bl.shape, df_bl_adni2.shape df_bl_adni2.head() # Dropping administrative columns df_bl_adni2=df_bl_adni2.drop('RID',1) df_bl_adni2=df_bl_adni2.drop('SITE',1) df_bl_adni2=df_bl_adni2.drop('ORIGPROT',1) df_bl_adni2=df_bl_adni2.drop('EXAMDATE',1) df_bl_adni2=df_bl_adni2.drop('PIB',1) df_bl_adni2=df_bl_adni2.drop('EXAMDATE_bl',1) df_bl_adni2=df_bl_adni2.drop('FLDSTRENG',1) df_bl_adni2=df_bl_adni2.drop('FSVERSION',1) df_bl_adni2=df_bl_adni2.drop('FLDSTRENG_bl',1) df_bl_adni2=df_bl_adni2.drop('FSVERSION_bl',1) df_bl_adni2=df_bl_adni2.drop('PIB_bl',1) df_bl_adni2=df_bl_adni2.drop('Years_bl',1) df_bl_adni2=df_bl_adni2.drop('Month_bl',1) df_bl_adni2=df_bl_adni2.drop('Month',1) df_bl_adni2=df_bl_adni2.drop('M',1) df_bl_adni2=df_bl_adni2.drop('update_stamp',1) df_bl_adni2=df_bl_adni2.drop('CDRSB_bl',1) df_bl_adni2.head() # Dropping endline columns df_bl_adni2=df_bl_adni2.drop(df_bl_adni2.columns[[range(11,45)]], 1) 
df_bl_adni2.head()

# Dropping the study partner variables (positional slice)
df_bl_adni2 = df_bl_adni2.drop(df_bl_adni2.columns[33:40], axis=1)

# Dropping VISCODE and COLPROT since they are also predictive
df_bl_adni2 = df_bl_adni2.drop(columns=['VISCODE', 'COLPROT'])

# checking the new shape
df_bl_adni2.shape

# exporting the file
df_bl_adni2.to_csv('df_ADNIMERGE_clean.csv')
df_bl_adni2.head()
```

# 2.2 EDA:

We did EDA on the ADNIMERGE_full data set, which has 32 predictors, to get a sense of which variables have some correlation with the dependent variable. We did not do EDA on the Gene Expression Profile because there are 7421 gene symbols and it is unlikely that any one or several of them will be individually important. We primarily used the seaborn library for EDA visualization.

```
import matplotlib.pyplot as plt
import seaborn as sns  # seaborn.apionly is deprecated; the plain import now behaves the same
sns.set_context("notebook")
```

## Pairgrid between predictors

We were curious to see how predictors were correlated with each other, if at all. For example, is there a correlation between low ventricular volume and low cognitive score? We tried a pair grid with some randomly chosen variables across categories as a start. The only notable correlations are between ADAS11 and both ventricular boundary shift integral (BSI) and FDG. An increasing ADAS score (a higher score indicates greater cognitive dysfunction) is correlated with lower FDG (i.e. low glucose metabolism in the brain) and high Ventricle BSI (more volume loss). This correlation is not unexpected. It is interesting to note that the other three cognitive tests - MMSE, RAVLT and Everyday Cog Memory - don’t show any correlation with the PET or MRI variables. Of course, we didn’t do an exhaustive pair grid (which would result in about 900 plots) on ADNIMERGE_full, but rather an initial scoping.
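As a numerical complement to this visual scoping, the same kind of check can be done with `DataFrame.corr()`. The sketch below uses synthetic stand-ins for the variables (not the ADNI data): one pair is negatively coupled by construction, the other is independent:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
adas = rng.normal(20, 5, n)                       # toy cognitive score
fdg = 6.0 - 0.1 * adas + rng.normal(0, 0.2, n)    # negatively coupled, like FDG vs ADAS
mmse = rng.normal(27, 2, n)                       # independent by construction

toy = pd.DataFrame({"ADAS11": adas, "FDG": fdg, "MMSE": mmse})
corr = toy.corr()
print(corr.round(2))

# the built-in coupling survives the noise; the independent pair stays near zero
assert corr.loc["ADAS11", "FDG"] < -0.8
assert abs(corr.loc["ADAS11", "MMSE"]) < 0.25
```

A correlation matrix like this is a cheap first pass before committing to a full pair grid.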
```
av_age_dx = df_bl_adni2.groupby('DX_bl').AGE.mean()
av_ADAS = df_bl_adni2.groupby('DX_bl').ADAS11_bl.mean()
av_age_dx

sns.set()
g = sns.PairGrid(df_bl_adni2, vars=['ADAS11_bl', 'MMSE_bl', 'RAVLT_perc_forgetting_bl',
                                    'Ventricles_bl', 'EcogPtMem_bl', 'FDG_bl'])
g.map_diag(sns.kdeplot)
g.map_offdiag(plt.scatter, s=15)
```

### Correlation of predictors with DX

- We know that APOE4 is a strong indicator of AD, and a simple bar chart validates that.
- Next we looked at how neuropsychological assessments are correlated with DX. ADAS does very well in detecting AD, MMSE performs quite poorly, while RAVLT_forgetting and RAVLT_learning also perform quite well.
- Next we looked at the two PET variables - FDG (glucose metabolism) and AV45 (amyloid load). FDG shows a stronger correlation with AD; AV45 shows no correlation with any of the DX_bl categories.

```
ax = sns.barplot(x="DX_bl", y="APOE4", data=df_bl_adni2)
ax = sns.barplot(x="DX_bl", y="ADAS11_bl", data=df_bl_adni2)
ax = sns.barplot(x="DX_bl", y="ADAS13_bl", data=df_bl_adni2)
ax = sns.barplot(x="DX_bl", y="MMSE_bl", data=df_bl_adni2)
ax = sns.barplot(x="DX_bl", y="RAVLT_perc_forgetting_bl", data=df_bl_adni2)
ax = sns.barplot(x="DX_bl", y="RAVLT_learning_bl", data=df_bl_adni2)
sns.stripplot(x="DX_bl", y="FDG_bl", data=df_bl_adni2);
sns.stripplot(x="DX_bl", y="AV45_bl", data=df_bl_adni2);
sns.barplot(x="DX_bl", y="Ventricles_bl", data=df_bl_adni2);
```

### Conclusion:

There seems to be some correlation with the AD condition, but the other DX conditions show very noisy correlations - which is why Alzheimer’s in its early phases is very hard to detect. **Our model seeks to make it easier to detect earlier stages of DX.**
```
# !pip install -q tf-nightly
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt

print("Tensorflow Version: {}".format(tf.__version__))
print("GPU {} available.".format("is" if tf.config.experimental.list_physical_devices("GPU") else "not"))
```

# Data Preprocessing

This tutorial uses a filtered version of the [Dogs vs Cats](https://www.kaggle.com/c/dogs-vs-cats/data) dataset from Kaggle.

```
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file("cats_and_dogs.zip", origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), "cats_and_dogs_filtered")
PATH
!ls -al {PATH}
```

The directory structure is shown below.

```text
cats_and_dogs_filtered
|- train
|  |- cats        (xxx.jpg, xxy.jpg, ...)
|  |- dogs
|- validation
|  |- cats
|  |- dogs
|- vectorize.py
```

```
!ls -al {PATH}/train/cats | wc -l

train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')

train_cats_dir = os.path.join(train_dir, 'cats')
train_dogs_dir = os.path.join(train_dir, 'dogs')
validation_cats_dir = os.path.join(validation_dir, 'cats')
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
```

Understand the dataset.

```
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))

total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val

print("Train: cats {}, dogs {}.".format(num_cats_tr, num_dogs_tr))
print("Validation: cats {}, dogs {}".format(num_cats_val, num_dogs_val))

batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
```

# Data Preparation

```
# normalize the images to the [0, 1] range
train_img_generator = ImageDataGenerator(rescale=1./255.)
validation_img_generator = ImageDataGenerator(rescale=1./255.)
```

Load the data from the directories using the generators. (The `directory` parameter is the parent directory, not the per-class subdirectories.)

```
train_data_gen = train_img_generator.flow_from_directory(
    directory=train_dir,
    batch_size=batch_size,
    shuffle=True,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    class_mode='binary')

val_data_gen = validation_img_generator.flow_from_directory(
    directory=validation_dir,
    batch_size=batch_size,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    class_mode='binary')
```

## Visualize the Training Images

```
sample_training_images, sample_training_labels = next(train_data_gen)

def plotImages(img_arr):
    plt.figure(figsize=(10, 40))
    for i in range(5):
        plt.subplot(1, 5, i+1)
        plt.imshow(img_arr[i])
        plt.xticks([])
        plt.yticks([])
    plt.show()

plotImages(sample_training_images)
```

# Create the Model

```
def build_model():
    def _model(inputs):
        x = Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='elu')(inputs)
        x = MaxPooling2D()(x)
        x = Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='elu')(x)
        x = MaxPooling2D()(x)
        x = Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='elu')(x)
        x = MaxPooling2D()(x)
        x = Flatten()(x)
        x = Dense(units=512, activation='elu')(x)
        cls = Dense(units=1, activation='sigmoid')(x)
        return cls
    inputs = tf.keras.Input(shape=(IMG_HEIGHT, IMG_WIDTH, 3))
    outputs = _model(inputs)
    model = tf.keras.Model(inputs, outputs)
    return model

model = build_model()
model.compile(loss=tf.keras.losses.BinaryCrossentropy(),
              optimizer=tf.keras.optimizers.Adam(),
              metrics=[tf.keras.metrics.BinaryAccuracy()])
model.summary()
```

# Train the Model

```
history = model.fit(train_data_gen,
                    steps_per_epoch=total_train // batch_size,
                    epochs=epochs,
                    validation_data=val_data_gen,
                    validation_steps=total_val // batch_size)
```

# Visualize the Result

```
history.history.keys()

plt.figure(figsize=(10, 6))
plt.subplot(1, 2, 1)
plt.plot(range(epochs),
         history.history['binary_accuracy'],
         label='Training Accuracy')
plt.plot(range(epochs),
         history.history['val_binary_accuracy'],
         label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Accuracy')

plt.subplot(1, 2, 2)
plt.plot(range(epochs),
         history.history['loss'],
         label='Training Loss')
plt.plot(range(epochs),
         history.history['val_loss'],
         label='Validation Loss')
plt.legend(loc='lower right')
plt.title('Loss')
plt.show()
```

Let's look at what is going wrong and try to increase the overall performance of the model.

# Overfitting

The result above shows that the model overfits and cannot perform well on unseen data. Here we address the overfitting with data augmentation and by adding dropout.

# Data Augmentation

The goal is that the model never sees exactly the same image twice during training.

## Applying Horizontal Flips

```
image_gen = ImageDataGenerator(rescale=1./255., horizontal_flip=True)

train_data_gen = image_gen.flow_from_directory(
    directory=train_dir,
    batch_size=batch_size,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    shuffle=True)

augmented_images = [train_data_gen[0][0][0] for _ in range(5)]
plotImages(augmented_images)
```

## Applying Random Rotations

```
image_gen = ImageDataGenerator(rescale=1./255., rotation_range=45)

train_data_gen = image_gen.flow_from_directory(
    directory=train_dir,
    batch_size=batch_size,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    shuffle=True
)

augmented_images = [train_data_gen[0][0][0] for _ in range(5)]
plotImages(augmented_images)
```

## Applying Zooming

```
# a zoom range between 0 and 1 represents 0% to 100% zoom
image_gen = ImageDataGenerator(rescale=1./255., zoom_range=0.5)

train_data_gen = image_gen.flow_from_directory(
    directory=train_dir,
    batch_size=batch_size,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    shuffle=True
)

augmented_images = [train_data_gen[0][0][0] for _ in range(5)]
plotImages(augmented_images)
```

## Combining Augmentation Methods

```
image_gen = ImageDataGenerator(
    rescale=1./255.,
    horizontal_flip=True,
    rotation_range=45,
    zoom_range=0.5,
    width_shift_range=.15,
    height_shift_range=.15)

train_data_gen = image_gen.flow_from_directory(
    directory=train_dir,
    batch_size=batch_size,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    shuffle=True,
    class_mode='binary')

augmented_images = [train_data_gen[0][0][0] for _ in range(5)]
plotImages(augmented_images)
```

## Creating a Validation Dataset Generator

Normally only the training dataset generator is given the augmentation methods; in contrast, the validation dataset generator is not augmented.

```
image_gen_val = ImageDataGenerator(rescale=1./255.)

val_data_gen = image_gen_val.flow_from_directory(
    directory=validation_dir,
    batch_size=batch_size,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    class_mode='binary')
```

# Dropout

Dropout is a form of regularization. It forces the model to spread its prediction across many weights instead of relying on a few. When the dropout rate is set to 0.1, 10% of the layer's output units are randomly set to zero (dropped) on each training forward pass.
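Keras applies dropout only at training time and rescales the surviving activations so the layer's expected output is unchanged ("inverted dropout"). A NumPy sketch of that scheme (a toy helper, not the Keras internals):

```python
import numpy as np

def inverted_dropout(x, rate, rng):
    """Zero out roughly `rate` of the activations and rescale the survivors
    by 1/(1-rate), so the expected value of each unit is unchanged."""
    keep = rng.random(x.shape) >= rate
    return np.where(keep, x / (1.0 - rate), 0.0), keep

rng = np.random.default_rng(0)
x = np.ones((10000,))
y, keep = inverted_dropout(x, rate=0.2, rng=rng)

print(keep.mean())   # ~0.8 of the units survive
print(y.mean())      # ~1.0: the rescaling preserves the expectation
```

The 1/(1-rate) rescale is what lets the same weights be used unchanged at inference time, when dropout is switched off.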
## A Model with the Dropout Layer

```
def build_model_dropout():
    def _model(inputs):
        x = Conv2D(filters=16, kernel_size=(3, 3), activation='elu', padding='same')(inputs)
        x = MaxPooling2D()(x)
        # Keras handles the dropout rate differently in the training and inference phases
        x = Dropout(0.2)(x)
        x = Conv2D(filters=32, kernel_size=(3, 3), activation='elu', padding='same')(x)
        x = MaxPooling2D()(x)
        x = Conv2D(filters=64, kernel_size=(3, 3), activation='elu', padding='same')(x)
        x = MaxPooling2D()(x)
        x = Dropout(0.2)(x)
        x = Flatten()(x)
        x = Dense(units=512, activation='elu')(x)
        cls = Dense(units=1, activation='sigmoid')(x)
        return cls
    inputs = tf.keras.Input(shape=(IMG_HEIGHT, IMG_WIDTH, 3))
    outputs = _model(inputs)
    model = tf.keras.Model(inputs, outputs)
    return model

model_new = build_model_dropout()
model_new.compile(loss=tf.keras.losses.BinaryCrossentropy(),
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=[tf.keras.metrics.BinaryAccuracy()])
model_new.summary()
```

## Train the Model

```
history = model_new.fit(train_data_gen,
                        steps_per_epoch=total_train // batch_size,
                        epochs=epochs,
                        validation_data=val_data_gen,
                        validation_steps=total_val // batch_size)

history.history.keys()

plt.figure(figsize=(10, 6))
plt.subplot(1, 2, 1)
plt.plot(range(epochs), history.history['binary_accuracy'], label='Training')
plt.plot(range(epochs), history.history['val_binary_accuracy'], label='Validation')
plt.title('Accuracy')

plt.subplot(1, 2, 2)
plt.plot(range(epochs), history.history['loss'], label='Training')
plt.plot(range(epochs), history.history['val_loss'], label='Validation')
plt.title('Loss')
plt.show()
```
# Matrix product state simulation method

## Simulation methods

The `QasmSimulator` has several simulation methods, including `statevector`, `stabilizer`, `extended_stabilizer` and `matrix_product_state`. Each of these determines the internal representation of the quantum circuit and the algorithms used to process the quantum operations. They each have advantages and disadvantages, and choosing the best method is a matter of investigation. In this tutorial, we focus on the `matrix_product_state` simulation method.

## Matrix product state simulation method

This simulation method is based on the concept of `matrix product states`. This structure was initially proposed in the paper *Efficient classical simulation of slightly entangled quantum computations* by Vidal in https://arxiv.org/abs/quant-ph/0301063. There are additional papers that describe the structure in more detail, for example *The density-matrix renormalization group in the age of matrix product states* by Schollwoeck https://arxiv.org/abs/1008.3477.

A pure quantum state is usually described as a state vector, by the expression $|\psi\rangle = \sum_{i_1=0}^1 {\ldots} \sum_{i_n=0}^1 c_{i_1 \ldots i_n} |i_1\rangle {\otimes} {\ldots} {\otimes} |i_n\rangle$.

The state vector representation implies an exponential size representation, regardless of the actual circuit. Every quantum gate operating on this representation requires exponential time and memory.

The matrix product state (MPS) representation offers a local representation, in the form: $\Gamma^{[1]} \lambda^{[1]} \Gamma^{[2]} \lambda^{[2]} \ldots \Gamma^{[n-1]} \lambda^{[n-1]} \Gamma^{[n]}$, such that all the information contained in the $c_{i_1 \ldots i_n}$ can be generated from the MPS representation.

Every $\Gamma^{[i]}$ is a tensor of complex numbers that represents qubit $i$. Every $\lambda^{[i]}$ is a matrix of real numbers that is used to normalize the amplitudes of qubits $i$ and $i+1$. Single-qubit gates operate only on the relevant tensor.
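For $n = 2$ this decomposition is just a singular value decomposition across the single bond. A NumPy sketch of the math (illustrative only, not Qiskit's internal data structures):

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), written as a 2x2 coefficient matrix c[i1, i2].
c = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2.0)

# One SVD across the 1|2 cut yields the local pieces for that bond:
# c = U @ diag(lam) @ Vh, with lam holding the Schmidt coefficients (the lambdas).
U, lam, Vh = np.linalg.svd(c)

print(np.round(lam, 6))   # two equal Schmidt coefficients -> maximally entangled
assert np.allclose(lam, [2**-0.5, 2**-0.5])

# The full amplitudes are recovered exactly from the local tensors:
recon = (U * lam) @ Vh
assert np.allclose(recon, c)

# A product state, by contrast, has bond dimension 1 (a single nonzero lambda):
prod = np.outer([1.0, 0.0], [2**-0.5, 2**-0.5])   # |0> tensor |+>
assert np.sum(np.linalg.svd(prod, compute_uv=False) > 1e-12) == 1
```

The number of nonzero $\lambda$ values (the bond dimension) is exactly the quantity that stays small for weakly entangled circuits, which is why the MPS method can be so efficient on them.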
Two-qubit gates operate on consecutive qubits $i$ and $i+1$. This involves a tensor-contract operation over $\lambda^{[i-1]}$, $\Gamma^{[i]}$, $\lambda^{[i]}$, $\Gamma^{[i+1]}$ and $\lambda^{[i+1]}$, which creates a single tensor. We apply the gate to this tensor, and then decompose back to the original structure. This operation may increase the size of the respective tensors. Gates that involve two qubits that are not consecutive require a series of swap gates to bring the two qubits next to each other, followed by the reverse swaps.

In the worst case, the tensors may grow exponentially. However, the size of the overall structure remains 'small' for circuits that do not have 'many' two-qubit gates. This allows much more efficient operations in circuits with relatively 'low' entanglement. Characterizing when to use this method over other methods is a subject of current research.

## Using the matrix product state simulation method

The matrix product state simulation method is invoked in the `QasmSimulator` by setting `"method": "matrix_product_state"` in the backend options.
Other than that, all operations are controlled by the `QasmSimulator` itself, as in the following example:

```
import numpy as np

# Import Qiskit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import Aer, execute
from qiskit.providers.aer import QasmSimulator

# Construct quantum circuit
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
circ.measure([0,1], [0,1])

# Select the QasmSimulator from the Aer provider
simulator = Aer.get_backend('qasm_simulator')

# Define the simulation method
backend_opts_mps = {"method":"matrix_product_state"}

# Execute and get counts, using the matrix_product_state method
result = execute(circ, simulator, backend_options=backend_opts_mps).result()
counts = result.get_counts(circ)
counts
```

To see the internal state vector of the circuit, we can import the snapshot files:

```
from qiskit.extensions.simulator import Snapshot
from qiskit.extensions.simulator.snapshot import snapshot

circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)

# Define a snapshot that shows the current state vector
circ.snapshot('my_sv', snapshot_type='statevector')
circ.measure([0,1], [0,1])

# Execute
job_sim = execute([circ], QasmSimulator(), backend_options=backend_opts_mps)
result = job_sim.result()

# print the state vector
result.data()['snapshots']['statevector']['my_sv'][0]
result.get_counts()
```

Running circuits using the matrix product state simulation method can be fast relative to other methods. However, if we generate the state vector during the execution, then the conversion to a state vector is, of course, exponential in memory and time, and we therefore don't benefit from using this method. We do benefit if we only perform operations that don't require the full state vector - for example, running a circuit and then taking measurements. The circuit below has 50 qubits. We create a GHZ state - often loosely called an EPR state - involving all these qubits.
Although this state is highly entangled, it is handled well by the matrix product state method, because there are effectively only two states.

We can handle more qubits than this, but execution may take a few minutes. Try running a similar circuit with 500 qubits! Or maybe even 1000 (you can get a cup of coffee while waiting).

```
num_qubits = 50
qr = QuantumRegister(num_qubits)
cr = ClassicalRegister(num_qubits)
circ = QuantumCircuit(qr, cr)

# Create EPR state
circ.h(qr[0])
for i in range(0, num_qubits - 1):
    circ.cx(qr[i], qr[i+1])

# Measure
circ.measure(qr, cr)

job_sim = execute([circ], QasmSimulator(), backend_options=backend_opts_mps)
result = job_sim.result()
print("Time taken: {} sec".format(result.time_taken))
result.get_counts()

import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
# Initialization

```
!mkdir -p models
!wget "https://drive.google.com/uc?id=1E95HNEYQI1R-UTuYwJOycrYDJxFxiICC&export=download" -O songs.p
!pip install tensorflow-gpu==1.14
```

# Dataset Preparation

```
%pylab inline
import pickle
import pandas as pd
import keras
from music21 import converter, instrument, note, chord, stream
from keras.models import Sequential, load_model
from keras.layers import LSTM, Bidirectional, Dropout, Dense, Activation
from keras.callbacks import ModelCheckpoint, History

# implementation based on https://towardsdatascience.com/how-to-generate-music-using-a-lstm-neural-network-in-keras-68786834d4c5
def generate_dataset():
    """Data preprocessing based on a list of song objects; each song object
    contains the song's name and a sequence of notes."""
    songs = pickle.load(open("songs.p", "rb"))

    # get a list of all notes in all songs
    notes = []
    for song in songs:
        notes += song["notes"]

    # n_vocab is the number of unique notes
    n_vocab = len(set(notes))

    # length of the input sequence to the LSTM network
    sequence_length = 100

    # get all pitch names
    pitchnames = sorted(set(item for item in notes))

    # create dictionaries to map pitches to integers and back
    note_to_int = dict((note, number) for number, note in enumerate(pitchnames))
    int_to_note = dict((number, note) for number, note in enumerate(pitchnames))

    network_input = []
    network_output = []

    # create input sequences and the corresponding outputs
    for song in songs:
        print("Loading", song["name"])
        notes = song["notes"]
        for i in range(0, len(notes) - sequence_length, 1):
            sequence_in = notes[i : i + sequence_length]
            sequence_out = notes[i + sequence_length]
            network_input.append([note_to_int[char] for char in sequence_in])
            network_output.append(note_to_int[sequence_out])

    n_patterns = len(network_input)

    # reshape the input into a format compatible with LSTM layers
    network_input = np.reshape(network_input, (n_patterns, sequence_length, 1))
    # normalize input
    network_input = network_input / float(n_vocab)
    network_output =
keras.utils.to_categorical(network_output)

    return (n_vocab, int_to_note, network_input, network_output)
```

# Create Network

```
def create_network(network_input, n_vocab):
    """create the network structure"""
    model = Sequential()
    model.add(LSTM(128, input_shape=(network_input.shape[1], network_input.shape[2]), return_sequences=True))
    model.add(Dropout(0.5))
    model.add(Bidirectional(LSTM(256, return_sequences=True)))
    model.add(Dropout(0.5))
    model.add(Bidirectional(LSTM(256)))
    model.add(Dense(128))
    model.add(Dropout(0.5))
    model.add(Dense(n_vocab))
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam')
    return model
```

# Train Network

```
def train_network():
    """train the network"""
    (n_vocab, _, network_input, network_output) = generate_dataset()

    # get the model structure
    model = create_network(network_input, n_vocab)
    model.summary()

    # callbacks
    history = History()
    filepath = 'models/model-{epoch:02d}-{loss:.2f}-{val_loss:.2f}.hdf5'
    checkpoint_cb = ModelCheckpoint(
        filepath,
        monitor='loss',
        verbose=0,
        save_best_only=True,
        mode='min'
    )

    # start training the model
    n_epoch = 30
    model.fit(network_input, network_output, epochs=n_epoch,
              batch_size=64, validation_split=0.2, callbacks=[history, checkpoint_cb])
    model.save('music_generate_model.h5')

    # Plot the model losses
    pd.DataFrame(history.history).plot()
    plt.savefig('network_loss_per_epoch.png', transparent=True)
    plt.close()

train_network()
```

# Model Prediction

```
# The model prediction implementation is based on https://github.com/corynguyen19/midi-lstm-gan
def generate_notes(model):
    """ Generate notes from the neural network based on a sequence of notes """
    (n_vocab, int_to_note, network_input, _) = generate_dataset()

    # pick a random sequence from the input as a starting point for the prediction
    start = np.random.randint(0, len(network_input) - 1)
    pattern = network_input[start]
    prediction_output = []

    # generate 500 notes
    for i in range(500):
        prediction_input = np.reshape(pattern, (1,
len(pattern), 1))
        prediction_input = prediction_input / float(n_vocab)
        prediction = model.predict(prediction_input, verbose=0)

        # randomly choose an index from the predicted distribution
        index = np.random.choice(n_vocab, 1, p=prediction[0])[0]
        # index = np.argmax(prediction)
        result = int_to_note[index]
        prediction_output.append(result)
        print(result)

        pattern = np.append(pattern, index)
        pattern = pattern[1 : len(pattern)]
    return prediction_output

def create_midi(prediction_output, filename):
    """ convert the output from the prediction to notes and create a midi file from the notes """
    offset = 0
    output_notes = []

    # create note and chord objects based on the values generated by the model
    for pattern in prediction_output:
        # pattern is a chord
        if ('.' in pattern) or pattern.isdigit():
            notes_in_chord = pattern.split('.')
            notes = []
            for current_note in notes_in_chord:
                new_note = note.Note(int(current_note))
                new_note.storedInstrument = instrument.Piano()
                notes.append(new_note)
            new_chord = chord.Chord(notes)
            new_chord.offset = offset
            output_notes.append(new_chord)
        # pattern is a note
        else:
            new_note = note.Note(pattern)
            new_note.offset = offset
            new_note.storedInstrument = instrument.Piano()
            output_notes.append(new_note)
        # increase the offset each iteration so that notes do not stack
        offset += 0.5

    midi_stream = stream.Stream(output_notes)
    midi_stream.write('midi', fp='{}.mid'.format(filename))

# model = load_model('/content/models/model-20-1.53-6.50.hdf5')
# prediction_output = generate_notes(model)
# create_midi(prediction_output, 'test')
```

# Download files

```
from google.colab import files
from google.colab import drive

# files.download('test.mid')
drive.mount('/content/gdrive', force_remount=True)

!zip models.zip models/*
!cp models.zip '/content/gdrive/My Drive/'
```
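The sliding-window construction at the heart of `generate_dataset` above (each window of `sequence_length` integer-encoded notes predicts the next note) can be seen in isolation on a toy note list:

```python
# Toy note stream and the note -> int map, following the same scheme as generate_dataset
notes = ["C4", "E4", "G4", "C4", "E4", "G4", "C5"]
pitchnames = sorted(set(notes))
note_to_int = {n: i for i, n in enumerate(pitchnames)}

sequence_length = 3
network_input, network_output = [], []
for i in range(len(notes) - sequence_length):
    window = notes[i : i + sequence_length]
    network_input.append([note_to_int[n] for n in window])
    network_output.append(note_to_int[notes[i + sequence_length]])

print(len(network_input))                      # 4 windows from 7 notes of length 3
print(network_input[0], network_output[0])
```

Each (input, output) pair is one supervised example; the real pipeline then reshapes the inputs to `(n_patterns, sequence_length, 1)` and normalizes by the vocabulary size.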
# 8. Dashboard

In this notebook we created a dashboard based on a little EDA with the TMDB dataset. As output, we decided the dashboard should contain a figure with the most expensive movies, the most popular ones, the proportion of different movie genres, the production countries and the releases over the year. As is common for a dashboard, the graphs should be interactive. We decided to give the user the choice of which year he/she/they want information about.

The dashboard was deployed on Heroku and can be found [here](https://dashboard-movie.herokuapp.com/)

```
# load requirements
import pandas as pd
import dash
from dash import dcc
from dash import html
import plotly.express as px
from ast import literal_eval
from dash import dash_table
from dash.dependencies import Input, Output

smd = pd.read_csv("data/data_for_dash.csv")
smd.describe()

# convert the list-valued strings in the specified columns into real lists
columns = ['actors', "genre", "production_country"]
for i in columns:
    smd[i] = smd[i].apply(literal_eval)

# this function converts a series of lists into a 1D series
def to_1D(series):
    return pd.Series([x for _list in series for x in _list])

# functions for dashboard figures
# get year values for the dropdown menu
years = smd.year.unique()
smd_year = smd[smd["year"] == years[0]]

# data subsets for the plots
smd_ya = pd.DataFrame(to_1D(smd_year["actors"]).value_counts())
smd_yg = pd.DataFrame(to_1D(smd_year["genre"]).value_counts())
smd_yp = pd.DataFrame(to_1D(smd_year["production_country"]).value_counts())
rel = pd.DataFrame(smd_year.month.value_counts().reindex(['January', 'February', 'March', 'April', 'May', 'June', 'July',
       'August', 'September', 'October', 'November', 'December']))

# figures
fig1 = px.bar(x=smd_year.sort_values("budget")["original_title"][:10],
              y=smd_year.sort_values("budget").index[:10])
fig1.update_layout(title_text='Most expensive movies',
                   title_x=0.5)

fig2 = px.bar(x=smd_year.sort_values("popularity")["original_title"][:10],
              y=smd_year.sort_values("popularity").index[:10])
fig2.update_layout(title_text='Most popular movies', title_x=0.5)

# the genre counts (smd_yg) must supply both the values and the labels
fig3 = px.pie(values=smd_yg[0][:10], names=smd_yg.index[:10])
fig3.update_layout(title_text='Movie genres', title_x=0.5)

fig4 = px.choropleth(locations=smd_yp.index, locationmode="country names", color=smd_yp[0])
fig4.update_layout(title_text='Movies produced by Country', title_x=0.5)

fig5 = px.line(x=rel.index, y=rel["month"])
fig5.update_layout(title_text='Releases over the year', title_x=0.5)

# app
app = dash.Dash()
app.title = 'Mokey Dash'
app.layout = html.Div(style={"background-color": "white"}, children=[html.Div([
    html.H2("Please choose a year you want to have information about",
            style={"padding-top": "10px", "padding-bottom": "10px", "background-color": "#FFF1AF"}),
    dcc.Dropdown(
        id="dropdown",
        options=sorted([{'label': i, 'value': i} for i in years], key=lambda x: x['label']),
        value=years[0],
        clearable=False,
        style={"width": "50%", 'margin': 'auto'}),
    dcc.Graph(id="exp-movies", figure=fig1, style={"width": "30%", "margin-left": "20px", "position": "relative", "display": "inline-block"}),
    dcc.Graph(id="pop-movies", figure=fig2, style={"width": "30%", "margin-left": "20px", "position": "relative", "display": "inline-block"}),
    dcc.Graph(id="pieplot", figure=fig3, style={"width": "30%", "margin-left": "20px", "position": "relative", "display": "inline-block"}),
    dcc.Graph(id="mapplot", figure=fig4, style={"width": "65%", "margin-left": "20px", "position": "relative", "display": "inline-block"}),
    dcc.Graph(id="lineplot", figure=fig5, style={"width": "30%", "margin-left": "20px", "position": "relative", "display": "inline-block"}),
    ])
])

@app.callback(
    [Output("exp-movies", "figure"), Output("pop-movies", "figure"), Output("pieplot", "figure"),
     Output("mapplot", "figure"), Output("lineplot", "figure")],
    [Input("dropdown", "value")])
def update_plots(year):
    smd_year =
smd[smd["year"] == year]
    smd_ya = pd.DataFrame(to_1D(smd_year["actors"]).value_counts())
    smd_yg = pd.DataFrame(to_1D(smd_year["genre"]).value_counts())
    smd_yp = pd.DataFrame(to_1D(smd_year["production_country"]).value_counts())
    rel = pd.DataFrame(smd_year.month.value_counts().reindex(['January', 'February', 'March', 'April', 'May', 'June', 'July',
       'August', 'September', 'October', 'November', 'December']))

    fig1 = px.bar(x=smd_year.sort_values("budget", ascending=False).budget[:10],
                  y=smd_year.sort_values("budget", ascending=False)["original_title"][:10]).update_layout(
                      title_text='Most expensive movies', title_x=0.5).update_xaxes(title="US $")
    fig1.update_yaxes(title=None, autorange="reversed")

    fig2 = px.bar(x=smd_year.sort_values("popularity", ascending=False).popularity[:10],
                  y=smd_year.sort_values("popularity", ascending=False)["original_title"][:10]).update_layout(
                      title_text='Most popular movies', title_x=0.5).update_xaxes(title="TMDB popularity score")
    fig2.update_yaxes(title=None, autorange="reversed")

    # genre counts supply both the values and the labels (smd_ya holds actor counts)
    fig3 = px.pie(values=smd_yg[0][:10], names=smd_yg.index[:10]).update_layout(title_text='Movie genres', title_x=0.5)

    fig4 = px.choropleth(locations=smd_yp.index, locationmode="country names",
                         color=smd_yp[0]).update_layout(title_text='Movies produced by Country', title_x=0.5)

    fig5 = px.line(x=rel.index, y=rel["month"], markers=True).update_layout(
        title_text='Releases over the year', title_x=0.5).update_xaxes(title=None)
    fig5.update_yaxes(title=None)

    return fig1, fig2, fig3, fig4, fig5

if __name__ == "__main__":
    app.run_server()
```
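The `to_1D` helper that feeds the genre and country tallies can be exercised in isolation on toy data; pandas' built-in `Series.explode` produces the same counts:

```python
import pandas as pd

def to_1D(series):
    # same helper as in the dashboard: flatten a column of lists
    return pd.Series([x for _list in series for x in _list])

# toy frame with a list-valued genre column
smd_toy = pd.DataFrame({
    "genre": [["Drama", "Comedy"], ["Drama"], ["Action", "Drama"]],
})

genre_counts = to_1D(smd_toy["genre"]).value_counts()
print(genre_counts)  # Drama appears 3 times, Comedy and Action once each

# the explode-based built-in gives the same tallies
assert smd_toy["genre"].explode().value_counts().to_dict() == genre_counts.to_dict()
```

`value_counts()` on the flattened series is what gives each pie slice and choropleth color its count.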
# Learning Objectives:

1. Reading files
2. Exploring the read dataframe
3. Checking the dataframe info
4. Merging the two dataframes into one
5. Defining questions for the analysis
6. Cleaning steps

```
import numpy as np
import pandas as pd
```

## Reading the files

```
movies_df = pd.read_csv("../data/movies.csv")
credits_df = pd.read_csv("https://raw.githubusercontent.com/harshitcodes/tmdb_movie_data_analysis/master/tmdb-5000-movie-dataset/tmdb_5000_credits.csv")
```

## Exploring the read dataframe

* Look at sample rows
* Columns and shape of the dataframe
* Check if you can merge the files
* Understand the type of questions that we can answer using the data
* Define the cleaning steps - using pandas
* Start looking for answers

```
movies_df.head()
movies_df.sample(5)
movies_df.shape
credits_df.head()
```

## Checking the dataframe info

```
movies_df.info()
credits_df.info()
movies_df.head()
credits_df.head()
```

## Merging the two dataframes

```
movies_df = pd.merge(movies_df, credits_df, left_on='id', right_on='movie_id')
movies_df.head()
```

## Define questions for the analysis

1. Which are the top 5 most expensive movies?
2. Which are the most profitable movies? Comparison between min and max profits.
3. Which are the most talked about movies?
4. What is the average runtime of movies?
5. Which movies are rated above 7 by critics?
6. Which year had the most profitable movies?

## Cleaning Steps

1. Remove redundant columns.
2. Clean or flatten the genres, cast and other columns that contain JSON data.
3. Remove duplicate rows.
4. Some movies in the data have zero budget/zero revenue, i.e. their value was not recorded.
5. Change the data types of columns wherever required.
6. Replace zero with NaN in the runtime column.
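Steps 4 and 6 of the cleaning list reduce to a replace-then-drop pattern; a minimal sketch on invented rows:

```python
import numpy as np
import pandas as pd

# Toy frame: a zero budget/revenue/runtime means "not recorded", as in the TMDB data
movies = pd.DataFrame({
    "title": ["A", "B", "C"],
    "budget": [100, 0, 50],
    "revenue": [300, 10, 0],
    "runtime": [120, 0, 95],
})

cols = ["budget", "revenue"]
movies[cols] = movies[cols].replace(0, np.nan)             # step 4: zero -> NaN
movies["runtime"] = movies["runtime"].replace(0, np.nan)   # step 6: same for runtime
movies = movies.dropna(subset=cols)                        # drop rows missing either value

print(movies["title"].tolist())  # only "A" has both budget and revenue recorded
```

Turning the zeros into NaN first is what lets `dropna` (and later aggregations) treat them as missing rather than as real values.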
``` del_col_list = ['keywords', 'homepage', 'status', 'tagline', 'original_language', 'overview', 'production_companies', 'original_title', 'title_y'] movies_df = movies_df.drop(del_col_list, axis=1) movies_df.head() movies_df = movies_df.rename(columns={"title_x": "title"}) movies_df.head() ``` ## Flattening the JSON ``` movies_df['genres'][0] import json payload = json.loads(movies_df['genres'][0]) payload type(payload) genre_list = [] for i in range(len(payload)): genre_list.append(payload[i]['name']) genre_list def clean_json(x): genre_list = [] for i in range(len(x)): genre_list.append(x[i]['name']) return str(genre_list) movies_df['genres'].apply(json.loads).apply(clean_json) def flatten_json(column): movies_df[column] = movies_df[column].apply(json.loads).apply(clean_json) flatten_json('spoken_languages') flatten_json('cast') flatten_json('production_countries') flatten_json('genres') movies_df ``` ## Dropping duplicate rows ``` print(movies_df.shape) movies_df = movies_df.drop_duplicates(keep='first') print(movies_df.shape) ``` ## Dropping values with zero revenue and budget ``` movies_df.describe() cols = ['budget', 'revenue'] movies_df[cols] = movies_df[cols].replace(0, np.nan) movies_df.dropna(subset=cols, inplace=True) movies_df.shape ``` ## Change the release_date column to datetime column ``` movies_df.info() movies_df["release_date"] = pd.to_datetime(movies_df['release_date']) movies_df.info() movies_df['release_year'] = movies_df['release_date'].dt.year movies_df.head() ``` ## Saving the cleaned dataframe to a file ``` movies_df.to_csv("../data/cleaned_movies_data.csv") ```
``` import pickle import os import numpy as np base_dirs = ['/localdata/juan/inferno/', '/localdata/juan/erehwon/', '/localdata/juan/numenor/'] experiments = ['dcp_mcpilco_dropoutd_mlpp_4', 'dcp_mcpilco_lndropoutd_mlpp_6', 'dcp_mcpilco_dropoutd_dropoutp_7', 'dcp_mcpilco_lndropoutd_dropoutp_8'] result_files = [] for b in base_dirs: dirs = os.listdir(b) for e in experiments: for d in dirs: if d.find(e) >= 0: res_dir = os.path.join(b,d) res_file = os.path.join(res_dir, 'results_50_10') result_files.append(res_file) print(result_files) base_dirs = ['/localdata/juan/inferno/', '/localdata/juan/erehwon/', '/localdata/juan/numenor/'] experiments = ['dcp_mcpilco_dropoutd_mlpp_4', 'dcp_mcpilco_lndropoutd_mlpp_6', 'dcp_mcpilco_dropoutd_dropoutp_7', 'dcp_mcpilco_lndropoutd_dropoutp_8'] result_files = [] for b in base_dirs: for e in experiments: res_dir = os.path.join(b,e) res_file = os.path.join(res_dir, 'results_50_10') result_files.append(res_file) print(result_files) result_arrays = {} for rpath in result_files: if not os.path.isfile(rpath): continue with open(rpath, 'rb') as f: print('Opening %s' % rpath) exp_type = None for e in experiments: if rpath.find(e) >= 0: exp_type = e break arrays = result_arrays.get(exp_type, []) arrays.append(pickle.load(f)) result_arrays[exp_type] = arrays ids = [0,1,2,4,3,5,6] names = ['SSGP-DYN_RBF-POL (PILCO)', 'DROPOUT-DYN_RBF-POL', 'DROPOUT-DYN_MLP-POL', 'LOGNORMAL-DYN_RBF-POL', 'DROPOUT-DYN_DROPOUT-POL', 'LOGNORMAL-DYN_MLP-POL', 'LOGNORMAL-DYN_DROPOUT-POL'] from collections import OrderedDict # gather all costs costs = OrderedDict() for e in experiments: exp_results = result_arrays[e] costs_e = [] for results in exp_results: costs_i = [] # learning iteration for rj in results: costs_ij = [] # trial for r in rj: costs_ij.append(r[2]) costs_i.append(costs_ij) if len(costs_i) > 0: costs_e.append(costs_i) costs_i = np.concatenate(costs_e, axis=1).squeeze() costs_i = costs_i/costs_i.shape[-1] print(e, costs_i.shape) mean_sum_costs = costs_i.sum(-1).mean(-1) std_sum_costs = costs_i.sum(-1).std(-1) min_sum_costs = costs_i.sum(-1).min(1) max_sum_costs = costs_i.sum(-1).max(1) costs[e] = (mean_sum_costs, std_sum_costs, min_sum_costs, max_sum_costs) print(costs[e]) import matplotlib from matplotlib import pyplot as plt names = ['Binary Drop (p=0.1) Dyn, MLP Pol', 'Log-Normal Drop Dyn, MLP Pol', 'Binary Drop (p=0.1) Dyn, Drop MLP Pol p=0.1', 'Log-Normal Drop Dyn, Drop MLP Pol p=0.1'] linetype = ['--','-','--','-'] names = dict(zip(experiments, names)) linetype = dict(zip(experiments, linetype)) matplotlib.rcParams.update({'font.size': 20}) fig = plt.figure(figsize=(15,9)) t = range(len(list(costs.values())[0][1])) for e in costs: mean, std, min_, max_ = costs[e] min_ = np.minimum.accumulate(mean) max_ = max_[np.array([np.where(mean==mi) for mi in min_]).flatten()] std_ = std[np.array([np.where(mean==mi) for mi in min_]).flatten()] if e.find('rbf') < 0: pl, = plt.plot(t, min_, linetype[e], label=names[e], linewidth=2) #pl, = plt.plot(t, max_, linetype[e], label=names[e], linewidth=2, color=pl.get_color()) alpha = 0.5 for i in range(1,2): alpha = alpha*0.5 lower_bound = min_ - i*std_*0.5 #lower_bound = min_ upper_bound = min_ + i*std_*0.5 #upper_bound = max_ plt.plot(t, upper_bound, linetype[e], linewidth=2, color=pl.get_color(), alpha=alpha) plt.plot(t, lower_bound, linetype[e], linewidth=2, color=pl.get_color(), alpha=alpha) plt.fill_between(t, lower_bound, upper_bound, alpha=alpha, color=pl.get_color(), linestyle=linetype[e]) plt.legend() plt.xlabel('Number of interactions (3.0 secs at 10 Hz each)') plt.ylabel('Average cost (over 80 runs)') plt.show() import matplotlib from matplotlib import pyplot as plt matplotlib.rcParams.update({'font.size': 18}) fig = plt.figure(figsize=(15,9)) t = range(len(list(costs.values())[0][1])) for name in costs: mean, std, min_, max_ = costs[name] if name.find('rbf') > 0: pl, = plt.plot(t, mean, label=name, linewidth=2) alpha = 0.5 for i in range(1,2): alpha = alpha*0.8 lower_bound = mean - i*std upper_bound = mean + i*std plt.fill_between(t, lower_bound, upper_bound, alpha=alpha, color=pl.get_color()) plt.legend() plt.xlabel('Number of interactions') plt.ylabel('Average cost (over 100 runs)') plt.show() ```
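The plots above track the best policy found so far via `np.minimum.accumulate(mean)`. A hypothetical toy example of the same running-minimum idea using only the standard library (the sample costs are made up):

```python
from itertools import accumulate

# Mean episode costs over successive interactions (illustrative values).
mean_costs = [5.0, 3.0, 4.0, 2.5, 2.7]

# Running minimum: cost of the best policy found up to each interaction.
# Equivalent to np.minimum.accumulate(mean_costs) on an ndarray.
best_so_far = list(accumulate(mean_costs, min))
print(best_so_far)  # [5.0, 3.0, 3.0, 2.5, 2.5]
```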
``` # This cell is added by sphinx-gallery !pip install mrsimulator --quiet %matplotlib inline import mrsimulator print(f'You are using mrsimulator v{mrsimulator.__version__}') ``` # ¹¹⁹Sn MAS NMR of SnO The following is a spinning sideband manifold fitting example for the 119Sn MAS NMR of SnO. The dataset was acquired and shared by Altenhof `et al.` [#f1]_. ``` import csdmpy as cp import matplotlib.pyplot as plt from lmfit import Minimizer from mrsimulator import Simulator, SpinSystem, Site, Coupling from mrsimulator.methods import BlochDecaySpectrum from mrsimulator import signal_processing as sp from mrsimulator.utils import spectral_fitting as sf from mrsimulator.utils import get_spectral_dimensions ``` ## Import the dataset ``` filename = "https://sandbox.zenodo.org/record/814455/files/119Sn_SnO.csdf" experiment = cp.load(filename) # standard deviation of noise from the dataset sigma = 0.6410905 # For spectral fitting, we only focus on the real part of the complex dataset experiment = experiment.real # Convert the coordinates along each dimension from Hz to ppm. _ = [item.to("ppm", "nmr_frequency_ratio") for item in experiment.dimensions] # plot of the dataset. plt.figure(figsize=(4.25, 3.0)) ax = plt.subplot(projection="csdm") ax.plot(experiment, "k", alpha=0.5) ax.set_xlim(-1200, 600) plt.grid() plt.tight_layout() plt.show() ``` ## Create a fitting model **Guess model** Create a guess list of spin systems. There are two spin systems present in this example, - 1) an uncoupled $^{119}\text{Sn}$ and - 2) a coupled $^{119}\text{Sn}$-$^{117}\text{Sn}$ spin systems. 
``` sn119 = Site( isotope="119Sn", isotropic_chemical_shift=-210, shielding_symmetric={"zeta": 700, "eta": 0.1}, ) sn117 = Site( isotope="117Sn", isotropic_chemical_shift=0, ) j_sn = Coupling( site_index=[0, 1], isotropic_j=8150.0, ) sn117_abundance = 7.68 # in % spin_systems = [ # uncoupled spin system SpinSystem(sites=[sn119], abundance=100 - sn117_abundance), # coupled spin systems SpinSystem(sites=[sn119, sn117], couplings=[j_sn], abundance=sn117_abundance), ] ``` **Method** ``` # Get the spectral dimension parameters from the experiment. spectral_dims = get_spectral_dimensions(experiment) MAS = BlochDecaySpectrum( channels=["119Sn"], magnetic_flux_density=9.395, # in T rotor_frequency=10000, # in Hz spectral_dimensions=spectral_dims, experiment=experiment, # add the measurement to the method. ) # Optimize the script by pre-setting the transition pathways for each spin system from # the method. for sys in spin_systems: sys.transition_pathways = MAS.get_transition_pathways(sys) ``` **Guess Spectrum** ``` # Simulation # ---------- sim = Simulator(spin_systems=spin_systems, methods=[MAS]) sim.run() # Post Simulation Processing # -------------------------- processor = sp.SignalProcessor( operations=[ sp.IFFT(), sp.apodization.Exponential(FWHM="1500 Hz"), sp.FFT(), sp.Scale(factor=5000), ] ) processed_data = processor.apply_operations(data=sim.methods[0].simulation).real # Plot of the guess Spectrum # -------------------------- plt.figure(figsize=(4.25, 3.0)) ax = plt.subplot(projection="csdm") ax.plot(experiment, "k", linewidth=1, label="Experiment") ax.plot(processed_data, "r", alpha=0.75, linewidth=1, label="guess spectrum") ax.set_xlim(-1200, 600) plt.grid() plt.legend() plt.tight_layout() plt.show() ``` ## Least-squares minimization with LMFIT Use the :func:`~mrsimulator.utils.spectral_fitting.make_LMFIT_params` for a quick setup of the fitting parameters. 
``` params = sf.make_LMFIT_params(sim, processor, include={"rotor_frequency"}) # Remove the abundance parameters from params. Since the measurement detects 119Sn, we # also remove the isotropic chemical shift parameter of 117Sn site from params. The # 117Sn is the site at index 1 of the spin system at index 1. params.pop("sys_0_abundance") params.pop("sys_1_abundance") params.pop("sys_1_site_1_isotropic_chemical_shift") # Since the 119Sn site is shared between the two spin systems, we add constraints to the # 119Sn site parameters from the spin system at index 1 to be the same as 119Sn site # parameters from the spin system at index 0. lst = [ "isotropic_chemical_shift", "shielding_symmetric_zeta", "shielding_symmetric_eta", ] for item in lst: params[f"sys_1_site_0_{item}"].expr = f"sys_0_site_0_{item}" print(params.pretty_print(columns=["value", "min", "max", "vary", "expr"])) ``` **Solve the minimizer using LMFIT** ``` minner = Minimizer(sf.LMFIT_min_function, params, fcn_args=(sim, processor, sigma)) result = minner.minimize() result ``` ## The best fit solution ``` best_fit = sf.bestfit(sim, processor)[0] residuals = sf.residuals(sim, processor)[0] # Plot the spectrum plt.figure(figsize=(4.25, 3.0)) ax = plt.subplot(projection="csdm") ax.plot(experiment, "k", linewidth=1, label="Experiment") ax.plot(best_fit, "r", alpha=0.75, linewidth=1, label="Best Fit") ax.plot(residuals, alpha=0.75, linewidth=1, label="Residuals") ax.set_xlim(-1200, 600) plt.grid() plt.legend() plt.tight_layout() plt.show() ``` .. [#f1] Altenhof A. R., Jaroszewicz M. J., Lindquist A. W., Foster L. D. D., Veinberg S. L., and Schurko R. W. Practical Aspects of Recording Ultra-Wideline NMR Patterns under Magic-Angle Spinning Conditions. J. Phys. Chem. C. 2020, **124**, 27, 14730–14744 `DOI: 10.1021/acs.jpcc.0c04510 <https://doi.org/10.1021/acs.jpcc.0c04510>`_
--- ``` __authors__ = ["Tricia D Shepherd", "Ryan C. Fortenberry", "Matthew Kennedy", "C. David Sherrill"] __credits__ = ["Victor H. Chavez", "Lori Burns"] __email__ = ["profshep@icloud.com", "r410@olemiss.edu"] __copyright__ = "(c) 2008-2019, The Psi4Education Developers" __license__ = "BSD-3-Clause" __date__ = "2019-11-18" ``` --- ## Introduction The eigenfunction solutions to the Schrödinger equation for a multielectron system depend on the coordinates of all electrons. The orbital approximation says that we can represent a many-electron eigenfunction in terms of individual electron orbitals, each of which depends only on the coordinates of a single electron. A *basis set* in this context is a set of *basis functions* used to approximate these orbitals. There are two general categories of basis sets: *minimal basis sets* that describe only occupied orbitals and *extended basis sets* that describe both occupied and unoccupied orbitals. ### Part A. What is the calculated Hartree Fock energy using a minimal basis set? 1. Import the required modules (**psi4** and **numpy**) 2. Define a Boron atom as a ```psi4.geometry``` object. Be mindful of the charge and spin multiplicity. For a neutral B atom, the atom can only be a doublet (1 unpaired electron). 3. Set psi4 options to select an **unrestricted** calculation (a restricted calculation *won't* work with this electronic configuration). 4. Run a **Hartree-Fock** calculation using the basis set **STO-3G**, store both the energy and the wavefunction object. The energy will be given in atomic units. 5. Look at your results by printing them within a cell. It is possible to obtain information about the basis set from the wfn object. The number of basis functions can be accessed with: ```wfn.basisset().nbf()``` RESPONSE: *** ### Part B. How does the Hartree Fock energy depend on the trial basis set? In computational chemistry, we focus on two types of functions: the Slater-type function and the Gaussian-type function.
Their most basic shape can be given by the following definitions. $$ \Phi_{gaussian}(r) = 1.0 \cdot e^{-1.0 \cdot x^2} $$ and $$ \Phi_{slater}(r) = 1.0 \cdot e^{-1.0 \cdot |x|} $$ Both functions can be visualized below: ``` import matplotlib.pyplot as plt import numpy as np r = np.linspace(0, 5, 100) sto = 1.0 * np.exp(-np.abs(r)) gto = 1.0 * np.exp(-r**2) fig, ax = plt.subplots(ncols=2, nrows=1) p1 = ax[0] p2 = ax[1] fig.set_figheight(5) fig.set_figwidth(15) p1.plot(r, sto, lw=4, color="teal") p1.plot(-r, sto, lw=4, color="teal") p2.plot(r, gto, lw=4, color="salmon") p2.plot(-r, gto, lw=4, color="salmon") p1.title.set_text("Slater-type Orbital") p2.title.set_text("Gaussian-type Orbital") ``` The STO is characterized by two features: 1) The peak at the nucleus and 2) the behavior far from the nucleus, which should tend to zero nice and smoothly. You can see that the GTO does not have those characteristics since the peak is smooth and the ends go to zero *too* quickly. You may remember that the ground state eigenfunction of the Hydrogen atom with a spin equal to zero has the same shape as the STO. This is true not only for Hydrogen but for every atomistic system. One may wonder then why are we not using STO in every calculation? The short answer is that we don't have the exact solution to each of the systems, and when it comes to handling approximations, the GTO are simply more efficient than the STO. Remember the theorem that states that the product of two Gaussians is also a Gaussian? In the first part of the lab, we used the smallest basis set available STO-3G, where STO stands for *Slater-type orbital* which is approximated by the sum of *3 Gaussian functions*. $$\phi^{STO-3G} = \sum_i^3 d_i \cdot C(\alpha_i) \cdot e^{-\alpha_i|r-R_A|^2} $$ Where the $\{ \alpha \}_i$ and $\{ d \}_i$ are the exponents and coefficients that define a basis set and are usually the components needed to create a basis set. 
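The "product of two Gaussians is again a Gaussian" theorem invoked above is easy to verify numerically. In one dimension, $e^{-a(x-A)^2}\,e^{-b(x-B)^2} = K\,e^{-(a+b)(x-P)^2}$ with combined center $P=(aA+bB)/(a+b)$ and prefactor $K=e^{-\frac{ab}{a+b}(A-B)^2}$. The exponents and centers below are arbitrary illustration values:

```python
import math

# Two 1D Gaussians exp(-a(x-A)^2) and exp(-b(x-B)^2); made-up parameters.
a, A = 0.5, -1.0
b, B = 1.5, 2.0

# Analytic combined Gaussian: K * exp(-(a+b)(x-P)^2).
p = a + b
P = (a * A + b * B) / p
K = math.exp(-(a * b / p) * (A - B) ** 2)

# The pointwise product matches the single combined Gaussian everywhere.
for x in (-2.0, 0.0, 1.3, 3.0):
    product = math.exp(-a * (x - A) ** 2) * math.exp(-b * (x - B) ** 2)
    combined = K * math.exp(-p * (x - P) ** 2)
    assert math.isclose(product, combined, rel_tol=1e-12)
print("product of two Gaussians is again a Gaussian")
```

This closure property is the reason two-electron integrals over GTOs reduce to one-center integrals, which is what makes Gaussians so much cheaper than Slater functions in practice.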
STO-3G is an example of a minimal basis set, *i.e.* it represents the orbitals of each occupied subshell with one basis function. While basis sets of the form STO-nG were popular in the 1980's, they are not widely used today. For the same reason that multiple Gaussian functions better approximate a Slater-type orbital, multiple STO-nG functions are found to be more efficient at approximating atomic orbitals. In practice, inner shell (core) electrons are still described by a single STO-nG function and only valence electrons are expressed as the sum of STO-nG type functions. You will see that the approximation performs really well. Look at the following example$^1$: ``` import matplotlib.pyplot as plt def sto(r, coef, exp): return coef * (2*exp/np.pi)**(3/4) * np.exp(-exp*(r)**2) slater = (1/np.pi)**(0.5) * np.exp(-1.0*np.abs(r)) sto_1g = sto(r, 1.00, 0.270950) sto_2g = sto(r, 0.67, 0.151623) + sto(r, 0.43, 0.851819) sto_3g = sto(r, 0.44, 0.109818) + sto(r, 0.53, 0.405771) + sto(r, 0.154, 2.22766) plt.figure(figsize=(15,5)) plt.plot(r, sto_1g, lw=4, c="c") plt.plot(-r, sto_1g, label="STO-1G", lw=4, c="c") plt.plot(r, sto_2g, lw=4, c="orchid") plt.plot(-r, sto_2g, label="STO-2G", lw=4, c="orchid") plt.plot(r, sto_3g, lw=4, c="gold" ) plt.plot(-r, sto_3g, label="STO-3G", lw=4, c="gold" ) plt.plot(r, slater, ls=":", lw=4, c="grey") plt.plot(-r, slater, label="Slater-type function", ls=":", lw=4, c="grey") plt.legend(fontsize=15) plt.show() ``` We can clearly see that with the addition of each new Gaussian, our linear combination behaves more and more like an STO. Each of these Gaussians is commonly known as a *primitive*. ###### $^1$ Szabo, Attila, and Neil S. Ostlund. Modern quantum chemistry: introduction to advanced electronic structure theory. Courier Corporation, 2012. ___ To understand how to read each basis set, let's consider the next available basis set: 3-21G. The number before the dash, "3", represents the 3 Gaussian primitives (i.e.
a STO-"3"G) used to represent the inner shell electrons. The next two numbers represent the valence shell split into two sets of STO-nG functions -- one with "2" Gaussian-type orbitals (GTOs) and one with "1" GTO. Let us see how this other basis set performs. 1. With the previously defined Boron atom, run a new HF calculation using the basis set "3-21G". 2. Rationalize the number of basis functions used for the STO-3G and 3-21G calculations. 3. Compare the STO-3G and 3-21G HF energies. Which basis set is more accurate? (Recall that the variational principle states that for a given Hamiltonian operator, any trial wavefunction will have an average energy that is greater than or equal to that of the "true" corresponding ground state wavefunction. Because of this, the Hartree Fock energy is an upper bound to the ground state energy of a given molecule.) RESPONSE: *** ### Part C. How can we improve the accuracy of the HF energy? To make an even better approximation to our trial function, we may need to take into account the two following effects: #### Polarization: Accounts for the unequal distribution of electrons when two atoms approach each other. We can include these effects by adding STOs of higher orbital angular momentum, i.e., d-type functions are added to describe valence electrons in 2p orbitals. We can tell whether polarization functions are present by the use of asterisks: * One asterisk (*) refers to polarization on heavy atoms. * Two asterisks (**) is used for polarization on Hydrogen (H-bonding). #### Diffuse Functions: These are useful for systems in an excited state, systems with low ionization potential, and systems with some significant negative charge attached. The presence of diffuse functions is symbolized by the addition of a plus sign: * One plus sign (+) adds diffuse functions on heavy atoms. * Two plus signs (++) add diffuse functions on Hydrogen atoms. *** Let us look at how the addition of these effects will improve our energy: 1.
Repeat the boron atom energy calculation for each of the basis sets listed: ``['6-31G', '6-31+G', '6-31G*', '6-31+G*', '6-311G*', '6-311+G**', 'cc-pVDZ', 'cc-pVTZ']`` 2. Using `print(f"")` statements, build a table where for each basis you identify the type and number of STO-nG functions used for the core and valence electrons. 3. For each basis, identify the type and number of STO-nG functions used for the core and valence electrons. 4. On the same table, specify whether polarized or diffuse functions are included. 5. Record the total number of orbitals. For the Boron atom, which approximation (choice of basis set) is the most accurate? How does the accuracy relate to the number of basis functions used? RESPONSE: *** ### Part D. How much "correlation energy" can we recover? At the Hartree Fock level of theory, each electron experiences an average potential field of all the other electrons. In essence, it is a "mean field" approach that neglects individual electron-electron interactions or "electron correlation". Thus, we define the difference between the self-consistent field energy and the exact energy as the correlation energy. Two fundamentally different approaches to account for electron correlation effects are available by selecting a correlation method: Moller-Plesset (MP) perturbation theory and Coupled Cluster (CC) theory. 1. Based on the calculated SCF energy for the 6-311+G** basis set, determine the value of the correlation energy for boron assuming an "experimental" energy of **-24.608 hartrees$^2$** 2. Using the same basis set, perform an energy calculation with MP2 and MP4. (You may recover the MP2 energy from the MP4 calculation but you will have to look at the output file). MP4 will require the use of the following options: ```psi4.set_options({"reference": "rohf", "qc_module": "detci"})``` 3. Using the same basis set, perform an energy calculation with CCSD and CCSD(T).
(You may recover the CCSD energy from the CCSD(T) calculation but you will have to look at the output file). 4. For each method, determine the percentage of the correlation energy recovered. <br /> ###### $^2$ H. S. Schaefer and F. Harris, (1968) Phys Rev. 167, 67 RESPONSE: *** ### Part E. Can we use DFT(B3LYP) to calculate the electron affinity of boron? The electron affinity of atom A is the energy released for the process: $$ A + e^{-} \rightarrow A^{-} $$ Or simply the energy difference between the anion and the neutral forms of an atom. These are reported as positive values: $$ EA = - (E_{anion} - E_{neutral}) $$ It was reported$^3$ that the electron affinity of Boron at the B3LYP 6-311+G** level of theory is **-0.36 eV**. In comparison to the experimental value of **0.28 eV**, this led to the assumption that B3LYP does not yield a reasonable electron affinity. 1. Define a Boron atom for two different configurations: For the anion, set the charge to **-1**. Once we do that, the charge and spin multiplicity are no longer compatible. For 2 electrons in a set of *p* orbitals, the multiplicity can only be 3 (triplet state, unpaired spins) or 1 (singlet state, paired spins). Here, by Hund's rules, we expect the spins will remain unpaired, leading to a triplet. Run the calculation and record the energy of the boron anion. 2. Calculate the electron affinity. Is this literature result consistent with your calculation? (Remember 1 hartree = 27.2116 eV) 3. Repeat the electron affinity calculation of boron, but this time, assume the anion is a singlet state. What is the reason$^4$ for the reported failure of the B3LYP method? <br/> ###### $^3$ C. W. Bauschlicher, (1998) Int. J. Quantum Chem. 66, 285 ###### $^4$ B. S. Jursic, (1997) Int. J. Quantum Chem. 61, 93 RESPONSE:
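The bookkeeping in Parts D and E is plain arithmetic once the electronic energies are in hand. A small sketch of both conversions; the helper names and the sample energies are purely illustrative, not values you should expect from your own Psi4 runs:

```python
HARTREE_TO_EV = 27.2116

def electron_affinity_ev(e_neutral, e_anion):
    """EA = -(E_anion - E_neutral), converted from hartree to eV (Part E)."""
    return -(e_anion - e_neutral) * HARTREE_TO_EV

def pct_correlation_recovered(e_hf, e_method, e_exact):
    """Percentage of E_corr = E_exact - E_HF captured by a correlated method (Part D)."""
    return 100.0 * (e_method - e_hf) / (e_exact - e_hf)

# Made-up example energies in hartree:
print(round(electron_affinity_ev(-24.60, -24.61), 3))                 # 0.272 eV
print(round(pct_correlation_recovered(-24.53, -24.59, -24.608), 1))   # 76.9 %
```

A negative EA simply means the anion came out higher in energy than the neutral atom at that level of theory, which is the apparent B3LYP failure Part E asks you to diagnose.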
Utilities to visualize agent's trade execution and portfolio performance Chapter 4, TensorFlow 2 Reinforcement Learning Cookbook | Praveen Palanisamy ``` import matplotlib import matplotlib.pyplot as plt import mplfinance as mpf import numpy as np import pandas as pd from matplotlib import style from mplfinance.original_flavor import candlestick_ohlc style.use("seaborn-whitegrid") class TradeVisualizer(object): """Visualizer for stock trades""" def __init__(self, ticker, ticker_data_stream, title=None, skiprows=0): self.ticker = ticker # Stock/crypto market/exchange data stream. An offline file stream is used. # Alternatively, a web # API can be used to pull live data. self.ohlcv_df = pd.read_csv( ticker_data_stream, parse_dates=True, index_col="Date", skiprows=skiprows ).sort_values(by="Date") if "USD" in self.ticker: # True for crypto-fiat currency pairs # Use volume of the crypto currency for volume plot. # A column with header="Volume" is required for default mpf plot. # Remove "USD" from self.ticker string and clone the crypto volume column self.ohlcv_df["Volume"] = self.ohlcv_df[ "Volume " + self.ticker[:-3] # e.g: "Volume BTC" ] self.account_balances = np.zeros(len(self.ohlcv_df.index)) fig = plt.figure("TFRL-Cookbook", figsize=[12, 6]) fig.suptitle(title) nrows, ncols = 6, 1 gs = fig.add_gridspec(nrows, ncols) row, col = 0, 0 rowspan, colspan = 2, 1 # self.account_balance_ax = plt.subplot2grid((6, 1), (0, 0), rowspan=2, colspan=1) self.account_balance_ax = fig.add_subplot( gs[row : row + rowspan, col : col + colspan] ) row, col = 2, 0 rowspan, colspan = 8, 1 self.price_ax = plt.subplot2grid( (nrows, ncols), (row, col), rowspan=rowspan, colspan=colspan, sharex=self.account_balance_ax, ) self.price_ax = fig.add_subplot(gs[row : row + rowspan, col : col + colspan]) plt.show(block=False) self.viz_not_initialized = True def _render_account_balance(self, current_step, account_balance, horizon): self.account_balance_ax.clear() date_range = 
self.ohlcv_df.index[current_step : current_step + len(horizon)] self.account_balance_ax.plot_date( date_range, self.account_balances[horizon], "-", label="Account Balance ($)", lw=1.0, ) self.account_balance_ax.legend() legend = self.account_balance_ax.legend(loc=2, ncol=2) legend.get_frame().set_alpha(0.4) last_date = self.ohlcv_df.index[current_step + len(horizon)].strftime( "%Y-%m-%d" ) last_date = matplotlib.dates.datestr2num(last_date) last_account_balance = self.account_balances[current_step] self.account_balance_ax.annotate( "{0:.2f}".format(account_balance), (last_date, last_account_balance), xytext=(last_date, last_account_balance), bbox=dict(boxstyle="round", fc="w", ec="k", lw=1), color="black", ) self.account_balance_ax.set_ylim( min(self.account_balances[np.nonzero(self.account_balances)]) / 1.25, max(self.account_balances) * 1.25, ) plt.setp(self.account_balance_ax.get_xticklabels(), visible=False) def render_image_observation(self, current_step, horizon): window_start = max(current_step - horizon, 0) step_range = range(window_start, current_step + 1) date_range = self.ohlcv_df.index[current_step : current_step + len(step_range)] stock_df = self.ohlcv_df[self.ohlcv_df.index.isin(date_range)] if self.viz_not_initialized: self.fig, self.axes = mpf.plot( stock_df, volume=True, type="candle", mav=2, block=False, returnfig=True, style="charles", tight_layout=True, ) self.viz_not_initialized = False else: self.axes[0].clear() self.axes[2].clear() mpf.plot( stock_df, ax=self.axes[0], volume=self.axes[2], type="candle", mav=2, style="charles", block=False, tight_layout=True, ) self.fig.canvas.set_window_title("TFRL-Cookbook") self.fig.canvas.draw() fig_data = np.frombuffer(self.fig.canvas.tostring_rgb(), dtype=np.uint8) fig_data = fig_data.reshape(self.fig.canvas.get_width_height()[::-1] + (3,)) self.fig.set_size_inches(12, 6, forward=True) self.axes[0].set_ylabel("Price ($)") self.axes[0].yaxis.set_label_position("left") 
self.axes[2].yaxis.set_label_position("left") # Volume return fig_data def _render_ohlc(self, current_step, dates, horizon): self.price_ax.clear() candlesticks = zip( dates, self.ohlcv_df["Open"].values[horizon], self.ohlcv_df["Close"].values[horizon], self.ohlcv_df["High"].values[horizon], self.ohlcv_df["Low"].values[horizon], ) candlestick_ohlc( self.price_ax, candlesticks, width=np.timedelta64(1, "D"), colorup="g", colordown="r", ) self.price_ax.set_ylabel(f"{self.ticker} Price ($)") self.price_ax.tick_params(axis="y", pad=30) last_date = self.ohlcv_df.index[current_step].strftime("%Y-%m-%d") last_date = matplotlib.dates.datestr2num(last_date) last_close = self.ohlcv_df["Close"].values[current_step] last_high = self.ohlcv_df["High"].values[current_step] self.price_ax.annotate( "{0:.2f}".format(last_close), (last_date, last_close), xytext=(last_date, last_high), bbox=dict(boxstyle="round", fc="w", ec="k", lw=1), color="black", ) plt.setp(self.price_ax.get_xticklabels(), visible=False) def _render_trades(self, trades, horizon): for trade in trades: if trade["step"] in horizon: date = self.ohlcv_df.index[trade["step"]].strftime("%Y-%m-%d") date = matplotlib.dates.datestr2num(date) high = self.ohlcv_df["High"].values[trade["step"]] low = self.ohlcv_df["Low"].values[trade["step"]] if trade["type"] == "buy": high_low = low color = "g" arrow_style = "<|-" else: # sell high_low = high color = "r" arrow_style = "-|>" proceeds = "{0:.2f}".format(trade["proceeds"]) self.price_ax.annotate( f"{trade['type']} ${proceeds}".upper(), (date, high_low), xytext=(date, high_low), color=color, arrowprops=( dict( color=color, arrowstyle=arrow_style, connectionstyle="angle3", ) ), ) def render(self, current_step, account_balance, trades, window_size=100): self.account_balances[current_step] = account_balance window_start = max(current_step - window_size, 0) step_range = range(window_start, current_step + 1) dates = self.ohlcv_df.index[step_range] 
self._render_account_balance(current_step, account_balance, step_range) self._render_ohlc(current_step, dates, step_range) self._render_trades(trades, step_range) """ self.price_ax.set_xticklabels( self.ohlcv_df.index[step_range], rotation=45, horizontalalignment="right", ) """ plt.grid() plt.pause(0.001) def close(self): plt.close() ```
### Preparing Working Env ``` import matplotlib.pyplot as plt import numpy as np from importlib.util import find_spec if find_spec("core") is None: import sys sys.path.append('..') import tensorflow as tf import tensorflow_datasets as tfds import random from core.datasets import RetinaDataset from core.datasets.data_util import preprocess_image, preprocess_for_train from sklearn.utils.class_weight import compute_class_weight, compute_sample_weight from core.networks.resnet_with_conv import resnetconv from core.networks.resnet_with_conv_finetune import resnetconvfinetune from sklearn.metrics import classification_report, confusion_matrix, accuracy_score, f1_score, precision_score, recall_score import seaborn as sns import os from pathlib import Path from core.models.base import WEIGHTS_DIRNAME import pandas as pd from sklearn.decomposition import PCA import plotly import plotly.express as px import umap import tf_explain from IPython.display import clear_output import warnings warnings.filterwarnings('ignore') #This code snippet helps if your computer has RTX 2070 GPU. If not then comment this cell. 
from tensorflow.compat.v1 import ConfigProto from tensorflow.compat.v1 import InteractiveSession config = ConfigProto() config.gpu_options.allow_growth = True session = InteractiveSession(config=config) tf.config.run_functions_eagerly(True) ``` ### Utility Functions ``` def resize_image(img, lb): return tf.image.resize(img, (IMG_SIZE,IMG_SIZE)), tf.one_hot(lb, NCLASS) def augment_image(img, lb): img, lb = resize_image(img, lb) return preprocess_for_train(img, height=IMG_SIZE, width=IMG_SIZE), lb def save_training_history(history,train_type,data_fraction,batch_number): # summarize history for accuracy plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.savefig(TRAINING_HISTORY_PATH+"/{}/DataFraction_{}_batch_{}_type_{}_metric_accuracy.png".format(data_fraction,data_fraction,batch_number,train_type)) plt.clf() # summarize history for loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.savefig(TRAINING_HISTORY_PATH+"/{}/DataFraction_{}_batch_{}_type_{}_metric_loss.png".format(data_fraction,data_fraction,batch_number,train_type)) plt.close() def save_projection_viz(data_fraction,batch_number,model): idx = 6 # index of desired layer inputs = tf.keras.layers.Input((IMG_SIZE,IMG_SIZE,IMG_CH)) x = tf.keras.applications.resnet_v2.preprocess_input(inputs) for layer in model.layers[3:idx+1]: x = layer(x) new_model = tf.keras.Model(inputs, x) proj_testset = new_model.predict(ds_test.map(resize_image).batch(32)) n_components = 3 l2 = np.square(proj_testset).mean(axis=-1, keepdims=True)**0.5 proj_normed = proj_testset / l2 proj_normed_pcas = PCA(n_components=n_components).fit_transform(proj_normed) cols = [] for i in range(n_components): cols.append('PCA{}'.format(i+1)) df = 
pd.DataFrame(proj_normed_pcas, columns=cols) df['targets'] = test_targets sns_plot = sns.kdeplot(x ='PCA1', y='PCA2', data= df, hue='targets', palette= sns.color_palette()[0:4]) fig = sns_plot.get_figure() fig.savefig(PROJECTIONS_PATH+"/{}/Projections_PCA_2D_DataFraction_{}_batch_{}.png".format(data_fraction,data_fraction,batch_number)) fig = px.scatter_3d(df, x='PCA1', y='PCA2', z='PCA3', color='targets') plotly.offline.plot(fig, filename=PROJECTIONS_PATH+"/{}/Projections_PCA_3D_DataFraction_{}%_batch_{}.html".format(data_fraction,data_fraction,batch_number)) plt.close() metric = 'cosine' reducer = umap.UMAP(n_components= n_components, metric=metric, n_neighbors= 50) proj_normed_umap = reducer.fit_transform(proj_normed) cols = [] for i in range(n_components): cols.append('UMAP{}'.format(i+1)) df = pd.DataFrame(proj_normed_umap, columns=cols) df['targets'] = test_targets sns_plot = sns.kdeplot(x ='UMAP1', y='UMAP2', data= df, hue='targets', palette= sns.color_palette()[0:4]) fig = sns_plot.get_figure() fig.savefig(PROJECTIONS_PATH+"/{}/Projections_UMAP_2D_DataFraction_{}_batch_{}.png".format(data_fraction,data_fraction,batch_number)) fig = px.scatter_3d(df, x='UMAP1', y='UMAP2', z='UMAP3', color='targets') plotly.offline.plot(fig, filename=PROJECTIONS_PATH+"/{}/Projections_UMAP_3D_DataFraction_{}_batch_{}.html".format(data_fraction,data_fraction,batch_number)) plt.close() def explain_predictions(data_fraction,batch_number,image_list,label_list,model,layer_name='conv2d_2'): global label_names for i in range(len(image_list)): image = image_list[i] label = label_list[i] label_name = label_names[label.numpy()] resized_img = tf.image.resize(image, (IMG_SIZE,IMG_SIZE)) resized_img = tf.keras.preprocessing.image.img_to_array(resized_img) expanded_img = np.expand_dims(resized_img, axis=0) prediction = np.argmax(model.predict(expanded_img)) prediction_acc = np.max(model.predict(expanded_img)) predicted_label = label_names[prediction] data = ([resized_img.astype('uint8')], 
None) explainer = tf_explain.core.grad_cam.GradCAM() grid = explainer.explain(data, model, class_index=label, layer_name=layer_name,image_weight=0.9) explainer_occ = tf_explain.core.occlusion_sensitivity.OcclusionSensitivity() grid_occ = explainer_occ.explain(data, model, class_index=label, patch_size=4) f, ax = plt.subplots(1,3,figsize = (8,8)) f.suptitle("True label: " + label_name+", "+"Predicted label: " + predicted_label+", "+"Predicted Accuracy: " + str(prediction_acc), fontsize=15) ax[0].set_title("Original Image") ax[0].imshow(resized_img.astype('uint8')) ax[1].set_title("Grad-CAM") ax[1].imshow(grid) ax[2].set_title("Occlusion Sensitivity") ax[2].imshow(grid_occ) plt.tight_layout() plt.subplots_adjust(top=1.5) plt.savefig(GRADCAM_PATH+"/{}/GradCAM_IMG_{}_DataFraction_{}_batch_{}.png".format(data_fraction,i,data_fraction,batch_number)) plt.close() ``` ### Constants ``` NCLASS = 4 IMG_SIZE = 224 IMG_CH = 3 BATCH_SIZE = 32 EPOCHS = 100 TOTAL_ITERATIONS = 3 random.seed(7) BASE_PATH = Path(os.getcwd()).parent CHECKPOINTS_PATH = str(BASE_PATH.joinpath('core/experiment_results/checkpoints/model_weights.h5')) TRAINING_HISTORY_PATH= str(BASE_PATH.joinpath('core/experiment_results/training_history/')) CONFUSION_MATRIX_PATH = str(BASE_PATH.joinpath('core/experiment_results/confusion_matrix/')) PROJECTIONS_PATH = str(BASE_PATH.joinpath('core/experiment_results/projections/')) GRADCAM_PATH = str(BASE_PATH.joinpath('core/experiment_results/gradcam/')) ``` ### Preparing the Data ``` ds_test, ds_test_info = tfds.load('RetinaDataset', split='test', shuffle_files=False, as_supervised=True,with_info=True) test_targets = [t.numpy() for t in ds_test.map(lambda img, lb: lb).batch(BATCH_SIZE)] test_targets = np.hstack(test_targets) label_names = ds_test_info.features['label'].names # This is used for all the GradCAM Images i = 0 images_for_gradcam = [] lables_for_gradcam = [] for image, label in ds_test: images_for_gradcam.append(image) lables_for_gradcam.append(label) if i >= 
2: break i+=1 def get_train_validation(ds_train, sample_percent, total_len=83489, validation_percent=0.2): sample_percent_cnt = int((sample_percent/100) * total_len) total_val_count = int(validation_percent * sample_percent_cnt) # print (sample_percent_cnt, total_val_count) train_sample_cnt = int(0.8*sample_percent_cnt) # print (train_sample_cnt) ds_train = ds_train.take(train_sample_cnt) ds_val, ds_val_info = tfds.load('RetinaDataset', split='train[-2%:]', as_supervised=True,with_info=True) ds_0 = ds_val.filter(lambda image, label: label == 0).take(total_val_count//4) ds_1 = ds_val.filter(lambda image, label: label == 1).take(total_val_count //4) ds_2 = ds_val.filter(lambda image, label: label == 2).take(total_val_count //4) ds_3 = ds_val.filter(lambda image, label: label == 3).take(total_val_count //4) ds_val = ds_0.concatenate(ds_1).concatenate(ds_2).concatenate(ds_3) return ds_train, ds_val metrics_store = pd.DataFrame() def get_results_for_data_fraction(SAMPLE_SIZE): global test_targets, label_names, EPOCHS, BATCH_SIZE, images_for_gradcam, lables_for_gradcam, metrics_store, TOTAL_ITERATIONS for ITER_COUNT in range(TOTAL_ITERATIONS): print("Execution for batch: "+str(ITER_COUNT)) if SAMPLE_SIZE==100: ds_train, ds_train_info = tfds.load('RetinaDataset', split='train[:98%]', as_supervised=True,with_info=True) ds_val, ds_val_info = tfds.load('RetinaDataset', split='train[-2%:]', as_supervised=True,with_info=True) else: start_idx = int(ITER_COUNT*SAMPLE_SIZE) end_idx = start_idx+SAMPLE_SIZE ds_train, ds_train_info = tfds.load('RetinaDataset', split='train[{}%:{}%]'.format(start_idx,end_idx), as_supervised=True,with_info=True, shuffle_files=True) ds_train, ds_val = get_train_validation(ds_train, SAMPLE_SIZE, total_len=83489, validation_percent=0.2) ds_train_augment = ds_train.map(augment_image) ds_val = ds_val.map(resize_image) print("Computing weights for the classes.") y_labels = [] labels = ds_train_augment.map(lambda x, y: y) for l in 
labels.batch(BATCH_SIZE).as_numpy_iterator(): y_labels.append(l) y_labels = np.vstack(y_labels) y_labels.sum(axis=0) class_weights = compute_class_weight('balanced', [0, 1, 2, 3], y_labels.argmax(axis=1)) class_weights = {i: w for i, w in enumerate(class_weights)} print("Training the classifier.") model = resnetconv(input_shape = (IMG_SIZE, IMG_SIZE, IMG_CH), output_shape = (NCLASS,)) metrics = ['accuracy'] callbacks = [tf.keras.callbacks.EarlyStopping(patience=10, monitor='val_loss', ), tf.keras.callbacks.ModelCheckpoint(filepath=CHECKPOINTS_PATH,monitor='val_accuracy',save_best_only=True),] optimizer = tf.keras.optimizers.Adam(lr=0.002) model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=metrics) history = model.fit(ds_train_augment.batch(BATCH_SIZE), validation_data = ds_val.batch(BATCH_SIZE), callbacks = callbacks, class_weight = class_weights, epochs=EPOCHS, verbose=1) save_training_history(history,'train',SAMPLE_SIZE,ITER_COUNT) print("Fine-tuning the classifier.") model.layers[3].trainable = True optimizer = tf.keras.optimizers.Adam(lr=0.00005) model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=metrics) history_finetune = model.fit(ds_train_augment.batch(BATCH_SIZE), validation_data = ds_val.batch(BATCH_SIZE), callbacks = callbacks, class_weight = class_weights, epochs=EPOCHS, verbose=1) save_training_history(history_finetune,'finetune',SAMPLE_SIZE,ITER_COUNT) print("Model Evaluation.") logits_testset = model.predict(ds_test.map(resize_image).batch(BATCH_SIZE)) ytest_pred = logits_testset.argmax(axis=-1) print("Saving artifacts.") ax= plt.subplot() sns_plot = sns.heatmap(confusion_matrix(test_targets, ytest_pred),annot=True,xticklabels=label_names,yticklabels=label_names,fmt='g', ax = ax) ax.set_xlabel('Predicted labels') ax.set_ylabel('True labels') ax.set_title('Confusion Matrix') fig = sns_plot.get_figure() 
fig.savefig(CONFUSION_MATRIX_PATH+"/{}/ConfusionMatrix_DataFraction_{}_batch_{}.png".format(SAMPLE_SIZE,SAMPLE_SIZE,ITER_COUNT)) plt.close() accuracy = accuracy_score(test_targets, ytest_pred) precision = precision_score(test_targets, ytest_pred, average='weighted') recall = recall_score(test_targets, ytest_pred, average='weighted') f1_sc = f1_score(test_targets, ytest_pred, average='weighted') curr_metrics = pd.DataFrame({'Data_Fraction':SAMPLE_SIZE,'Batch_Number':ITER_COUNT,'Accuracy':accuracy,'Precision':precision,'Recall':recall,'F1_Score':f1_sc},index=[0]) metrics_store = metrics_store.append(curr_metrics,ignore_index=True) # Saving the Model model.save_weights(str(WEIGHTS_DIRNAME)+"/{}/Supervised_ResNet_DataFraction_{}_batch_{}_TestAcc_{}.h5".format(SAMPLE_SIZE,SAMPLE_SIZE,ITER_COUNT,int(accuracy*100))) print("Saving Projections") save_projection_viz(SAMPLE_SIZE,ITER_COUNT,model) print("Saving Saliency Maps") explain_predictions(SAMPLE_SIZE,ITER_COUNT,images_for_gradcam,lables_for_gradcam,model,layer_name=model.layers[-4].name) data_fractions_list = [2] # 1%, 5%, 10% for fraction in data_fractions_list: # clear_output(wait=True) get_results_for_data_fraction(fraction) metrics_store ``` #### END ``` metrics_store.mean() ```
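The final `metrics_store.mean()` averages over everything; when several data fractions are present, a per-fraction breakdown can be obtained with a groupby. A small sketch on made-up numbers (using a separate `metrics_demo` frame so as not to touch the real `metrics_store`; the column names follow the DataFrame assembled above):

```python
import pandas as pd

# Hypothetical per-batch metrics (made-up numbers) shaped like the
# metrics_store DataFrame assembled above.
metrics_demo = pd.DataFrame({
    'Data_Fraction': [2, 2, 2],
    'Batch_Number': [0, 1, 2],
    'Accuracy': [0.80, 0.82, 0.84],
})

# average each metric over the batches of one data fraction
per_fraction = metrics_demo.groupby('Data_Fraction')[['Accuracy']].mean()
print(per_fraction.loc[2, 'Accuracy'])  # mean accuracy for the 2% fraction
```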
github_jupyter
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/CLASSIFICATION_EN_TREC.ipynb) # **Classify text according to TREC classes** ## 1. Colab Setup ``` # Install java !apt-get update -qq !apt-get install -y openjdk-8-jdk-headless -qq > /dev/null !java -version # Install pyspark !pip install --ignore-installed -q pyspark==2.4.4 # Install Sparknlp !pip install --ignore-installed spark-nlp import pandas as pd import numpy as np import os os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"] import json from pyspark.ml import Pipeline from pyspark.sql import SparkSession import pyspark.sql.functions as F from sparknlp.annotator import * from sparknlp.base import * import sparknlp from sparknlp.pretrained import PretrainedPipeline ``` ## 2. Start Spark Session ``` spark = sparknlp.start() ``` ## 3. Select the DL model and re-run all the cells below ``` ### Select Model #model_name = 'classifierdl_use_trec6' model_name = 'classifierdl_use_trec50' ``` ## 4. Some sample examples ``` text_list = [ "What effect does pollution have on the Chesapeake Bay oysters?", "What financial relationships exist between Google and its advertisers?", "What financial relationships exist between the Chinese government and the Cuban government?", "What was the number of member nations of the U.N. 
in 2000?", "Who is the Secretary-General for political affairs?", "When did the construction of stone circles begin in the UK?", "In what country is the WTO headquartered?", "What animal was the first mammal successfully cloned from adult cells?", "What other prince showed his paintings in a two-prince exhibition with Prince Charles in London?", "Is there evidence to support the involvement of Garry Kasparov in politics?", ] ``` ## 5. Define Spark NLP pipeline ``` documentAssembler = DocumentAssembler()\ .setInputCol("text")\ .setOutputCol("document") use = UniversalSentenceEncoder.pretrained(lang="en") \ .setInputCols(["document"])\ .setOutputCol("sentence_embeddings") document_classifier = ClassifierDLModel.pretrained(model_name, 'en') \ .setInputCols(["document", "sentence_embeddings"]) \ .setOutputCol("class") nlpPipeline = Pipeline(stages=[ documentAssembler, use, document_classifier ]) ``` ## 6. Run the pipeline ``` empty_df = spark.createDataFrame([['']]).toDF("text") pipelineModel = nlpPipeline.fit(empty_df) df = spark.createDataFrame(pd.DataFrame({"text":text_list})) result = pipelineModel.transform(df) ``` ## 7. Visualize results ``` result.select(F.explode(F.arrays_zip('document.result', 'class.result')).alias("cols")) \ .select(F.expr("cols['0']").alias("document"), F.expr("cols['1']").alias("class")).show(truncate=False) ```
# Downloading and saving CSV data files from the web

```
import urllib.request

url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data'
csv_cont = urllib.request.urlopen(url)
csv_cont = csv_cont.read() # .decode('utf-8')

# saving the data to local drive
#with open('./datasets/wine_data.csv', 'wb') as out:
#    out.write(csv_cont)
```

# Reading in a dataset from a CSV file

```
import numpy as np

# reading in all data into a NumPy array
all_data = np.loadtxt(
    "./datasets/wine_data.csv",
    delimiter=",",
    dtype=np.float64
)

# load class labels from the first column
y_wine = all_data[:,0]

# conversion of the class labels to integer-type array
y_wine = y_wine.astype(np.int64, copy=False)

# load the 13 features
X_wine = all_data[:,1:]

# printing some general information about the data
print('\ntotal number of samples (rows):', X_wine.shape[0])
print('total number of features (columns):', X_wine.shape[1])

# printing the 1st wine sample
float_formatter = lambda x: '{:.2f}'.format(x)
np.set_printoptions(formatter={'float_kind':float_formatter})
print('\n1st sample (i.e., 1st row):\nClass label: {:d}\n{:}\n'
      .format(int(y_wine[0]), X_wine[0]))

# printing the relative frequency of the class labels
print('Class label relative frequencies')
print('Class 1 samples: {:.2%}'.format(list(y_wine).count(1)/y_wine.shape[0]))
print('Class 2 samples: {:.2%}'.format(list(y_wine).count(2)/y_wine.shape[0]))
print('Class 3 samples: {:.2%}'.format(list(y_wine).count(3)/y_wine.shape[0]))
```

**Histograms** are a useful way to explore the distribution of each feature across the different classes. This can provide us with intuitive insights into which features have good and not-so-good inter-class separation. Below, we will plot a sample histogram for the "Alcohol content" feature for the three wine classes.
# Visualizing a dataset with Histograms

```
first_fea = X_wine[:, 0]
print('minimum:', first_fea.min())
print('mean:', first_fea.mean())
print('Maximum:', first_fea.max())

%matplotlib inline
from matplotlib import pyplot as plt
from math import floor, ceil # for rounding up and down

# bin width of the histogram in steps of 0.15
#bins = np.arange(floor(min(X_wine[:,0])), ceil(max(X_wine[:,0])), 0.15)
bins = np.linspace(floor(min(X_wine[:,0])), ceil(max(X_wine[:,0])), 20)

labels = np.unique(y_wine)

plt.figure(figsize=(20,8))

# we can use the loop below to plot the histograms instead of these three
# nearly identical statements
#plt.hist(first_fea[y_wine == 1], bins, alpha = 0.3, color='red')
#plt.hist(first_fea[y_wine == 2], bins, alpha = 0.3, color='blue')
#plt.hist(first_fea[y_wine == 3], bins, alpha = 0.3, color='green')

# the order of the colors for each histogram
colors = ('blue', 'red', 'green')

for label, color in zip(labels, colors):
    plt.hist(
        first_fea[y_wine == label],
        bins=bins,
        alpha=0.3,
        color=color,
        label='class' + str(label)
    )

plt.title('Wine data set - Distribution of alcohol contents')
plt.xlabel('alcohol by volume', fontsize=14)
plt.ylabel('count', fontsize=14)
plt.legend(loc='upper right')

# np.histogram returns a tuple of arrays: the first holds the count of each
# bin, the second the bin edges, so the second array has one more element
# than the first
bin_value = np.histogram(first_fea, bins=bins)
# find the largest bin
max_bin = max(bin_value[0])
# extend the y-axis range
plt.ylim([0, max_bin*1.3])

plt.show()
```

# Visualizing a dataset with a Scatter plot

**Scatter plots** are useful for visualizing features in more than just one dimension, for example to get a feeling for the correlation between particular features. Below, we will create an example 2D scatter plot from the features "Alcohol content" and "Malic acid content".
```
second_fea = X_wine[:, 1]
print('minimum:', second_fea.min())
print('mean:', second_fea.mean())
print('Maximum:', second_fea.max())

plt.figure(figsize=(10,8))

markers = ('x', 'o', '^')

# same idea as the plot above, with hist replaced by scatter
for label, marker, color in zip(labels, markers, colors):
    plt.scatter(
        x=first_fea[y_wine == label],
        y=second_fea[y_wine == label],
        marker=marker,
        color=color,
        alpha=0.7,
        label='class' + str(label)
    )

plt.title('Wine Dataset')
plt.xlabel('alcohol by volume in percent')
plt.ylabel('malic acid in g/l')
plt.legend(loc='upper right')

plt.show()
```

If we want to pack 3 different features into **one scatter plot at once**, we can also do the same thing in 3D.

```
third_fea = X_wine[:, 2]
print('minimum:', third_fea.min())
print('mean:', third_fea.mean())
print('Maximum:', third_fea.max())

from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')

for label, marker, color in zip(labels, markers, colors):
    ax.scatter(
        first_fea[y_wine == label],
        second_fea[y_wine == label],
        third_fea[y_wine == label],
        marker=marker,
        color=color,
        s=40,
        alpha=0.7,
        label='class' + str(label)
    )

ax.set_xlabel('alcohol by volume in percent')
ax.set_ylabel('malic acid in g/l')
ax.set_zlabel('ash content in g/l')

plt.title('Wine dataset')
plt.legend(loc='upper right')

plt.show()
```

# Splitting into training and test dataset

It is a typical procedure for machine learning and pattern classification tasks to split one dataset into two: a training dataset and a test dataset. The training dataset is henceforth used to train our algorithms or classifier, and the test dataset is a way to validate the outcome quite objectively before we apply it to "new, real world data". Here, we will split the dataset randomly so that 70% of the total dataset becomes our training dataset and 30% becomes our test dataset.
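The 70/30 split described above can be sketched with a plain NumPy index permutation (an illustration only; the notebook itself delegates this to scikit-learn's `train_test_split`):

```python
import numpy as np

# Illustration of a random 70/30 split via index permutation; the notebook
# itself uses scikit-learn's train_test_split for this.
rng = np.random.RandomState(2016811)  # a fixed seed makes the split reproducible
n_samples = 10
indices = rng.permutation(n_samples)

n_train = int(round(0.70 * n_samples))  # 7 training samples
train_idx, test_idx = indices[:n_train], indices[n_train:]

print(len(train_idx), len(test_idx))  # 7 3
```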
```
from sklearn.model_selection import train_test_split  # formerly sklearn.cross_validation

# random_state makes the split reproducible
X_train, X_test, y_train, y_test = train_test_split(X_wine, y_wine,
    test_size=0.30, random_state=2016811)
```

# Standardization

Standardizing the features so that they are centered around 0 with a standard deviation of 1 is especially important if we are comparing measurements that have different units, e.g., in our "wine data" example, where the alcohol content is measured in volume percent, and the malic acid content in g/l.

```
from sklearn import preprocessing

# StandardScaler implements the Transformer API to compute the mean and
# standard deviation on a training set so as to be able to later reapply
# the same transformation on the testing set.
std_scale = preprocessing.StandardScaler().fit(X_train)
X_train = std_scale.transform(X_train)
X_test = std_scale.transform(X_test)

f, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(10,5))

# same idea as the plots above; a loop avoids repeating the statements
for a, x_dat, y_lab in zip(ax, (X_train, X_test), (y_train, y_test)):
    for label, marker, color in zip(labels, markers, colors):
        a.scatter(
            x=x_dat[:,0][y_lab == label],
            y=x_dat[:,1][y_lab == label],
            marker=marker,
            color=color,
            alpha=0.7,
            label='class' + str(label)
        )
    a.legend(loc='upper left')

ax[0].set_title('Training Dataset')
ax[1].set_title('Test Dataset')
f.text(0.5, 0.04, 'malic acid (standardized)', ha='center', va='center')
f.text(0.08, 0.5, 'alcohol (standardized)', ha='center', va='center', rotation='vertical')

plt.show()
```

# Min-Max scaling (Normalization)

In this approach, the data is scaled to a fixed range - usually 0 to 1. The cost of having this bounded range - in contrast to standardization - is that we will end up with small standard deviations, for example in the case where outliers are present.
```
minmax_scale = preprocessing.MinMaxScaler(feature_range=(0, 1)).fit(X_train)
X_train_minmax = minmax_scale.transform(X_train)
X_test_minmax = minmax_scale.transform(X_test)

f, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(10,5))

# same idea as the plots above; a loop avoids repeating the statements
for a, x_dat, y_lab in zip(ax, (X_train_minmax, X_test_minmax), (y_train, y_test)):
    for label, marker, color in zip(labels, markers, colors):
        a.scatter(
            x=x_dat[:,0][y_lab == label],
            y=x_dat[:,1][y_lab == label],
            marker=marker,
            color=color,
            alpha=0.7,
            label='class' + str(label)
        )
    a.legend(loc='upper left')

ax[0].set_title('Training Dataset')
ax[1].set_title('Test Dataset')
f.text(0.5, 0.04, 'malic acid (normalized)', ha='center', va='center')
f.text(0.08, 0.5, 'alcohol (normalized)', ha='center', va='center', rotation='vertical')

plt.show()
```

# Linear Transformation: Principal Component Analysis (PCA)

Here, our desired outcome of the principal component analysis is to project a feature space (our dataset consisting of n x d-dimensional samples) onto a smaller subspace that represents our data "well". A possible application would be a pattern classification task, where we want to reduce the computational costs and the error of parameter estimation by reducing the number of dimensions of our feature space by extracting a subspace that describes our data "best".
```
from sklearn.decomposition import PCA

pca_two_components = PCA(n_components = 2)
pca_train = pca_two_components.fit_transform(X_train)

plt.figure(figsize=(10,8))

for label, marker, color in zip(labels, markers, colors):
    plt.scatter(
        x=pca_train[:,0][y_train == label],
        y=pca_train[:,1][y_train == label],
        marker=marker,
        color=color,
        alpha=0.7,
        label='class' + str(label)
    )

plt.xlabel('vector 1')
plt.ylabel('vector 2')
plt.legend()
plt.title('Most significant singular vectors after linear transformation via PCA')

plt.show()
```

For visualization purposes, only 2 principal components were kept above. In practice, however, how many components to keep should be decided case by case. The code below sets n_components to None, so all principal components are retained; we then print the explained variance ratios to analyze how many components should be kept.

```
sklearn_pca = PCA(n_components=None)
sklearn_transf = sklearn_pca.fit_transform(X_train)
print(sklearn_pca.explained_variance_ratio_)
```

# Linear Transformation: Linear Discriminant Analysis

Principal Component Analysis (PCA) applied to this data identifies the combination of attributes (principal components, or directions in the feature space) that account for the most variance in the data. Linear Discriminant Analysis (LDA) tries to identify attributes that account for the most variance between classes. In particular, LDA, in contrast to PCA, is a supervised method, using known class labels.
```
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

lda_two_components = LinearDiscriminantAnalysis(n_components = 2)
lda_train = lda_two_components.fit_transform(X_train, y_train)

plt.figure(figsize=(10,8))

for label, marker, color in zip(labels, markers, colors):
    plt.scatter(
        x=lda_train[:,0][y_train == label],
        y=lda_train[:,1][y_train == label],
        marker=marker,
        color=color,
        alpha=0.7,
        label='class' + str(label)
    )

plt.xlabel('vector 1')
plt.ylabel('vector 2')
plt.legend()
plt.title('Most significant singular vectors after linear transformation via LDA')

plt.show()
```

Linear Discriminant Analysis is not limited to dimensionality reduction; it can also be used as a classifier. For details, see [Logistic Regression, Linear Discriminant Analysis, Shrinkage Methods (Ridge Regression and Lasso)](http://blog.csdn.net/xlinsist/article/details/52211334#t2).
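As a quick, self-contained illustration of LDA used as a classifier (a toy one-feature, two-class dataset invented here, not the wine data; on the wine data one would call `fit(X_train, y_train)` and `predict(X_test)` instead):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy one-feature, two-class data invented for illustration only -
# on the wine data one would fit on (X_train, y_train) instead.
X = np.array([[0.0], [0.2], [0.1], [3.0], [3.2], [2.9]])
y = np.array([1, 1, 1, 2, 2, 2])

clf = LinearDiscriminantAnalysis()
clf.fit(X, y)
print(clf.predict([[0.15], [3.1]]))  # [1 2]
```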
# FastAI Experiments Using Google Colab CPU <a href="https://colab.research.google.com/github/rambasnet/DeepLearningMaliciousURLs/blob/master/FastAI-Experiments.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Import Libraries ``` from fastai.tabular import * import pandas as pd import numpy as np from sklearn.model_selection import train_test_split, StratifiedShuffleSplit import os import sys import glob from sklearn.utils import shuffle import matplotlib.pyplot as plt from sklearn import model_selection ``` ### Note Notebook doesn't display all the rows and columns - let's fix that ``` pd.options.display.max_columns = None pd.options.display.max_rows = None ``` ## Check CSV files inside FinalDataset folder - if it doesn't exists, download data using Bash script in Baseline-Experiments notebook ``` ! ls FinalDataset def loadData(csvFile): pickleDump = '{}DroppedNaNCols.pickle'.format(csvFile) if os.path.exists(pickleDump): df = pd.read_pickle(pickleDump) else: df = pd.read_csv(csvFile, low_memory=False) # clean data # strip the whitspaces from column names df = df.rename(str.strip, axis='columns') # drop Infinity rows and NaN string from each column for col in df.columns: indexNames = df[df[col] == 'Infinity'].index if not indexNames.empty: print('deleting {} rows with Infinity in column {}'.format(len(indexNames), col)) df.drop(indexNames, inplace=True) df.argPathRatio = df['argPathRatio'].astype('float') # drop all columns with NaN values beforeColumns = df.shape[1] df.dropna(axis='columns', inplace=True) print('Dropped {} columns with NaN values'.format(beforeColumns - df.shape[1])) # drop all rows with NaN values beforeRows = df.shape[0] df.dropna(inplace=True) print('Dropped {} rows with NaN values'.format(beforeRows - df.shape[0])) df.to_pickle(pickleDump) return df df = loadData('FinalDataset/All.csv') # let's check the shape again df.shape # class distribution for original data label = 
'URL_Type_obf_Type' print(df.groupby(label).size()) ``` ## Experimenting with FinalDataset/All.csv ## Multi-class classification ## Total samples for each class ``` dataPath = 'FinalDataset' dep_var = label cat_names = [] cont_names = list(set(df.columns) - set(cat_names) - set([dep_var])) cont_names procs = [FillMissing, Categorify, Normalize] sss = StratifiedShuffleSplit(n_splits = 1, test_size=0.2, random_state=0) print(sss) for train_idx, test_idx in sss.split(df.index, df[dep_var]): data_fold = (TabularList.from_df(df, path=dataPath, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idxs(train_idx, test_idx) .label_from_df(cols=dep_var) .databunch()) # create model and learn model = tabular_learner(data_fold, layers=[200, 100], metrics=accuracy, callback_fns=ShowGraph) model.fit_one_cycle(cyc_len=10) # model.save('{}.model'.format(os.path.basename(dataPath))) loss, acc = model.validate() print('loss {}: accuracy: {:.2f}%'.format(loss, acc*100)) preds, y, losses = model.get_preds(with_loss=True) interp = ClassificationInterpretation(model, preds, y, losses) interp.plot_confusion_matrix(slice_size=10) print(interp.confusion_matrix()) interp.most_confused() ``` ## Binary-class classification - Relabel spam, phishing, defacement, malware as 'malicious' - Keep benign type as benign ``` lblTypes = list(lblTypes) lblTypes lblTypes = dict(zip(lblTypes, ['malicious']*5)) lblTypes['benign'] = 'benign' lblTypes df[label] = df[label].map(lblTypes) for train_idx, test_idx in sss.split(df.index, df[dep_var]): data_fold = (TabularList.from_df(df, path=dataPath, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idxs(train_idx, test_idx) .label_from_df(cols=dep_var) .databunch()) # create model and learn model = tabular_learner(data_fold, layers=[200, 100], metrics=accuracy, callback_fns=ShowGraph) model.fit_one_cycle(cyc_len=10) loss, acc = model.validate() print('loss {}: accuracy: {:.2f}%'.format(loss, acc*100)) preds, y, losses = 
model.get_preds(with_loss=True)
interp = ClassificationInterpretation(model, preds, y, losses)
interp.plot_confusion_matrix(slice_size=10)

df.shape
```
```
%load_ext autoreload
%autoreload 2

import os, sys
currentdir = os.path.dirname(os.path.realpath("__file__"))
parentdir = os.path.dirname(currentdir)
sys.path.append(parentdir)
```

# 2. Index assets

In order to build a mosaic, we want to replace each image part with a matching source picture (a patch). We will run the search algorithm to find the best matches. To make search fast, we will store preprocessed sources together with their feature vectors. More precisely, we will store:

- a resized version of the original
- the result of applying a Sobel filter to the resized image
- an image description in the form of a CSV file entry

The `index_images` function from the `mosaic_maker.index_assets` module is responsible for indexing sources. It uses the `ImageProcessor` class from the `mosaic_maker.basic_processing.image_processor` module, which will resize the input and create a Sobel version of it. After these operations, both results are fed to the `Patch` class from the `mosaic_maker.patch.patch` module, which is responsible for creating a description of the input images. The `Patch` class will also be used in mosaic creation. One could consider embedding the Sobel operators in `Patch` itself, but when composing the mosaic we will want to apply them to the whole source image, not only to the currently processed part. This way, we'll have better values at the patches' borders.

In this chapter, we will focus on the `ImageProcessor` implementation. `Patch`, in its current version, will return dummy feature vectors, but we will feed it with properly rescaled and transformed images that will be stored in the `assets` directory.

Image processing will consist of three steps:

1. Cropping the image to a square shape - patches are square, so we will use only the central part of the images, leaving out the margins
2. Applying Sobel filters
3. Resizing the cropped original and its Sobel version
```
from config import IMAGES_SET, PATCH_SIZE
from mosaic_maker.index_assets import index_images

index_images(IMAGES_SET, PATCH_SIZE)
```

After running the line above, the `indexed-sources` directory should appear in `assets`. In this directory, you will find another directory named after the source image set. It will contain the processed images and a CSV file. As `ImageProcessor` contains only a dummy implementation, the processed images will be the original images, and the CSV will only contain empty lines with filenames in the first column.

Let's work on the image processor. We will use the target image generated in the previous chapter as our test input.

```
import cv2
from matplotlib import pyplot as plt
from config import PROJECT_ROOT

image_path = PROJECT_ROOT / 'assets/test-target.jpg'
print(PROJECT_ROOT / 'assets/test-target.jpg')
test_image = cv2.imread(image_path.as_posix())
plt.imshow(cv2.cvtColor(test_image, cv2.COLOR_BGR2RGB))
```

Let's create an `ImageProcessor` class instance for it. For now, we will focus on two fields of the `ImageProcessor` class: `cropped_image` and `sobel_magnitude_image`.

```
from mosaic_maker.basic_processing.image_processor import ImageProcessor

processed_test = ImageProcessor('test', test_image, PATCH_SIZE)
_, grid = plt.subplots(1, 2)
grid[0].imshow(cv2.cvtColor(processed_test.cropped_image, cv2.COLOR_BGR2RGB))
grid[1].imshow(processed_test.sobel_magnitude_image, cmap='gray')
```

As none of the functions is implemented yet, you should see two copies of the original image above. The right one will have wrong colors, but we can leave it this way - after processing, the image will be in grayscale and will be displayed correctly.

Let's start with cropping the image to the center square. Use basic `NumPy` indexing and the `shape` field to achieve this. You will find a description of the needed operators in the `NumPy` [quickstart tutorial](https://docs.scipy.org/doc/numpy/user/quickstart.html). You can work on `test_image` in this notebook.
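For reference, here is a minimal NumPy-only sketch of center-cropping an array to a square (the concrete `_crop_to_square` implementation inside `ImageProcessor` is left as the exercise above; `dummy` is just a synthetic stand-in for a real image):

```python
import numpy as np

def crop_to_square(img):
    """Center-crop an (H, W, C) array to (side, side, C), side = min(H, W).

    A minimal sketch of the idea using plain NumPy slicing; the actual
    _crop_to_square implementation is left as the exercise above.
    """
    height, width = img.shape[:2]
    side = min(height, width)
    top = (height - side) // 2
    left = (width - side) // 2
    return img[top:top + side, left:left + side]

# synthetic stand-in for an image: a 100x60 array becomes 60x60
dummy = np.zeros((100, 60, 3), dtype=np.uint8)
print(crop_to_square(dummy).shape)  # (60, 60, 3)
```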
After finding a proper algorithm, modify the `_crop_to_square` method of `ImageProcessor` to make it work within the class.

```
test_image_copy = test_image.copy()
# do some sample operations on test image copy here
plt.imshow(cv2.cvtColor(test_image_copy, cv2.COLOR_BGR2RGB))
```

After fixing the `_crop_to_square` method, the block below should create two cropped images.

```
processed_test = ImageProcessor('test', test_image, PATCH_SIZE)
_, grid = plt.subplots(1, 2)
grid[0].imshow(cv2.cvtColor(processed_test.cropped_image, cv2.COLOR_BGR2RGB))
grid[1].imshow(processed_test.sobel_magnitude_image, cmap='gray')
```

Now let's work on the Sobel filters. Applying them will be more involved than cropping the image. You will have to perform the following steps:

- Convert the image to grayscale and blur the result - read the [changing colorspaces tutorial](https://docs.opencv.org/master/df/d9d/tutorial_py_colorspaces.html) and the [smoothing images tutorial](https://docs.opencv.org/master/d4/d13/tutorial_py_filtering.html)
- Calculate gradients - read the [image gradients tutorial](https://docs.opencv.org/master/d5/d0f/tutorial_py_gradients.html) to find instructions for building the `x` and `y` Sobel images
- Process the gradients by applying the [`convertScaleAbs`](https://docs.opencv.org/4.2.0/d2/de8/group__core__array.html#ga3460e9c9f37b563ab9dd550c4d8c4e7d) function to remove signs, then combine the results using [`addWeighted`](https://docs.opencv.org/4.2.0/d2/de8/group__core__array.html#gafafb2513349db3bcff51f54ee5592a19)
- To remove noise and unimportant edges, threshold the result - you can read about this operation in the [image thresholding tutorial](https://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html)

You can work on the code block below and then apply the results to the `calculate_sobel_magnitude_image` method.
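To make the steps above concrete, here is a rough NumPy-only sketch of the gradient-magnitude idea on a tiny synthetic image. The actual `calculate_sobel_magnitude_image` should use the OpenCV calls linked above (`cv2.Sobel`, `cv2.convertScaleAbs`, `cv2.addWeighted`, thresholding); the threshold value here is illustrative only:

```python
import numpy as np

# 3x3 Sobel kernels for horizontal (x) and vertical (y) gradients
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def filter2d(img, kernel):
    """Naive 'valid' cross-correlation - enough for this tiny demo."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edges(gray, threshold=100):
    gx = filter2d(gray, SOBEL_X)
    gy = filter2d(gray, SOBEL_Y)
    # rough magnitude, analogous to cv2.addWeighted(|gx|, 0.5, |gy|, 0.5, 0)
    magnitude = 0.5 * np.abs(gx) + 0.5 * np.abs(gy)
    # threshold away weak responses
    return np.where(magnitude > threshold, 255, 0).astype(np.uint8)

# vertical step edge: left half dark, right half bright
gray = np.zeros((8, 8))
gray[:, 4:] = 200
edges = sobel_edges(gray)
print(edges[3])  # only the columns around the step respond
```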
```
test_image_copy = test_image.copy()
# do some sample operations on test image copy here
plt.imshow(cv2.cvtColor(test_image_copy, cv2.COLOR_BGR2RGB))
```

After applying all the changes, you should see two different images below: one with the cropped image and one with the edges.

```
processed_test = ImageProcessor('test', test_image, PATCH_SIZE)
_, grid = plt.subplots(1, 2)
grid[0].imshow(cv2.cvtColor(processed_test.cropped_image, cv2.COLOR_BGR2RGB))
grid[1].imshow(processed_test.sobel_magnitude_image, cmap='gray')
```

We can now run the `index_images` function again and check the results in the `assets` directory.

```
index_images(IMAGES_SET, PATCH_SIZE)
```
# Classifying news headlines with BERT + Keras

Date: April 3, 2020

This approach is essentially the same as the first half of the PyTorch version.

```
import os
import re
import time

import numpy as np
import pandas as pd
import transformers
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.sequence import pad_sequences  # used for padding sentences
from tqdm.notebook import tqdm
from transformers import create_optimizer
from transformers import TFBertModel, BertTokenizer, TFBertForSequenceClassification

print(tf.__version__)
print(transformers.__version__)

# check the number of available GPUs
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
# select a GPU
print(tf.device('/device:gpu:0'))

class Config(object):
    """Configuration parameters"""
    def __init__(self):
        self.model_name = 'BERT_SPBM'       # model name
        self.bert_path = './bert-chinese/'  # path to the BERT files
        self.batch_size = 128               # mini-batch size
        self.epsilon = 1e-08                # Adam parameter
        self.hidden_size = 768              # hidden layer units
        self.hidden_dropout_prob = 0.1      # dropout rate
        self.learning_rate = 2e-5           # learning rate
        self.max_len = 32                   # maximum padded sentence length
        self.num_classes = 240              # number of classes
        self.num_epoch = 2                  # epochs

config = Config()
```

## Reading the data

First, read the news data. See the PyTorch part for details.

```
file = "train.txt"
with open(file, encoding="utf-8") as f:
    sentences_and_labels = [line for line in f.readlines()]
f.close()

# the first few sentences
sentences_and_labels[0:10]

seq, label = sentences_and_labels[2].split('\t')
print(seq)
print(label)

sentences = []
labels = []
for sentence_with_label in sentences_and_labels:
    sentence, label = sentence_with_label.split('\t')
    sentences.append(sentence)
    labels.append(label)

print(sentences[0:3])
print(labels[0:3])
```

## Preparing the input

### Tokenizer

```
tokenizer = BertTokenizer.from_pretrained('./bert-chinese/', do_lower_case=True)
tokenized_texts = [tokenizer.encode(sent, add_special_tokens=True) for sent in sentences]

# the input_ids of this sentence
print(f"First sentence before tokenization:\n{sentences[0]}\n")
print(f"First sentence after tokenization:\n{tokenized_texts[0]}")

print(len(tokenized_texts))  # 180000 sentences
```

### Padding

```
# pad the inputs
# this function lives in keras
input_ids = pad_sequences([txt for txt in tokenized_texts], maxlen=config.max_len,
                          dtype="long", truncating="post", padding="post")

print(f"First sentence before tokenization:\n\n{sentences[0]}\n\n")
print(f"First sentence after tokenization:\n\n{tokenized_texts[0]}\n\n")
print(f"First sentence after padding:\n\n{input_ids[0]}")

# convert back
raw_texts = [tokenizer.decode(input_ids[0])]
print(raw_texts)
print(len(raw_texts))

# create attention masks
attention_masks = []
# Create a mask of 1s for each token followed by 0s for padding
for seq in input_ids:
    seq_mask = [float(i > 0) for i in seq]
    attention_masks.append(seq_mask)
print(attention_masks[0])
```

### Labels

```
print(len(labels))
print(labels[0:10])

clean_labels = []
for label in labels:
    clean_labels.append(int(label.strip('\n')))
print(clean_labels[0:10])

input_ids = np.asarray(input_ids)
clean_labels = np.asarray(clean_labels)
attention_masks = np.asarray(attention_masks)

train_inputs, validation_inputs, train_labels, validation_labels = train_test_split(input_ids,
    clean_labels, random_state=2019, test_size=0.1)
train_masks, validation_masks, _, _ = train_test_split(attention_masks, input_ids,
    random_state=2019, test_size=0.1)

print(train_labels[0:10])
print(len(set(train_labels)))

print("Total number of labels:", len(labels))
print("Number of training labels:", len(train_labels))
print("Number of validation labels:", len(validation_labels))

# extra feature used to test concatenating additional inputs
train_add = np.random.randn(len(train_labels))
print(len(train_add))
print(type(train_add))
print(train_add[0:10])
```

Keras accepts `np.array()` inputs directly, so there is no need to convert them to `Tensor`.

```
train_X = [train_inputs, train_masks, train_add]
train_y = train_labels
```

## Building the model

```
loss_function = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy("accuracy")
optimizer = Adam(learning_rate=config.learning_rate, epsilon=config.epsilon)

def build_model():
    """Build the model"""
token_inputs = tf.keras.layers.Input((config.max_len), dtype=tf.int32, name='Input_word_ids') mask_inputs = tf.keras.layers.Input((config.max_len,), dtype=tf.int32, name='Input_masks') #seg_inputs = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='input_segments') # going with pooled output since seq_output results in ResourceExhausted Error even with GPU # 导入 BERT 模型 bert_model = TFBertModel.from_pretrained(config.bert_path) _, pooled_output = bert_model(inputs=token_inputs, attention_mask=mask_inputs, token_type_ids=None, position_ids=None) # X = GlobalAveragePooling1D()(pooled_output) X = tf.keras.layers.Dropout(0.1)(pooled_output) output_ = tf.keras.layers.Dense(10, activation='softmax', name='output')(X) # 输入输出确定 bert_model2 = tf.keras.models.Model([token_inputs, mask_inputs], output_) print(bert_model2.summary()) # 编译模型 bert_model2.compile(optimizer=optimizer, loss=loss_function) return bert_model2 ``` 如果需要修改模型结构,参考如下,比如多了一个 `add_inputs`: ```python def build_model(): """构建模型""" token_inputs = tf.keras.layers.Input((config.max_len), dtype=tf.int32, name='Input_word_ids') mask_inputs = tf.keras.layers.Input((config.max_len,), dtype=tf.int32, name='Input_masks') add_inputs = tf.keras.layers.Input((1,), dtype=tf.float32, name='Random_add') #seg_inputs = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='input_segments') # going with pooled output since seq_output results in ResourceExhausted Error even with GPU # 导入 BERT 模型 bert_model = TFBertModel.from_pretrained(config.bert_path) _, pooled_output = bert_model(inputs=token_inputs, attention_mask=mask_inputs, token_type_ids=None, position_ids=None) # X = GlobalAveragePooling1D()(pooled_output) X = tf.keras.layers.Dropout(0.1)(pooled_output) X = tf.keras.layers.Concatenate()([X, add_inputs]) output_ = tf.keras.layers.Dense(10, activation='softmax', name='output')(X) # 输入输出确定 bert_model2 = tf.keras.models.Model([token_inputs, mask_inputs, add_inputs], output_) 
print(bert_model2.summary()) # 编译模型 bert_model2.compile(optimizer=optimizer, loss=loss_function) return bert_model2 ``` ### 参考: https://medium.com/analytics-vidhya/bert-in-keras-tensorflow-2-0-using-tfhub-huggingface-81c08c5f81d8 https://www.kaggle.com/stitch/albert-in-keras-tf2-using-huggingface-explained ``` K.clear_session() model = build_model() model.fit(train_X, train_y, epochs=2, batch_size = 128) valid_X = [validation_inputs, validation_masks] valid_y = validation_labels # 模型预测 result = model.predict(valid_X) pred_flat = np.argmax(result, axis=1).flatten() print(pred_flat[0:10]) print(len(pred_flat)) # 保存模型 print(config.model_name) model.save_weights(config.model_name, overwrite=True) # 读取模型 model.load_weights(config.model_name) ```
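With the flattened predictions in hand, a natural next step is scoring them against the true validation labels. A minimal sketch of that comparison, using small toy lists in place of the real `pred_flat` and `validation_labels` produced by the cells above:

```python
def flat_accuracy(preds, labels):
    """Fraction of predicted class ids that match the true labels."""
    assert len(preds) == len(labels)
    return sum(p == t for p, t in zip(preds, labels)) / len(labels)

# Toy stand-ins for pred_flat / validation_labels from the cells above
pred_flat = [1, 0, 2, 2, 1]
valid_y = [1, 0, 2, 1, 1]
print(flat_accuracy(pred_flat, valid_y))  # 4 of 5 correct -> 0.8
```

On the real arrays this would report the validation accuracy that `metric` was set up to track but never passed to `compile`.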
# Final Project Report: Group 10

## <u>Insurance Cross Selling</u>

Ashwin Yenigalla, Natwar Koneru, Pratheep Raju, Rahul Narang

### <u>Data Dictionary</u>

The dataset is from Kaggle.com: https://www.kaggle.com/anmolkumar/health-insurance-cross-sell-prediction

Rows: 381,109 unique IDs.

Attribute count: there are 11 attributes attached to each unique identifier, one of which is the target variable. Please find the details below.

### <u>Introduction</u>

This project is focused on vehicle-insurance cross sales by a health insurance company. <br>
The health insurance company guarantees compensation for damage to a person's health, loss of life, and any property loss incurred. It now aims to leverage its repository of customer and prospect data to cross-sell a vehicle insurance product. <br><br>
This is a continuous cycle that allows the company to follow the customer through more avenues. The business problem we are trying to address is how to design a product that will win over the most customers in this area. The case provides data generated from the company's beta market tests, used to develop an understanding of customer responsiveness to the new vehicle insurance product based on certain identifying demographics. <br><br>
This case is approached as a predictive-analytics data-mining problem. We classify customers into viable candidates for cross-selling based on the positive or negative responses logged by existing customers. A host of supervised and unsupervised classification algorithms will be trained on the data provided in this case study, treating the logged responses to the new vehicle insurance product as the target variable (yes/no to purchasing the product).
The report below details the attributes of the dataset, the feature engineering and data munging applied to it, and the classification algorithms used to classify customers based on their responses.

Variable|Variable Description
:-----|:-----
ID|Unique ID for the customer
Gender|Gender of the customer
Age|Age of the customer
Driving License|0 if the customer does not have a driver's license, <br>1 if the customer already has a driver's license
Region Code|Unique code for the region of the customer
Previously Insured|1 if the customer has previously opted for vehicle insurance, 0 if the customer has no pre-existing vehicle insurance coverage
Vehicle Age|Age of the vehicle
Vehicle Damage|1 if the customer damaged their vehicle in the past, 0 if the customer did not
Annual Premium|The amount the customer would need to pay as premium in the year
Policy Sales Channel|Anonymized code for the outbound sales channel connecting to the customer, <br>i.e. different agents, over mail, over phone, in person, etc.
Vintage|Number of days the customer has been associated with the company
Response|Target variable. <br>1 if the customer shows a positive response to purchasing the insurance product, <br>0 for customers who are not interested in purchasing this product

## <u>Exploratory Data Analysis [EDA]</u>

### To analyze the variables, some of the important libraries used in our project are numpy, pandas, matplotlib, seaborn, and sklearn.
### To know the dataset better, we need to know the relationship of each variable with the target variable.
### Eliminating unnecessary variables is an important part of data cleaning.
### As part of our analysis, 'ID' does not play a significant role in the performance of the algorithms on the training and testing data sets.
<br><br><br>

```
import numpy as np
import pandas as pd

# Packages for plotting
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.express as px

from sklearn.metrics import accuracy_score

# Importing the ML algorithm packages
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.tree import DecisionTreeClassifier

# Importing the metrics and other required packages
from sklearn.metrics import plot_confusion_matrix
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.metrics import classification_report
from sklearn.utils import shuffle

import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', 300)
sns.set()

# Reading the data from the CSV file
df = pd.read_csv('train.csv')

# Copying the main data frame to a new variable df1
df1 = df.copy()
df1 = df.drop(['id'], axis=1)

# To know the type of data
df.info()

# To get the total rows and columns
df.shape
```

There are 381109 rows and 12 columns in the dataset.

```
# Checking whether there are any null values
df.isnull().sum()

df_train = pd.read_csv('train.csv')
df1 = df_train.copy()
df1.Gender = df1.Gender.apply(lambda x: int(1) if x == 'Male' else int(0))
df1.Vehicle_Damage = df1.Vehicle_Damage.apply(lambda x: int(1) if x == 'Yes' else int(0))
# Map the Vehicle_Age categories to integers
# (the original cell mistakenly derived this column from Vehicle_Damage)
df1.Vehicle_Age = df1.Vehicle_Age.apply(lambda x: int(0) if x == '< 1 Year' else (int(1) if x == '1-2 Year' else int(2)))
df1['Policy_Sales_Channel'] = df1['Policy_Sales_Channel'].astype(int)
df1['Region_Code'] = df1['Region_Code'].astype(int)
df1['Response'] = df1['Response'].astype(int)
df1['Vehicle_Age'] = df1['Vehicle_Age'].astype(int)

df1.describe()
df1.info()
```

## <u>Target Variable</u>

```
plt.plot
sns.countplot(df1['Response'])
plt.title("response plot (target variable)")
print("Percentage of target class\n")
print(df1['Response'].value_counts() / len(df1) * 100)
```

### The target variable is disproportionate and will affect the accuracy of the classification algorithm.
### The target variable needs to be rebalanced before we can proceed with our machine learning algorithm.

```
# DataFrame with all 1s in Response
df_once = df1[df1['Response'] == 1]

# DataFrame with all 0s in Response
df_zeros = df1[df1['Response'] == 0]

# Taking a random sample from df_zeros (0s) of the same length as df_once (1s)
Zero_Resampling = df_zeros.sample(n=len(df_once))

# Concatenating (joining) df_once and Zero_Resampling
New_df = pd.concat([df_once, Zero_Resampling])

# Shuffling New_df
final_df = shuffle(New_df)

Responses_count = [len(final_df[final_df.Response == 1]), len(final_df[final_df.Response == 0])]
Responses_count

# Plot the rebalanced target (the original cell re-plotted the unbalanced df1)
sns.countplot(final_df['Response'])
plt.title("response plot (target variable)")
plt.show()
```

## <u>Gender Variable</u>

```
plt.figure(figsize=(13, 5))
sns.countplot(df1['Gender'])
plt.show()

plt.figure(figsize=(13, 5))
sns.countplot(df1['Gender'], hue=df1['Response'])
plt.title("Male and female responses")
plt.show()
```

## <u>Annual Premium Variable</u>

```
# Plotting a distribution plot for Annual_Premium
plt.figure(figsize=(10, 5))
Annual_Premium_plot = sns.distplot(final_df.Annual_Premium)

sns.boxplot(final_df['Annual_Premium'])
```

### We perform a log transform on Annual_Premium to remove the skewness and obtain a better-behaved distribution.

```
final_df['Log_Annual_Premium'] = np.log(final_df['Annual_Premium'])
final_df

sns.boxplot(final_df['Log_Annual_Premium'])

# Plotting a distribution plot for Log_Annual_Premium
plt.figure(figsize=(10, 5))
Annual_Premium_plot = sns.distplot(final_df.Log_Annual_Premium)
```

### We will examine the highest-correlated attributes and eliminate attributes that do not contribute much to the information gain.

```
corrmat = final_df.corr()
top_corr_features = corrmat.index
plt.figure(figsize=(20, 20))
# Plot heat map
g = sns.heatmap(final_df[top_corr_features].corr(), annot=True, cmap="RdYlGn")

def correlation(dataset, threshold):
    col_corr = set()
    corr_matrix = dataset.corr()
    for i in range(len(corr_matrix.columns)):
        for j in range(i):
            if abs(corr_matrix.iloc[i, j]) > threshold:
                colname = corr_matrix.columns[i]
                col_corr.add(colname)
    return col_corr

correlation(final_df, 0.6)

final_df = final_df.drop(['Vehicle_Damage'], axis=1)
final_df = final_df.drop(['Annual_Premium'], axis=1)
```

### We remove Vehicle_Damage because it is highly correlated with two other attributes (Vehicle_Age and Previously_Insured), to avoid multicollinearity.

```
corrmat = final_df.corr()
top_corr_features = corrmat.index
plt.figure(figsize=(20, 20))
# Plot heat map
g = sns.heatmap(final_df[top_corr_features].corr(), annot=True, cmap="RdYlGn")

final_df.head(10)
```

## <u>Previously Insured Variable</u>

```
sns.countplot('Previously_Insured', hue='Response', data=final_df)
```

### Train:validate is a 70:30 split.

```
y = final_df.Response
X = final_df.drop(['Response'], axis=1, inplace=False)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=12)

pipeline = {
    'LogisticRegression': make_pipeline(StandardScaler(), LogisticRegression()),
    'RidgeClassifier': make_pipeline(StandardScaler(), RidgeClassifier()),
    'DecisionTreeClassifier': make_pipeline(StandardScaler(), DecisionTreeClassifier(random_state=0)),
    'RandomForestClassifier': make_pipeline(StandardScaler(), RandomForestClassifier()),
    'GradientBoostingClassifier': make_pipeline(StandardScaler(), GradientBoostingClassifier()),
    'XGBClassifier': make_pipeline(StandardScaler(), XGBClassifier(verbosity=0)),
}

fit_model = {}
for algo, pipelines in pipeline.items():
    model = pipelines.fit(X_train, y_train)
    fit_model[algo] = model

score = []
names = []
for algo, model in fit_model.items():
    yhat = model.predict(X_test)
    names.append(algo)
    score.append(accuracy_score(y_test, yhat))

result = pd.DataFrame(names, columns=['Name'])
result['Score'] = score
result

for names, value in pipeline.items():
    model = value.fit(X_train, y_train)
    preds = model.predict(X_test)
    print(" ")
    print(names)
    print('-' * len(names))
    print(' ')
    print(classification_report(y_test, preds))
    print(' ')
    print('_' * 55)
```

### XGBoost has the highest classification accuracy score among the classifiers listed above.
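Accuracy alone can hide class-wise errors, which matters here given how imbalanced the original target was. The arithmetic behind `classification_report` reduces to a confusion matrix; a minimal pure-Python sketch, with toy labels standing in for `y_test` and the winning classifier's predictions:

```python
def confusion_matrix_2x2(y_true, y_pred):
    """2x2 confusion matrix: rows = actual class, columns = predicted class."""
    cm = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1
    return cm

# Toy labels standing in for y_test and the best model's predictions
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
cm = confusion_matrix_2x2(y_true, y_pred)
tn, fp = cm[0]
fn, tp = cm[1]
print(cm)                       # [[3, 1], [1, 3]]
print((tp + tn) / len(y_true))  # accuracy  = 0.75
print(tp / (tp + fp))           # precision for class 1 = 0.75
print(tp / (tp + fn))           # recall    for class 1 = 0.75
```

Checking precision and recall for the positive class this way confirms whether the model is actually finding interested customers rather than just predicting the majority class.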
```
import os
import json
import re
import time

import pandas as pd
import spacy
from docx import Document
from io import StringIO, BytesIO
from nltk.corpus import stopwords

from gensim.models import LdaModel
from gensim.models.wrappers import LdaMallet
import gensim.corpora as corpora
from gensim.corpora import Dictionary
from gensim import matutils, models
from gensim.models import CoherenceModel, TfidfModel
from gensim.models.phrases import Phrases, Phraser
import pyLDAvis.gensim

import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams.update({'font.size': 14, 'lines.linewidth': 3})

nlp = spacy.load("en_core_web_sm")
# stop_words = set(stopwords.words('english'))
notebook_dir = os.getcwd()

sop_df = pd.read_csv('../data/interim/sop_types_valid.csv',
                     converters={'juri': eval, 'filename': eval})
sop_df.head(3)

type_list = sop_df['type']
type_list

def load_event_role(event_type, role):
    with open(F'../data/sop_jsons/{event_type}.txt') as f:
        dct = json.load(f)
    f.close()
    event_row = sop_df[sop_df['type'] == event_type]
    juri_to_filename = dict(zip(event_row['juri'].values[0], event_row['filename'].values[0]))
    types, juris, roles, sops = list(), list(), list(), list()
    for juri, role_sop in dct.items():
        if role in role_sop:
            types.append(event_type)
            juris.append(juri)
            roles.append(role)
            sops.append(role_sop[role])
    df = pd.DataFrame({'type': types, 'juri': juris, 'role': roles, 'sop': sops})
    df['filename'] = df['juri'].apply(lambda x: juri_to_filename[x])
    return df

def load_event_types_for_role(types, role):
    res = pd.DataFrame()
    for t in types:
        res = res.append(load_event_role(t, role))
    return res.reset_index(drop=True)

calltaker_all = load_event_types_for_role(type_list, 'call taker')
calltaker_all

def preprocess(strlist, min_token_len=2, allowed_pos=['ADV', 'ADJ', 'VERB', 'NOUN', 'PART', 'NUM']):
    removal = ['-', r'i\.e\.']
    res = list()
    for string in strlist:
        text = re.sub(r"|".join(removal), ' ', string.lower())
        doc = nlp(text)
        res += [token.lemma_ for token in doc
                if token.pos_ in allowed_pos
                # spaCy considers 'call' a stop word, which is not suitable for our case
                # and not token.is_stop
                # and token.text not in stop_words
                # and token.is_alpha
                and len(token.lemma_) > min_token_len]
    return ' '.join(res)

preprocess(calltaker_all.iloc[0, :]['sop'])

def get_dct_dtmatrix(sops):
    corpus = [sop.split() for sop in map(preprocess, sops)]
    # phrases = Phrases(corpus, min_count = 1, threshold = 1)
    # bigram = Phraser(phrases)
    # corpus = bigram(corpus)
    dictionary = corpora.Dictionary(corpus)
    doc_term_matrix = [dictionary.doc2bow(doc) for doc in corpus]
    return doc_term_matrix, corpus, dictionary

def tfidf_dct_dtmatrix(sops):
    doc_term_matrix, corpus, dictionary = get_dct_dtmatrix(sops)
    tfidf = TfidfModel(doc_term_matrix)
    return doc_term_matrix, corpus, dictionary, tfidf[doc_term_matrix]

doc_term_bow, corpus, dictionary, doc_term_tfidf = tfidf_dct_dtmatrix(calltaker_all['sop'])

def topics_with_coherence(doc_term_matrix, corpus, dictionary, passes, coherence, alpha, N=100, random_state=2020):
    num_topic, ldas, scores = list(), list(), list()
    for n in range(1, N + 1):
        lda = models.LdaModel(corpus=doc_term_matrix, id2word=dictionary, num_topics=n,
                              passes=passes, random_state=random_state, alpha=alpha)
        coherence_model = CoherenceModel(model=lda, texts=corpus,
                                         dictionary=dictionary, coherence=coherence)
        coherence_score = coherence_model.get_coherence()
        num_topic.append(n)
        ldas.append(lda)
        scores.append(coherence_score)
    return pd.DataFrame({'num_topic': num_topic, 'model': ldas, 'coherence_score': scores})

def save_df(df, name):
    filename = '../data/interim/' + name
    df.to_csv(filename, index=False)

def tune_coherence(doc_term_matrix, corpus, dictionary, num_pass, coherence, alpha, mdtype):
    coh = topics_with_coherence(doc_term_matrix, corpus, dictionary,
                                passes=num_pass, coherence=coherence, alpha=alpha)
    name = f'call_{coherence}_{mdtype}_{num_pass}_{alpha}.csv'
    save_df(coh, name)
    return coh

import datetime
t0 = time.time()
num_pass, coherence, alpha = 20, 'c_npmi', 'asymmetric'
call_coh_20_bow = tune_coherence(doc_term_bow, corpus, dictionary, num_pass, coherence, alpha, 'bow')
call_coh_20_tfidf = tune_coherence(doc_term_tfidf, corpus, dictionary, num_pass, coherence, alpha, 'tfidf')
elapsed = str(datetime.timedelta(seconds=time.time() - t0))
print(f'It takes {elapsed} to run 20 passes')

def plot_coh(df, md_type, N=40):
    fig, ax = plt.subplots(1, 1, figsize=(12, 3))
    ax.plot(df.loc[:N, 'num_topic'].values, df.loc[:N, 'coherence_score'].values)
    ax.set_xlabel('number of topics')
    ax.set_ylabel('coherence score')
    ax.set_title(f'Coherence Score vs Number of Topics ({md_type})')
    ax.grid()
    plt.show()

plot_coh(call_coh_20_bow, 'bow', N=40)
plot_coh(call_coh_20_tfidf, 'tfidf', N=40)

lda_event_bow = call_coh_20_bow[call_coh_20_bow['num_topic'] == 64]['model'].values[0]
fpath = '../data/interim/lda_event_bow'
lda_event_bow.save(fpath)

lda_event_bow_26 = call_coh_20_bow[call_coh_20_bow['num_topic'] == 26]['model'].values[0]
fpath = '../data/interim/lda_event_bow_26'
lda_event_bow_26.save(fpath)

lda_event_tfidf = call_coh_20_tfidf[call_coh_20_tfidf['num_topic'] == 10]['model'].values[0]
fpath = '../data/interim/lda_event_tfidf'
lda_event_tfidf.save(fpath)

lda_event_tfidf_26 = call_coh_20_tfidf[call_coh_20_tfidf['num_topic'] == 26]['model'].values[0]
fpath = '../data/interim/lda_event_tfidf_26'
lda_event_tfidf_26.save(fpath)

print(lda_event_bow)
print(lda_event_tfidf_26)

def get_topic(model, doc):
    ppdoc = preprocess(doc)
    doc_term_arr = dictionary.doc2bow(ppdoc.split())
    return sorted(model[doc_term_arr], key=lambda x: x[1], reverse=True)[0][0]
event_topics_bow = calltaker_all.copy()
event_topics_bow['topic_id'] = list(map(lambda x: get_topic(lda_event_bow, x),
                                        event_topics_bow['sop'].values.tolist()))
event_topics_bow[event_topics_bow['topic_id'] == 2].head(40)

event_topics_bow_26 = calltaker_all.copy()
event_topics_bow_26['topic_id'] = list(map(lambda x: get_topic(lda_event_bow_26, x),
                                           event_topics_bow_26['sop'].values.tolist()))
event_topics_bow_26 = event_topics_bow_26.sort_values(by=['topic_id', 'type', 'juri'], ignore_index=True)
event_bow_26_topic1 = event_topics_bow_26[event_topics_bow_26['topic_id'] == 1]
event_bow_26_topic1.head(40)
event_bow_26_topic1.iloc[1, :]['sop']
event_bow_26_topic1.iloc[20, :]['sop']

event_topics_tfidf = calltaker_all.copy()
event_topics_tfidf['topic_id'] = list(map(lambda x: get_topic(lda_event_tfidf, x),
                                          event_topics_tfidf['sop'].values.tolist()))
event_topics_tfidf[event_topics_tfidf['topic_id'] == 2].head(40)

event_topics_tfidf_26 = calltaker_all.copy()
event_topics_tfidf_26['topic_id'] = list(map(lambda x: get_topic(lda_event_tfidf_26, x),
                                             event_topics_tfidf_26['sop'].values.tolist()))
event_topics_tfidf_26 = event_topics_tfidf_26.sort_values(by=['topic_id', 'type', 'juri'], ignore_index=True)
event_tfidf_26_topic0 = event_topics_tfidf_26[event_topics_tfidf_26['topic_id'] == 0]
event_tfidf_26_topic0.head(30)
event_tfidf_26_topic0.iloc[0, :]['sop']
event_tfidf_26_topic0.iloc[29, :]['sop']
```

## Do not change anything below

```
raise Exception('Stop here')

calltaker_topic = calltaker_all.copy()
calltaker_topic['topic_id'] = list(map(lambda x: get_topic(lda_20, x),
                                       calltaker_topic['sop'].values.tolist()))
calltaker_topic[calltaker_topic['type'] == '1033']
calltaker_topic = calltaker_topic.sort_values(by=['topic_id', 'type', 'juri'], ignore_index=True)
calltaker_topic

call_6 = calltaker_topic[calltaker_topic['topic_id'] == 6]
call_6
calltaker_topic['topic_id'].unique()

unwant = calltaker_topic[calltaker_topic['type'] == 'UNWANT']
unwant
unwant['sop'].values.tolist()[-2:]
call_6['sop'].values.tolist()[0]
sents = call_6['sop'].tolist()[2]
sents[1:3]

def get_entities(sent):
    ent1 = ''
    ent2 = ''
    prv_tok_dep = ''
    prv_tok_text = ''  # was prv_tok_txt, which the loop below never read
    prefix = ''
    modifier = ''      # was mod, which the loop below never read
    for tok in nlp(sent):
        if tok.dep_ != 'punct':
            if tok.dep_ == 'compound':
                prefix = tok.text
                if prv_tok_dep == 'compound':
                    prefix = prv_tok_text + ' ' + tok.text
            if tok.dep_.endswith('mod'):
                modifier = tok.text
                if prv_tok_dep == 'compound':
                    modifier = prv_tok_text + ' ' + tok.text
            if tok.dep_.find('sub') != -1:  # str.find returns -1 (truthy) on a miss, so compare explicitly
                ent1 = modifier + ' ' + prefix + ' ' + tok.text
                prefix = ''
                modifier = ''
                prv_tok_dep = ''
                prv_tok_text = ''
            if tok.dep_.find('obj') != -1:
                ent2 = modifier + ' ' + prefix + ' ' + tok.text
            prv_tok_dep = tok.dep_
            prv_tok_text = tok.text
    return ent1.strip(), ent2.strip()

# df_call_withtopic = df_dispatcher.copy()
# df_call_withtopic.loc[:, 'topic_id'] = list(map(lambda x: get_topic(call_model_cv, x),
#                                                 df_calltaker['sop'].values.tolist()))
# df_call_withtopic = df_call_withtopic.sort_values(by = ['topic_id', 'juri'], ignore_index = True)
# df_call_withtopic

# empty = pd.DataFrame()
# df1 = pd.DataFrame({'type': ['type1', 'type2'], 'value': [1, 2]})
# empty = empty.append(df1)
# empty = empty.append(df1)
# empty
```

#### Reflection on the DRUGS coherence score

- The coherence score is very high for the one-topic model.
- This makes sense, because we are looking at docs under the same type "DRUGS".

#### Questions

- While the model assigns the documents the correct topic, does this necessarily mean the documents are similar enough to be consolidated?
- LDA is not stable. How will this instability affect us?
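On the instability question: one standard mitigation, already used via `random_state=2020` in `topics_with_coherence`, is pinning the random seed so that repeated runs are reproducible. A standard-library sketch of the principle, with a toy stand-in for the stochastic model (the function and values here are purely illustrative, not part of the pipeline above):

```python
import random

def noisy_topic_assignment(n_docs, n_topics, seed):
    """Toy stand-in for a stochastic topic model: assigns topics pseudo-randomly."""
    rng = random.Random(seed)  # a private, seeded generator, like LdaModel's random_state
    return [rng.randrange(n_topics) for _ in range(n_docs)]

run_a = noisy_topic_assignment(20, 5, seed=2020)
run_b = noisy_topic_assignment(20, 5, seed=2020)
run_c = noisy_topic_assignment(20, 5, seed=7)
print(run_a == run_b)  # True: same seed, identical assignments
print(run_a == run_c)  # almost certainly False: a different seed changes the outcome
```

Seeding makes runs comparable, but it does not remove the underlying sensitivity; checking whether conclusions hold across a few different seeds is still worthwhile.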
```
type_list = sop_df['type'].values.tolist()
type_list[0]

type_list = sop_df['type']
res = pd.DataFrame()
for event_type in type_list:
    dct = load_event(event_type)
    event_row = sop_df[sop_df['type'] == event_type]
    juri_to_filename = dict(zip(event_row['juri'].values[0], event_row['filename'].values[0]))
    juris, roles, sops, types = list(), list(), list(), list()
    for juri, role_sop in dct.items():
        for role, sop in role_sop.items():
            juris.append(juri)
            roles.append(role)
            sops.append(sop)
            types.append(event_type)
    typedf = pd.DataFrame({'type': types, 'juri': juris, 'role': roles, 'sop': sops})
    typedf['filename'] = typedf['juri'].apply(lambda x: juri_to_filename[x])
    df_calltaker = typedf[typedf['role'] == 'call taker']
    df_dispatcher = typedf[typedf['role'] == 'dispatcher']
    print(df_calltaker.shape)
    print(df_dispatcher.shape)
    for df in [df_calltaker, df_dispatcher]:
        if len(df) == 0:
            continue
        print('Start working on:', event_type, df['role'].unique())
        doc_term_matrix, corpus, dictionary = get_dct_dtmatrix(df['sop'])
        coherence_cv = topics_with_coherence(doc_term_matrix, corpus, dictionary, df['sop'].values.tolist())
        best_model_cv = coherence_cv.iloc[1:, :].sort_values('coherence_score')['model'].tolist()[-1]
        df_with_topic = df.copy()
        df_with_topic.loc[:, 'topic_id'] = list(map(lambda x: get_topic(best_model_cv, x),
                                                    df['sop'].values.tolist()))
        df_with_topic = df_with_topic.sort_values(by=['topic_id', 'juri'], ignore_index=True)
        res = res.append(df_with_topic)
        print('Finish working on:', event_type, df['role'].unique())

ress = res.reset_index(drop=True)
ress

from datetime import datetime
dt = datetime.now().strftime('%Y-%m-%dT%H_%M_%S')
cwd = os.getcwd()
os.chdir(notebook_dir)
ress.to_csv(f'../data/interim/sop_topics_{dt}.csv', index=False)
os.chdir(cwd)

print(type_list.values.tolist())

ress[(ress['type'] == 'MISCH') & (ress['role'] == 'call taker')]
ress[(ress['type'] == 'MISCH') & (ress['role'] == 'dispatcher')]
ress[(ress['type'] == 'ANIMAL') & (ress['role'] == 'call taker')]
ress[(ress['type'] == 'DRUGS') & (ress['role'] == 'call taker')]
ress[(ress['type'] == 'DRUGS') & (ress['role'] == 'call taker')]['sop'].values.tolist()[0]

# all_coherence = topics_with_coherence(dt_matrix_all, corpus_all, dictionary_all, N = 20)
# all_coherence
# plt.figure(figsize = (12, 8))
# plt.plot(all_coherence.loc[:, 'num_topic'].values, all_coherence.loc[:, 'coherence_score'].values)
# plt.show()
```
# The central limit theorem ## Understanding via visualization #### Giovanni Pizzi (EPFL), Sep 2018 [Go back to the list of all visualizations](https://github.com/giovannipizzi/educational-scientific-visualizations/) # Aim of this app The aim of this app is to: - visually prove the central limit theorem - give a feeling on how fast the normal distribution is obtained when you sum random distributions Read the _tasks_ below and play with the sliders to see the result! # The central limit theorem Suppose ${X_1, X_2, \ldots}$ is a sequence of _independent and identically distributed_ random variables with expectation value $\text{E}[X_i] = \mu$ and variance $\text{Var}[X_i] = \sigma^2$. Then, in the limit of $n\to\infty$, the random variables $$ S_N = \sum_{i=1}^N X_i $$ converge in distribution to a normal distribution with the following expectation value and variance: $$ \lim_{N\to\infty} \text{E}[S_N] = N\mu, \qquad \lim_{N\to\infty} \text{Var}[S_N] = N\sigma^2. $$ # Tasks - <span style="color: #cc0000">Move the sliders below and look at the results.</span> - <span style="color: #cc0000">Verify that (for a uniform distribution) $N\geq 3$ already the distribution approximates quite accurately a random distribution!!</span> - <span style="color: #cc0000">Check how fast (as a function of $N$) the distribution converges to a normal distribution with other distributions.</span> - <span style="color: #cc0000">Check the importance of having a large number of $n_\text{samples}$ to achieve a good convergence.</span> # Numerical verification and visualization With the following selectors, you can pick: - the number $N$ of random variables to add - the number $n_\text{samples}$ of random numbers that you want to generate for every sequence - the number $n_\text{bins}$ of histogram bins - the type of random distribution ``` import math import numpy as np import bqplot.pyplot as pl def get_numerical_arrays(number_of_samples, number_of_addends, bins, distrib_type): if 
distrib_type == "uniform": all_numbers = np.random.rand(number_of_addends, number_of_samples) description_string = ( r"<strong>Distribution</strong>: random numbers in the [0,1[ range, uniformly distributed<br>" r"<strong>Distribution average</strong>: $\text{E}(X_i) = \int_0^1 x\, dx = \frac 1 2$<br>" r"<strong>Distribution variance</strong>: $\text{Var}(X_i) = \int_0^1 [x-\text{E}(X_i)]^2\, dx = \frac 1 {12}$<br>") average = 0.5 variance = 1./12. elif distrib_type == "squared": all_numbers = np.random.rand(number_of_addends, number_of_samples)**2 description_string = ( r"<strong>Distribution</strong>: random numbers in the [0,1[ range, uniformly distributed, then squared<br>" r"<strong>Distribution average</strong>: $\text{E}(X_i) = \int_0^1 x^2\, dx = \frac 1 3$<br>" r"<strong>Distribution variance</strong>: $\text{Var}(X_i) = \int_0^1 [x^2-\text{E}(X_i)]^2\, dx = \frac 4 {45}$<br>") average = 1./3. variance = 4./45. elif distrib_type == "squareroot": all_numbers = np.sqrt(np.random.rand(number_of_addends, number_of_samples)) description_string = ( r"<strong>Distribution</strong>: random numbers in the [0,1[ range, uniformly distributed, then considering their square root<br>" r"<strong>Distribution average</strong>: $\text{E}(X_i) = \int_0^1 \sqrt{x}\, dx = \frac 3 2$<br>" r"<strong>Distribution variance</strong>: $\text{Var}(X_i) = \int_0^1 [\sqrt{x}-\text{E}(X_i)]^2\, dx = \frac 1 {18}$<br>") average = 2./3. variance = 1./18. else: raise NotImplementedError("Unknown type '{}'".format(distrib_type)) all_numbers_sum = all_numbers.sum(axis=0) y, x_edges = np.histogram(all_numbers_sum, bins=bins) x = (x_edges[1:] + x_edges[:-1])/2. 
mu = average * number_of_addends sigma = math.sqrt(variance * number_of_addends) bin_width = x_edges[1] - x_edges[0] norm = number_of_samples * bin_width gaussian_x = np.linspace(x[0], x[-1], 300) gaussian_y = 1./math.sqrt(2 * math.pi * sigma**2) * np.exp(-(gaussian_x-mu)**2/2/sigma**2) * norm return x, y, gaussian_x, gaussian_y, description_string from ipywidgets import Accordion, IntSlider, HTMLMath, Dropdown, Box, HBox, VBox, Layout from IPython.display import display n_widget = IntSlider(value=2, min=1, max=10, description = "$N$", continuous_update=False) n_samples_widget = IntSlider(value=50000, min=100, max=100000, description = r"$n_{\text{samples}}$", continuous_update=False) n_bins_widget = IntSlider(value=100, min=20, max=400, description = r"$n_{\text{bins}}$", continuous_update=False) type_widget = Dropdown(options=( ("Uniform","uniform"), ("Squared","squared"), ("Square root","squareroot"), ), description = "Distrib. type", continuous_update=False, layout=Layout(width='250px')) distribution_plot = pl.figure() result_plot = pl.figure() distribution_description = HTMLMath(value="") distribution_accordion = Accordion(children=[distribution_plot], layout=Layout(width='90%', max_width='400px')) distribution_accordion.set_title(0, 'Plot of a single distribution X') # Start closed distribution_accordion.selected_index = None def on_distrib_params_change(change): x1, y1, _, _, _ = get_numerical_arrays( number_of_samples=n_samples_widget.value, number_of_addends=1, # one single distribution bins=n_bins_widget.value, distrib_type=type_widget.value) x, y, gaussian_x, gaussian_y, description_string = get_numerical_arrays( number_of_samples=n_samples_widget.value, number_of_addends=n_widget.value, bins=n_bins_widget.value, distrib_type=type_widget.value) distribution_description.value = description_string pl.figure(fig=distribution_plot) pl.clear() pl.bar(x1,y1) pl.ylim(0,max(y1)*1.3) pl.xlabel("Value") pl.ylabel("Distribution of results") 
pl.figure(fig=result_plot) result_plot.legend_location = 'top-left' pl.clear() reverse_options_map = {_[1]: _[0] for _ in type_widget.options} pl.title('Distribution S (sum of {} "{}" distributions X)'.format( n_widget.value, reverse_options_map[type_widget.value].lower() )) pl.bar(x,y, labels=["Distribution S"]) pl.plot(gaussian_x,gaussian_y, labels=["Theoretical limit"], colors=["#ff0000"]) pl.legend() pl.xlabel("Value") pl.ylabel("Distribution of results") n_widget.observe(on_distrib_params_change, names='value', type='change') n_samples_widget.observe(on_distrib_params_change, names='value', type='change') n_bins_widget.observe(on_distrib_params_change, names='value', type='change') type_widget.observe(on_distrib_params_change, names='value', type='change') # Create the plot on_distrib_params_change(None) display(Box([ VBox([n_widget, n_samples_widget, n_bins_widget, type_widget], layout=Layout(width='350px')), VBox([distribution_description, distribution_accordion], layout=Layout(min_width='300px')), ], layout=Layout(width='100%', flex_flow='row wrap', display='flex'))) result_plot.layout.width = '100%' result_plot.layout.max_width = '800px' distribution_plot.layout.width = '100%' display(Box(children=[result_plot], layout=Layout(justify_content='center'))) ``` # References [1] [Central limit theorem on Wikipedia](https://en.wikipedia.org/wiki/Central_limit_theorem) ``` #from IPython.core.display import display, HTML #display(HTML("<style>.container { width:100% !important; }</style>")) ```
<a href="https://colab.research.google.com/github/gmihaila/machine_learning_things/blob/master/learning_pytorch/pytorch_nn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ### Simple NN A 1-hidden-layer NN #### Initialize NN ``` import torch n_input, n_hidden, n_output = 5, 3, 1 ## initialize tensors for inputs and outputs x = torch.randn((1, n_input)) y = torch.rand((1,n_output)) print(x.size()) print(y.size()) print() ## initialize tensor variables for weights w1 = torch.rand((n_input, n_hidden)) w2 = torch.rand((n_hidden, n_output)) print(w1.size()) print(w2.size()) print() ## initialize tensor variables for bias terms b1 = torch.rand((1,n_hidden)) b2 = torch.rand((1,n_output)) print(b1.size()) print(b2.size()) print() ``` #### Forward Pass 1. Forward Propagation 2. Loss computation 3. Backpropagation 4. Updating the parameters ``` ## sigmoid activation function using pytorch def sigmoid_activation(z): return 1 / (1 + torch.exp(-z)) ## activation of hidden layer z1 = torch.mm(x,w1) + b1 a1 = sigmoid_activation(z1) print(z1) print(a1) print() ## activation (output) of final layer z2 = torch.mm(a1, w2) + b2 a2 = output = sigmoid_activation(z2) print(z2) print(output) print() loss = y - output print(loss) ``` #### Backprop * the loss gets multiplied by the weights, penalizing the bad weights more * some weights contribute more to the output. 
If the error is large, their share of the update will be larger ``` ## function to calculate the derivative of activation def sigmoid_delta(x): return x * (1 - x) ## compute derivative of error terms delta_output = sigmoid_delta(output) delta_hidden = sigmoid_delta(a1) print(delta_output) print(delta_hidden) print() ## backpass the changes to previous layers d_output = loss * delta_output loss_h = torch.mm(d_output, w2.t()) d_hidden = loss_h * delta_hidden print(d_output) print(loss_h) print(d_hidden) ``` #### Update Parameters ``` learning_rate = 0.1 w2 += torch.mm(a1.t(), d_output) * learning_rate w1 += torch.mm(x.t(), d_hidden) * learning_rate print(w2) print(w1) print() b1 += d_hidden.sum() * learning_rate b2 += d_output.sum() * learning_rate print(b1) print(b2) ``` ### MNIST Data Loader ``` import torch from torch import optim import torch.nn as nn import torch.nn.functional as F from torchvision import transforms from torchvision.datasets import MNIST from torch.utils.data import DataLoader from torch.utils.data.sampler import SubsetRandomSampler import numpy as np # transform the raw dataset into tensors and normalize them in a fixed range _tasks = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,), (0.5,))]) #pass mean 0.5 and std 0.5 ## Load MNIST Dataset and apply transformations mnist = MNIST("data", download=True, train=True, transform=_tasks) ## create training and validation split split = int(0.8 * len(mnist)) index_list = list(range(len(mnist))) train_idx, valid_idx = index_list[:split], index_list[split:] ## create sampler objects using SubsetRandomSampler tr_sampler = SubsetRandomSampler(train_idx) val_sampler = SubsetRandomSampler(valid_idx) ## create iterator objects for train and valid datasets trainloader = DataLoader(mnist, batch_size=256, sampler=tr_sampler) validloader = DataLoader(mnist, batch_size=256, sampler=val_sampler) for data, label in trainloader: print(np.shape(data)) # Flatten MNIST images into a 784 long vector # data = 
data.view(data.shape[0], -1) # print(data.shape) data = torch.flatten(data, start_dim=1) print(data.shape) break ``` ### MNIST - NN ``` import torch from torch import optim import torch.nn as nn import torch.nn.functional as F from torchvision import transforms from torchvision.datasets import MNIST from torch.utils.data import DataLoader from torch.utils.data.sampler import SubsetRandomSampler import numpy as np # transform the raw dataset into tensors and normalize them in a fixed range _tasks = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,), (0.5,))]) ## Load MNIST Dataset and apply transformations mnist = MNIST("data", download=True, train=True, transform=_tasks) ## create training and validation split split = int(0.8 * len(mnist)) index_list = list(range(len(mnist))) train_idx, valid_idx = index_list[:split], index_list[split:] ## create sampler objects using SubsetRandomSampler tr_sampler = SubsetRandomSampler(train_idx) val_sampler = SubsetRandomSampler(valid_idx) ## create iterator objects for train and valid datasets trainloader = DataLoader(mnist, batch_size=256, sampler=tr_sampler) validloader = DataLoader(mnist, batch_size=256, sampler=val_sampler) ## Build class of model class Model(nn.Module): def __init__(self): super().__init__() self.hidden = nn.Linear(784, 128) self.output = nn.Linear(128, 10) def forward(self, x): x = self.hidden(x) x = torch.sigmoid(x) x = self.output(x) return x model = Model() loss_function = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-6, momentum=0.9, nesterov=True) for epoch in range(1,11): train_loss, valid_loss = [], [] model.train() # activates training mode ## Training on 1 epoch for data, target in trainloader: data = torch.flatten(data, start_dim=1) optimizer.zero_grad() # clears gradients of all optimized tensors ## forward pass output = model(data) ## loss calc loss = loss_function(output, target) ## backward propagation loss.backward() ## weight 
optimization optimizer.step() #performs a single optimization step train_loss.append(loss.item()) ### Evaluation on 1 epoch for data, target in validloader: data = torch.flatten(data, start_dim=1) output = model(data) loss = loss_function(output, target) valid_loss.append(loss.item()) print("Epoch:", epoch, "Training Loss: ", np.mean(train_loss), "Valid Loss: ", np.mean(valid_loss)) ``` #### Evaluation ``` ## dataloader for validation dataset dataiter = iter(validloader) data, labels = next(dataiter) data = torch.flatten(data, start_dim=1) output = model(data) print(output.shape) print(output[0]) _, pred_tensor = torch.max(output, 1) print(pred_tensor.shape) print(pred_tensor[0]) preds = np.squeeze(pred_tensor.numpy()) print("Actual: ", labels[:10]) print("Predic: ", preds[:10]) ``` ### MNIST - NN [1 GPU] ``` import torch from torch import optim import torch.nn as nn import torch.nn.functional as F from torchvision import transforms from torchvision.datasets import MNIST from torch.utils.data import DataLoader from torch.utils.data.sampler import SubsetRandomSampler import numpy as np from torch.backends import cudnn cudnn.benchmark = True # transform the raw dataset into tensors and normalize them in a fixed range _tasks = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,), (0.5,))]) ## Load MNIST Dataset and apply transformations mnist = MNIST("data", download=True, train=True, transform=_tasks) ## create training and validation split split = int(0.8 * len(mnist)) index_list = list(range(len(mnist))) train_idx, valid_idx = index_list[:split], index_list[split:] ## create sampler objects using SubsetRandomSampler tr_sampler = SubsetRandomSampler(train_idx) val_sampler = SubsetRandomSampler(valid_idx) ## create iterator objects for train and valid datasets trainloader = DataLoader(mnist, batch_size=256, sampler=tr_sampler, num_workers=2) validloader = DataLoader(mnist, batch_size=256, sampler=val_sampler, num_workers=2) ## GPU device = 
torch.device("cuda:0" if torch.cuda.is_available() else "cpu") ## Build class of model class Model(nn.Module): def __init__(self): super().__init__() self.hidden = nn.Linear(784, 128) self.output = nn.Linear(128, 10) def forward(self, x): x = self.hidden(x) x = torch.sigmoid(x) x = self.output(x) return x model = Model() model.to(device) loss_function = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-6, momentum=0.9, nesterov=True) for epoch in range(1,11): train_loss, valid_loss = [], [] model.train() # activates training mode ## Training on 1 epoch for data, target in trainloader: data = torch.flatten(data.to(device), start_dim=1) optimizer.zero_grad() # clears gradients of all optimized tensors ## forward pass output = model(data.to(device)) ## loss calc loss = loss_function(output.to(device), target.to(device)) ## backward propagation loss.backward() ## weight optimization optimizer.step() #performs a single optimization step train_loss.append(loss.item()) ### Evaluation on 1 epoch for data, target in validloader: data = torch.flatten(data, start_dim=1) output = model(data.to(device)) loss = loss_function(output.to(device), target.to(device)) valid_loss.append(loss.item()) print("Epoch:", epoch, "Training Loss: ", np.mean(train_loss), "Valid Loss: ", np.mean(valid_loss)) ``` #### Evaluation ``` ``` ### MNIST - NN [Multi GPU, Core] Specify certain GPUs ``` import os # os.environ["CUDA_VISIBLE_DEVICES"]="0,1,2" # number of gpu devices import torch from torch import optim import torch.nn as nn import torch.nn.functional as F from torchvision import transforms from torchvision.datasets import MNIST from torch.utils.data import DataLoader from torch.utils.data.sampler import SubsetRandomSampler import numpy as np from torch.backends import cudnn import multiprocessing cudnn.benchmark = True n_cores = multiprocessing.cpu_count() # transform the raw dataset into tensors and normalize them in a fixed range _tasks = 
transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,), (0.5,))]) ## Load MNIST Dataset and apply transformations mnist = MNIST("data", download=True, train=True, transform=_tasks) ## create training and validation split split = int(0.8 * len(mnist)) index_list = list(range(len(mnist))) train_idx, valid_idx = index_list[:split], index_list[split:] ## create sampler objects using SubsetRandomSampler tr_sampler = SubsetRandomSampler(train_idx) val_sampler = SubsetRandomSampler(valid_idx) ## create iterator objects for train and valid datasets trainloader = DataLoader(mnist, batch_size=256, sampler=tr_sampler, num_workers=n_cores) validloader = DataLoader(mnist, batch_size=256, sampler=val_sampler, num_workers=n_cores) ## GPU device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") ## Build class of model class Model(nn.Module): def __init__(self): super().__init__() self.hidden = nn.Linear(784, 128) self.output = nn.Linear(128, 10) def forward(self, x): x = self.hidden(x) x = torch.sigmoid(x) x = self.output(x) return x model = Model() ## Multi GPU if torch.cuda.device_count() > 1: print("We can use", torch.cuda.device_count(), "GPUs") model = nn.DataParallel(model, device_ids=[1]) # device_ids=[0,1,2] depending on the # of gpus model.to(device) loss_function = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-6, momentum=0.9, nesterov=True) for epoch in range(1,11): train_loss, valid_loss = [], [] model.train() # activates training mode ## Training on 1 epoch for data, target in trainloader: data = torch.flatten(data.to(device), start_dim=1) optimizer.zero_grad() # clears gradients of all optimized tensors ## forward pass output = model(data.to(device)) ## loss calc loss = loss_function(output.to(device), target.to(device)) ## backward propagation loss.backward() ## weight optimization optimizer.step() #performs a single optimization step train_loss.append(loss.item()) ### Evaluation on 1 epoch for 
data, target in validloader: data = torch.flatten(data, start_dim=1) output = model(data.to(device)) loss = loss_function(output.to(device), target.to(device)) valid_loss.append(loss.item()) print("Epoch:", epoch, "Training Loss: ", np.mean(train_loss), "Valid Loss: ", np.mean(valid_loss)) ``` #### Evaluation ``` ``` ### MNIST CNN [Multi GPU, Core] ``` import os import torch from torch import optim import torch.nn as nn import torch.nn.functional as F from torchvision import transforms from torchvision.datasets import MNIST from torch.utils.data import DataLoader from torch.utils.data.sampler import SubsetRandomSampler from torch.backends import cudnn import numpy as np import multiprocessing from sklearn.metrics import accuracy_score cudnn.benchmark = True num_cores = multiprocessing.cpu_count() # transform the raw dataset into tensors and normalize them in a fixed range _tasks = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) ## Load MNIST Dataset and apply transformations mnist = MNIST("data", download=True, train=True, transform=_tasks) ## create training and validation split split_train = int(0.7 * len(mnist)) split_valid = split_train + int(0.1 * len(mnist)) index_list = list(range(len(mnist))) train_idx, valid_idx, test_idx = index_list[:split_train], index_list[split_train:split_valid], index_list[split_valid:] ## create sampler objects using SubsetRandomSampler tr_sampler = SubsetRandomSampler(train_idx) val_sampler = SubsetRandomSampler(valid_idx) tes_sampler = SubsetRandomSampler(test_idx) ## create iterator objects for train and valid datasets trainloader = DataLoader(mnist, batch_size=256, sampler=tr_sampler, num_workers=num_cores) validloader = DataLoader(mnist, batch_size=256, sampler=val_sampler, num_workers=num_cores) testloader = DataLoader(mnist, batch_size=10, sampler=tes_sampler, num_workers=num_cores) ## GPU device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") ## Build class of model class 
Model(nn.Module): def __init__(self): super(Model, self).__init__() ## define layers self.conv1 = nn.Conv2d(1, 16, 3, padding=1) self.conv2 = nn.Conv2d(16, 32, 3, padding=1) self.conv3 = nn.Conv2d(32, 64, 3, padding=1) self.pool = nn.MaxPool2d(kernel_size=2, stride=2) self.linear1 = nn.Linear(64*3*3, 512) self.linear2 = nn.Linear(512,10) return def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = self.pool(F.relu(self.conv3(x))) x = x.view(-1,64*3*3) #torch.flatten(x, start_dim=1) ## reshaping x = F.relu(self.linear1(x)) x = self.linear2(x) return x ## create model model = Model() ## in case of multi gpu if torch.cuda.device_count() > 1: print("Using", torch.cuda.device_count(), "GPUs") model = nn.DataParallel(model, device_ids=[1]) # [0,1,2,3] ## put model on gpu model.to(device) ## loss function loss_function = nn.CrossEntropyLoss() ## optimizer optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-6, momentum=0.9, nesterov=True) ## run for n epochs for epoch in range(1,11): train_loss, valid_loss = [], [] ## train part model.train() for data, target in trainloader: ## gradients accumulate. 
need to clear them on each example optimizer.zero_grad() output = model(data.to(device)) loss = loss_function(output.to(device), target.to(device)) loss.backward() optimizer.step() train_loss.append(loss.item()) ## evaluation part on validation model.eval() ##set model in evaluation mode for data, target in validloader: output = model(data.to(device)) loss = loss_function(output.to(device), target.to(device)) valid_loss.append(loss.item()) print("Epoch:", epoch, "Training Loss: ", np.mean(train_loss), "Valid Loss: ", np.mean(valid_loss)) ``` #### Evaluation ``` model.eval() y_pred, y_true = [], [] for data, target in testloader: predicted = model(data.to(device)) _, predicted = torch.max(predicted.cpu(), 1) y_pred += predicted.tolist() y_true += target.tolist() print("Accuracy: ", accuracy_score(y_pred, y_true)) ``` ### Sentiment Classification [NOT FINISHED] #### Download Data ``` from IPython.display import clear_output !wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz clear_output(wait=True) !gunzip aclImdb_v1.tar.gz clear_output(wait=True) !tar -xvf aclImdb_v1.tar clear_output(wait=True) !ls ``` #### Import ``` ## Load TF 2.0 try: # %tensorflow_version only exists in Colab. 
%tensorflow_version 2.x except Exception: pass import os from os import listdir from os.path import isfile, join import torch from torch import optim import torch.nn as nn import torch.nn.functional as F from torchvision import transforms from torch.utils.data import DataLoader, Dataset from torch.utils.data.sampler import SubsetRandomSampler from torch.backends import cudnn import tensorflow as tf import numpy as np import multiprocessing from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences num_cores = multiprocessing.cpu_count() ``` #### Data Loader ``` train_pos_path = "/content/aclImdb/train/pos/" train_neg_path = "/content/aclImdb/train/neg/" test_pos_path = "/content/aclImdb/test/pos/" test_neg_path = "/content/aclImdb/test/neg/" class ImdbMovieDataset(Dataset): def __init__(self, pos_path, neg_path, maxlen=100, text_tokenizer=None): """ Args: """ self.pos_path = pos_path self.neg_path = neg_path self.pos_files = [file_name[:-4] for file_name in self.get_files(self.pos_path)] self.neg_files = [file_name[:-4] for file_name in self.get_files(self.neg_path)] self.n_pos_files = len(self.pos_files) self.n_neg_files = len(self.neg_files) self.maxlen = maxlen if text_tokenizer: self.text_tokenizer = text_tokenizer else: self.text_tokenizer = self.fit_tokenizer() return def __len__(self): return (self.n_pos_files + self.n_neg_files) def __getitem__(self, idx): if idx < self.n_pos_files: ## positive review path = self.pos_path + self.pos_files[idx] y = 1 else: ## negative review path = self.neg_path + self.neg_files[idx - self.n_pos_files] y = 0 review = self.read_file(path) X = self.text_tokenizer.texts_to_sequences([review]) X = pad_sequences(sequences=X, maxlen=self.maxlen, padding='post', truncating='post')[0] return torch.tensor(X), torch.tensor(y) def get_files(self, path): return [f for f in listdir(path) if isfile(join(path, f))] def read_file(self, path): with open(path + '.txt', 'r') as raw_file: content_file 
= raw_file.read() return content_file def fit_tokenizer(self): tmp_tokenizer = Tokenizer(num_words=None, lower=True, oov_token='<UNK>') print("Positive file fit Tokenizer") for pos_file in self.pos_files[:50]: tmp_tokenizer.fit_on_texts([self.read_file(self.pos_path + pos_file)]) print("Negative file fit Tokenizer") for neg_file in self.neg_files[:50]: tmp_tokenizer.fit_on_texts([self.read_file(self.neg_path + neg_file)]) return tmp_tokenizer ## data generator parameters data_generator_parameters = {'batch_size': 64, 'shuffle': True, 'num_workers': num_cores} training_set = ImdbMovieDataset(pos_path=train_pos_path, neg_path=train_neg_path) training_generator = DataLoader(training_set, **data_generator_parameters) for local_batch, local_label in training_generator: print(local_batch, local_label) print(np.shape(local_label)) break torch.tensor(training_set.text_tokenizer.texts_to_sequences(["this is me, not you"]), dtype=torch.long) ```
# Gibbs Sampling [Casella 1992](http://biostat.jhsph.edu/~mmccall/articles/casella_1992.pdf) Suppose we are given a joint density $f(x, y_1, \ldots, y_p)$ and are interested in obtaining the characteristics of the marginal density $$ f(x) = \int\ldots\int f(x, y_1,\ldots, y_p)dy_1\ldots dy_p $$ such as the mean or variance. Perhaps the most natural and straightforward approach would be to calculate $f(x)$ and use it to obtain the desired characteristic. However, there are many cases where the integrations are extremely difficult to perform, either analytically or numerically. In such cases the Gibbs sampler provides an alternative method for obtaining $f(x)$. Rather than compute or approximate $f(x)$ directly, the Gibbs sampler allows us effectively to generate a sample $X_1,\ldots,X_m\sim f(x)$ *without requiring* $f(x)$. By simulating a large enough sample, the mean, variance, or any other characteristic of $f(x)$ can be calculated to the desired degree of accuracy. Consider the distribution $p(\mathbf{z})=p(z_1,\ldots,z_M)$ from which we wish to sample, and suppose that we have chosen some initial state for the Markov chain. Each step of the Gibbs sampling procedure involves replacing the value of one of the variables by a value drawn from the distribution of that variable conditioned on the values of the remaining variables. Thus we replace $z_i$ by a value drawn from the distribution $p(z_i|\mathbf{z}_{\backslash i})$, where $z_i$ denotes the $i$th component of $\mathbf{z}$, and $\mathbf{z}_{\backslash i}$ denotes $z_1,\ldots,z_M$ but with $z_i$ omitted. This procedure is repeated either by cycling through the variables in some particular order or by choosing the variable to be updated at each step at random from some distribution. For example, suppose we have a distribution $p(z_1, z_2, z_3)$ over three variables, and at step $\tau$ of the algorithm we have selected values $z_1^{(\tau)}$, $z_2^{(\tau)}$ and $z_3^{(\tau)}$. 
We first replace $z_1^{(\tau)}$ by a new value $z_1^{(\tau+1)}$ obtained by sampling from the conditional distribution $$ p(z_1|z_2^{(\tau)}, z_3^{(\tau)}). $$ Next we replace $z_2^{(\tau)}$ by a value $z_2^{(\tau+1)}$ obtained by sampling from the conditional distribution $$ p(z_2|z_1^{(\tau+1)}, z_3^{(\tau)}) $$ so that the new value for $z_1$ is used straight away in subsequent sampling steps. Then we update $z_3$ with a sample $z_3^{(\tau+1)}$ drawn from $$ p(z_3| z_1^{(\tau+1)}, z_2^{(\tau+1)}) $$ and so on, cycling through the three variables in turn. > ### Gibbs Sampling 1. Initialize $\{z_i: i=1,\ldots,M\}$ 2. For $\tau = 1,\ldots,T$: - Sample $z_1^{(\tau+1)} \sim p(z_1|z_2^{(\tau)}, z_3^{(\tau)}, \ldots,z_M^{(\tau)})$. - Sample $z_2^{(\tau+1)} \sim p(z_2|z_1^{(\tau+1)}, z_3^{(\tau)}, \ldots,z_M^{(\tau)})$. - $\vdots$ - Sample $z_{j}^{(\tau+1)} \sim p(z_j|z_1^{(\tau+1)},\ldots, z_{j-1}^{(\tau+1)},z_{j+1}^{(\tau)},\ldots,z_M^{(\tau)})$. - $\vdots$ - Sample $z_M^{(\tau+1)} \sim p(z_M|z_1^{(\tau+1)}, z_2^{(\tau+1)}, \ldots,z_{M-1}^{(\tau+1)})$. To show this procedure samples from the required distribution, we first of all note that the distribution $p(\mathbf{z})$ is an invariant of each of the Gibbs sampling steps individually and hence of the whole Markov chain. This follows from the fact that when we sample from $p(z_i|\mathbf{z}_{\backslash i})$, the marginal distribution $p(\mathbf{z}_{\backslash i})$ is clearly invariant because the value of $\mathbf{z}_{\backslash i}$ is unchanged. Also, each step by definition samples from the correct conditional distribution $p(z_i|\mathbf{z}_{\backslash i})$. Because the conditional and marginal distributions together specify the joint distribution, we see that the joint distribution is itself invariant. The second requirement to be satisfied in order that the Gibbs sampling procedure samples from the correct distribution is that it be ergodic. 
A sufficient condition for ergodicity is that none of the conditional distributions be anywhere zero. If this is the case, then any point in $\mathbf{z}$ space can be reached from any other point in a finite number of steps involving one update of each of the component variables. Because the basic Gibbs sampling technique considers one variable at a time, there are strong dependencies between successive samples. We can hope to improve on the simple Gibbs sampler by adopting an intermediate strategy in which we sample successively from groups of variables rather than individual variables. This is achieved in the *blocking Gibbs* sampling algorithm by choosing blocks of variables, not necessarily disjoint, then sampling jointly from the variables in each block in turn, conditioned on the remaining variables.
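To make the cyclic update above concrete, here is a minimal NumPy sketch of Gibbs sampling from a bivariate standard normal with correlation $\rho$, a target whose full conditionals are known in closed form: $p(z_1|z_2) = \mathcal{N}(\rho z_2,\, 1-\rho^2)$, and symmetrically for $z_2$. The target distribution and the parameter values are illustrative choices, not taken from the text above.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8          # correlation of the target bivariate normal
T = 20_000         # number of Gibbs sweeps
burn_in = 1_000    # discard early samples before the chain forgets its start

z1, z2 = 0.0, 0.0  # initial state of the Markov chain
samples = np.empty((T, 2))
for tau in range(T):
    # Sample z1 from p(z1 | z2) = N(rho*z2, 1 - rho^2)
    z1 = rng.normal(rho * z2, np.sqrt(1 - rho**2))
    # Sample z2 from p(z2 | z1), using the fresh z1 straight away
    z2 = rng.normal(rho * z1, np.sqrt(1 - rho**2))
    samples[tau] = z1, z2

kept = samples[burn_in:]
print(kept.mean(axis=0))          # both components should be near 0
print(np.corrcoef(kept.T)[0, 1])  # should be near rho
```

Note how each conditional draw immediately conditions the next one, exactly as in the three-variable example: this is what distinguishes Gibbs sampling from drawing all components from the old state at once.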
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Classify structured data <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/beta/tutorials/keras/feature_columns"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ru/beta/tutorials/keras/feature_columns.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ru/beta/tutorials/keras/feature_columns.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/ru/beta/tutorials/keras/feature_columns.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Note: This section was translated by the Russian-speaking TensorFlow community on a volunteer basis. Since this translation is not official, there is no guarantee that it is 100% accurate and consistent with the [official English-language documentation](https://www.tensorflow.org/?hl=en). If you have a suggestion for improving this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. If you would like to help improve the TensorFlow documentation (by translating, or by reviewing someone else's translation), write to the [docs-ru@tensorflow.org list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ru). This tutorial shows how to classify structured data (for example, tabular data in a CSV). We will use [Keras](https://www.tensorflow.org/guide/keras) to define the model and [feature columns](https://www.tensorflow.org/guide/feature_columns) to map the CSV columns to the features used to train the model. This tutorial contains complete code with which you can: * Load a CSV file using [Pandas](https://pandas.pydata.org/). * Build an input pipeline to batch and shuffle the rows using [tf.data](https://www.tensorflow.org/guide/datasets). * Map the CSV columns to the features used to train the model using feature columns. * Build, train, and evaluate a model using Keras. ## The dataset We will use a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. The dataset contains several hundred rows in CSV format. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task. The [description](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/heart-disease.names) of this dataset is available at the link. Note that it contains both numeric and categorical columns. 
>Column| Description| Feature Type | Data Type >------------|--------------------|----------------------|----------------- >Age | Age in years | Numerical | integer >Sex | (1 = male; 0 = female) | Categorical | integer >CP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer >Trestbpd | Resting blood pressure (in mm Hg on admission to the hospital) | Numerical | integer >Chol | Serum cholesterol in mg/dl | Numerical | integer >FBS | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer >RestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer >Thalach | Maximum heart rate achieved | Numerical | integer >Exang | Exercise induced angina (1 = yes; 0 = no) | Categorical | integer >Oldpeak | ST depression induced by exercise relative to rest | Numerical | integer >Slope | The slope of the peak exercise ST segment | Numerical | float >CA | Number of major vessels (0-3) colored by fluoroscopy | Numerical | integer >Thal | 3 = normal; 6 = fixed defect; 7 = reversable defect | Categorical | string >Target | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer ## Import TensorFlow and other libraries ``` !pip install sklearn from __future__ import absolute_import, division, print_function, unicode_literals import numpy as np import pandas as pd try: # Colab only %tensorflow_version 2.x except Exception: pass import tensorflow as tf from tensorflow import feature_column from tensorflow.keras import layers from sklearn.model_selection import train_test_split ``` ## Use Pandas to create a dataframe [Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to download the data from the URL and load it into a dataframe. 
``` URL = 'https://storage.googleapis.com/applied-dl/heart.csv' dataframe = pd.read_csv(URL) dataframe.head() ``` ## Split the dataframe into train, validation, and test sets The dataset we downloaded was a single CSV file. We will split it into train, validation, and test sets. ``` train, test = train_test_split(dataframe, test_size=0.2) train, val = train_test_split(train, test_size=0.2) print(len(train), 'train examples') print(len(val), 'validation examples') print(len(test), 'test examples') ``` ## Create an input pipeline using tf.data Next, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets). This will enable us to use feature columns as a bridge to map the columns in the Pandas dataframe to the features used to train the model. If we were working with a very large CSV file (so large that it does not fit in memory), we would use tf.data to read it from disk directly. That case is not covered in this tutorial. ``` # A utility method to create a tf.data dataset from a Pandas dataframe def df_to_dataset(dataframe, shuffle=True, batch_size=32): dataframe = dataframe.copy() labels = dataframe.pop('target') ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels)) if shuffle: ds = ds.shuffle(buffer_size=len(dataframe)) ds = ds.batch(batch_size) return ds batch_size = 5 # A small batch size is used for demonstration purposes train_ds = df_to_dataset(train, batch_size=batch_size) val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size) test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size) ``` ## Understand the input pipeline Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable. 
```
for feature_batch, label_batch in train_ds.take(1):
  print('Every feature:', list(feature_batch.keys()))
  print('A batch of ages:', feature_batch['age'])
  print('A batch of targets:', label_batch)
```

We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.

## Demonstrate several types of feature columns

TensorFlow provides many types of feature columns. In this section, we will create several types of feature columns and demonstrate how they transform columns from the dataframe.

```
# We will use this batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]

# A utility method to create a feature column
# and to transform a batch of data
def demo(feature_column):
  feature_layer = layers.DenseFeatures(feature_column)
  print(feature_layer(example_batch).numpy())
```

### Numeric columns

The output of a feature column becomes the input to the model (using the demo function defined above, we will be able to see exactly how each column from the dataframe is transformed). A [numeric column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real-valued features. When using this column, your model will receive the column value from the dataframe unchanged.

```
age = feature_column.numeric_column("age")
demo(age)
```

In the heart disease dataset, most columns from the dataframe are numeric.

### Bucketized columns

Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age.
Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column). Notice the one-hot values below describe which age range each row matches.

```
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(age_buckets)
```

### Categorical columns

In this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets). The vocabulary can be passed as a list using [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file).

```
thal = feature_column.categorical_column_with_vocabulary_list(
      'thal', ['fixed', 'normal', 'reversible'])

thal_one_hot = feature_column.indicator_column(thal)
demo(thal_one_hot)
```

In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several important types of feature columns that you could use when working with other datasets.
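What `bucketized_column` does can be mimicked with plain NumPy; a minimal sketch (the boundaries are copied from the cell above, `np.digitize` stands in for TensorFlow's internal bucketing, and the example ages are made up):

```python
import numpy as np

# Same boundaries as the bucketized_column above
boundaries = [18, 25, 30, 35, 40, 45, 50, 55, 60, 65]

ages = np.array([17, 18, 33, 64, 70])
# np.digitize returns the bucket index: 0 for (-inf, 18), 1 for [18, 25), and so on
bucket_ids = np.digitize(ages, boundaries)

# One-hot encode the bucket ids, mirroring the demo() output format
one_hot = np.eye(len(boundaries) + 1)[bucket_ids]
```

Each row of `one_hot` has exactly one 1, in the column for that age's bucket, which is what the demo() output shows for real batches.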
### Embedding columns

Suppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grows large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an [embedding column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8, in the example below) is a parameter that must be tuned.

Key point: using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can modify for a different dataset in the future.

```
# Notice the input to the embedding column is the categorical column
# we previously created
thal_embedding = feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
```

### Hashed feature columns

Another way to represent a categorical column with a large number of values is to use a [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash_buckets significantly smaller than the number of actual categories to save space.
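The hashing trick described above can be sketched in a few lines of plain Python. This sketch uses `hashlib.md5` for a deterministic hash; TensorFlow uses its own fingerprint function, so the bucket ids will not match the ones `categorical_column_with_hash_bucket` produces:

```python
import hashlib

def hash_bucket(value, hash_bucket_size):
    """Map a string to one of hash_bucket_size buckets, like a hashed feature column."""
    digest = hashlib.md5(value.encode("utf-8")).hexdigest()
    return int(digest, 16) % hash_bucket_size

# No vocabulary needed -- any string lands in some bucket,
# but distinct strings can collide in the same bucket
buckets = {v: hash_bucket(v, 1000) for v in ["fixed", "normal", "reversible"]}
```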
Key point: An important downside of this technique is that there may be collisions in which different strings are mapped to the same bucket. In practice, this can work well for some datasets regardless.

```
thal_hashed = feature_column.categorical_column_with_hash_bucket(
      'thal', hash_bucket_size=1000)
demo(feature_column.indicator_column(thal_hashed))
```

### Crossed feature columns

Combining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/#feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.

```
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(feature_column.indicator_column(crossed_feature))
```

## Choose which columns to use

We have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. the mechanics) needed to work with feature columns. We have selected a few columns to train our model below arbitrarily.

Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
```
feature_columns = []

# numeric columns
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
  feature_columns.append(feature_column.numeric_column(header))

# bucketized columns
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)

# indicator columns
thal = feature_column.categorical_column_with_vocabulary_list(
      'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)

# embedding columns
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)

# crossed columns
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
```

### Create a feature layer

Now that we have defined our feature columns, we will use a [DenseFeatures](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to feed them to our Keras model.

```
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
```

Earlier, we used a small batch size to demonstrate how the feature columns work. Now we create a new input pipeline with a larger batch size.
```
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
```

## Create, compile, and train the model

```
model = tf.keras.Sequential([
  feature_layer,
  layers.Dense(128, activation='relu'),
  layers.Dense(128, activation='relu'),
  layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'],
              run_eagerly=True)

model.fit(train_ds,
          validation_data=val_ds,
          epochs=5)

loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
```

Key point: You will typically see best results with deep learning with much larger and more complex datasets. When working with a small dataset like this one, we recommend using a decision tree or random forest as a strong baseline. The goal of this tutorial is not to train an accurate model, but to demonstrate the mechanics of working with structured data, so you have code to use as a starting point when working with your own datasets in the future.

## Next steps

The best way to learn more about classifying structured data is to try it yourself. We suggest finding another dataset to work with, and training a model to classify it using code similar to the above. To improve accuracy, think carefully about which features to include in your model, and how they should be represented.
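The key point above recommends a decision tree or random forest as a baseline for small tabular datasets. A minimal scikit-learn sketch on synthetic stand-in data (the heart-disease dataframe itself is not reused here, so the shapes and numbers are purely illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.rand(303, 13)  # same shape as the heart dataset: 303 rows, 13 features
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # synthetic binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
baseline = RandomForestClassifier(n_estimators=100, random_state=0)
baseline.fit(X_train, y_train)
baseline_accuracy = baseline.score(X_test, y_test)
```

Comparing `baseline_accuracy` against the Keras model's test accuracy is a quick sanity check that the deep model is actually earning its extra complexity.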
# A/B test 2 - Loved journeys, control vs content similarity sorted list

This related links A/B test (ab2) was conducted from 26 Feb to 5 March 2019. The data used in this report cover 27 Feb to 5 March 2019, because on 26 Feb the split was not 50:50.

The test compared the existing related links (where available) to links generated using Google's universal sentence encoder V2. The first *500* words of all content in the content store (clean_content.csv.gz) were encoded, and cosine distance was used to find the nearest vector to each content vector. A maximum of 5 links was suggested, and only links above a threshold of 0.15 were suggested.

## Import

```
import os
import pandas as pd
import numpy as np
import ast
import re

# z test
from statsmodels.stats.proportion import proportions_ztest

# bayesian bootstrap and vis
import matplotlib.pyplot as plt
import seaborn as sns
import bayesian_bootstrap.bootstrap as bb
from astropy.utils import NumpyRNGContext

# progress bar
from tqdm import tqdm, tqdm_notebook

from scipy import stats
from collections import Counter

import sys
sys.path.insert(0, '../../src')
import analysis as analysis

# set up the style for our plots
sns.set(style='white', palette='colorblind', font_scale=1.3,
        rc={'figure.figsize':(12,9),
            "axes.facecolor": (0, 0, 0, 0)})

# instantiate progress bar goodness
tqdm.pandas(tqdm_notebook)

pd.set_option('max_colwidth', 500)

# the number of bootstrap means used to generate a distribution
boot_reps = 10000

# alpha - false positive rate
alpha = 0.05
# number of tests
m = 4
# correct alpha for multiple comparisons
alpha = alpha / m

# The Bonferroni correction can be used to adjust confidence intervals also.
# If one establishes m confidence intervals, and wishes to have an overall confidence level of 1-alpha,
# each individual confidence interval can be adjusted to the level of 1-(alpha/m).
# reproducible
seed = 1337
```

## File/dir locations

### Processed journey data

```
DATA_DIR = os.getenv("DATA_DIR")
filename = "full_sample_loved_947858.csv.gz"
filepath = os.path.join(
    DATA_DIR, "sampled_journey", "20190227_20190305",
    filename)
filepath

# read in processed sampled journey with just the cols we need for related links
df = pd.read_csv(filepath, sep="\t", compression="gzip")

# convert from str to list
df['Event_cat_act_agg'] = df['Event_cat_act_agg'].progress_apply(ast.literal_eval)
df['Page_Event_List'] = df['Page_Event_List'].progress_apply(ast.literal_eval)
df['Page_List'] = df['Page_List'].progress_apply(ast.literal_eval)
df['Page_List_Length'] = df['Page_List'].progress_apply(len)

# drop dodgy rows, where page variant is not A or B.
df = df.query('ABVariant in ["A", "B"]')
```

### Nav type of page lookup - is it a finding page? If not, it's a thing page

```
filename = "document_types.csv.gz"

# created a metadata dir in the DATA_DIR to hold this data
filepath = os.path.join(
    DATA_DIR, "metadata",
    filename)
print(filepath)

df_finding_thing = pd.read_csv(filepath, sep="\t", compression="gzip")

df_finding_thing.head()

thing_page_paths = df_finding_thing[
    df_finding_thing['is_finding']==0]['pagePath'].tolist()

finding_page_paths = df_finding_thing[
    df_finding_thing['is_finding']==1]['pagePath'].tolist()
```

## Outliers

Some rows should be removed before analysis, for example rows with journey lengths of 500 or very high related-link click rates. This process might have to happen once features have been created.

# Derive variables

## journey_click_rate

There is no difference in the proportion of journeys using at least one related link (journey_click_rate) between page variant A and page variant B.
\begin{equation*}
\frac{\text{total number of journeys including at least one click on a related link}}{\text{total number of journeys}}
\end{equation*}

```
# get the number of related links clicks per Sequence
df['Related Links Clicks per seq'] = df['Event_cat_act_agg'].map(analysis.sum_related_click_events)

# map across the Sequence variable, which includes pages and Events
# we want to pass all the list elements to a function one-by-one and then collect the output.
df["Has_Related"] = df["Related Links Clicks per seq"].map(analysis.is_related)
df['Related Links Clicks row total'] = df['Related Links Clicks per seq'] * df['Occurrences']

df.head(3)
```

## count of clicks on navigation elements

There is no statistically significant difference in the count of clicks on navigation elements per journey between page variant A and page variant B.

\begin{equation*}
{\text{total number of navigation element click events from content pages}}
\end{equation*}

### Related link counts

```
# get the total number of related links clicks for that row (clicks per sequence multiplied by occurrences)
df['Related Links Clicks row total'] = df['Related Links Clicks per seq'] * df['Occurrences']
```

### Navigation events

```
def count_nav_events(page_event_list):
    """Counts the number of nav events from a content page in a Page Event List."""
    content_page_nav_events = 0
    for pair in page_event_list:
        if analysis.is_nav_event(pair[1]):
            if pair[0] in thing_page_paths:
                content_page_nav_events += 1
    return content_page_nav_events

# needs finding_thing_df read in from document_types.csv.gz
df['Content_Page_Nav_Event_Count'] = df['Page_Event_List'].progress_map(count_nav_events)

def count_search_from_content(page_list):
    """Counts the number of internal searches launched from a content page in a Page List."""
    search_from_content = 0
    for i, page in enumerate(page_list):
        if i > 0:
            if '/search?q=' in page:
                if page_list[i-1] in thing_page_paths:
                    search_from_content += 1
    return search_from_content

df['Content_Search_Event_Count'] = df['Page_List'].progress_map(count_search_from_content)

# count of nav or search clicks
df['Content_Nav_or_Search_Count'] = df['Content_Page_Nav_Event_Count'] + df['Content_Search_Event_Count']

# occurrences is accounted for by the group by bit in our bayesian boot analysis function
df['Content_Nav_Search_Event_Sum_row_total'] = df['Content_Nav_or_Search_Count'] * df['Occurrences']

# required for journeys with no nav later
df['Has_No_Nav_Or_Search'] = df['Content_Nav_Search_Event_Sum_row_total'] == 0
```

## Temporary df file in case of crash

### Save

```
df.to_csv(os.path.join(
    DATA_DIR, "ab2_loved_temp.csv.gz"), sep="\t", compression="gzip", index=False)
```

### Frequentist statistics

#### Statistical significance

```
# help(proportions_ztest)
has_rel = analysis.z_prop(df, 'Has_Related')
has_rel

has_rel['p-value'] < alpha
```

#### Practical significance - uplift

```
# Due to multiple testing we used the Bonferroni correction for alpha
ci_low, ci_upp = analysis.zconf_interval_two_samples(has_rel['x_a'], has_rel['n_a'],
                                                     has_rel['x_b'], has_rel['n_b'], alpha=alpha)
print(' difference in proportions = {0:.2f}%'.format(100*(has_rel['p_b']-has_rel['p_a'])))
print(' % relative change in proportions = {0:.2f}%'.format(100*((has_rel['p_b']-has_rel['p_a'])/has_rel['p_a'])))
print(' 95% Confidence Interval = ( {0:.2f}% , {1:.2f}% )'
      .format(100*ci_low, 100*ci_upp))
```

### Bayesian statistics

Based on [this](https://medium.com/@thibalbo/coding-bayesian-ab-tests-in-python-e89356b3f4bd) blog post. A Bayesian approach can provide a simpler interpretation; this section is still to be developed.
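The Bayesian bootstrap used in the next section replaces row resampling with Dirichlet(1, ..., 1) weights over the observations; each replicate is then a weighted mean. A self-contained numpy sketch (the notebook's `analysis.bayesian_bootstrap_analysis` additionally groups by variant and weights by `Occurrences`, which this sketch omits):

```python
import numpy as np

def bayesian_bootstrap_means(values, boot_reps=1000, seed=1337):
    """Posterior samples of the mean via Dirichlet-weighted resampling."""
    rng = np.random.RandomState(seed)
    values = np.asarray(values, dtype=float)
    # One row of uniform Dirichlet weights per bootstrap replicate
    weights = rng.dirichlet(np.ones(len(values)), size=boot_reps)
    return weights @ values

posterior = bayesian_bootstrap_means([1, 2, 2, 3, 7, 4, 2], boot_reps=2000)
```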
### Bayesian bootstrap

```
analysis.compare_total_searches(df)

fig, ax = plt.subplots()
plot_df_B = df[df.ABVariant == "B"].groupby(
    'Content_Nav_or_Search_Count').sum().iloc[:, 0]
plot_df_A = df[df.ABVariant == "A"].groupby(
    'Content_Nav_or_Search_Count').sum().iloc[:, 0]
ax.set_yscale('log')
width = 0.4
ax = plot_df_B.plot.bar(label='B', position=1, width=width)
ax = plot_df_A.plot.bar(label='A', color='salmon', position=0, width=width)
plt.title("Loved journeys")
plt.ylabel("Log(number of journeys)")
plt.xlabel("Number of uses of search/nav elements in journey")
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.savefig('nav_counts_loved_bar.png', dpi=900, bbox_inches='tight')

a_bootstrap, b_bootstrap = analysis.bayesian_bootstrap_analysis(df, col_name='Content_Nav_or_Search_Count', boot_reps=boot_reps, seed=seed)

np.array(a_bootstrap).mean()

np.array(a_bootstrap).mean() - (0.05 * np.array(a_bootstrap).mean())

np.array(b_bootstrap).mean()

(1 - np.array(b_bootstrap).mean()/np.array(a_bootstrap).mean())*100

# ratio is vestigial but we keep it here for convenience
# it's actually a count but considers occurrences
ratio_stats = analysis.bb_hdi(a_bootstrap, b_bootstrap, alpha=alpha)
ratio_stats

ax = sns.distplot(b_bootstrap, label='B')
ax.errorbar(x=[ratio_stats['b_ci_low'], ratio_stats['b_ci_hi']], y=[2, 2], linewidth=5, c='teal', marker='o', label='95% HDI B')
ax = sns.distplot(a_bootstrap, label='A', ax=ax, color='salmon')
ax.errorbar(x=[ratio_stats['a_ci_low'], ratio_stats['a_ci_hi']], y=[5, 5], linewidth=5, c='salmon', marker='o', label='95% HDI A')
ax.set(xlabel='mean search/nav count per journey', ylabel='Density')
sns.despine()
legend = plt.legend(frameon=True, bbox_to_anchor=(0.75, 1), loc='best')
frame = legend.get_frame()
frame.set_facecolor('white')
plt.title("Loved journeys")
plt.savefig('nav_counts_loved.png', dpi=900, bbox_inches='tight')

# calculate the posterior for the difference between
A's and B's ratio
# ypa prefix is vestigial from blog post
ypa_diff = np.array(b_bootstrap) - np.array(a_bootstrap)
# get the hdi
ypa_diff_ci_low, ypa_diff_ci_hi = bb.highest_density_interval(ypa_diff)

# the mean of the posterior
print('mean:', ypa_diff.mean())

print('low ci:', ypa_diff_ci_low, '\nhigh ci:', ypa_diff_ci_hi)

ax = sns.distplot(ypa_diff)
ax.plot([ypa_diff_ci_low, ypa_diff_ci_hi], [0, 0], linewidth=10, c='k', marker='o', label='95% HDI')
ax.set(xlabel='Content_Nav_or_Search_Count', ylabel='Density',
       title='The difference between B\'s and A\'s mean counts times occurrences')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show();

# We count the number of values greater than 0 and divide by the total number
# of observations, which gives us the proportion of values in the distribution
# that are greater than 0; this could act a bit like a p-value
(ypa_diff > 0).sum() / ypa_diff.shape[0]

# We count the number of values less than 0 and divide by the total number
# of observations, which gives us the proportion of values in the distribution
# that are less than 0; this could act a bit like a p-value
(ypa_diff < 0).sum() / ypa_diff.shape[0]

(ypa_diff > 0).sum()

(ypa_diff < 0).sum()
```

## proportion of journeys with a page sequence including content and related links only

There is no statistically significant difference in the proportion of journeys with a page sequence including content and related links only (including loops) between page variant A and page variant B.

\begin{equation*}
\frac{\text{total number of journeys that only contain content pages and related links (i.e.
no nav pages)}}{\text{total number of journeys}}
\end{equation*}

### Overall

```
# if (Content_Nav_Search_Event_Sum == 0) that's our success
# Has_No_Nav_Or_Search == 1 is a success
# the problem is symmetrical so it doesn't matter too much
sum(df.Has_No_Nav_Or_Search * df.Occurrences) / df.Occurrences.sum()

sns.distplot(df.Content_Nav_or_Search_Count.values);
```

### Frequentist statistics

#### Statistical significance

```
nav = analysis.z_prop(df, 'Has_No_Nav_Or_Search')
nav
```

#### Practical significance - uplift

```
# Due to multiple testing we used the Bonferroni correction for alpha
ci_low, ci_upp = analysis.zconf_interval_two_samples(nav['x_a'], nav['n_a'],
                                                     nav['x_b'], nav['n_b'], alpha=alpha)
diff = 100*(nav['x_b']/nav['n_b'] - nav['x_a']/nav['n_a'])
print(' difference in proportions = {0:.2f}%'.format(diff))
print(' 95% Confidence Interval = ( {0:.2f}% , {1:.2f}% )'
      .format(100*ci_low, 100*ci_upp))

print("There was a {0: .2f}% relative change in the proportion of journeys not using search/nav elements".format(100 * ((nav['p_b']-nav['p_a'])/nav['p_a'])))
```

## Average Journey Length (number of page views)

There is no statistically significant difference in the average page list length of journeys (including loops) between page variant A and page variant B.
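The frequentist sections above lean on `analysis.z_prop` and `analysis.zconf_interval_two_samples`, which wrap a standard pooled two-proportion z-test. A hand-rolled sketch with made-up counts (the exact variant implemented in the private `analysis` module is an assumption):

```python
import math

def two_proportion_ztest(x_a, n_a, x_b, n_b):
    """Pooled two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up counts: 4320/50000 journeys with a related-link click in A, 4510/50000 in B
z, p = two_proportion_ztest(x_a=4320, n_a=50000, x_b=4510, n_b=50000)
```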
```
length_B = df[df.ABVariant == "B"].groupby(
    'Page_List_Length').sum().iloc[:, 0]
lengthB_2 = length_B.reindex(np.arange(1, 501, 1), fill_value=0)

length_A = df[df.ABVariant == "A"].groupby(
    'Page_List_Length').sum().iloc[:, 0]
lengthA_2 = length_A.reindex(np.arange(1, 501, 1), fill_value=0)

fig, ax = plt.subplots(figsize=(100, 30))
ax.set_yscale('log')
width = 0.4
ax = lengthB_2.plot.bar(label='B', position=1, width=width)
ax = lengthA_2.plot.bar(label='A', color='salmon', position=0, width=width)
plt.xlabel('length', fontsize=1)
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show();
```

### Bayesian bootstrap for non-parametric hypotheses

```
# http://savvastjortjoglou.com/nfl-bayesian-bootstrap.html
# let's use mean journey length (could probably model parametrically but we use it for demonstration here)
# some journeys have length 500 and should probably be removed as they are likely bots or other weirdness

# exclude journeys longer than 500 as these could be automated traffic
df_short = df[df['Page_List_Length'] < 500]

print("The mean number of pages in a loved journey is {0:.3f}".format(sum(df.Page_List_Length*df.Occurrences)/df.Occurrences.sum()))

# for reproducibility, set the seed within this context
a_bootstrap, b_bootstrap = analysis.bayesian_bootstrap_analysis(df, col_name='Page_List_Length', boot_reps=boot_reps, seed=seed)
a_bootstrap_short, b_bootstrap_short = analysis.bayesian_bootstrap_analysis(df_short, col_name='Page_List_Length', boot_reps=boot_reps, seed=seed)

np.array(a_bootstrap).mean()

np.array(b_bootstrap).mean()

print("There's a relative change in page length of {0:.2f}% from A to B".format((np.array(b_bootstrap).mean()-np.array(a_bootstrap).mean())/np.array(a_bootstrap).mean()*100))

print(np.array(a_bootstrap_short).mean())
print(np.array(b_bootstrap_short).mean())

# Calculate a 95% HDI
a_ci_low, a_ci_hi = bb.highest_density_interval(a_bootstrap)
print('low ci:', a_ci_low, '\nhigh
ci:', a_ci_hi)

ax = sns.distplot(a_bootstrap, color='salmon')
ax.plot([a_ci_low, a_ci_hi], [0, 0], linewidth=10, c='k', marker='o', label='95% HDI')
ax.set(xlabel='Journey Length', ylabel='Density', title='Page Variant A Mean Journey Length')
sns.despine()
plt.legend();

# Calculate a 95% HDI
b_ci_low, b_ci_hi = bb.highest_density_interval(b_bootstrap)
print('low ci:', b_ci_low, '\nhigh ci:', b_ci_hi)

ax = sns.distplot(b_bootstrap)
ax.plot([b_ci_low, b_ci_hi], [0, 0], linewidth=10, c='k', marker='o', label='95% HDI')
ax.set(xlabel='Journey Length', ylabel='Density', title='Page Variant B Mean Journey Length')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show();

ax = sns.distplot(b_bootstrap, label='B')
ax = sns.distplot(a_bootstrap, label='A', ax=ax, color='salmon')
ax.set(xlabel='Journey Length', ylabel='Density')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.title("Loved journeys")
plt.savefig('journey_length_loved.png', dpi=900, bbox_inches='tight')

ax = sns.distplot(b_bootstrap_short, label='B')
ax = sns.distplot(a_bootstrap_short, label='A', ax=ax, color='salmon')
ax.set(xlabel='Journey Length', ylabel='Density')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show();
```

We can also measure the uncertainty in the difference between the Page Variants' Journey Length by subtracting their posteriors.
```
# calculate the posterior for the difference between A's and B's YPA
ypa_diff = np.array(b_bootstrap) - np.array(a_bootstrap)
# get the hdi
ypa_diff_ci_low, ypa_diff_ci_hi = bb.highest_density_interval(ypa_diff)

# the mean of the posterior
ypa_diff.mean()

print('low ci:', ypa_diff_ci_low, '\nhigh ci:', ypa_diff_ci_hi)

ax = sns.distplot(ypa_diff)
ax.plot([ypa_diff_ci_low, ypa_diff_ci_hi], [0, 0], linewidth=10, c='k', marker='o', label='95% HDI')
ax.set(xlabel='Journey Length', ylabel='Density',
       title='The difference between B\'s and A\'s mean Journey Length')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show();
```

We can actually calculate the probability that B's mean Journey Length was greater than A's mean Journey Length by measuring the proportion of values greater than 0 in the above distribution.

```
# We count the number of values greater than 0 and divide by the total number
# of observations, which gives us the proportion of values in the distribution
# that are greater than 0; this could act a bit like a p-value
(ypa_diff > 0).sum() / ypa_diff.shape[0]

# We count the number of values less than 0 and divide by the total number
# of observations, which gives us the proportion of values in the distribution
# that are less than 0; this could act a bit like a p-value
(ypa_diff < 0).sum() / ypa_diff.shape[0]
```
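`bb.highest_density_interval`, used throughout this notebook, returns the narrowest interval containing a given mass of the posterior samples. A self-contained numpy sketch of the same idea (the real function's signature and defaults may differ):

```python
import numpy as np

def highest_density_interval(samples, alpha=0.05):
    """Narrowest interval covering (1 - alpha) of the samples."""
    sorted_samples = np.sort(np.asarray(samples))
    n = len(sorted_samples)
    n_in = int(np.floor((1 - alpha) * n))
    # Width of every candidate interval containing n_in consecutive sorted points
    widths = sorted_samples[n_in:] - sorted_samples[:n - n_in]
    best = np.argmin(widths)
    return sorted_samples[best], sorted_samples[best + n_in]

# For a standard normal posterior the 95% HDI should land near (-1.96, 1.96)
rng = np.random.RandomState(1337)
lo, hi = highest_density_interval(rng.normal(0, 1, size=10000))
```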
# The Acrobot (v-1) Problem

Acrobot is a 2-link pendulum with only the second joint actuated. Initially, both links point downwards. The goal is to swing the end-effector to a height at least the length of one link above the base. Both links can swing freely and can pass by each other, i.e., they don't collide when they have the same angle.

## States

The state consists of the sin() and cos() of the two rotational joint angles and the joint angular velocities: [cos(theta1) sin(theta1) cos(theta2) sin(theta2) thetaDot1 thetaDot2]. For the first link, an angle of 0 corresponds to the link pointing downwards. The angle of the second link is relative to the angle of the first link. An angle of 0 corresponds to having the same angle between the two links. A state of [1, 0, 1, 0, ..., ...] means that both links point downwards.

## Actions

The action is either applying +1, 0 or -1 torque on the joint between the two pendulum links.

FPS = 15

```
import gym
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)
import torch
from utils.DQN_model import DQN_CNN
from PIL import Image
import time
import pickle
import random
from itertools import count
from utils.Schedule import LinearSchedule, ExponentialSchedule
from utils.Agent import AcrobotAgent, preprocess_frame

def calc_moving_average(lst, window_size=10):
    '''
    This function calculates the moving average of `lst` over `window_size` samples.
    Parameters:
        lst: list of values to average (list)
        window_size: size over which to average (int)
    Returns:
        mean_arr: array with the averages (np.array)
    '''
    assert len(lst) >= window_size
    mean_arr = []
    for j in range(1, window_size):
        mean_arr.append(np.mean(lst[:j]))
    i = 0
    while i != (len(lst) - window_size + 1):
        mean_arr.append(np.mean(lst[i : i + window_size]))
        i += 1
    return np.array(mean_arr)

def plot_rewards(episode_rewards, window_size=10, title=''):
    '''
    This function plots the rewards vs. episodes and the mean rewards vs. episodes.
    The mean is taken over `window_size` episodes.

    Parameters:
        episode_rewards: list of all the rewards (list)
        window_size: size of the moving-average window (int)
        title: plot title (str)
    '''
    num_episodes = len(episode_rewards)
    mean_rewards = calc_moving_average(episode_rewards, window_size)
    plt.plot(list(range(num_episodes)), episode_rewards, label='rewards')
    plt.plot(list(range(num_episodes)), mean_rewards, label='mean_rewards')
    plt.title(title)
    plt.xlabel('Episode')
    plt.ylabel('Reward')
    plt.legend()
    plt.show()

# Play
def play_acrobot(env, agent, num_episodes=5):
    '''
    This function plays the Acrobot-v1 environment given an agent.

    Parameters:
        agent: the agent that holds the policy (AcrobotAgent)
        num_episodes: number of episodes to play
    '''
    if agent.obs_represent == 'frame_seq':
        print("Playing Acrobot-v1 with ", agent.name, "agent using Frame Sequence")
        start_time = time.time()
        for episode in range(num_episodes):
            episode_start_time = time.time()
            env.reset()
            last_obs = preprocess_frame(env, mode='atari', render=True)
            episode_reward = 0
            for t in count():
                ### Step the env and store the transition
                # Store latest observation in replay memory; last_idx can be used to store action, reward, done
                last_idx = agent.replay_buffer.store_frame(last_obs)
                # encode_recent_observation will take the latest observation
                # that you pushed into the buffer and compute the corresponding
                # input that should be given to a Q network by appending some
                # previous frames.
                recent_observation = agent.replay_buffer.encode_recent_observation()
                action = agent.predict_action(recent_observation)
                _, reward, done, _ = env.step(action)
                episode_reward += reward
                # Store other info in replay memory
                agent.replay_buffer.store_effect(last_idx, action, reward, done)
                if done:
                    print("Episode: ", episode, " Done, Reward: ", episode_reward,
                          " Episode Time: %.2f secs" % (time.time() - episode_start_time))
                    break
                last_obs = preprocess_frame(env, mode='atari', render=True)
        env.close()
    else:  # mode == 'frame diff'
        print("Playing Acrobot-v1 with ", agent.name, "agent using Frame Difference")
        start_time = time.time()
        for episode in range(num_episodes):
            print("### Episode ", episode + 1, " ###")
            episode_start_time = time.time()
            env.reset()
            last_obs = preprocess_frame(env, mode='control', render=True)
            current_obs = preprocess_frame(env, mode='control', render=True)
            state = current_obs - last_obs
            episode_reward = 0
            for t in count():
                action = agent.predict_action(state)
                _, reward, done, _ = env.step(action)
                episode_reward += reward
                if done:
                    print("Episode: ", episode + 1, " Done, Reward: ", episode_reward,
                          " Episode Time: %.2f secs" % (time.time() - episode_start_time))
                    break
                last_obs = current_obs
                current_obs = preprocess_frame(env, mode='control', render=True)
                state = current_obs - last_obs
        env.close()

exp_schedule = ExponentialSchedule(decay_rate=100)
lin_schedule = LinearSchedule(total_timesteps=1000)

gym.logger.set_level(40)
env = gym.make("Acrobot-v1")
agent = AcrobotAgent(env, name='frame_seq_rep',
                     frame_history_len=4,
                     exploration=lin_schedule,
                     steps_to_start_learn=10000,
                     target_update_freq=500,
                     learning_rate=0.00025,
                     clip_grads=True,
                     use_batch_norm=False)

mean_episode_reward = -float('nan')
best_mean_episode_reward = -float('inf')
last_obs = env.reset()
LOG_EVERY_N_STEPS = 5000
batch_size = 32  # 32
num_episodes = 100000

with open('./acrobot_agent_ckpt/frame_seq_training.status', 'rb') as fp:
    training_status = pickle.load(fp)
mean_episode_reward = 
training_status['mean_episode_reward'] best_mean_episode_reward = training_status['best_mean_episode_reward'] episode_durations = training_status['episode_durations'] episodes_rewards = training_status['episodes_rewards'] total_steps = training_status['total_steps'] # episode_durations = [] # episodes_rewards = [] # total_steps = 0 start_time = time.time() for episode in range(num_episodes): episode_start_time = time.time() env.reset() last_obs = preprocess_frame(env, mode='atari', render=True) episode_reward = 0 agent.episodes_seen += 1 for t in count(): agent.steps_count += 1 total_steps += 1 ### Step the env and store the transition # Store lastest observation in replay memory and last_idx can be used to store action, reward, done last_idx = agent.replay_buffer.store_frame(last_obs) # encode_recent_observation will take the latest observation # that you pushed into the buffer and compute the corresponding # input that should be given to a Q network by appending some # previous frames. recent_observation = agent.replay_buffer.encode_recent_observation() action = agent.select_greedy_action(recent_observation, use_episode=True) # Advance one step _ , reward, done, _ = env.step(action) episode_reward += reward agent.replay_buffer.store_effect(last_idx, action, reward, done) ### Perform experience replay and train the network. 
# Note that this is only done if the replay buffer contains enough samples # for us to learn something useful -- until then, the model will not be # initialized and random actions should be taken agent.learn(batch_size) ### Log progress and keep track of statistics if len(episodes_rewards) > 0: mean_episode_reward = np.mean(episodes_rewards[-100:]) if len(episodes_rewards) > 100: best_mean_episode_reward = max(best_mean_episode_reward, mean_episode_reward) if total_steps % LOG_EVERY_N_STEPS == 0 and total_steps > agent.steps_to_start_learn: print("Timestep %d" % (agent.steps_count,)) print("mean reward (100 episodes) %f" % mean_episode_reward) print("best mean reward %f" % best_mean_episode_reward) print("episodes %d" % len(episodes_rewards)) print("exploration value %f" % agent.epsilon) total_time = time.time() - start_time print("time since start %.2f seconds" % total_time) training_status = {} training_status['mean_episode_reward'] = mean_episode_reward training_status['best_mean_episode_reward'] = best_mean_episode_reward training_status['episode_durations'] = episode_durations training_status['episodes_rewards'] = episodes_rewards training_status['total_steps'] = total_steps with open('./acrobot_agent_ckpt/frame_seq_training.status', 'wb') as fp: pickle.dump(training_status, fp) # Resets the environment when reaching an episode boundary. 
if done: episode_durations.append(t + 1) episodes_rewards.append(episode_reward) print("Episode: ", agent.episodes_seen, " Done, Reward: ", episode_reward, " Step: ", agent.steps_count, " Episode Time: %.2f secs" % (time.time() - episode_start_time)) break last_obs = preprocess_frame(env, mode='atari', render=True) # print(last_obs) print("Training Complete!") env.close() play_acrobot(env, agent, num_episodes=5) plt.imshow(last_obs[:,:,0], cmap='gray') with open('./acrobot_agent_ckpt/frame_seq_training.status', 'rb') as fp: training_status = pickle.load(fp) episodes_rewards = training_status['episodes_rewards'] plt.rcParams['figure.figsize'] = (20, 10) plot_rewards(episodes_rewards, 100, title='Acrobot Frame Sequence') # Frame Differenece mean_episode_reward = -float('nan') best_mean_episode_reward = -float('inf') last_obs = env.reset() LOG_EVERY_N_STEPS = 1000 batch_size = 128 # 32 num_episodes = 10000 agent = AcrobotAgent(env, name='frame_diff', exploration=lin_schedule, steps_to_start_learn=2000, target_update_freq=500, learning_rate=0.003, clip_grads=True, use_batch_norm=True, obs_represent='frame_diff') with open('./acrobot_agent_ckpt/training.status', 'rb') as fp: training_status = pickle.load(fp) mean_episode_reward = training_status['mean_episode_reward'] best_mean_episode_reward = training_status['best_mean_episode_reward'] episode_durations = training_status['episode_durations'] episodes_rewards = training_status['episodes_rewards'] total_steps = training_status['total_steps'] # episode_durations = [] # episodes_rewards = [] # total_steps = 0 start_time = time.time() for episode in range(num_episodes): episode_start_time = time.time() env.reset() last_obs = preprocess_frame(env, mode='control', render=True) current_obs = preprocess_frame(env, mode='control', render=True) state = current_obs - last_obs episode_reward = 0 agent.episodes_seen += 1 # agent.epsilon = agent.explore_schedule.value(agent.episodes_seen) for t in count(): agent.steps_count += 1 
total_steps += 1 ### Step the env and store the transition # Store lastest observation in replay memory and last_idx can be used to store action, reward, done last_idx = agent.replay_buffer.store_frame(state) # encode_recent_observation will take the latest observation # that you pushed into the buffer and compute the corresponding # input that should be given to a Q network by appending some # previous frames. recent_observation = agent.replay_buffer.encode_recent_observation() action = agent.select_greedy_action(recent_observation, use_episode=True) # Advance one step _ , reward, done, _ = env.step(action) episode_reward += reward agent.replay_buffer.store_effect(last_idx, action, reward, done) ### Perform experience replay and train the network. # Note that this is only done if the replay buffer contains enough samples # for us to learn something useful -- until then, the model will not be # initialized and random actions should be taken agent.learn(batch_size) ### Log progress and keep track of statistics if len(episodes_rewards) > 0: mean_episode_reward = np.mean(episodes_rewards[-100:]) if len(episodes_rewards) > 100: best_mean_episode_reward = max(best_mean_episode_reward, mean_episode_reward) if total_steps % LOG_EVERY_N_STEPS == 0 and total_steps > agent.steps_to_start_learn: print("Timestep %d" % (agent.steps_count,)) print("mean reward (100 episodes) %f" % mean_episode_reward) print("best mean reward %f" % best_mean_episode_reward) print("episodes %d" % len(episodes_rewards)) print("exploration value %f" % agent.epsilon) total_time = time.time() - start_time print("time since start %.2f seconds" % total_time) training_status = {} training_status['mean_episode_reward'] = mean_episode_reward training_status['best_mean_episode_reward'] = best_mean_episode_reward training_status['episode_durations'] = episode_durations training_status['episodes_rewards'] = episodes_rewards training_status['total_steps'] = total_steps with 
open('./acrobot_agent_ckpt/frame_diff_training.status', 'wb') as fp: pickle.dump(training_status, fp) # Resets the environment when reaching an episode boundary. if done: episode_durations.append(t + 1) episodes_rewards.append(episode_reward) print("Episode: ", agent.episodes_seen, " Done, Reward: ", episode_reward, " Step: ", agent.steps_count, " Episode Time: %.2f secs" % (time.time() - episode_start_time)) break last_obs = current_obs current_obs = preprocess_frame(env, mode='control', render=True) state = current_obs - last_obs print("Training Complete!") env.close() agent.save_agent_state() env.close() plt.rcParams['figure.figsize'] = (15,15) env.reset() img = env.render(mode='rgb_array') env.close() img = np.reshape(img, [500, 500, 3]).astype(np.float32) img = img[:, :, 0] * 0.299 + img[:, :, 1] * 0.587 + img[:, :, 2] * 0.114 # img = img[:, :, 0] * 0.5 + img[:, :, 1] * 0.4 + img[:, :, 2] * 0.1 img = Image.fromarray(img) resized_screen = img.resize((84, 84), Image.BILINEAR) resized_screen = np.array(resized_screen) x_t_1 = np.reshape(resized_screen, [84, 84, 1]) x_t_1 = x_t.astype(np.uint8) env.step(0) img = env.render(mode='rgb_array') env.close() img = np.reshape(img, [500, 500, 3]).astype(np.float32) # img = img[:, :, 0] * 0.299 + img[:, :, 1] * 0.587 + img[:, :, 2] * 0.114 img = img[:, :, 0] * 0.5 + img[:, :, 1] * 0.4 + img[:, :, 2] * 0.1 img = Image.fromarray(img) resized_screen = img.resize((84, 84), Image.BILINEAR) resized_screen = np.array(resized_screen) x_t_1 = np.reshape(resized_screen, [84, 84, 1]) x_t_1 = x_t_1.astype(np.uint8) env.step(1) img = env.render(mode='rgb_array') env.close() img = np.reshape(img, [500, 500, 3]).astype(np.float32) # img = img[:, :, 0] * 0.299 + img[:, :, 1] * 0.587 + img[:, :, 2] * 0.114 img = img[:, :, 0] * 0.7 + img[:, :, 1] * 0.2 + img[:, :, 2] * 0.1 img = Image.fromarray(img) resized_screen = img.resize((84, 84), Image.BILINEAR) resized_screen = np.array(resized_screen) x_t_2 = np.reshape(resized_screen, [84, 84, 1]) 
x_t_2 = x_t_2.astype(np.uint8) env.step(0) img = env.render(mode='rgb_array') env.close() img = np.reshape(img, [500, 500, 3]).astype(np.float32) # img = img[:, :, 0] * 0.299 + img[:, :, 1] * 0.587 + img[:, :, 2] * 0.114 img = img[:, :, 0] * 0.5 + img[:, :, 1] * 0.4 + img[:, :, 2] * 0.1 img[img < 150] = 0 img[img > 230] = 255 img = Image.fromarray(img) resized_screen = img.resize((84, 84), Image.BILINEAR) resized_screen = np.array(resized_screen) x_t_3 = np.reshape(resized_screen, [84, 84, 1]) x_t_3 = x_t_3.astype(np.uint8) env.step(0) img = env.render(mode='rgb_array') env.close() img = np.reshape(img, [500, 500, 3]).astype(np.float32) # img = img[:, :, 0] * 0.299 + img[:, :, 1] * 0.587 + img[:, :, 2] * 0.114 img = img[:, :, 0] * 0.9 + img[:, :, 1] * 0.05 + img[:, :, 2] * 0.05 img = Image.fromarray(img) resized_screen = img.resize((84, 84), Image.BILINEAR) resized_screen = np.array(resized_screen) x_t_4 = np.reshape(resized_screen, [84, 84, 1]) x_t_4 = x_t_4.astype(np.uint8) plt.subplot(2,2,1) plt.imshow(np.uint8(x_t_1[:,:,0]),cmap='gray') # plt.imshow(np.uint8(x_t_1[:,:,0])) plt.subplot(2,2,2) plt.imshow(np.uint8(x_t_2[:,:,0]),cmap='gray') # plt.imshow(np.uint8(x_t_2[:,:,0])) plt.subplot(2,2,3) plt.imshow(np.uint8(x_t_3[:,:,0]),cmap='gray') # plt.imshow(np.uint8(x_t_3[:,:,0])) plt.subplot(2,2,4) plt.imshow(np.uint8(x_t_4[:,:,0]),cmap='gray') # plt.imshow(np.uint8(x_t_4[:,:,0])) env.close() with open('./acrobot_agent_ckpt/frame_diff_training.status', 'rb') as fp: training_status = pickle.load(fp) episodes_rewards = training_status['episodes_rewards'] plt.rcParams['figure.figsize'] = (20, 10) plot_rewards(episodes_rewards, 100, title='Acrobot Frame Difference') gym.logger.set_level(40) env = gym.make("Acrobot-v1") agent = AcrobotAgent(env, name='frame_diff', obs_represent='frame_diff', use_batch_norm=True) play_acrobot(env, agent, num_episodes=10) env.close() ```
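The `calc_moving_average` helper used above for the reward curves is the same simple-moving-average "low pass filter" described earlier. Here is a minimal standalone sketch of that behavior; the function is restated so the cell runs on its own (only numpy is assumed), and the input series is an illustrative example, not notebook data:

```python
import numpy as np

def calc_moving_average(lst, window_size):
    # The first window_size - 1 entries average over a growing prefix,
    # so the output has the same length as the input.
    assert len(lst) >= window_size
    mean_arr = [np.mean(lst[:j]) for j in range(1, window_size)]
    for i in range(len(lst) - window_size + 1):
        mean_arr.append(np.mean(lst[i:i + window_size]))
    return np.array(mean_arr)

# A flat series with one spike: the filter attenuates the spike,
# which is exactly why it is useful for smoothing noisy reward curves.
signal = np.array([0.0, 0.0, 10.0, 0.0, 0.0])
smoothed = calc_moving_average(signal, 3)
print(smoothed)
```

Note that the spike's peak value in `smoothed` is well below 10, while slow trends pass through unchanged.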
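The agent above takes its exploration rate from a schedule object (`LinearSchedule(total_timesteps=1000)`). The notebook's implementation is not shown, so the following is only a sketch of the usual linear-annealing idea; the `value(t)` method and the `eps_start`/`eps_end` endpoints are assumptions, not the notebook's actual API:

```python
class LinearSchedule:
    '''Linearly anneal epsilon from eps_start to eps_end over total_timesteps steps.'''
    def __init__(self, total_timesteps, eps_start=1.0, eps_end=0.05):
        self.total_timesteps = total_timesteps
        self.eps_start = eps_start
        self.eps_end = eps_end

    def value(self, t):
        # Fraction of the annealing period completed, capped at 1.0.
        frac = min(t / self.total_timesteps, 1.0)
        return self.eps_start + frac * (self.eps_end - self.eps_start)

sched = LinearSchedule(total_timesteps=1000)
print(sched.value(0), sched.value(500), sched.value(2000))
```

Early in training the agent acts almost entirely at random (epsilon near 1.0); after `total_timesteps` steps it mostly exploits the learned Q-values.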
```
import numpy as np
from numpy import ndarray
from typing import List

def assert_same_shape(array: ndarray, array_grad: ndarray):
    assert array.shape == array_grad.shape, \
        '''
        The two ndarrays should have the same shape;
        instead, the first ndarray's shape is {0}
        and the second ndarray's shape is {1}.
        '''.format(tuple(array.shape), tuple(array_grad.shape))
    return None
```

# The `Operation` and `ParamOperation` classes

```
class Operation(object):
    '''
    Base class for an operation in a neural network model.
    '''
    def __init__(self):
        pass

    def forward(self, input_: ndarray):
        '''
        Stores the input in the instance variable self.input_,
        then calls the self._output() function.
        '''
        self.input_ = input_
        self.output = self._output()
        return self.output

    def backward(self, output_grad: ndarray) -> ndarray:
        '''
        Calls the self._input_grad() function,
        checking that the shapes match first.
        '''
        assert_same_shape(self.output, output_grad)
        self.input_grad = self._input_grad(output_grad)
        assert_same_shape(self.input_, self.input_grad)
        return self.input_grad

    def _output(self) -> ndarray:
        '''
        Every concrete subclass of Operation must implement the _output method.
        '''
        raise NotImplementedError()

    def _input_grad(self, output_grad: ndarray) -> ndarray:
        '''
        Every concrete subclass of Operation must implement the _input_grad method.
        '''
        raise NotImplementedError()


class ParamOperation(Operation):
    '''
    An operation with parameters.
    '''
    def __init__(self, param: ndarray) -> ndarray:
        '''
        Constructor.
        '''
        super().__init__()
        self.param = param

    def backward(self, output_grad: ndarray) -> ndarray:
        '''
        Calls self._input_grad and self._param_grad,
        checking that the ndarray shapes match.
        '''
        assert_same_shape(self.output, output_grad)
        self.input_grad = self._input_grad(output_grad)
        self.param_grad = self._param_grad(output_grad)
        assert_same_shape(self.input_, self.input_grad)
        assert_same_shape(self.param, self.param_grad)
        return self.input_grad

    def _param_grad(self, output_grad: ndarray) -> ndarray:
        '''
        Every concrete subclass of ParamOperation must implement the _param_grad method.
        '''
        raise NotImplementedError()
```

## Concrete `Operation` classes

```
class WeightMultiply(ParamOperation):
    '''
    Weight-matrix multiplication for a neural network.
    '''
    def __init__(self, W: ndarray):
        '''
        Initialize with self.param = W.
        '''
        super().__init__(W)

    def _output(self) -> ndarray:
        '''
        Compute the output.
        '''
        return np.dot(self.input_, self.param)

    def _input_grad(self, output_grad: ndarray) -> ndarray:
        '''
        Compute the gradient with respect to the input.
        '''
        return np.dot(output_grad, np.transpose(self.param, (1, 0)))

    def _param_grad(self, output_grad: ndarray) -> ndarray:
        '''
        Compute the gradient with respect to the parameters.
        '''
        return np.dot(np.transpose(self.input_, (1, 0)), output_grad)


class BiasAdd(ParamOperation):
    '''
    Bias-addition operation.
    '''
    def __init__(self, B: ndarray):
        '''
        Initialize with self.param = B,
        checking the shape of the matrix first.
        '''
        assert B.shape[0] == 1
        super().__init__(B)

    def _output(self) -> ndarray:
        '''
        Compute the output.
        '''
        return self.input_ + self.param

    def _input_grad(self, output_grad: ndarray) -> ndarray:
        '''
        Compute the gradient with respect to the input.
        '''
        return np.ones_like(self.input_) * output_grad

    def _param_grad(self, output_grad: ndarray) -> ndarray:
        '''
        Compute the gradient with respect to the parameters.
        '''
        param_grad = np.ones_like(self.param) * output_grad
        return np.sum(param_grad, axis=0).reshape(1, param_grad.shape[1])


class Sigmoid(Operation):
    '''
    Sigmoid activation function.
    '''
    def __init__(self) -> None:
        '''Pass'''
        super().__init__()

    def _output(self) -> ndarray:
        '''
        Compute the output.
        '''
        return 1.0 / (1.0 + np.exp(-1.0 * self.input_))

    def _input_grad(self, output_grad: ndarray) -> ndarray:
        '''
        Compute the gradient with respect to the input.
        '''
        sigmoid_backward = self.output * (1.0 - self.output)
        input_grad = sigmoid_backward * output_grad
        return input_grad


class Linear(Operation):
    '''
    Identity activation function.
    '''
    def __init__(self) -> None:
        '''Run the base-class constructor.'''
        super().__init__()

    def _output(self) -> ndarray:
        '''Pass the input through unchanged.'''
        return self.input_

    def _input_grad(self, output_grad: ndarray) -> ndarray:
        '''Pass the gradient through unchanged.'''
        return output_grad
```

# The `Layer` and `Dense` classes

```
class Layer(object):
    '''
    A layer of neurons in a neural network model.
    '''
    def __init__(self, neurons: int):
        '''
        The number of neurons corresponds to the width of the layer.
        '''
        self.neurons = neurons
        self.first = True
        self.params: List[ndarray] = []
        self.param_grads: List[ndarray] = []
        self.operations: List[Operation] = []

    def _setup_layer(self, num_in: int) -> None:
        '''
        Every concrete subclass of Layer must implement the _setup_layer method.
        '''
        raise NotImplementedError()

    def forward(self, input_: ndarray) -> ndarray:
        '''
        Performs the forward pass by sending the input through each operation in order.
        '''
        if self.first:
            self._setup_layer(input_)
            self.first = False
        self.input_ = input_
        for operation in self.operations:
            input_ = operation.forward(input_)
        self.output = input_
        return self.output

    def backward(self, output_grad: ndarray) -> ndarray:
        '''
        Performs the backward pass by sending output_grad through each operation in reverse order.
        Checks the shapes before computing.
        '''
        assert_same_shape(self.output, output_grad)
        for operation in reversed(self.operations):
            output_grad = operation.backward(output_grad)
        input_grad = output_grad
        self._param_grads()
        return input_grad

    def _param_grads(self) -> ndarray:
        '''
        Extracts the param_grad values from each Operation object.
        '''
        self.param_grads = []
        for operation in self.operations:
            if issubclass(operation.__class__, ParamOperation):
                self.param_grads.append(operation.param_grad)

    def _params(self) -> ndarray:
        '''
        Extracts the params values from each Operation object.
        '''
        self.params = []
        for operation in self.operations:
            if issubclass(operation.__class__, ParamOperation):
                self.params.append(operation.param)


class Dense(Layer):
    '''
    A fully connected layer that inherits from Layer.
    '''
    def __init__(self, neurons: int,
                 activation: Operation = Sigmoid()):
        '''
        An activation function must be chosen at initialization.
        '''
        super().__init__(neurons)
        self.activation = activation

    def _setup_layer(self, input_: ndarray) -> None:
        '''
        Defines the operations of a fully connected layer.
        '''
        if self.seed:
            np.random.seed(self.seed)
        self.params = []
        # weights
        self.params.append(np.random.randn(input_.shape[1], self.neurons))
        # bias
        self.params.append(np.random.randn(1, self.neurons))
        self.operations = [WeightMultiply(self.params[0]),
                           BiasAdd(self.params[1]),
                           self.activation]
        return None
```

# The `Loss` and `MeanSquaredError` classes

```
class Loss(object):
    '''
    Computes the loss of a neural network model.
    '''
    def __init__(self):
        '''Run the base-class constructor.'''
        pass

    def forward(self, prediction: ndarray, target: ndarray) -> float:
        '''
        Computes the actual loss value.
        '''
        assert_same_shape(prediction, target)
        self.prediction = prediction
        self.target = target
        loss_value = self._output()
        return loss_value

    def backward(self) -> ndarray:
        '''
        Computes the gradient of the loss with respect to the input of the loss function.
        '''
        self.input_grad = self._input_grad()
        assert_same_shape(self.prediction, self.input_grad)
        return self.input_grad

    def _output(self) -> float:
        '''
        Every concrete subclass of Loss must implement this method.
        '''
        raise NotImplementedError()

    def _input_grad(self) -> ndarray:
        '''
        Every concrete subclass of Loss must implement this method.
        '''
        raise NotImplementedError()


class MeanSquaredError(Loss):
    def __init__(self) -> None:
        '''Pass'''
        super().__init__()

    def _output(self) -> float:
        '''
        Mean squared error loss, aggregated per observation.
        '''
        loss = (
            np.sum(np.power(self.prediction - self.target, 2)) /
            self.prediction.shape[0]
        )
        return loss

    def _input_grad(self) -> ndarray:
        '''
        Computes the gradient of the mean squared error loss with respect to the predictions.
        '''
        return 2.0 * (self.prediction - self.target) / self.prediction.shape[0]
```

# The `NeuralNetwork` class

```
class NeuralNetwork(object):
    '''
    The class for a neural network.
    '''
    def __init__(self, layers: List[Layer], loss: Loss, seed: int = 1) -> None:
        '''
        Defines the layers and the loss function of the network.
        '''
        self.layers = layers
        self.loss = loss
        self.seed = seed
        if seed:
            for layer in self.layers:
                setattr(layer, "seed", self.seed)

    def forward(self, x_batch: ndarray) -> ndarray:
        '''
        Passes the data forward through each layer in order (forward pass).
        '''
        x_out = x_batch
        for layer in self.layers:
            x_out = layer.forward(x_out)
        return x_out

    def backward(self, loss_grad: ndarray) -> None:
        '''
        Passes the gradient backward through each layer in reverse order (backward pass).
        '''
        grad = loss_grad
        for layer in reversed(self.layers):
            grad = layer.backward(grad)
        return None

    def train_batch(self, x_batch: ndarray, y_batch: ndarray) -> float:
        '''
        Runs the forward pass,
        computes the loss,
        and runs the backward pass.
        '''
        predictions = self.forward(x_batch)
        loss = self.loss.forward(predictions, y_batch)
        self.backward(self.loss.backward())
        return loss

    def params(self):
        '''
        Gets the parameters of the network.
        '''
        for layer in self.layers:
            yield from layer.params

    def param_grads(self):
        '''
        Gets the gradient of the loss with respect to each parameter of the network.
        '''
        for layer in self.layers:
            yield from layer.param_grads
```

# The `Optimizer` and `SGD` classes

```
class Optimizer(object):
    '''
    Abstract class providing optimization functionality for a neural network.
    '''
    def __init__(self, lr: float = 0.01):
        '''
        An initial learning rate must be set.
        '''
        self.lr = lr

    def step(self) -> None:
        '''
        Every concrete subclass of Optimizer must implement this method.
        '''
        pass


class SGD(Optimizer):
    '''
    Optimizer implementing stochastic gradient descent.
    '''
    def __init__(self, lr: float = 0.01) -> None:
        '''Pass'''
        super().__init__(lr)

    def step(self):
        '''
        Adjusts each parameter in the direction of its gradient,
        scaled by the learning rate.
        '''
        for (param, param_grad) in zip(self.net.params(),
                                       self.net.param_grads()):
            param -= self.lr * param_grad
```

# The `Trainer` class

```
from copy import deepcopy
from typing import Tuple

class Trainer(object):
    '''
    Trains a neural network model.
    '''
    def __init__(self, net: NeuralNetwork, optim: Optimizer) -> None:
        '''
        Training requires a NeuralNetwork and an Optimizer object.
        The NeuralNetwork is assigned as an instance variable of the Optimizer.
        '''
        self.net = net
        self.optim = optim
        self.best_loss = 1e9
        setattr(self.optim, 'net', self.net)

    def generate_batches(self, X: ndarray, y: ndarray, size: int = 32) -> Tuple[ndarray]:
        '''
        Generates batches for training.
        '''
        assert X.shape[0] == y.shape[0], \
            '''
            Features and targets must have the same number of rows;
            features have {0} rows and targets have {1} rows.
            '''.format(X.shape[0], y.shape[0])
        N = X.shape[0]
        for ii in range(0, N, size):
            X_batch, y_batch = X[ii:ii+size], y[ii:ii+size]
            yield X_batch, y_batch

    def fit(self, X_train: ndarray, y_train: ndarray,
            X_test: ndarray, y_test: ndarray,
            epochs: int = 100,
            eval_every: int = 10,
            batch_size: int = 32,
            seed: int = 1,
            restart: bool = True) -> None:
        '''
        Fits the network on the training data for a number of epochs.
        Every eval_every epochs, evaluates the network's predictive
        performance on the test data.
        '''
        np.random.seed(seed)
        if restart:
            for layer in self.net.layers:
                layer.first = True
            self.best_loss = 1e9
        for e in range(epochs):
            if (e+1) % eval_every == 0:
                # early stopping
                last_model = deepcopy(self.net)
            X_train, y_train = permute_data(X_train, y_train)
            batch_generator = self.generate_batches(X_train, y_train, batch_size)
            for ii, (X_batch, y_batch) in enumerate(batch_generator):
                self.net.train_batch(X_batch, y_batch)
                self.optim.step()
            if (e+1) % eval_every == 0:
                test_preds = self.net.forward(X_test)
                loss = self.net.loss.forward(test_preds, y_test)
                if loss < self.best_loss:
                    print(f"Validation loss after {e+1} epochs: {loss:.3f}")
                    self.best_loss = loss
                else:
                    print(f"""Loss increased after epoch {e+1}; the last
                          recorded loss of {self.best_loss:.3f} was computed from the
                          model trained through epoch {e+1-eval_every}.""")
                    self.net = last_model
                    # ensure self.optim keeps updating self.net
                    setattr(self.optim, 'net', self.net)
                    break
```

#### Evaluation metrics

```
def mae(y_true: ndarray, y_pred: ndarray):
    '''
    Computes the mean absolute error of a neural network model.
    '''
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true: ndarray, y_pred: ndarray):
    '''
    Computes the root mean squared error of a neural network model.
    '''
    return np.sqrt(np.mean(np.power(y_true - y_pred, 2)))

def eval_regression_model(model: NeuralNetwork,
                          X_test: ndarray,
                          y_test: ndarray):
    '''
    Compute mae and rmse for a neural network.
    '''
    preds = model.forward(X_test)
    preds = preds.reshape(-1, 1)
    print("Mean absolute error: {:.2f}".format(mae(preds, y_test)))
    print()
    print("Root mean squared error: {:.2f}".format(rmse(preds, y_test)))

lr = NeuralNetwork(
    layers=[Dense(neurons=1, activation=Linear())],
    loss=MeanSquaredError(),
    seed=20190501
)

nn = NeuralNetwork(
    layers=[Dense(neurons=13, activation=Sigmoid()),
            Dense(neurons=1, activation=Linear())],
    loss=MeanSquaredError(),
    seed=20190501
)

dl = NeuralNetwork(
    layers=[Dense(neurons=13, activation=Sigmoid()),
            Dense(neurons=13, activation=Sigmoid()),
            Dense(neurons=1, activation=Linear())],
    loss=MeanSquaredError(),
    seed=20190501
)
```

### Loading the data and splitting it into train and test sets

```
from sklearn.datasets import load_boston

boston = load_boston()
data = boston.data
target = boston.target
features = boston.feature_names

# scale the data
from sklearn.preprocessing import StandardScaler
s = StandardScaler()
data = s.fit_transform(data)

def to_2d_np(a: ndarray, type: str = "col") -> ndarray:
    '''
    Turns a 1D tensor into 2D.
    '''
    assert a.ndim == 1, \
        "Input tensor must be 1-dimensional"
    if type == "col":
        return a.reshape(-1, 1)
    elif type == "row":
        return a.reshape(1, -1)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data, target,
                                                    test_size=0.3,
                                                    random_state=80718)
# turn the targets into 2D arrays
y_train, y_test = to_2d_np(y_train), to_2d_np(y_test)
```

### Training the three models

```
# helper function
def permute_data(X, y):
    perm = np.random.permutation(X.shape[0])
    return X[perm], y[perm]

trainer = Trainer(lr, SGD(lr=0.01))
trainer.fit(X_train, y_train, X_test, y_test,
            epochs=50,
            eval_every=10,
            seed=20190501)
print()
eval_regression_model(lr, X_test, y_test)

trainer = Trainer(nn, SGD(lr=0.01))
trainer.fit(X_train, y_train, X_test, y_test,
            epochs=50,
            eval_every=10,
            seed=20190501)
print()
eval_regression_model(nn, X_test, y_test)

trainer = Trainer(dl, SGD(lr=0.01))
trainer.fit(X_train, y_train, X_test, y_test,
            epochs=50,
            eval_every=10,
            seed=20190501)
print()
eval_regression_model(dl, X_test, y_test)
```
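The analytic gradient returned by `MeanSquaredError._input_grad`, `2(p - y)/n`, can be sanity-checked against a central finite difference. This standalone sketch restates the loss in plain numpy (the helper names `mse`/`mse_grad` are illustrative, not part of the framework above):

```python
import numpy as np

def mse(pred, target):
    # Same aggregation as MeanSquaredError._output: sum of squares over batch size.
    return np.sum((pred - target) ** 2) / pred.shape[0]

def mse_grad(pred, target):
    # Same formula as MeanSquaredError._input_grad.
    return 2.0 * (pred - target) / pred.shape[0]

rng = np.random.RandomState(0)
pred = rng.randn(4, 1)
target = rng.randn(4, 1)

analytic = mse_grad(pred, target)

# Central finite difference: perturb each prediction by +/- eps.
numeric = np.zeros_like(pred)
eps = 1e-6
for i in range(pred.shape[0]):
    p_hi, p_lo = pred.copy(), pred.copy()
    p_hi[i, 0] += eps
    p_lo[i, 0] -= eps
    numeric[i, 0] = (mse(p_hi, target) - mse(p_lo, target)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # should be tiny
```

The same finite-difference trick works for `_param_grad` implementations such as `WeightMultiply` and `BiasAdd`.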
```
from __future__ import print_function, division
from keras.datasets import fashion_mnist
import pandas as pd
import numpy as np
from scipy.interpolate import interp1d
import os

from keras.layers import Input, Dense, Reshape, Flatten, Dropout, multiply
from keras.layers import BatchNormalization, Activation, Embedding, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.layers import MaxPooling2D, concatenate
from keras.models import Sequential, Model
from keras.optimizers import Adam
import keras.backend as K
import matplotlib.pyplot as plt

name = 'fashion_BIGAN'
if not os.path.exists("saved_model/" + name):
    os.mkdir("saved_model/" + name)
if not os.path.exists("images/" + name):
    os.mkdir("images/" + name)

# Download the dataset
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
print('X_train', X_train.shape, 'y_train', y_train.shape)
print('X_test', X_test.shape, 'y_test', y_test.shape)

input_classes = pd.Series(y_train).nunique()
input_classes

# Training labels are evenly distributed
Train_label_count = pd.Series(y_train).value_counts()
Train_label_count

# Test labels are evenly distributed
Test_label_count = pd.Series(y_test).value_counts()
Test_label_count

# label dictionary from the documentation
label_dict = {0: 'tshirt',
              1: 'trouser',
              2: 'pullover',
              3: 'dress',
              4: 'coat',
              5: 'sandal',
              6: 'shirt',
              7: 'sneaker',
              8: 'bag',
              9: 'boot'}

X_train[1].shape

# input dimensions
input_rows = X_train[1][0]
input_cols = X_train[1][1]
input_channels = 1

# plot images from the train dataset
for i in range(10):
    # define subplot
    a = plt.subplot(2, 5, 1 + i)
    # turn off axis
    plt.axis('off')
    # plot raw pixel data
    plt.imshow(X_train[i], cmap='gray_r')
    a.set_title(label_dict[y_train[i]])

# plot images from the test dataset
for i in range(10):
    # define subplot
    a = plt.subplot(2, 5, 1 + i)
    # turn off axis
    plt.axis('off')
    # plot raw pixel data
    plt.imshow(X_test[i], cmap='gray_r')
    a.set_title(label_dict[y_test[i]])


class BIGAN():
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100
        optimizer = Adam(0.0002, 0.5)

        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss=['binary_crossentropy'],
                                   optimizer=optimizer,
                                   metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # Build the encoder
        self.encoder = self.build_encoder()

        # The part of the bigan that trains the discriminator and encoder
        self.discriminator.trainable = False

        # Generate image from sampled noise
        z = Input(shape=(self.latent_dim, ))
        img_ = self.generator(z)

        # Encode image
        img = Input(shape=self.img_shape)
        z_ = self.encoder(img)

        # Latent -> img is fake, and img -> latent is valid
        fake = self.discriminator([z, img_])
        valid = self.discriminator([z_, img])

        # Set up and compile the combined model
        # Trains generator to fool the discriminator
        self.bigan_generator = Model([z, img], [fake, valid])
        self.bigan_generator.compile(loss=['binary_crossentropy', 'binary_crossentropy'],
                                     optimizer=optimizer)

    def build_encoder(self):
        model = Sequential()
        model.add(Flatten(input_shape=self.img_shape))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(self.latent_dim))
        model.summary()
        img = Input(shape=self.img_shape)
        z = model(img)
        return Model(img, z)

    def build_generator(self):
        model = Sequential()
        model.add(Dense(512, input_dim=self.latent_dim))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(np.prod(self.img_shape), activation='tanh'))
        model.add(Reshape(self.img_shape))
        model.summary()
        z = Input(shape=(self.latent_dim,))
        gen_img = model(z)
        return Model(z, gen_img)

    def build_discriminator(self):
        z = Input(shape=(self.latent_dim, ))
        img = Input(shape=self.img_shape)
        d_in = concatenate([z, Flatten()(img)])
        model = Dense(1024)(d_in)
        model = LeakyReLU(alpha=0.2)(model)
        model = Dropout(0.5)(model)
        model = Dense(1024)(model)
        model = LeakyReLU(alpha=0.2)(model)
        model = Dropout(0.5)(model)
        model = Dense(1024)(model)
        model = LeakyReLU(alpha=0.2)(model)
        model = Dropout(0.5)(model)
        validity = Dense(1, activation="sigmoid")(model)
        return Model([z, img], validity)

    def train(self, epochs, batch_size=128, sample_interval=50):
        # Load the dataset
        (X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()

        # Rescale -1 to 1
        X_train = (X_train.astype(np.float32) - 127.5) / 127.5
        X_train = np.expand_dims(X_train, axis=3)

        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))

        for epoch in range(epochs):
            # ---------------------
            #  Train Discriminator
            # ---------------------

            # Sample noise and generate img
            z = np.random.normal(size=(batch_size, self.latent_dim))
            imgs_ = self.generator.predict(z)

            # Select a random batch of images and encode
            idx = np.random.randint(0, X_train.shape[0], batch_size)
            imgs = X_train[idx]
            z_ = self.encoder.predict(imgs)

            # Train the discriminator (img -> z is valid, z -> img is fake)
            d_loss_real = self.discriminator.train_on_batch([z_, imgs], valid)
            d_loss_fake = self.discriminator.train_on_batch([z, imgs_], fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

            # ---------------------
            #  Train Generator
            # ---------------------

            # Train the generator (z -> img is valid and img -> z is invalid)
            g_loss = self.bigan_generator.train_on_batch([z, imgs], [valid, fake])

            # Plot the progress
            # print("%d [D loss: %f, acc: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100*d_loss[1], g_loss[0]))

            # If at save interval => save generated image samples
            if epoch % sample_interval == 0:
                self.sample_interval(epoch)
                self.save_model()
                print("%d [D loss: %f, acc: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss[0]))

    def sample_interval(self, epoch):
        r, c = 5, 5
        z = np.random.normal(size=(25, self.latent_dim))
        gen_imgs = self.generator.predict(z)
        gen_imgs = 0.5 * gen_imgs + 0.5
        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        fig.savefig("images/" + name + "/_%d.png" % epoch)
        plt.imread("images/" + name + "/_%d.png" % epoch)
        plt.show()
        plt.close()

    def save_model(self):
        def save(model, model_name):
            model_path = "saved_model/" + name + "/%s.json" % model_name
            weights_path = "saved_model/" + name + "/%s_weights.hdf5" % model_name
            options = {"file_arch": model_path,
                       "file_weight": weights_path}
            json_string = model.to_json()
            open(options['file_arch'], 'w').write(json_string)
            model.save_weights(options['file_weight'])

        save(self.generator, "bigan_generator")
        save(self.discriminator, "bigan_discriminator")
        save(self.encoder, "bigan_encoder")


bigan = BIGAN()
bigan.train(epochs=10, batch_size=128, sample_interval=1)
```
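Two small affine rescalings carry this notebook's pixel bookkeeping: `train` maps uint8 pixels into the tanh output range with `(x - 127.5) / 127.5`, and `sample_interval` maps generated samples back for display with `0.5 * g + 0.5`. A quick numeric check of those two mappings (plain numpy, illustrative values only):

```python
import numpy as np

# Forward rescale used before training: uint8 pixels in [0, 255] -> [-1, 1],
# matching the generator's tanh output range.
pixels = np.array([0.0, 127.5, 255.0], dtype=np.float32)
scaled = (pixels - 127.5) / 127.5
print(scaled)  # [-1.  0.  1.]

# Inverse rescale used before plotting generated samples: [-1, 1] -> [0, 1].
recovered = 0.5 * scaled + 0.5
print(recovered)  # [0.  0.5 1. ]
```

Keeping the two mappings consistent matters: if the discriminator sees real images in [-1, 1] but generated images in [0, 1], it can separate them by range alone.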
```
# from google.colab import drive
# drive.mount('/content/drive')
# path = "/content/drive/MyDrive/Research/cods_comad_plots/sdc_task/mnist/"

m = 100
desired_num = 100

import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from matplotlib import pyplot as plt
import copy

# Ignore warnings
import warnings
warnings.filterwarnings("ignore")

torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5), (0.5))])

trainset = torchvision.datasets.MNIST(root='./data', train=True,
                                      download=True, transform=transform)
testset = torchvision.datasets.MNIST(root='./data', train=False,
                                     download=True, transform=transform)

classes = ('zero', 'one', 'two', 'three', 'four',
           'five', 'six', 'seven', 'eight', 'nine')
foreground_classes = {'zero', 'one'}
fg_used = '01'
fg1, fg2 = 0, 1

all_classes = {'zero', 'one', 'two', 'three', 'four',
               'five', 'six', 'seven', 'eight', 'nine'}
background_classes = all_classes - foreground_classes
background_classes

trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=False)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False)

dataiter = iter(trainloader)
background_data = []
background_label = []
foreground_data = []
foreground_label = []
batch_size = 10

for i in range(6000):
    images, labels = dataiter.next()
    for j in range(batch_size):
        if (classes[labels[j]] in background_classes):
            img = images[j].tolist()
            background_data.append(img)
            background_label.append(labels[j])
        else:
            img = images[j].tolist()
            foreground_data.append(img)
            foreground_label.append(labels[j])

foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)

def imshow(img):
    img = img / 2 + 0.5  # unnormalize
    npimg = img  # .numpy()
    plt.imshow(np.reshape(npimg, (28, 28)))
    plt.show()

foreground_data.shape, foreground_label.shape, background_data.shape, background_label.shape

val, idx = torch.max(background_data, dim=0, keepdim=True)  # note: the torch keyword is `keepdim`, not numpy's `keepdims`
# torch.abs(val)
mean_bg = torch.mean(background_data, dim=0, keepdim=True)
std_bg, _ = torch.max(background_data, dim=0, keepdim=True)
mean_bg.shape, std_bg.shape

foreground_data = (foreground_data - mean_bg) / std_bg
background_data = (background_data - mean_bg) / torch.abs(std_bg)
foreground_data.shape, foreground_label.shape, background_data.shape, background_label.shape

torch.sum(torch.isnan(foreground_data)), torch.sum(torch.isnan(background_data))

imshow(foreground_data[0])
imshow(background_data[0])
```

## generating CIN train and test data

```
np.random.seed(0)
bg_idx = np.random.randint(0, 47335, m - 1)
fg_idx = np.random.randint(0, 12665)
bg_idx, fg_idx

for i in background_data[bg_idx]:
    imshow(i)

imshow(torch.sum(background_data[bg_idx], axis=0))
imshow(foreground_data[fg_idx])

tr_data = (torch.sum(background_data[bg_idx], axis=0) + foreground_data[fg_idx]) / m
tr_data.shape
imshow(tr_data)
foreground_label[fg_idx]

train_images = []  # list of mosaic images; each is the average of m-1 background images and 1 foreground image
train_label = []   # label of mosaic image = foreground class present in that mosaic
for i in range(desired_num):
    np.random.seed(i)
    bg_idx = np.random.randint(0, 47335, m - 1)
    fg_idx = np.random.randint(0, 12665)
    tr_data = (torch.sum(background_data[bg_idx], axis=0) + foreground_data[fg_idx]) / m
    label = (foreground_label[fg_idx].item())
    train_images.append(tr_data)
    train_label.append(label)
train_images = torch.stack(train_images)
train_images.shape, len(train_label)

imshow(train_images[0])

test_images = []  # test images contain only the (scaled) foreground image
test_label = []   # label = foreground class
for i in range(10000):
    np.random.seed(i)
    fg_idx = np.random.randint(0, 12665)
    tr_data = (foreground_data[fg_idx]) / m
    label = (foreground_label[fg_idx].item())
    test_images.append(tr_data)
    test_label.append(label)
test_images = torch.stack(test_images)
test_images.shape, len(test_label)

imshow(test_images[0])

torch.sum(torch.isnan(train_images)), torch.sum(torch.isnan(test_images))
np.unique(train_label), np.unique(test_label)
```

## creating dataloader

```
class CIN_Dataset(Dataset):
    """CIN_Dataset dataset."""

    def __init__(self, list_of_images, labels):
        """
        Args:
            list_of_images: tensor of images.
            labels: list of labels corresponding to the images.
        """
        self.image = list_of_images
        self.label = labels

    def __len__(self):
        return len(self.label)

    def __getitem__(self, idx):
        return self.image[idx], self.label[idx]

batch = 250
train_data = CIN_Dataset(train_images, train_label)
train_loader = DataLoader(train_data, batch_size=batch, shuffle=True)
test_data = CIN_Dataset(test_images, test_label)
test_loader = DataLoader(test_data, batch_size=batch, shuffle=False)

train_loader.dataset.image.shape, test_loader.dataset.image.shape
```

## model

```
class Classification(nn.Module):
    def __init__(self):
        super(Classification, self).__init__()
        self.fc1 = nn.Linear(28*28, 50)
        self.fc2 = nn.Linear(50, 2)
        torch.nn.init.xavier_normal_(self.fc1.weight)
        torch.nn.init.zeros_(self.fc1.bias)
        torch.nn.init.xavier_normal_(self.fc2.weight)
        torch.nn.init.zeros_(self.fc2.bias)

    def forward(self, x):
        x = x.view(-1, 28*28)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x
```

## training

```
torch.manual_seed(12)
classify = Classification().double()
classify = classify.to("cuda")

import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer_classify = optim.Adam(classify.parameters(), lr=0.001)  # , momentum=0.9)

correct = 0
total
= 0 count = 0 flag = 1 with torch.no_grad(): for data in train_loader: inputs, labels = data inputs = inputs.double() inputs, labels = inputs.to("cuda"),labels.to("cuda") outputs = classify(inputs) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the %d train images: %f %%' % ( desired_num , 100 * correct / total)) print("total correct", correct) print("total train set images", total) correct = 0 total = 0 count = 0 flag = 1 with torch.no_grad(): for data in test_loader: inputs, labels = data inputs = inputs.double() inputs, labels = inputs.to("cuda"),labels.to("cuda") outputs = classify(inputs) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the %d test images: %f %%' % ( 10000 , 100 * correct / total)) print("total correct", correct) print("total train set images", total) nos_epochs = 200 tr_loss = [] for epoch in range(nos_epochs): # loop over the dataset multiple times epoch_loss = [] cnt=0 iteration = desired_num // batch running_loss = 0 #training data set for i, data in enumerate(train_loader): inputs, labels = data inputs = inputs.double() inputs, labels = inputs.to("cuda"),labels.to("cuda") inputs = inputs.double() # zero the parameter gradients optimizer_classify.zero_grad() outputs = classify(inputs) _, predicted = torch.max(outputs.data, 1) # print(outputs) # print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1)) loss = criterion(outputs, labels) loss.backward() optimizer_classify.step() running_loss += loss.item() mini = 1 if cnt % mini == mini-1: # print every 40 mini-batches # print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini)) epoch_loss.append(running_loss/mini) running_loss = 0.0 cnt=cnt+1 tr_loss.append(np.mean(epoch_loss)) if(np.mean(epoch_loss) <= 0.001): break; else: print('[Epoch : %d] loss: %.3f' %(epoch + 1, 
np.mean(epoch_loss) )) print('Finished Training') plt.plot(tr_loss) correct = 0 total = 0 count = 0 flag = 1 with torch.no_grad(): for data in train_loader: inputs, labels = data inputs = inputs.double() inputs, labels = inputs.to("cuda"),labels.to("cuda") outputs = classify(inputs) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the %d train images: %f %%' % ( desired_num , 100 * correct / total)) print("total correct", correct) print("total train set images", total) correct = 0 total = 0 count = 0 flag = 1 with torch.no_grad(): for data in test_loader: inputs, labels = data inputs = inputs.double() inputs, labels = inputs.to("cuda"),labels.to("cuda") outputs = classify(inputs) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the %d train images: %f %%' % ( 10000 , 100 * correct / total)) print("total correct", correct) print("total test set images", total) ```
# py12box model usage

This notebook shows how to set up and run the AGAGE 12-box model.

## Model schematic

The model uses advection and diffusion parameters to mix gases between boxes. Box indices start at the northern-most box and are as shown in the following schematic:

<img src="box_model_schematic.png" alt="Box model schematic" style="display:block;margin-left:auto;margin-right:auto;width:20%"/>

## Model inputs

We will be using some synthetic inputs for CFC-11. Input files are in:

```data/example/CFC-11```

The location of this folder will depend on where you've installed py12box and your system. Perhaps the easiest place to view the contents is [in the repository](https://github.com/mrghg/py12box/tree/develop/py12box/data/example/CFC-11).

In this folder, you will see two files:

```CFC-11_emissions.csv```

```CFC-11_initial_conditions.csv```

As the names suggest, these contain the emissions and initial conditions.

### Emissions

The emissions file has five columns: ```year, box_1, box_2, box_3, box_4```. The number of rows in this file determines the length of the box model simulation. The ```year``` column should contain a decimal date (e.g. 2000.5 for ~June 2000), and can be at monthly or annual resolution. The other columns specify the emissions in Gg/yr in each surface box.

### Initial conditions

The initial conditions file can be used to specify the mole fraction in pmol/mol (~ppt) in each of the 12 boxes.

## How to run

Firstly, import the ```Model``` class. This class contains all the input variables (emissions, initial conditions, etc.) and the run functions. We also import the ```get_data``` helper function, only needed for this tutorial, to point to the input data files.

```
# Import from this package
from py12box.model import Model
from py12box import get_data

# Import matplotlib for some plots
import matplotlib.pyplot as plt
```

The ```Model``` class takes two arguments, ```species``` and ```project_directory```.
The latter is the location of the input files, here just redirecting to the "examples" folder. The initialisation step may take a few seconds, mainly to compile the model. ``` # Initialise the model mod = Model("CFC-11", get_data("example/CFC-11")) ``` Assuming this has compiled correctly, you can now check the model inputs by accessing elements of the model class. E.g. to see the emissions: ``` mod.emissions ``` In this case, the emissions should be a 4 x 12*n_years numpy array. If annual emissions were specified in the inputs, the annual mean emissions are repeated each month. We can now run the model using: ``` # Run model mod.run() ``` The primary outputs that you'll be interested in are ```mf``` for the mole fraction (pmol/mol) in each of the 12 boxes at each timestep. Let's plot this up: ``` plt.plot(mod.time, mod.mf[:, 0]) plt.plot(mod.time, mod.mf[:, 3]) plt.ylabel("%s (pmol mol$^{-1}$)" % mod.species) plt.xlabel("Year") plt.show() ``` We can also view other outputs such as the burden and loss. Losses are contained in a dictionary, with keys: - ```OH``` (tropospheric OH losses) - ```Cl``` (losses via tropospheric chlorine) - ```other``` (all other first order losses) For CFC-11, the losses are primarily in the stratosphere, so are contained in ```other```: ``` plt.plot(mod.emissions.sum(axis = 1).cumsum()) plt.plot(mod.burden.sum(axis = 1)) plt.plot(mod.losses["other"].sum(axis = 1).cumsum()) ``` Another useful output is the lifetime. This is broken down in a variety of ways. Here we'll plot the global lifetime: ``` plt.plot(mod.instantaneous_lifetimes["global_total"]) plt.ylabel("Global instantaneous lifetime (years)") ``` ## Setting up your own model run To create your own project, create a project folder (can be anywhere on your filesystem). 
The folder must contain two files: ```<species>_emissions.csv``` ```<species>_initial_conditions.csv``` To point to the new project, py12box will expect a pathlib.Path object, so make sure you import this first: ``` from pathlib import Path new_model = Model("<SPECIES>", Path("path/to/project/folder")) ``` Once set up, you can run the model using: ``` new_model.run() ``` Note that you can modify any of the model inputs in memory by modifying the model class. E.g. to see what happens when you double the emissions: ``` new_model.emissions *= 2. new_model.run() ``` ## Changing lifetimes If no user-defined lifetimes are passed to the model, it will use the values in ```data/inputs/species_info.csv``` However, you can start the model up with non-standard lifetimes using the following arguments to the ```Model``` class (all in years): ```lifetime_strat```: stratospheric lifetime ```lifetime_ocean```: lifetime with respect to ocean uptake ```lifetime_trop```: non-OH losses in the troposphere e.g.: ``` new_model = Model("<SPECIES>", Path("path/to/project/folder"), lifetime_strat=100.) ``` To change the tropospheric OH lifetime, you need to modify the ```oh_a``` or ```oh_er``` attributes of the ```Model``` class. To re-tune the lifetime of the model in-memory, you can use the ```tune_lifetime``` method of the ```Model``` class: ``` new_model.tune_lifetime(lifetime_strat=50., lifetime_ocean=1e12, lifetime_trop=1e12) ```
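As a sketch of the emissions format described above, the following writes a minimal annual-resolution file with the stdlib `csv` module. The constant per-box values are placeholders, and the column names are assumptions taken from the description here rather than from the py12box source:

```python
import csv

def write_emissions_csv(path, years, gg_per_yr=(200.0, 100.0, 30.0, 10.0)):
    """Write an annual-resolution emissions file: a decimal-year column
    followed by emissions (Gg/yr) for each of the four surface boxes."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["year", "box_1", "box_2", "box_3", "box_4"])
        for y in years:
            # Mid-year decimal date, constant emissions in every box
            writer.writerow([y + 0.5, *gg_per_yr])

write_emissions_csv("CFC-11_emissions.csv", range(1980, 1990))
```

The number of rows written controls the simulation length, per the emissions-file description above.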
# Windows Metadata Structure and Value Issues

This notebook shows a few examples of the variance that occurs and encumbers parsing Windows metadata extracted and serialised via `Get-EventMetadata.ps1` into the file `.\Extracted\EventMetadata.json.zip`.

Below is the number of records in my sample metadata extract.

```
import os, zipfile, json, pandas as pd

if 'Windows Event Metadata' not in os.getcwd():
    os.chdir('Windows Event Metadata')

json_import = json.load(zipfile.ZipFile('./Extracted/EventMetadata.json.zip', 'r').open('EventMetadata.json'))
df = pd.json_normalize(json_import)
n_records = len(df)
n_records
```

## Null vs empty lists for Keywords, Tasks, Opcodes and Levels

It's very common for some of the provider or message structure to go unused, e.g. Keywords. How these unused or undefined values are handled is highly inconsistent. Windows provider metadata has at least 3 variations for undefined metadata:

- Null value
- Empty list
- List which may contain a null value

## Null values

Keyword nodes for Providers can have null values or empty lists. As an example, the Keyword metadata for 'Microsoft-Windows-EtwCollector' is serialised as:

```json
{
    "Name": "Microsoft-Windows-EtwCollector",
    "Id": "9e5f9046-43c6-4f62-ba13-7b19896253ff",
    "MessageFilePath": "C:\\WINDOWS\\system32\\ieetwcollectorres.dll",
    "ResourceFilePath": "C:\\WINDOWS\\system32\\ieetwcollectorres.dll",
    "ParameterFilePath": null,
    "HelpLink": null,
    "DisplayName": null,
    "LogLinks": [],
    "Levels": null,
    "Opcodes": null,
    "Keywords": null,
    "Tasks": null,
    "Events": null,
    "ProviderName": "Microsoft-Windows-EtwCollector"
}
```

The summation results below, listed per column label, show that a handful of Providers didn't use lists for Keywords, Tasks, and Opcodes, but instead were simply Null. E.g. 21 Providers had Null for Keywords.

```
df.isnull().sum()
```

## Empty lists

For providers, quite often empty lists indicate no keywords are defined. E.g.
note the `"Keywords": []` for the Powershell provider (JSON object truncated for brevity). ```json { "Name": "PowerShell", "Id": "00000000-0000-0000-0000-000000000000", "MessageFilePath": "C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\pwrshmsg.dll", "ResourceFilePath": null, "ParameterFilePath": null, "HelpLink": "https://go.microsoft.com/fwlink/events.asp?CoName=Microsoft%20Corporation&ProdName=Microsoft%c2%ae%20Windows%c2%ae%20Operating%20System&ProdVer=10.0.18362.1&FileName=pwrshmsg.dll&FileVer=10.0.18362.1", "DisplayName": null, "LogLinks": [ { "LogName": "Windows PowerShell", "IsImported": true, "DisplayName": null } ], "Levels": [], "Opcodes": [], "Keywords": [], "Tasks": [ { "Name": "Engine Health\r\n", "Value": 1, "DisplayName": "Engine Health", "EventGuid": "00000000-0000-0000-0000-000000000000" }, { "Name": "Command Health\r\n", "Value": 2, "DisplayName": "Command Health", "EventGuid": "00000000-0000-0000-0000-000000000000" } ] } ``` E.g. Overall there were 684 empty list values in Keywords. ``` empty_counts = {} for c in ['Keywords', 'Tasks', 'Opcodes', 'Levels']: empty_counts.update( {c: len(df[df[c].apply(lambda i: isinstance(i, list) and len(i) == 0)])} ) empty_counts ``` ## Null values in Keyword lists ### Event Keywords For the Event metadata level, keywords can be defined as an empty list, but more often, they are serialised as a list usually with a null item regardless of how many other valid keywords are defined. Keywords at the Provider metadata level don't seem to have nullfied name values (both 'DisplayName' and 'Name'). ``` df_e = pd.json_normalize(json_import, record_path='Events', meta_prefix='Provider.', meta=['Id', 'Name']) len(df_e) ``` Sometimes Keywords at the Event metadata level are empty lists, but not often. Only ~1200 used a null value. ``` len(df_e[df_e['Keywords'].apply(lambda i: isinstance(i, list) and len(i) == 0)]) ``` As a sample of events using an empty keyword list object. 
```
df_e[df_e['Keywords'].apply(lambda i: isinstance(i, list) and len(i) == 0)].head()
```

Most Keywords at the Event metadata level do seem to have at least one item with both 'DisplayName' and 'Name' as null.

```
def has_null_names(o):
    if isinstance(o, list):
        for i in o:
            if i['Name'] is None and i['DisplayName'] is None:
                return True
    elif isinstance(o, dict):
        return o['Name'] is None and o['DisplayName'] is None
    return False

len(df_e[df_e['Keywords'].apply(has_null_names)])
```

And as a sample of the dual null keyword names:

```
pd.options.display.max_colwidth = 100
df_e[df_e['Keywords'].apply(has_null_names)][['Id','Keywords','Description','LogLink.LogName','Provider.Name']].head()
```

With over 40,000 having the nullified keyword name present, it would be interesting to observe the events that don't. E.g. for Keywords:

```
pd.reset_option('display.max_colwidth')
df_e[df_e['Keywords'].apply(lambda k: not has_null_names(k))].head()
```

Unlike Keywords, the Task, Opcode and Level objects were already flattened by `json_normalize()` into labels (as these are not nested in a list like Keywords). E.g. a sample of nullified tasks:

```
df_e[df_e['Task.Name'].isnull() & df_e['Task.DisplayName'].isnull()][['Id','Task.Value','Task.Name','Task.DisplayName','Description','LogLink.LogName','Provider.Name']].head()
```

The nullified names for Tasks, Opcodes and Levels counted:

```
display_name_and_name_null_count = {}
for c in ['Level', 'Task', 'Opcode']:
    display_name_and_name_null_count.update(
        {c: len(df_e[df_e[f'{c}.Name'].isnull() & df_e[f'{c}.DisplayName'].isnull()])}
    )
display_name_and_name_null_count
```

So while not being lists, the Task, Opcode and Level metadata for events is often nullified. Even 3,863 event IDs had no level defined.

### Provider Keywords, Tasks, Opcodes and Levels

However, the Keyword metadata for Providers doesn't include the nullified name items as seen in the Event metadata.
```
has_null_names_in_list_counts = {}
for c in ['Keywords', 'Tasks', 'Opcodes', 'Levels']:
    has_null_names_in_list_counts.update(
        {c: len(df[df[c].apply(has_null_names)])}
    )
has_null_names_in_list_counts
```

## Conclusion

Undefined Keywords, Tasks, Opcodes and Levels have widely divergent data structures. Sometimes it's a simple Null value and other times an empty list. The metadata level (Provider vs Event) also affects the structure used. Keyword lists are particularly awkward and often include a special nullified value with a null 'DisplayName' and 'Name'. This nullified value seems to be unnecessarily included alongside the non-null defined keywords in the list.
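Given these variations, downstream parsing is easier if the different serialisations of an undefined field are coerced into one shape first. A minimal sketch of such a normaliser — the function name is mine, and the handling is based only on the samples shown in this notebook:

```python
def normalize_name_list(raw):
    """Coerce the observed serialisations of Keywords/Tasks/Opcodes/Levels
    (None, [], or a list containing items whose Name and DisplayName are
    both null) into a plain list of well-named items."""
    if raw is None:
        return []  # Null value -> treat as no items defined
    if isinstance(raw, dict):
        raw = [raw]  # single flattened object -> wrap for uniform handling
    return [
        item for item in raw
        if item is not None
        and not (item.get("Name") is None and item.get("DisplayName") is None)
    ]
```

Applied as e.g. `df['Keywords'].apply(normalize_name_list)`, this would give a column of uniform lists regardless of which of the three variations a provider used.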
``` %load_ext autoreload %autoreload 2 ``` This notebook is a tentative overview of how we can use my custom library `neurgoo` to train ANNs. Everything is written from scratch, directly utilizing `numpy`'s arrays and vectorizations. `neurgoo`'s philosophy is to be as modular as possible, inspired from PyTorch's API design. # Relevant Standard Imports ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline from sklearn.model_selection import train_test_split from keras.datasets import mnist import tensorflow as tf from pprint import pprint ``` # Data Preprocessing In this section we: - load MNIST data - normalize pixels to [0, 1] (dividing pixel values by 255) - do train/val/test splits. ## Train/Val/Test splits Since we're using data from keras module, it only returns ``` def load_data(): """ This loader function encapsulates preprocessing as well as data splits """ (X_train, Y_train), (X_test, Y_test) = mnist.load_data() Y_train = tf.keras.utils.to_categorical(Y_train) Y_test = tf.keras.utils.to_categorical(Y_test) h, w = X_train[0].shape X_train = X_train.reshape((len(X_train), w*h))/255 X_test = X_test.reshape((len(X_test), w*h))/255 X_val, X_test, Y_val, Y_test = train_test_split(X_test, Y_test, test_size=0.5) return (X_train, Y_train), (X_val, Y_val), (X_test, Y_test) (X_train, Y_train), (X_val, Y_val), (X_test, Y_test) = load_data() X_train.shape, Y_train.shape X_val.shape, Y_val.shape X_test.shape, Y_test.shape ``` # Custom Library implementation Now, we're going to use my `neurgoo` library (see `neurgoo` packages). 
There are 5 main components needed for training:

- layers (either linear or activation)
- models (encapsulate N layers into a container)
- losses (compute the loss and the gradient needed for the final dJ/dW)
- optimizers (perform the weight-update operation)
- trainers (encapsulate the whole training loop into one coherent method)

> See the report for more details on the architecture and implementation

## Import all the necessary stuff

Note: Please `pip install -r requirements.txt` first. `neurgoo` can also be installed locally as:
- `pip install -e .`
- or `python setup.py install`
- or simply copy-paste the package `neurgoo` anywhere to use it.

### Layers

```
from neurgoo.layers.linear import Linear
from neurgoo.layers.activations import (
    ReLU,
    Sigmoid,
    Softmax,
)
```

### Models

```
from neurgoo.models import DefaultNNModel
```

### Losses

```
from neurgoo.losses import (
    BinaryCrossEntropyLoss,
    CrossEntropyLossWithLogits,
    HingeLoss,
    MeanSquaredError,
)
```

### Optimizers

```
from neurgoo.optimizers import SGD
```

### Trainers

```
from neurgoo.trainers import DefaultModelTrainer
```

### Evaluators

```
from neurgoo.misc.eval import Evaluator
```

## Combine Components for training

Now we use the available components to form one coherent trainer.

### build model

We can add any number of layers. Each `Linear` layer takes `in_features` inputs and gives `num_neurons` outputs. A Linear layer also has different initialization methods which we can access right after building the layer object (this is like a builder design pattern):
- `initialize_random()` initializes weights randomly
- `initialize_gaussian(variance=...)` initializes weights from a gaussian distribution with **mu** centered at 0 and the supplied variance

Each layer's forward pass is done through the `feed_forward(...)` method. Each layer's backward pass is done through the `backpropagate(...)` method.
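As an illustration of this layer interface, here is a sketch of what an additional layer could look like: a hypothetical inverted-dropout layer. It is not part of `neurgoo`; the `AbstractLayer` base class is omitted, and only the `feed_forward(...)`/`backpropagate(...)` contract described above is assumed.

```python
import numpy as np

class Dropout:
    """Hypothetical inverted-dropout layer following the same
    feed_forward/backpropagate contract as neurgoo's layers."""

    def __init__(self, p=0.5):
        self.p = p              # probability of dropping a unit
        self.training = True    # switch off for inference
        self._mask = None

    def feed_forward(self, x):
        if not self.training or self.p == 0.0:
            return x
        # Inverted dropout: scale kept units by 1/(1-p) at train time,
        # so no rescaling is needed at inference time.
        self._mask = (np.random.rand(*x.shape) >= self.p) / (1.0 - self.p)
        return x * self._mask

    def backpropagate(self, grad):
        if not self.training or self.p == 0.0:
            return grad
        # Gradients flow only through the units that were kept
        return grad * self._mask
```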
### model 1

```
# a model with a single hidden layer with 512 neurons
model = DefaultNNModel()
model.add_layer(
    Linear(num_neurons=512, in_features=X_train.shape[1])\
        .initialize_gaussian(variance=2/784)
)
model.add_layer(ReLU())
model.add_layer(Linear(num_neurons=10, in_features=512))
```

### model 2

```
# a model with a single hidden layer with 128 neurons
model = DefaultNNModel()
model.add_layer(
    Linear(num_neurons=128, in_features=X_train.shape[1])\
        .initialize_gaussian(variance=2/784)
)
model.add_layer(ReLU())
model.add_layer(Linear(num_neurons=10, in_features=128))
```

### model 3

```
# a model with 2 hidden layers
model = DefaultNNModel()
model.add_layer(
    Linear(num_neurons=256, in_features=X_train.shape[1])\
        .initialize_gaussian(variance=2/784)
)
model.add_layer(ReLU())
model.add_layer(
    Linear(num_neurons=128, in_features=256)\
        .initialize_gaussian(variance=2/256)
)
model.add_layer(ReLU())
model.add_layer(Linear(num_neurons=10, in_features=128))
```

### model 4

```
# a model with 2 hidden layers
model = DefaultNNModel()
model.add_layer(
    Linear(num_neurons=512, in_features=X_train.shape[1]).initialize_gaussian()
)
model.add_layer(ReLU())
model.add_layer(
    Linear(num_neurons=256, in_features=512).initialize_gaussian()
)
model.add_layer(ReLU())
model.add_layer(Linear(num_neurons=10, in_features=256))

print(model)
```

### build optimizer

```
params = model.params()
print(params)

optimizer = SGD(params=params, lr=0.001)
```

### build loss

```
# loss = CrossEntropyLossWithLogits()
loss = HingeLoss()
```

### build trainer

```
# helper component for evaluating the model
evaluator = Evaluator(num_classes=10)

trainer = DefaultModelTrainer(
    model=model,
    optimizer=optimizer,
    evaluator=evaluator,
    loss=loss,
    debug=False,
)
```

## Start Training

We call the `fit(...)` method of the trainer. The trainer takes in the split data, the number of epochs and the batch size. Once the training is done, we get a ``dict`` that represents the history of train/val/test for each epoch.
> Note: test is evaluated only once the whole training is complete, after the end of the last epoch.

During training, several debug logs are also printed, like:
- information about the number of epochs passed
- train accuracy/loss
- validation accuracy/loss

```
print(model[-1], loss)

history = trainer.fit(
    X_train=X_train,
    Y_train=Y_train,
    X_val=X_val,
    Y_val=Y_val,
    X_test=X_test,
    Y_test=Y_test,
    nepochs=75,
    batch_size=64,
)
```

### Understanding history

The history `dict` returned by the trainer consists of the training history for train/val/test. The `train` and `val` entries are lists of `neurgoo.misc.eval.EvalData` objects. Each `EvalData` object can store:
- epoch
- loss
- accuracy
- precision (to be implemented)
- recall (to be implemented)

Unlike `train` and `val`, the `test` history is a single `EvalData` object, not a list, which stores the final evaluation data after the end of the training.

```
history["train"][:10]
history["val"][:10]
history["test"]
```

### Plot history

We use the plotting tools from neurgoo. `plot_history` is a convenient helper that takes in the history dict and plots the metrics. Since we can plot both train-vs-val losses and accuracies, the parameter `plot_type` controls what type of plot we want:
- `plot_type="loss"` for plotting losses
- `plot_type="accuracy"` for plotting accuracies

```
from neurgoo.misc.plot_utils import plot_history

plot_history(history, plot_type="loss")
plot_history(history, plot_type="accuracy")
```

# Inference Debug

Now that we have trained our model, we can do inference directly through its `predict(...)` method, which takes in X values and gives the final output. In this section we will do inference on random test data points.

`plot_images` will plot each image along with the label taken from the target or obtained from the predictions (np.argmax). For the predicted Y values, we also add a probability text beside the label to debug the probabilities.
## Note

If the final layer is a `neurgoo.layers.activations.Softmax`, we can get normalized probabilities directly from the prediction. If we have the usual `Linear` layer last, we won't have normalized probabilities, so we need to pass the prediction through a Softmax to get the probabilities.

```
import random

def plot_images(X, Y, cols=5, title=""):
    print(f"X.shape: {X.shape} | Y.shape: {Y.shape}")
    _, axes = plt.subplots(nrows=1, ncols=cols, figsize=(10, 3))
    n = int(X.shape[1]**0.5)
    probs = Softmax()(Y)
    for ax, img, t, p in zip(axes, X, Y, probs):
        label = np.argmax(t)
        prob = round(np.max(p), 3)
        img = img.reshape((n, n))
        ax.set_axis_off()
        ax.imshow(img, cmap=plt.cm.gray_r, interpolation="nearest")
        txt = f"{title}: {label}"
        txt = f"{txt}({prob})" if "inf" in title.lower() else txt
        ax.set_title(txt)

X_test.shape, Y_test.shape

model.eval_mode()

k = 7
for i in range(2):
    indices_infer = random.choices(range(len(X_test)), k=k)
    X_infer, Y_infer_target = X_test[indices_infer], Y_test[indices_infer]

    # forward pass
    predictions = model.predict(X_infer)

    plot_images(X_infer, Y_infer_target, cols=k, title="Target")
    plot_images(X_infer, predictions, cols=k, title="Inf")
```

# Observations

1) For visually similar numbers like 7 and 1, the model is sometimes less confident while trying to predict the number **7**. In such cases, we have relatively lower probabilities like `0.8`, `0.9`, etc. This can be mitigated if we "properly" trained the model with:
- a better architecture
- more training time
- regularization and dropout tricks

2) For distinctive images like `0, 5, 6`, we see high probabilities as the model doesn't get "confused" much.

# Further Improvements to neurgoo

There's definitely more room for improvement in `neurgoo`.
We could: - implement `Dropout` and `BatchNorm` layers at `neurgoo.layers` using the base class `neurgoo._base.AbstractLayer` - add regularization techniques - implement better optimizers such as addition of Nesterov momentum, Adam optimizers, etc. This could be done by adding new optimizer components at `neurgoo.optimizers`, directly derived from `neurgoo._base.AbstractOptimizer` - use automatic differentiation techniques [0] for computing accurate gradients. # References and footnotes - [0] - https://en.wikipedia.org/wiki/Automatic_differentiation - [PyTorch Internals](http://blog.ezyang.com/2019/05/pytorch-internals/) - [How Computational Graphs are Constructed in PyTorch](https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/) - [Why is the ReLU function not differentiable at x=0?](https://sebastianraschka.com/faq/docs/relu-derivative.html)
# Sersic Profiles <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Setup" data-toc-modified-id="Setup-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Setup</a></span></li><li><span><a href="#Sersic-parameter-fits" data-toc-modified-id="Sersic-parameter-fits-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Sersic parameter fits</a></span></li><li><span><a href="#Timecourse-of-Sersic-profiles" data-toc-modified-id="Timecourse-of-Sersic-profiles-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Timecourse of Sersic profiles</a></span><ul class="toc-item"><li><span><a href="#Half-mass-radius" data-toc-modified-id="Half-mass-radius-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Half-mass radius</a></span></li><li><span><a href="#Sersic-parameter" data-toc-modified-id="Sersic-parameter-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Sersic parameter</a></span></li></ul></li><li><span><a href="#Bulge-mass-profiles" data-toc-modified-id="Bulge-mass-profiles-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Bulge mass profiles</a></span><ul class="toc-item"><li><span><a href="#MW,-3-timepoints" data-toc-modified-id="MW,-3-timepoints-4.1"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>MW, 3 timepoints</a></span></li><li><span><a href="#M31,-3-timepoints" data-toc-modified-id="M31,-3-timepoints-4.2"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>M31, 3 timepoints</a></span></li><li><span><a href="#MW-vs-M31,-two-timepoints" data-toc-modified-id="MW-vs-M31,-two-timepoints-4.3"><span class="toc-item-num">4.3&nbsp;&nbsp;</span>MW vs M31, two timepoints</a></span></li></ul></li></ul></div> ## Setup ``` import numpy as np import astropy.units as u from scipy.optimize import curve_fit # import plotting modules import matplotlib.pyplot as plt import matplotlib from matplotlib import rcParams %matplotlib inline from galaxy.galaxy import Galaxy from galaxy.galaxies import Galaxies from 
galaxy.centerofmass import CenterOfMass from galaxy.massprofile import MassProfile from galaxy.timecourse import TimeCourse def get_sersic(galname, snap, R): mp = MassProfile(Galaxy(galname, snap, usesql=True)) Re_bulge, bulge_total, BulgeI = mp.bulge_Re(R) n, err = mp.fit_sersic_n(R, Re_bulge, bulge_total, BulgeI) return Re_bulge, n, err tc = TimeCourse() # Array of radii R = np.arange(0.1, 30, 0.1) * u.kpc ``` ## Sersic parameter fits The next cell takes significant time to run so is commented out. ``` # with open('./sersic.txt', 'w') as f: # f.write(f"# {'gal':>5s}{'snap':>8s}{'t':>8s}{'Re':>8s}{'n':>8s}{'err':>8s}\n") # for galname in ('M31','MW'): # print(galname) # for snap in np.arange(0,802): # t = tc.snap2time(snap) # try: # Re, n, err = get_sersic(galname, snap, R) # with open('./sersic.txt', 'a') as f: # f.write(f"{galname:>7s}{snap:8d}{t:8.3f}{Re.value:8.2f}{n:8.2f}{err:8.4f}\n") # except ValueError: # print(galname, snap) ``` ## Timecourse of Sersic profiles ``` ser = np.genfromtxt('sersic_full.txt', names=True, skip_header=0, dtype=[('gal', 'U3'), ('snap', '<i8'), ('t', '<f8'), ('Re', '<f8'), ('n', '<f8'), ('err', '<f8')]) MW = ser[ser['gal'] == 'MW'] M31 = ser[ser['gal'] == 'M31'] ``` ### Half-mass radius ``` fig = plt.figure(figsize=(8,5)) ax0 = plt.subplot() # add the curves n = 1 # plot every n'th time point ax0.plot(MW['t'][::n], MW['Re'][::n], 'r-', lw=2, label='MW') ax0.plot(M31['t'][::n], M31['Re'][::n], 'b:', lw=2, label='M31') ax0.legend(fontsize='xx-large', shadow=True) # Add axis labels ax0.set_xlabel("time (Gyr)", fontsize=22) ax0.set_ylabel("Re (kpc)", fontsize=22) ax0.set_xlim(0,12) ax0.set_ylim(0,6) # ax0.set_title("Hernquist scale radius", fontsize=24) #adjust tick label font size label_size = 22 rcParams['xtick.labelsize'] = label_size rcParams['ytick.labelsize'] = label_size plt.tight_layout() plt.savefig('sersic_Re.pdf', rasterized=True, dpi=350); ``` ### Sersic parameter ``` fig = plt.figure(figsize=(8,5)) ax0 = plt.subplot() # 
add the curves n = 1 # plot every n'th time point ax0.errorbar(MW['t'][::n], MW['n'][::n], yerr=MW['err'][::n], fmt='r-', lw=2, label='MW') ax0.errorbar(M31['t'][::n], M31['n'][::n], yerr=M31['err'][::n], fmt='b:', lw=2, label='M31') ax0.legend(fontsize='xx-large', shadow=True) # Add axis labels ax0.set_xlabel("time (Gyr)", fontsize=22) ax0.set_ylabel("Sersic $n$", fontsize=22) ax0.set_xlim(0,12) ax0.set_ylim(5,7) # ax0.set_title("Hernquist scale radius", fontsize=24) #adjust tick label font size label_size = 22 rcParams['xtick.labelsize'] = label_size rcParams['ytick.labelsize'] = label_size plt.tight_layout() plt.savefig('sersic_n.pdf', rasterized=True, dpi=350); ``` ## Bulge mass profiles ``` Re_bulge = {} bulge_total = {} BulgeI = {} Sersic = {} n = {} for galname in ('MW','M31'): for snap in (1, 335, 801): key = f'{galname}_{snap:03}' mp = MassProfile(Galaxy(galname, snap, usesql=True)) Re_bulge[key], bulge_total[key], BulgeI[key] = mp.bulge_Re(R) n[key], _ = mp.fit_sersic_n(R, Re_bulge[key], bulge_total[key], BulgeI[key]) Sersic[key] = mp.sersic(R.value, Re_bulge[key].value, n[key], bulge_total[key]) ``` ### MW, 3 timepoints ``` fig = plt.figure(figsize=(8,8)) # subplots = (121, 122) ax0 = plt.subplot() galname = 'MW' for snap in (1, 335, 801): key = f'{galname}_{snap:03}' t = tc.snap2time(snap) # plot the bulge luminosity density as a proxy for surface brightness ax0.semilogy(R, BulgeI[key], lw=2, label=f'Bulge Density, t={t:.2f} Gyr') ax0.semilogy(R, Sersic[key], lw=3, ls=':', label=f'Sersic n={n[key]:.2f}, Re={Re_bulge[key]:.1f}') # Add axis labels ax0.set_xlabel('Radius (kpc)', fontsize=22) ax0.set_ylabel('Log(I) $L_\odot/kpc^2$', fontsize=22) ax0.set_xlim(0,20) #adjust tick label font size label_size = 22 matplotlib.rcParams['xtick.labelsize'] = label_size matplotlib.rcParams['ytick.labelsize'] = label_size # add a legend with some customizations. 
legend = ax0.legend(loc='upper right',fontsize='x-large');
```

### M31, 3 timepoints

```
fig = plt.figure(figsize=(8,8))
# subplots = (121, 122)
ax0 = plt.subplot()

galname = 'M31'
for snap in (1, 335, 801):
    key = f'{galname}_{snap:03}'
    t = tc.snap2time(snap)

    # plot the bulge luminosity density as a proxy for surface brightness
    ax0.semilogy(R, BulgeI[key], lw=2, label=f'Bulge Density, t={t:.2f} Gyr')
    ax0.semilogy(R, Sersic[key], lw=3, ls=':',
                 label=f'Sersic n={n[key]:.2f}, Re={Re_bulge[key]:.1f}')

# Add axis labels
ax0.set_xlabel('Radius (kpc)', fontsize=22)
ax0.set_ylabel('Log(I) $L_\odot/kpc^2$', fontsize=22)
ax0.set_xlim(0,20)

#adjust tick label font size
label_size = 22
matplotlib.rcParams['xtick.labelsize'] = label_size
matplotlib.rcParams['ytick.labelsize'] = label_size

# add a legend with some customizations.
legend = ax0.legend(loc='upper right',fontsize='xx-large')

plt.tight_layout()
plt.savefig('M31_bulge_sersic.pdf', rasterized=True, dpi=350);
```

### MW vs M31, two timepoints

```
fig = plt.figure(figsize=(8,8))
# subplots = (121, 122)
ax0 = plt.subplot()

galname = 'MW'
for snap in (1, 801):
    key = f'{galname}_{snap:03}'
    t = tc.snap2time(snap)

    # plot the bulge luminosity density as a proxy for surface brightness
    ax0.semilogy(R, BulgeI[key], lw=2, label=f'MW, t={t:.2f} Gyr')

galname = 'M31'
for snap in (1, 801):
    key = f'{galname}_{snap:03}'
    t = tc.snap2time(snap)

    # plot the bulge luminosity density as a proxy for surface brightness
    ax0.semilogy(R, BulgeI[key], lw=2, ls=':', label=f'M31, t={t:.2f} Gyr')

# Add axis labels
ax0.set_xlabel('Radius (kpc)', fontsize=22)
ax0.set_ylabel('Log(I) $L_\odot/kpc^2$', fontsize=22)
ax0.set_xlim(0,20)

#adjust tick label font size
label_size = 22
matplotlib.rcParams['xtick.labelsize'] = label_size
matplotlib.rcParams['ytick.labelsize'] = label_size

# add a legend with some customizations.
legend = ax0.legend(loc='upper right',fontsize='xx-large')

plt.tight_layout()
plt.savefig('bulge_mp.pdf', rasterized=True, dpi=350);
```
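The fits above rely on `MassProfile.sersic` and `fit_sersic_n` from the project's `galaxy` package, which are not shown in this notebook. For reference, the function being fitted is the standard Sersic law, $I(R) = I_e \exp\{-b_n[(R/R_e)^{1/n} - 1]\}$. A minimal self-contained sketch (using the Ciotti & Bertin approximation for $b_n$ — an assumption here, since the package's exact implementation may differ) looks like:

```python
import numpy as np

def sersic_b(n):
    # Ciotti & Bertin (1999) asymptotic approximation to b_n, chosen so
    # that the effective radius Re encloses half of the total light
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n) + 46.0 / (25515.0 * n**2)

def sersic_profile(R, Re, n, Ie):
    # Sersic surface-brightness profile I(R) = Ie * exp(-b_n * ((R/Re)^(1/n) - 1))
    b = sersic_b(n)
    return Ie * np.exp(-b * ((R / Re) ** (1.0 / n) - 1.0))

# By construction I(Re) == Ie, and the profile falls monotonically with radius
R = np.array([0.5, 1.0, 2.0])  # kpc
I = sersic_profile(R, Re=1.0, n=4.0, Ie=100.0)
```

A de Vaucouleurs bulge is the `n = 4` special case; exponential disks correspond to `n = 1`.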
# ORF recognition by LSTM LSTM and GRU are two variants of recurrent neural network (RNN). LSTM was incapable of recognizing short ORFs. How about GRU? ``` import time t = time.time() time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)) PC_SEQUENCES=20000 # how many protein-coding sequences NC_SEQUENCES=20000 # how many non-coding sequences PC_TESTS=1000 NC_TESTS=1000 BASES=125 # how long is each sequence ALPHABET=4 # how many different letters are possible INPUT_SHAPE_2D = (BASES,ALPHABET,1) # Conv2D needs 3D inputs INPUT_SHAPE = (BASES,ALPHABET) # Conv1D needs 2D inputs NEURONS = 32 #DROP_RATE = 0.2 EPOCHS=50 # how many times to train on all the data SPLITS=5 # SPLITS=3 means train on 2/3 and validate on 1/3 FOLDS=5 # train the model this many times (range 1 to SPLITS) import sys try: from google.colab import drive IN_COLAB = True print("On Google CoLab, mount cloud-local file, get our code from GitHub.") PATH='/content/drive/' #drive.mount(PATH,force_remount=True) # hardly ever need this #drive.mount(PATH) # Google will require login credentials DATAPATH=PATH+'My Drive/data/' # must end in "/" import requests r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py') with open('RNA_gen.py', 'w') as f: f.write(r.text) from RNA_gen import * r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py') with open('RNA_describe.py', 'w') as f: f.write(r.text) from RNA_describe import * r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py') with open('RNA_prep.py', 'w') as f: f.write(r.text) from RNA_prep import * except: print("CoLab not working. 
On my PC, use relative paths.") IN_COLAB = False DATAPATH='data/' # must end in "/" sys.path.append("..") # append parent dir in order to use sibling dirs from SimTools.RNA_gen import * from SimTools.RNA_describe import * from SimTools.RNA_prep import * MODELPATH="BestModel" # saved on cloud instance and lost after logout #MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login if not assert_imported_RNA_gen(): print("ERROR: Cannot use RNA_gen.") if not assert_imported_RNA_prep(): print("ERROR: Cannot use RNA_prep.") from os import listdir import csv from zipfile import ZipFile import numpy as np import pandas as pd from scipy import stats # mode from sklearn.preprocessing import StandardScaler from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score from keras.models import Sequential from keras.layers import Dense,Embedding,Dropout from keras.layers import LSTM,GRU,SimpleRNN from keras.losses import BinaryCrossentropy # tf.keras.losses.BinaryCrossentropy import matplotlib.pyplot as plt from matplotlib import colors mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1 np.set_printoptions(precision=2) # Use code from our SimTools library. 
def make_generators(seq_len): pcgen = Collection_Generator() pcgen.get_len_oracle().set_mean(seq_len) pcgen.set_seq_oracle(Transcript_Oracle()) ncgen = Collection_Generator() ncgen.get_len_oracle().set_mean(seq_len) return pcgen,ncgen def get_the_facts(seqs): rd = RNA_describer() facts = rd.get_three_lengths(seqs) facts_ary = np.asarray(facts) # 5000 rows, 3 columns print("Facts array:",type(facts_ary)) print("Facts array:",facts_ary.shape) # Get the mean of each column mean_5utr, mean_orf, mean_3utr = np.mean(facts_ary,axis=0) std_5utr, std_orf, std_3utr = np.std(facts_ary,axis=0) print("mean 5' UTR length:",int(mean_5utr),"+/-",int(std_5utr)) print("mean ORF length:",int(mean_orf), "+/-",int(std_orf)) print("mean 3' UTR length:",int(mean_3utr),"+/-",int(std_3utr)) pc_sim,nc_sim = make_generators(BASES) pc_train = pc_sim.get_sequences(PC_SEQUENCES) nc_train = nc_sim.get_sequences(NC_SEQUENCES) print("Train on",len(pc_train),"PC seqs") get_the_facts(pc_train) print("Train on",len(nc_train),"NC seqs") get_the_facts(nc_train) # Use code from our SimTools library. 
X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles print("Data ready.") def make_DNN(): print("make_DNN") print("input shape:",INPUT_SHAPE) dnn = Sequential() #dnn.add(Embedding(input_dim=ALPHABET, output_dim=ALPHABET)) #VOCABULARY_SIZE, EMBED_DIMEN, input_length=1000, input_length=1000, mask_zero=True) #input_dim=[None,VOCABULARY_SIZE], output_dim=EMBED_DIMEN, mask_zero=True) dnn.add(GRU(NEURONS,return_sequences=True,input_shape=INPUT_SHAPE)) dnn.add(GRU(NEURONS,return_sequences=False)) dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32)) dnn.add(Dense(1,activation="sigmoid",dtype=np.float32)) dnn.compile(optimizer='adam', loss=BinaryCrossentropy(from_logits=False), metrics=['accuracy']) # add to default metrics=loss dnn.build() # input_shape=INPUT_SHAPE) #ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE) #bc=tf.keras.losses.BinaryCrossentropy(from_logits=False) #model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"]) return dnn model = make_DNN() print(model.summary()) from keras.callbacks import ModelCheckpoint def do_cross_validation(X,y): cv_scores = [] fold=0 mycallbacks = [ModelCheckpoint( filepath=MODELPATH, save_best_only=True, monitor='val_accuracy', mode='max')] splitter = KFold(n_splits=SPLITS) # this does not shuffle for train_index,valid_index in splitter.split(X): if fold < FOLDS: fold += 1 X_train=X[train_index] # inputs for training y_train=y[train_index] # labels for training X_valid=X[valid_index] # inputs for validation y_valid=y[valid_index] # labels for validation print("MODEL") # Call constructor on each CV. Else, continually improves the same model. 
model = make_DNN()
            print("FIT")
            # model.fit() implements learning
            start_time=time.time()
            history=model.fit(X_train, y_train,
                    epochs=EPOCHS,
                    verbose=1,  # ascii art while learning
                    callbacks=mycallbacks,  # called at end of each epoch
                    validation_data=(X_valid,y_valid))
            end_time=time.time()
            elapsed_time=(end_time-start_time)
            print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
            # print(history.history.keys())  # all these keys will be shown in figure
            pd.DataFrame(history.history).plot(figsize=(8,5))
            plt.grid(True)
            plt.gca().set_ylim(0,1)  # any losses > 1 will be off the scale
            plt.show()

do_cross_validation(X,y)

from keras.models import load_model
pc_sim.set_reproducible(True)
nc_sim.set_reproducible(True)
pc_test = pc_sim.get_sequences(PC_TESTS)
nc_test = nc_sim.get_sequences(NC_TESTS)
X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET)
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))

from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))

t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
```
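`prepare_inputs_len_x_alphabet` comes from the SimTools library and is not shown in this notebook. Assuming it one-hot encodes each base over the 4-letter alphabet to produce the `(BASES, ALPHABET)` inputs the GRU expects, and shuffles the labeled examples with a single permutation, its behavior can be sketched as follows (the helper names here are illustrative, not the library's actual internals, and fixed-length sequences are assumed):

```python
import numpy as np

BASE_TO_INDEX = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def one_hot_encode(seq):
    # Each sequence becomes a (length, 4) array with a single 1 per row
    x = np.zeros((len(seq), len(BASE_TO_INDEX)), dtype=np.float32)
    for i, base in enumerate(seq):
        x[i, BASE_TO_INDEX[base]] = 1.0
    return x

def prepare_xy(pc_seqs, nc_seqs, seed=0):
    # Label protein-coding sequences 1 and non-coding sequences 0,
    # then shuffle inputs and labels with the same permutation
    X = np.stack([one_hot_encode(s) for s in pc_seqs + nc_seqs])
    y = np.array([1.0] * len(pc_seqs) + [0.0] * len(nc_seqs), dtype=np.float32)
    order = np.random.default_rng(seed).permutation(len(y))
    return X[order], y[order]
```

Shuffling inputs and labels with one shared permutation is what keeps each one-hot matrix paired with its coding/non-coding label.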
## Conclusion

This GRU performs about as well as our LSTM (with fewer parameters).
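The "fewer parameters" point can be made concrete by counting gate weights: an LSTM layer has four gate matrices where a GRU has three. Using the classic formulas (Keras' exact totals can differ slightly depending on the bias variant, e.g. `reset_after` in its GRU implementation), the first recurrent layer above, with `ALPHABET=4` inputs and `NEURONS=32` units, works out to:

```python
def lstm_param_count(input_dim, units):
    # 4 gates (input, forget, cell, output), each with input weights,
    # recurrent weights, and a bias vector
    return 4 * (input_dim * units + units * units + units)

def gru_param_count(input_dim, units):
    # 3 gates (update, reset, candidate) in the classic formulation
    return 3 * (input_dim * units + units * units + units)

lstm_n = lstm_param_count(4, 32)  # 4736
gru_n = gru_param_count(4, 32)    # 3552
print(lstm_n, gru_n)  # the GRU needs exactly 25% fewer weights here
```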
# Furniture Rearrangement - How to setup a new interaction task in Habitat-Lab This tutorial demonstrates how to setup a new task in Habitat that utilizes interaction capabilities in Habitat Simulator. ![teaser](https://drive.google.com/uc?id=1pupGvb4dGefd0T_23GpeDkkcIocDHSL_) ## Task Definition: The working example in this demo will be the task of **Furniture Rearrangement** - The agent will be randomly spawned in an environment in which the furniture are initially displaced from their desired position. The agent is tasked with navigating the environment, picking furniture and putting them in the desired position. To keep the tutorial simple and easy to follow, we will rearrange just a single object. To setup this task, we will build on top of existing API in Habitat-Simulator and Habitat-Lab. Here is a summary of all the steps involved in setting up this task: 1. **Setup the Simulator**: Using existing functionalities of the Habitat-Sim, we can add or remove objects from the scene. We will use these methods to spawn the agent and the objects at some pre-defined initial configuration. 2. **Create a New Dataset**: We will define a new dataset class to save / load a list of episodes for the agent to train and evaluate on. 3. **Grab / Release Action**: We will add the "grab/release" action to the agent's action space to allow the agent to pickup / drop an object under a crosshair. 4. **Extend the Simulator Class**: We will extend the Simulator Class to add support for new actions implemented in previous step and add other additional utility functions 5. **Create a New Task**: Create a new task definition, implement new *sensors* and *metrics*. 6. **Train an RL agent**: We will define rewards for this task and utilize it to train an RL agent using the PPO algorithm. Let's get started! ``` # @title Installation { display-mode: "form" } # @markdown (double click to show code). 
!curl -L https://raw.githubusercontent.com/facebookresearch/habitat-sim/master/examples/colab_utils/colab_install.sh | NIGHTLY=true bash -s %cd /content !gdown --id 1Pc-J6pZzXEd8RSeLM94t3iwO8q_RQ853 !unzip -o /content/coda.zip -d /content/habitat-sim/data/scene_datasets # reload the cffi version import sys if "google.colab" in sys.modules: import importlib import cffi importlib.reload(cffi) # @title Path Setup and Imports { display-mode: "form" } # @markdown (double click to show code). %cd /content/habitat-lab ## [setup] import gzip import json import os import sys from typing import Any, Dict, List, Optional, Type import attr import cv2 import git import magnum as mn import numpy as np %matplotlib inline from matplotlib import pyplot as plt from PIL import Image import habitat import habitat_sim from habitat.config import Config from habitat.core.registry import registry from habitat_sim.utils import viz_utils as vut if "google.colab" in sys.modules: os.environ["IMAGEIO_FFMPEG_EXE"] = "/usr/bin/ffmpeg" repo = git.Repo(".", search_parent_directories=True) dir_path = repo.working_tree_dir %cd $dir_path data_path = os.path.join(dir_path, "data") output_directory = "data/tutorials/output/" # @param {type:"string"} output_path = os.path.join(dir_path, output_directory) if __name__ == "__main__": import argparse parser = argparse.ArgumentParser() parser.add_argument("--no-display", dest="display", action="store_false") parser.add_argument( "--no-make-video", dest="make_video", action="store_false" ) parser.set_defaults(show_video=True, make_video=True) args, _ = parser.parse_known_args() show_video = args.display display = args.display make_video = args.make_video else: show_video = False make_video = False display = False if make_video and not os.path.exists(output_path): os.makedirs(output_path) # @title Util functions to visualize observations # @markdown - `make_video_cv2`: Renders a video from a list of observations # @markdown - `simulate`: Runs simulation for a 
given amount of time at 60Hz # @markdown - `simulate_and_make_vid` Runs simulation and creates video def make_video_cv2( observations, cross_hair=None, prefix="", open_vid=True, fps=60 ): sensor_keys = list(observations[0]) videodims = observations[0][sensor_keys[0]].shape videodims = (videodims[1], videodims[0]) # flip to w,h order print(videodims) video_file = output_path + prefix + ".mp4" print("Encoding the video: %s " % video_file) writer = vut.get_fast_video_writer(video_file, fps=fps) for ob in observations: # If in RGB/RGBA format, remove the alpha channel rgb_im_1st_person = cv2.cvtColor(ob["rgb"], cv2.COLOR_RGBA2RGB) if cross_hair is not None: rgb_im_1st_person[ cross_hair[0] - 2 : cross_hair[0] + 2, cross_hair[1] - 2 : cross_hair[1] + 2, ] = [255, 0, 0] if rgb_im_1st_person.shape[:2] != videodims: rgb_im_1st_person = cv2.resize( rgb_im_1st_person, videodims, interpolation=cv2.INTER_AREA ) # write the 1st person observation to video writer.append_data(rgb_im_1st_person) writer.close() if open_vid: print("Displaying video") vut.display_video(video_file) def simulate(sim, dt=1.0, get_frames=True): # simulate dt seconds at 60Hz to the nearest fixed timestep print("Simulating " + str(dt) + " world seconds.") observations = [] start_time = sim.get_world_time() while sim.get_world_time() < start_time + dt: sim.step_physics(1.0 / 60.0) if get_frames: observations.append(sim.get_sensor_observations()) return observations # convenience wrapper for simulate and make_video_cv2 def simulate_and_make_vid(sim, crosshair, prefix, dt=1.0, open_vid=True): observations = simulate(sim, dt) make_video_cv2(observations, crosshair, prefix=prefix, open_vid=open_vid) def display_sample( rgb_obs, semantic_obs=np.array([]), depth_obs=np.array([]), key_points=None, # noqa: B006 ): from habitat_sim.utils.common import d3_40_colors_rgb rgb_img = Image.fromarray(rgb_obs, mode="RGB") arr = [rgb_img] titles = ["rgb"] if semantic_obs.size != 0: semantic_img = Image.new( "P", 
(semantic_obs.shape[1], semantic_obs.shape[0]) ) semantic_img.putpalette(d3_40_colors_rgb.flatten()) semantic_img.putdata((semantic_obs.flatten() % 40).astype(np.uint8)) semantic_img = semantic_img.convert("RGBA") arr.append(semantic_img) titles.append("semantic") if depth_obs.size != 0: depth_img = Image.fromarray( (depth_obs / 10 * 255).astype(np.uint8), mode="L" ) arr.append(depth_img) titles.append("depth") plt.figure(figsize=(12, 8)) for i, data in enumerate(arr): ax = plt.subplot(1, 3, i + 1) ax.axis("off") ax.set_title(titles[i]) # plot points on images if key_points is not None: for point in key_points: plt.plot( point[0], point[1], marker="o", markersize=10, alpha=0.8 ) plt.imshow(data) plt.show(block=False) ``` ## 1. Setup the Simulator --- ``` # @title Setup simulator configuration # @markdown We'll start with setting up simulator with the following configurations # @markdown - The simulator will render both RGB, Depth observations of 256x256 resolution. # @markdown - The actions available will be `move_forward`, `turn_left`, `turn_right`. 
def make_cfg(settings): sim_cfg = habitat_sim.SimulatorConfiguration() sim_cfg.gpu_device_id = 0 sim_cfg.default_agent_id = settings["default_agent_id"] sim_cfg.scene_id = settings["scene"] sim_cfg.enable_physics = settings["enable_physics"] sim_cfg.physics_config_file = settings["physics_config_file"] # Note: all sensors must have the same resolution sensor_specs = [] rgb_sensor_spec = habitat_sim.CameraSensorSpec() rgb_sensor_spec.uuid = "rgb" rgb_sensor_spec.sensor_type = habitat_sim.SensorType.COLOR rgb_sensor_spec.resolution = [settings["height"], settings["width"]] rgb_sensor_spec.position = [0.0, settings["sensor_height"], 0.0] rgb_sensor_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE sensor_specs.append(rgb_sensor_spec) depth_sensor_spec = habitat_sim.CameraSensorSpec() depth_sensor_spec.uuid = "depth" depth_sensor_spec.sensor_type = habitat_sim.SensorType.DEPTH depth_sensor_spec.resolution = [settings["height"], settings["width"]] depth_sensor_spec.position = [0.0, settings["sensor_height"], 0.0] depth_sensor_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE sensor_specs.append(depth_sensor_spec) # Here you can specify the amount of displacement in a forward action and the turn angle agent_cfg = habitat_sim.agent.AgentConfiguration() agent_cfg.sensor_specifications = sensor_specs agent_cfg.action_space = { "move_forward": habitat_sim.agent.ActionSpec( "move_forward", habitat_sim.agent.ActuationSpec(amount=0.1) ), "turn_left": habitat_sim.agent.ActionSpec( "turn_left", habitat_sim.agent.ActuationSpec(amount=10.0) ), "turn_right": habitat_sim.agent.ActionSpec( "turn_right", habitat_sim.agent.ActuationSpec(amount=10.0) ), } return habitat_sim.Configuration(sim_cfg, [agent_cfg]) settings = { "max_frames": 10, "width": 256, "height": 256, "scene": "data/scene_datasets/coda/coda.glb", "default_agent_id": 0, "sensor_height": 1.5, # Height of sensors in meters "rgb": True, # RGB sensor "depth": True, # Depth sensor "seed": 1, "enable_physics": True, 
"physics_config_file": "data/default.physics_config.json", "silent": False, "compute_shortest_path": False, "compute_action_shortest_path": False, "save_png": True, } cfg = make_cfg(settings) # @title Spawn the agent at a pre-defined location def init_agent(sim): agent_pos = np.array([-0.15776923, 0.18244143, 0.2988735]) # Place the agent sim.agents[0].scene_node.translation = agent_pos agent_orientation_y = -40 sim.agents[0].scene_node.rotation = mn.Quaternion.rotation( mn.Deg(agent_orientation_y), mn.Vector3(0, 1.0, 0) ) cfg.sim_cfg.default_agent_id = 0 with habitat_sim.Simulator(cfg) as sim: init_agent(sim) if make_video: # Visualize the agent's initial position simulate_and_make_vid( sim, None, "sim-init", dt=1.0, open_vid=show_video ) # @title Set the object's initial and final position # @markdown Defines two utility functions: # @markdown - `remove_all_objects`: This will remove all objects from the scene # @markdown - `set_object_in_front_of_agent`: This will add an object in the scene in front of the agent at the specified distance. # @markdown Here we add a chair *3.0m* away from the agent and the task is to place the agent at the desired final position which is *7.0m* in front of the agent. def remove_all_objects(sim): for obj_id in sim.get_existing_object_ids(): sim.remove_object(obj_id) def set_object_in_front_of_agent(sim, obj_id, z_offset=-1.5): r""" Adds an object in front of the agent at some distance. 
""" agent_transform = sim.agents[0].scene_node.transformation_matrix() obj_translation = agent_transform.transform_point( np.array([0, 0, z_offset]) ) sim.set_translation(obj_translation, obj_id) obj_node = sim.get_object_scene_node(obj_id) xform_bb = habitat_sim.geo.get_transformed_bb( obj_node.cumulative_bb, obj_node.transformation ) # also account for collision margin of the scene scene_collision_margin = 0.04 y_translation = mn.Vector3( 0, xform_bb.size_y() / 2.0 + scene_collision_margin, 0 ) sim.set_translation(y_translation + sim.get_translation(obj_id), obj_id) def init_objects(sim): # Manager of Object Attributes Templates obj_attr_mgr = sim.get_object_template_manager() obj_attr_mgr.load_configs( str(os.path.join(data_path, "test_assets/objects")) ) # Add a chair into the scene. obj_path = "test_assets/objects/chair" chair_template_id = obj_attr_mgr.load_object_configs( str(os.path.join(data_path, obj_path)) )[0] chair_attr = obj_attr_mgr.get_template_by_id(chair_template_id) obj_attr_mgr.register_template(chair_attr) # Object's initial position 3m away from the agent. object_id = sim.add_object_by_handle(chair_attr.handle) set_object_in_front_of_agent(sim, object_id, -3.0) sim.set_object_motion_type( habitat_sim.physics.MotionType.STATIC, object_id ) # Object's final position 7m away from the agent goal_id = sim.add_object_by_handle(chair_attr.handle) set_object_in_front_of_agent(sim, goal_id, -7.0) sim.set_object_motion_type(habitat_sim.physics.MotionType.STATIC, goal_id) return object_id, goal_id with habitat_sim.Simulator(cfg) as sim: init_agent(sim) init_objects(sim) # Visualize the scene after the chair is added into the scene. if make_video: simulate_and_make_vid( sim, None, "object-init", dt=1.0, open_vid=show_video ) ``` ## Rearrangement Dataset ![dataset](https://drive.google.com/uc?id=1y0qS0MifmJsZ0F4jsRZGI9BrXzslFLn7) In the previous section, we created a single episode of the rearrangement task. 
Let's define a format to store all the necessary information about a single episode. It should store the *scene* the episode belongs to, the *initial spawn position and orientation* of the agent, the *object type*, the object's *initial position and orientation* as well as its *final position and orientation*. The format will be as follows:

```
{
  'episode_id': 0,
  'scene_id': 'data/scene_datasets/coda/coda.glb',
  'goals': {
    'position': [4.34, 0.67, -5.06],
    'rotation': [0.0, 0.0, 0.0, 1.0]
  },
  'objects': {
    'object_id': 0,
    'object_template': 'data/test_assets/objects/chair',
    'position': [1.77, 0.67, -1.99],
    'rotation': [0.0, 0.0, 0.0, 1.0]
  },
  'start_position': [-0.15, 0.18, 0.29],
  'start_rotation': [-0.0, -0.34, -0.0, 0.93]
}
```

Once an episode is defined, a dataset is just a collection of such episodes. For simplicity, in this notebook, the dataset will only contain the one episode defined above.

```
# @title Create a new dataset
# @markdown Utility functions to define and save the dataset for the rearrangement task

def get_rotation(sim, object_id):
    quat = sim.get_rotation(object_id)
    return np.array(quat.vector).tolist() + [quat.scalar]

def init_episode_dict(episode_id, scene_id, agent_pos, agent_rot):
    episode_dict = {
        "episode_id": episode_id,
        "scene_id": scene_id,
        "start_position": agent_pos,
        "start_rotation": agent_rot,
        "info": {},
    }
    return episode_dict

def add_object_details(sim, episode_dict, obj_id, object_template, object_id):
    object_template = {
        "object_id": obj_id,
        "object_template": object_template,
        "position": np.array(sim.get_translation(object_id)).tolist(),
        "rotation": get_rotation(sim, object_id),
    }
    episode_dict["objects"] = object_template
    return episode_dict

def add_goal_details(sim, episode_dict, object_id):
    goal_template = {
        "position": np.array(sim.get_translation(object_id)).tolist(),
        "rotation": get_rotation(sim, object_id),
    }
    episode_dict["goals"] = goal_template
    return episode_dict

# set the number of objects to 1 always for now.
def build_episode(sim, episode_num, object_id, goal_id): episodes = {"episodes": []} for episode in range(episode_num): agent_state = sim.get_agent(0).get_state() agent_pos = np.array(agent_state.position).tolist() agent_quat = agent_state.rotation agent_rot = np.array(agent_quat.vec).tolist() + [agent_quat.real] episode_dict = init_episode_dict( episode, settings["scene"], agent_pos, agent_rot ) object_attr = sim.get_object_initialization_template(object_id) object_path = os.path.relpath( os.path.splitext(object_attr.render_asset_handle)[0] ) episode_dict = add_object_details( sim, episode_dict, 0, object_path, object_id ) episode_dict = add_goal_details(sim, episode_dict, goal_id) episodes["episodes"].append(episode_dict) return episodes with habitat_sim.Simulator(cfg) as sim: init_agent(sim) object_id, goal_id = init_objects(sim) episodes = build_episode(sim, 1, object_id, goal_id) dataset_content_path = "data/datasets/rearrangement/coda/v1/train/" if not os.path.exists(dataset_content_path): os.makedirs(dataset_content_path) with gzip.open( os.path.join(dataset_content_path, "train.json.gz"), "wt" ) as f: json.dump(episodes, f) print( "Dataset written to {}".format( os.path.join(dataset_content_path, "train.json.gz") ) ) # @title Dataset class to read the saved dataset in Habitat-Lab. # @markdown To read the saved episodes in Habitat-Lab, we will extend the `Dataset` class and the `Episode` base class. It will help provide all the relevant details about the episode through a consistent API to all downstream tasks. # @markdown - We will first create a `RearrangementEpisode` by extending the `NavigationEpisode` to include additional information about object's initial configuration and desired final configuration. # @markdown - We will then define a `RearrangementDatasetV0` class that builds on top of `PointNavDatasetV1` class to read the JSON file stored earlier and initialize a list of `RearrangementEpisode`. 
from habitat.core.utils import DatasetFloatJSONEncoder, not_none_validator from habitat.datasets.pointnav.pointnav_dataset import ( CONTENT_SCENES_PATH_FIELD, DEFAULT_SCENE_PATH_PREFIX, PointNavDatasetV1, ) from habitat.tasks.nav.nav import NavigationEpisode @attr.s(auto_attribs=True, kw_only=True) class RearrangementSpec: r"""Specifications that capture a particular position of final position or initial position of the object. """ position: List[float] = attr.ib(default=None, validator=not_none_validator) rotation: List[float] = attr.ib(default=None, validator=not_none_validator) info: Optional[Dict[str, str]] = attr.ib(default=None) @attr.s(auto_attribs=True, kw_only=True) class RearrangementObjectSpec(RearrangementSpec): r"""Object specifications that capture position of each object in the scene, the associated object template. """ object_id: str = attr.ib(default=None, validator=not_none_validator) object_template: Optional[str] = attr.ib( default="data/test_assets/objects/chair" ) @attr.s(auto_attribs=True, kw_only=True) class RearrangementEpisode(NavigationEpisode): r"""Specification of episode that includes initial position and rotation of agent, all goal specifications, all object specifications Args: episode_id: id of episode in the dataset scene_id: id of scene inside the simulator. start_position: numpy ndarray containing 3 entries for (x, y, z). start_rotation: numpy ndarray with 4 entries for (x, y, z, w) elements of unit quaternion (versor) representing agent 3D orientation. goal: object's goal position and rotation object: object's start specification defined with object type, position, and rotation. 
""" objects: RearrangementObjectSpec = attr.ib( default=None, validator=not_none_validator ) goals: RearrangementSpec = attr.ib( default=None, validator=not_none_validator ) @registry.register_dataset(name="RearrangementDataset-v0") class RearrangementDatasetV0(PointNavDatasetV1): r"""Class inherited from PointNavDataset that loads Rearrangement dataset.""" episodes: List[RearrangementEpisode] content_scenes_path: str = "{data_path}/content/{scene}.json.gz" def to_json(self) -> str: result = DatasetFloatJSONEncoder().encode(self) return result def __init__(self, config: Optional[Config] = None) -> None: super().__init__(config) def from_json( self, json_str: str, scenes_dir: Optional[str] = None ) -> None: deserialized = json.loads(json_str) if CONTENT_SCENES_PATH_FIELD in deserialized: self.content_scenes_path = deserialized[CONTENT_SCENES_PATH_FIELD] for i, episode in enumerate(deserialized["episodes"]): rearrangement_episode = RearrangementEpisode(**episode) rearrangement_episode.episode_id = str(i) if scenes_dir is not None: if rearrangement_episode.scene_id.startswith( DEFAULT_SCENE_PATH_PREFIX ): rearrangement_episode.scene_id = ( rearrangement_episode.scene_id[ len(DEFAULT_SCENE_PATH_PREFIX) : ] ) rearrangement_episode.scene_id = os.path.join( scenes_dir, rearrangement_episode.scene_id ) rearrangement_episode.objects = RearrangementObjectSpec( **rearrangement_episode.objects ) rearrangement_episode.goals = RearrangementSpec( **rearrangement_episode.goals ) self.episodes.append(rearrangement_episode) # @title Load the saved dataset using the Dataset class config = habitat.get_config("configs/datasets/pointnav/habitat_test.yaml") config.defrost() config.DATASET.DATA_PATH = ( "data/datasets/rearrangement/coda/v1/{split}/{split}.json.gz" ) config.DATASET.TYPE = "RearrangementDataset-v0" config.freeze() dataset = RearrangementDatasetV0(config.DATASET) # check if the dataset got correctly deserialized assert len(dataset.episodes) == 1 assert 
dataset.episodes[0].objects.position == [
    1.770593523979187,
    0.6726829409599304,
    -1.9992598295211792,
]
assert dataset.episodes[0].objects.rotation == [0.0, 0.0, 0.0, 1.0]
assert (
    dataset.episodes[0].objects.object_template
    == "data/test_assets/objects/chair"
)
assert dataset.episodes[0].goals.position == [
    4.3417439460754395,
    0.6726829409599304,
    -5.0634379386901855,
]
assert dataset.episodes[0].goals.rotation == [0.0, 0.0, 0.0, 1.0]
```

## Implement Grab/Release Action

```
# @title RayCast utility to implement Grab/Release Under Cross-Hair Action
# @markdown Cast a ray in the direction of crosshair from the camera and check if it collides with another object within a certain distance threshold

def raycast(sim, sensor_name, crosshair_pos=(128, 128), max_distance=2.0):
    r"""Cast a ray in the direction of crosshair and check if it collides
    with another object within a certain distance threshold

    :param sim: Simulator object
    :param sensor_name: name of the visual sensor to be used for raycasting
    :param crosshair_pos: 2D coordinate in the viewport towards which the ray will be cast
    :param max_distance: distance threshold beyond which objects won't be considered
    """
    render_camera = sim._sensors[sensor_name]._sensor_object.render_camera
    center_ray = render_camera.unproject(mn.Vector2i(crosshair_pos))
    raycast_results = sim.cast_ray(center_ray, max_distance=max_distance)

    closest_object = -1
    closest_dist = 1000.0
    if raycast_results.has_hits():
        for hit in raycast_results.hits:
            if hit.ray_distance < closest_dist:
                closest_dist = hit.ray_distance
                closest_object = hit.object_id
    return closest_object

# Test the raycast utility.
with habitat_sim.Simulator(cfg) as sim:
    init_agent(sim)
    obj_attr_mgr = sim.get_object_template_manager()
    obj_attr_mgr.load_configs(
        str(os.path.join(data_path, "test_assets/objects"))
    )
    obj_path = "test_assets/objects/chair"
    chair_template_id = obj_attr_mgr.load_object_configs(
        str(os.path.join(data_path, obj_path))
    )[0]
    chair_attr = obj_attr_mgr.get_template_by_id(chair_template_id)
    obj_attr_mgr.register_template(chair_attr)

    object_id = sim.add_object_by_handle(chair_attr.handle)
    print(f"Chair's object id is {object_id}")

    set_object_in_front_of_agent(sim, object_id, -1.5)
    sim.set_object_motion_type(
        habitat_sim.physics.MotionType.STATIC, object_id
    )
    if make_video:
        # Visualize the agent's initial position
        simulate_and_make_vid(
            sim, [190, 128], "sim-before-grab", dt=1.0, open_vid=show_video
        )

    # Distance threshold=2 is greater than agent-to-chair distance.
    # Should return chair's object id
    closest_object = raycast(
        sim, "rgb", crosshair_pos=[128, 190], max_distance=2.0
    )
    print(f"Closest Object ID: {closest_object} using 2.0 threshold")
    assert (
        closest_object == object_id
    ), f"Could not pick chair with ID: {object_id}"

    # Distance threshold=1 is smaller than agent-to-chair distance.
    # Should return -1
    closest_object = raycast(
        sim, "rgb", crosshair_pos=[128, 190], max_distance=1.0
    )
    print(f"Closest Object ID: {closest_object} using 1.0 threshold")
    assert closest_object == -1, "Agent should not be able to pick any object"


# @title Define a Grab/Release action and create a new action space.
# @markdown Each new action is defined by an `ActionSpec` and an `ActuationSpec`. `ActionSpec` is a mapping between the action name and its corresponding `ActuationSpec`. `ActuationSpec` contains all the necessary specifications required to define the action.
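# @markdown (Aside) The nearest-hit bookkeeping inside the `raycast` utility above can be
# @markdown illustrated in plain Python. The `(object_id, ray_distance)` pairs below are
# @markdown hypothetical stand-ins for habitat_sim's hit records, not the real API; here the
# @markdown distance cutoff is applied per hit, whereas `sim.cast_ray` receives it directly.


def closest_hit(hits, max_distance=2.0):
    """Return the object id of the nearest hit within max_distance, else -1."""
    closest_object = -1
    closest_dist = float("inf")
    for object_id, ray_distance in hits:
        if ray_distance <= max_distance and ray_distance < closest_dist:
            closest_dist = ray_distance
            closest_object = object_id
    return closest_object


# Two hits in range: the nearer one (id 7 at 0.8m) wins.
print(closest_hit([(3, 1.2), (7, 0.8)]))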
from habitat.config.default import _C, CN
from habitat.core.embodied_task import SimulatorTaskAction
from habitat.sims.habitat_simulator.actions import (
    HabitatSimActions,
    HabitatSimV1ActionSpaceConfiguration,
)
from habitat_sim.agent.controls.controls import ActuationSpec
from habitat_sim.physics import MotionType

# @markdown For instance, `GrabReleaseActuationSpec` contains the following:
# @markdown - `visual_sensor_name` defines which viewport (rgb, depth, etc.) to use to cast the ray.
# @markdown - `crosshair_pos` stores the position in the viewport through which the ray passes. Any object which intersects with this ray can be grabbed by the agent.
# @markdown - `amount` defines a distance threshold. Objects which are farther than the threshold cannot be picked up by the agent.


@attr.s(auto_attribs=True, slots=True)
class GrabReleaseActuationSpec(ActuationSpec):
    visual_sensor_name: str = "rgb"
    crosshair_pos: List[int] = [128, 128]
    amount: float = 2.0


# @markdown Then, we extend the `HabitatSimV1ActionSpaceConfiguration` to add the above action into the agent's action space.
# @markdown `ActionSpaceConfiguration` is a mapping between the action name and the corresponding `ActionSpec`.


@registry.register_action_space_configuration(name="RearrangementActions-v0")
class RearrangementSimV0ActionSpaceConfiguration(
    HabitatSimV1ActionSpaceConfiguration
):
    def __init__(self, config):
        super().__init__(config)
        if not HabitatSimActions.has_action("GRAB_RELEASE"):
            HabitatSimActions.extend_action_space("GRAB_RELEASE")

    def get(self):
        config = super().get()
        new_config = {
            HabitatSimActions.GRAB_RELEASE: habitat_sim.ActionSpec(
                "grab_or_release_object_under_crosshair",
                GrabReleaseActuationSpec(
                    visual_sensor_name=self.config.VISUAL_SENSOR,
                    crosshair_pos=self.config.CROSSHAIR_POS,
                    amount=self.config.GRAB_DISTANCE,
                ),
            )
        }

        config.update(new_config)

        return config


# @markdown Finally, we extend `SimulatorTaskAction`, which tells the simulator which action to call when a named action ('GRAB_RELEASE' in this case) is predicted by the agent's policy.
@registry.register_task_action
class GrabOrReleaseAction(SimulatorTaskAction):
    def step(self, *args: Any, **kwargs: Any):
        r"""This method is called from ``Env`` on each ``step``."""
        return self._sim.step(HabitatSimActions.GRAB_RELEASE)


_C.TASK.ACTIONS.GRAB_RELEASE = CN()
_C.TASK.ACTIONS.GRAB_RELEASE.TYPE = "GrabOrReleaseAction"
_C.SIMULATOR.CROSSHAIR_POS = [128, 160]
_C.SIMULATOR.GRAB_DISTANCE = 2.0
_C.SIMULATOR.VISUAL_SENSOR = "rgb"
```

## Setup Simulator Class for Rearrangement Task

![sim](https://drive.google.com/uc?id=1ce6Ti-gpumMEyfomqAKWqOspXm6tN4_8)

```
# @title RearrangementSim Class
# @markdown Here we will extend the `HabitatSim` class for the rearrangement task. We will make the following changes:
# @markdown - define a new `_initialize_objects` function which will load the object in its initial configuration as defined by the episode.
# @markdown - define a `gripped_object_id` property that stores whether the agent is holding any object or not.
# @markdown - modify the `step` function of the simulator to use the `grab/release` action we defined earlier.

# @markdown #### Writing the `step` function:
# @markdown Since we added a new action for this task, we have to modify the `step` function to define what happens when the `grab/release` action is called. If a simple navigation action (`move_forward`, `turn_left`, `turn_right`) is called, we pass it forward to the `act` function of the agent, which already defines the behavior of these actions.

# @markdown For the `grab/release` action, if the agent is not already holding an object, we first call the `raycast` function using the values from the `ActuationSpec` to see if any object is grippable. If it returns a valid object id, we put the object in an "invisible" inventory and remove it from the scene.

# @markdown If the agent was already holding an object, the `grab/release` action will try to release the object at the same relative position as it was grabbed. If the object can be placed without any collision, then the `release` action is successful.

from habitat.sims.habitat_simulator.habitat_simulator import HabitatSim
from habitat_sim.nav import NavMeshSettings
from habitat_sim.utils.common import quat_from_coeffs, quat_to_magnum


@registry.register_simulator(name="RearrangementSim-v0")
class RearrangementSim(HabitatSim):
    r"""Simulator wrapper over habitat-sim with object rearrangement functionalities.
""" def __init__(self, config: Config) -> None: self.did_reset = False super().__init__(config=config) self.grip_offset = np.eye(4) agent_id = self.habitat_config.DEFAULT_AGENT_ID agent_config = self._get_agent_config(agent_id) self.navmesh_settings = NavMeshSettings() self.navmesh_settings.set_defaults() self.navmesh_settings.agent_radius = agent_config.RADIUS self.navmesh_settings.agent_height = agent_config.HEIGHT def reconfigure(self, config: Config) -> None: super().reconfigure(config) self._initialize_objects() def reset(self): sim_obs = super().reset() if self._update_agents_state(): sim_obs = self.get_sensor_observations() self._prev_sim_obs = sim_obs self.did_reset = True self.grip_offset = np.eye(4) return self._sensor_suite.get_observations(sim_obs) def _initialize_objects(self): objects = self.habitat_config.objects[0] obj_attr_mgr = self.get_object_template_manager() obj_attr_mgr.load_configs( str(os.path.join(data_path, "test_assets/objects")) ) # first remove all existing objects existing_object_ids = self.get_existing_object_ids() if len(existing_object_ids) > 0: for obj_id in existing_object_ids: self.remove_object(obj_id) self.sim_object_to_objid_mapping = {} self.objid_to_sim_object_mapping = {} if objects is not None: object_template = objects["object_template"] object_pos = objects["position"] object_rot = objects["rotation"] object_template_id = obj_attr_mgr.load_object_configs( object_template )[0] object_attr = obj_attr_mgr.get_template_by_id(object_template_id) obj_attr_mgr.register_template(object_attr) object_id = self.add_object_by_handle(object_attr.handle) self.sim_object_to_objid_mapping[object_id] = objects["object_id"] self.objid_to_sim_object_mapping[objects["object_id"]] = object_id self.set_translation(object_pos, object_id) if isinstance(object_rot, list): object_rot = quat_from_coeffs(object_rot) object_rot = quat_to_magnum(object_rot) self.set_rotation(object_rot, object_id) self.set_object_motion_type(MotionType.STATIC, 
object_id) # Recompute the navmesh after placing all the objects. self.recompute_navmesh(self.pathfinder, self.navmesh_settings, True) def _sync_gripped_object(self, gripped_object_id): r""" Sync the gripped object with the object associated with the agent. """ if gripped_object_id != -1: agent_body_transformation = ( self._default_agent.scene_node.transformation ) self.set_transformation( agent_body_transformation, gripped_object_id ) translation = agent_body_transformation.transform_point( np.array([0, 2.0, 0]) ) self.set_translation(translation, gripped_object_id) @property def gripped_object_id(self): return self._prev_sim_obs.get("gripped_object_id", -1) def step(self, action: int): dt = 1 / 60.0 self._num_total_frames += 1 collided = False gripped_object_id = self.gripped_object_id agent_config = self._default_agent.agent_config action_spec = agent_config.action_space[action] if action_spec.name == "grab_or_release_object_under_crosshair": # If already holding an agent if gripped_object_id != -1: agent_body_transformation = ( self._default_agent.scene_node.transformation ) T = np.dot(agent_body_transformation, self.grip_offset) self.set_transformation(T, gripped_object_id) position = self.get_translation(gripped_object_id) if self.pathfinder.is_navigable(position): self.set_object_motion_type( MotionType.STATIC, gripped_object_id ) gripped_object_id = -1 self.recompute_navmesh( self.pathfinder, self.navmesh_settings, True ) # if not holding an object, then try to grab else: gripped_object_id = raycast( self, action_spec.actuation.visual_sensor_name, crosshair_pos=action_spec.actuation.crosshair_pos, max_distance=action_spec.actuation.amount, ) # found a grabbable object. 
                if gripped_object_id != -1:
                    agent_body_transformation = (
                        self._default_agent.scene_node.transformation
                    )

                    self.grip_offset = np.dot(
                        np.array(agent_body_transformation.inverted()),
                        np.array(self.get_transformation(gripped_object_id)),
                    )
                    self.set_object_motion_type(
                        MotionType.KINEMATIC, gripped_object_id
                    )
                    self.recompute_navmesh(
                        self.pathfinder, self.navmesh_settings, True
                    )

        else:
            collided = self._default_agent.act(action)
            self._last_state = self._default_agent.get_state()

        # step physics by dt
        super().step_world(dt)

        # Sync the gripped object after the agent moves.
        self._sync_gripped_object(gripped_object_id)

        # obtain observations
        self._prev_sim_obs = self.get_sensor_observations()
        self._prev_sim_obs["collided"] = collided
        self._prev_sim_obs["gripped_object_id"] = gripped_object_id

        observations = self._sensor_suite.get_observations(self._prev_sim_obs)
        return observations
```

## Create the Rearrangement Task

![task](https://drive.google.com/uc?id=1N75Mmi6aigh33uL765ljsAqLzFmcs7Zn)

```
# @title Implement new sensors and measurements
# @markdown After defining the dataset, action space and simulator functions for the rearrangement task, we are one step closer to training agents to solve this task.

# @markdown Here we define inputs to the policy and other measurements required to design reward functions.

# @markdown **Sensors**: These expose the parts of the simulator state that are visible to the agent. For simplicity, we'll assume that the agent knows the object's current position and the object's final goal position, both relative to the agent's current position.
# @markdown - The object's current position will be given by the `ObjectPosition` sensor.
# @markdown - The object's goal position will be available through the `ObjectGoal` sensor.
# @markdown - Finally, we will also use the `GrippedObject` sensor to tell the agent if it's holding any object or not.

# @markdown **Measures**: These define various metrics about the task which can be used to measure task progress and define rewards.
Note that measurements are *privileged* information not accessible to the agent as part of the observation space. We will need the following measurements: # @markdown - `AgentToObjectDistance` which measure the euclidean distance between the agent and the object. # @markdown - `ObjectToGoalDistance` which measures the euclidean distance between the object and the goal. from gym import spaces import habitat_sim from habitat.config.default import CN, Config from habitat.core.dataset import Episode from habitat.core.embodied_task import Measure from habitat.core.simulator import Observations, Sensor, SensorTypes, Simulator from habitat.tasks.nav.nav import PointGoalSensor @registry.register_sensor class GrippedObjectSensor(Sensor): cls_uuid = "gripped_object_id" def __init__( self, *args: Any, sim: RearrangementSim, config: Config, **kwargs: Any ): self._sim = sim super().__init__(config=config) def _get_uuid(self, *args: Any, **kwargs: Any) -> str: return self.cls_uuid def _get_observation_space(self, *args: Any, **kwargs: Any): return spaces.Discrete(len(self._sim.get_existing_object_ids())) def _get_sensor_type(self, *args: Any, **kwargs: Any): return SensorTypes.MEASUREMENT def get_observation( self, observations: Dict[str, Observations], episode: Episode, *args: Any, **kwargs: Any, ): obj_id = self._sim.sim_object_to_objid_mapping.get( self._sim.gripped_object_id, -1 ) return obj_id @registry.register_sensor class ObjectPosition(PointGoalSensor): cls_uuid: str = "object_position" def _get_observation_space(self, *args: Any, **kwargs: Any): sensor_shape = (self._dimensionality,) return spaces.Box( low=np.finfo(np.float32).min, high=np.finfo(np.float32).max, shape=sensor_shape, dtype=np.float32, ) def get_observation( self, *args: Any, observations, episode, **kwargs: Any ): agent_state = self._sim.get_agent_state() agent_position = agent_state.position rotation_world_agent = agent_state.rotation object_id = self._sim.get_existing_object_ids()[0] object_position = 
self._sim.get_translation(object_id) pointgoal = self._compute_pointgoal( agent_position, rotation_world_agent, object_position ) return pointgoal @registry.register_sensor class ObjectGoal(PointGoalSensor): cls_uuid: str = "object_goal" def _get_observation_space(self, *args: Any, **kwargs: Any): sensor_shape = (self._dimensionality,) return spaces.Box( low=np.finfo(np.float32).min, high=np.finfo(np.float32).max, shape=sensor_shape, dtype=np.float32, ) def get_observation( self, *args: Any, observations, episode, **kwargs: Any ): agent_state = self._sim.get_agent_state() agent_position = agent_state.position rotation_world_agent = agent_state.rotation goal_position = np.array(episode.goals.position, dtype=np.float32) point_goal = self._compute_pointgoal( agent_position, rotation_world_agent, goal_position ) return point_goal @registry.register_measure class ObjectToGoalDistance(Measure): """The measure calculates distance of object towards the goal.""" cls_uuid: str = "object_to_goal_distance" def __init__( self, sim: Simulator, config: Config, *args: Any, **kwargs: Any ): self._sim = sim self._config = config super().__init__(**kwargs) @staticmethod def _get_uuid(*args: Any, **kwargs: Any): return ObjectToGoalDistance.cls_uuid def reset_metric(self, episode, *args: Any, **kwargs: Any): self.update_metric(*args, episode=episode, **kwargs) def _geo_dist(self, src_pos, goal_pos: np.array) -> float: return self._sim.geodesic_distance(src_pos, [goal_pos]) def _euclidean_distance(self, position_a, position_b): return np.linalg.norm( np.array(position_b) - np.array(position_a), ord=2 ) def update_metric(self, episode, *args: Any, **kwargs: Any): sim_obj_id = self._sim.get_existing_object_ids()[0] previous_position = np.array( self._sim.get_translation(sim_obj_id) ).tolist() goal_position = episode.goals.position self._metric = self._euclidean_distance( previous_position, goal_position ) @registry.register_measure class AgentToObjectDistance(Measure): """The measure 
calculates the distance of objects from the agent""" cls_uuid: str = "agent_to_object_distance" def __init__( self, sim: Simulator, config: Config, *args: Any, **kwargs: Any ): self._sim = sim self._config = config super().__init__(**kwargs) @staticmethod def _get_uuid(*args: Any, **kwargs: Any): return AgentToObjectDistance.cls_uuid def reset_metric(self, episode, *args: Any, **kwargs: Any): self.update_metric(*args, episode=episode, **kwargs) def _euclidean_distance(self, position_a, position_b): return np.linalg.norm( np.array(position_b) - np.array(position_a), ord=2 ) def update_metric(self, episode, *args: Any, **kwargs: Any): sim_obj_id = self._sim.get_existing_object_ids()[0] previous_position = np.array( self._sim.get_translation(sim_obj_id) ).tolist() agent_state = self._sim.get_agent_state() agent_position = agent_state.position self._metric = self._euclidean_distance( previous_position, agent_position ) # ----------------------------------------------------------------------------- # # REARRANGEMENT TASK GRIPPED OBJECT SENSOR # ----------------------------------------------------------------------------- _C.TASK.GRIPPED_OBJECT_SENSOR = CN() _C.TASK.GRIPPED_OBJECT_SENSOR.TYPE = "GrippedObjectSensor" # ----------------------------------------------------------------------------- # # REARRANGEMENT TASK ALL OBJECT POSITIONS SENSOR # ----------------------------------------------------------------------------- _C.TASK.OBJECT_POSITION = CN() _C.TASK.OBJECT_POSITION.TYPE = "ObjectPosition" _C.TASK.OBJECT_POSITION.GOAL_FORMAT = "POLAR" _C.TASK.OBJECT_POSITION.DIMENSIONALITY = 2 # ----------------------------------------------------------------------------- # # REARRANGEMENT TASK ALL OBJECT GOALS SENSOR # ----------------------------------------------------------------------------- _C.TASK.OBJECT_GOAL = CN() _C.TASK.OBJECT_GOAL.TYPE = "ObjectGoal" _C.TASK.OBJECT_GOAL.GOAL_FORMAT = "POLAR" _C.TASK.OBJECT_GOAL.DIMENSIONALITY = 2 # 
----------------------------------------------------------------------------- # # OBJECT_DISTANCE_TO_GOAL MEASUREMENT # ----------------------------------------------------------------------------- _C.TASK.OBJECT_TO_GOAL_DISTANCE = CN() _C.TASK.OBJECT_TO_GOAL_DISTANCE.TYPE = "ObjectToGoalDistance" # ----------------------------------------------------------------------------- # # OBJECT_DISTANCE_FROM_AGENT MEASUREMENT # ----------------------------------------------------------------------------- _C.TASK.AGENT_TO_OBJECT_DISTANCE = CN() _C.TASK.AGENT_TO_OBJECT_DISTANCE.TYPE = "AgentToObjectDistance" from habitat.config.default import CN, Config # @title Define `RearrangementTask` by extending `NavigationTask` from habitat.tasks.nav.nav import NavigationTask, merge_sim_episode_config def merge_sim_episode_with_object_config( sim_config: Config, episode: Type[Episode] ) -> Any: sim_config = merge_sim_episode_config(sim_config, episode) sim_config.defrost() sim_config.objects = [episode.objects.__dict__] sim_config.freeze() return sim_config @registry.register_task(name="RearrangementTask-v0") class RearrangementTask(NavigationTask): r"""Embodied Rearrangement Task Goal: An agent must place objects at their corresponding goal position. 
""" def __init__(self, **kwargs) -> None: super().__init__(**kwargs) def overwrite_sim_config(self, sim_config, episode): return merge_sim_episode_with_object_config(sim_config, episode) ``` ## Implement a hard-coded and an RL agent ``` # @title Load the `RearrangementTask` in Habitat-Lab and run a hard-coded agent import habitat config = habitat.get_config("configs/tasks/pointnav.yaml") config.defrost() config.ENVIRONMENT.MAX_EPISODE_STEPS = 50 config.SIMULATOR.TYPE = "RearrangementSim-v0" config.SIMULATOR.ACTION_SPACE_CONFIG = "RearrangementActions-v0" config.SIMULATOR.GRAB_DISTANCE = 2.0 config.SIMULATOR.HABITAT_SIM_V0.ENABLE_PHYSICS = True config.TASK.TYPE = "RearrangementTask-v0" config.TASK.SUCCESS_DISTANCE = 1.0 config.TASK.SENSORS = [ "GRIPPED_OBJECT_SENSOR", "OBJECT_POSITION", "OBJECT_GOAL", ] config.TASK.GOAL_SENSOR_UUID = "object_goal" config.TASK.MEASUREMENTS = [ "OBJECT_TO_GOAL_DISTANCE", "AGENT_TO_OBJECT_DISTANCE", ] config.TASK.POSSIBLE_ACTIONS = ["STOP", "MOVE_FORWARD", "GRAB_RELEASE"] config.DATASET.TYPE = "RearrangementDataset-v0" config.DATASET.SPLIT = "train" config.DATASET.DATA_PATH = ( "data/datasets/rearrangement/coda/v1/{split}/{split}.json.gz" ) config.freeze() def print_info(obs, metrics): print( "Gripped Object: {}, Distance To Object: {}, Distance To Goal: {}".format( obs["gripped_object_id"], metrics["agent_to_object_distance"], metrics["object_to_goal_distance"], ) ) try: # Got to make initialization idiot proof sim.close() except NameError: pass with habitat.Env(config) as env: obs = env.reset() obs_list = [] # Get closer to the object while True: obs = env.step(1) obs_list.append(obs) metrics = env.get_metrics() print_info(obs, metrics) if metrics["agent_to_object_distance"] < 2.0: break # Grab the object obs = env.step(2) obs_list.append(obs) metrics = env.get_metrics() print_info(obs, metrics) assert obs["gripped_object_id"] != -1 # Get closer to the goal while True: obs = env.step(1) obs_list.append(obs) metrics = 
env.get_metrics()
        print_info(obs, metrics)

        if metrics["object_to_goal_distance"] < 2.0:
            break

    # Release the object
    obs = env.step(2)
    obs_list.append(obs)
    metrics = env.get_metrics()
    print_info(obs, metrics)

    assert obs["gripped_object_id"] == -1

    if make_video:
        make_video_cv2(
            obs_list,
            [190, 128],
            "hard-coded-agent",
            fps=5.0,
            open_vid=show_video,
        )


# @title Create a task specific RL Environment with a new reward definition.
# @markdown We create a `RearrangementRLEnv` class and modify the `get_reward()` function.

# @markdown The reward structure is as follows:
# @markdown - The agent gets a positive reward if it gets closer to the object, otherwise a negative reward.
# @markdown - The agent gets a positive reward if it moves the object closer to the goal, otherwise a negative reward.
# @markdown - The agent gets a positive reward when it "picks" up an object for the first time. For all other "grab/release" actions, it gets a negative reward.
# @markdown - The agent gets a slack penalty of -0.01 for every action it takes in the environment.
# @markdown - Finally, the agent gets a large success reward when the episode is completed successfully.
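# @markdown As a plain-Python sketch of this reward shaping (the constants and function below are
# @markdown illustrative only, not the actual config values; it is also simplified in that it sums
# @markdown both distance terms, while in the real environment only one of the two is active per step
# @markdown depending on whether the agent is holding the object):


def shaped_reward(
    dist_to_obj_delta,   # previous minus current agent-to-object distance
    obj_to_goal_delta,   # previous minus current object-to-goal distance
    first_grip=False,    # object picked up for the first time this step
    wasted_grab=False,   # grab/release fired but achieved nothing
    success=False,       # episode completed successfully
    slack=-0.01,
    grip_bonus=5.0,
    success_bonus=10.0,
):
    reward = slack + dist_to_obj_delta + obj_to_goal_delta
    if first_grip:
        reward += grip_bonus
    elif wasted_grab:
        reward += -0.1
    if success:
        reward += success_bonus
    return reward


# Moving 0.5m closer to the object while grabbing it for the first time
# yields slack + 0.5 + grip_bonus.
print(shaped_reward(0.5, 0.0, first_grip=True))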
from typing import Optional, Type import numpy as np import habitat from habitat import Config, Dataset from habitat_baselines.common.baseline_registry import baseline_registry from habitat_baselines.common.environments import NavRLEnv @baseline_registry.register_env(name="RearrangementRLEnv") class RearrangementRLEnv(NavRLEnv): def __init__(self, config: Config, dataset: Optional[Dataset] = None): self._prev_measure = { "agent_to_object_distance": 0.0, "object_to_goal_distance": 0.0, "gripped_object_id": -1, "gripped_object_count": 0, } super().__init__(config, dataset) self._success_distance = self._core_env_config.TASK.SUCCESS_DISTANCE def reset(self): self._previous_action = None observations = super().reset() self._prev_measure.update(self.habitat_env.get_metrics()) self._prev_measure["gripped_object_id"] = -1 self._prev_measure["gripped_object_count"] = 0 return observations def step(self, *args, **kwargs): self._previous_action = kwargs["action"] return super().step(*args, **kwargs) def get_reward_range(self): return ( self._rl_config.SLACK_REWARD - 1.0, self._rl_config.SUCCESS_REWARD + 1.0, ) def get_reward(self, observations): reward = self._rl_config.SLACK_REWARD gripped_success_reward = 0.0 episode_success_reward = 0.0 agent_to_object_dist_reward = 0.0 object_to_goal_dist_reward = 0.0 action_name = self._env.task.get_action_name( self._previous_action["action"] ) # If object grabbed, add a success reward # The reward gets awarded only once for an object. 
if ( action_name == "GRAB_RELEASE" and observations["gripped_object_id"] >= 0 ): obj_id = observations["gripped_object_id"] self._prev_measure["gripped_object_count"] += 1 gripped_success_reward = ( self._rl_config.GRIPPED_SUCCESS_REWARD if self._prev_measure["gripped_object_count"] == 1 else 0.0 ) # add a penalty everytime grab/action is called and doesn't do anything elif action_name == "GRAB_RELEASE": gripped_success_reward += -0.1 self._prev_measure["gripped_object_id"] = observations[ "gripped_object_id" ] # If the action is not a grab/release action, and the agent # has not picked up an object, then give reward based on agent to # object distance. if ( action_name != "GRAB_RELEASE" and self._prev_measure["gripped_object_id"] == -1 ): agent_to_object_dist_reward = self.get_agent_to_object_dist_reward( observations ) # If the action is not a grab/release action, and the agent # has picked up an object, then give reward based on object to # to goal distance. if ( action_name != "GRAB_RELEASE" and self._prev_measure["gripped_object_id"] != -1 ): object_to_goal_dist_reward = self.get_object_to_goal_dist_reward() if ( self._episode_success(observations) and self._prev_measure["gripped_object_id"] == -1 and action_name == "STOP" ): episode_success_reward = self._rl_config.SUCCESS_REWARD reward += ( agent_to_object_dist_reward + object_to_goal_dist_reward + gripped_success_reward + episode_success_reward ) return reward def get_agent_to_object_dist_reward(self, observations): """ Encourage the agent to move towards the closest object which is not already in place. 
""" curr_metric = self._env.get_metrics()["agent_to_object_distance"] prev_metric = self._prev_measure["agent_to_object_distance"] dist_reward = prev_metric - curr_metric self._prev_measure["agent_to_object_distance"] = curr_metric return dist_reward def get_object_to_goal_dist_reward(self): curr_metric = self._env.get_metrics()["object_to_goal_distance"] prev_metric = self._prev_measure["object_to_goal_distance"] dist_reward = prev_metric - curr_metric self._prev_measure["object_to_goal_distance"] = curr_metric return dist_reward def _episode_success(self, observations): r"""Returns True if object is within distance threshold of the goal.""" dist = self._env.get_metrics()["object_to_goal_distance"] if ( abs(dist) > self._success_distance or observations["gripped_object_id"] != -1 ): return False return True def _gripped_success(self, observations): if ( observations["gripped_object_id"] >= 0 and observations["gripped_object_id"] != self._prev_measure["gripped_object_id"] ): return True return False def get_done(self, observations): done = False action_name = self._env.task.get_action_name( self._previous_action["action"] ) if self._env.episode_over or ( self._episode_success(observations) and self._prev_measure["gripped_object_id"] == -1 and action_name == "STOP" ): done = True return done def get_info(self, observations): info = self.habitat_env.get_metrics() info["episode_success"] = self._episode_success(observations) return info import os import time from typing import Any, Dict, List, Optional import numpy as np from torch.optim.lr_scheduler import LambdaLR from habitat import Config, logger from habitat.utils.visualizations.utils import observations_to_image from habitat_baselines.common.baseline_registry import baseline_registry from habitat_baselines.common.environments import get_env_class from habitat_baselines.common.tensorboard_utils import TensorboardWriter from habitat_baselines.rl.models.rnn_state_encoder import ( build_rnn_state_encoder, ) from 
habitat_baselines.rl.ppo import PPO from habitat_baselines.rl.ppo.policy import Net, Policy from habitat_baselines.rl.ppo.ppo_trainer import PPOTrainer from habitat_baselines.utils.common import batch_obs, generate_video from habitat_baselines.utils.env_utils import make_env_fn def construct_envs( config, env_class, workers_ignore_signals=False, ): r"""Create VectorEnv object with specified config and env class type. To allow better performance, dataset are split into small ones for each individual env, grouped by scenes. :param config: configs that contain num_processes as well as information :param necessary to create individual environments. :param env_class: class type of the envs to be created. :param workers_ignore_signals: Passed to :ref:`habitat.VectorEnv`'s constructor :return: VectorEnv object created according to specification. """ num_processes = config.NUM_ENVIRONMENTS configs = [] env_classes = [env_class for _ in range(num_processes)] dataset = habitat.datasets.make_dataset(config.TASK_CONFIG.DATASET.TYPE) scenes = config.TASK_CONFIG.DATASET.CONTENT_SCENES if "*" in config.TASK_CONFIG.DATASET.CONTENT_SCENES: scenes = dataset.get_scenes_to_load(config.TASK_CONFIG.DATASET) if num_processes > 1: if len(scenes) == 0: raise RuntimeError( "No scenes to load, multiple process logic relies on being able to split scenes uniquely between processes" ) if len(scenes) < num_processes: scenes = scenes * num_processes random.shuffle(scenes) scene_splits = [[] for _ in range(num_processes)] for idx, scene in enumerate(scenes): scene_splits[idx % len(scene_splits)].append(scene) assert sum(map(len, scene_splits)) == len(scenes) for i in range(num_processes): proc_config = config.clone() proc_config.defrost() task_config = proc_config.TASK_CONFIG task_config.SEED = task_config.SEED + i if len(scenes) > 0: task_config.DATASET.CONTENT_SCENES = scene_splits[i] task_config.SIMULATOR.HABITAT_SIM_V0.GPU_DEVICE_ID = ( config.SIMULATOR_GPU_ID ) 
task_config.SIMULATOR.AGENT_0.SENSORS = config.SENSORS proc_config.freeze() configs.append(proc_config) envs = habitat.ThreadedVectorEnv( make_env_fn=make_env_fn, env_fn_args=tuple(zip(configs, env_classes)), workers_ignore_signals=workers_ignore_signals, ) return envs class RearrangementBaselinePolicy(Policy): def __init__(self, observation_space, action_space, hidden_size=512): super().__init__( RearrangementBaselineNet( observation_space=observation_space, hidden_size=hidden_size ), action_space.n, ) def from_config(cls, config, envs): pass class RearrangementBaselineNet(Net): r"""Network which passes the input image through CNN and concatenates goal vector with CNN's output and passes that through RNN. """ def __init__(self, observation_space, hidden_size): super().__init__() self._n_input_goal = observation_space.spaces[ ObjectGoal.cls_uuid ].shape[0] self._hidden_size = hidden_size self.state_encoder = build_rnn_state_encoder( 2 * self._n_input_goal, self._hidden_size ) self.train() @property def output_size(self): return self._hidden_size @property def is_blind(self): return False @property def num_recurrent_layers(self): return self.state_encoder.num_recurrent_layers def forward(self, observations, rnn_hidden_states, prev_actions, masks): object_goal_encoding = observations[ObjectGoal.cls_uuid] object_pos_encoding = observations[ObjectPosition.cls_uuid] x = [object_goal_encoding, object_pos_encoding] x = torch.cat(x, dim=1) x, rnn_hidden_states = self.state_encoder(x, rnn_hidden_states, masks) return x, rnn_hidden_states @baseline_registry.register_trainer(name="ppo-rearrangement") class RearrangementTrainer(PPOTrainer): supported_tasks = ["RearrangementTask-v0"] def _setup_actor_critic_agent(self, ppo_cfg: Config) -> None: r"""Sets up actor critic and agent for PPO. 
Args: ppo_cfg: config node with relevant params Returns: None """ logger.add_filehandler(self.config.LOG_FILE) self.actor_critic = RearrangementBaselinePolicy( observation_space=self.envs.observation_spaces[0], action_space=self.envs.action_spaces[0], hidden_size=ppo_cfg.hidden_size, ) self.actor_critic.to(self.device) self.agent = PPO( actor_critic=self.actor_critic, clip_param=ppo_cfg.clip_param, ppo_epoch=ppo_cfg.ppo_epoch, num_mini_batch=ppo_cfg.num_mini_batch, value_loss_coef=ppo_cfg.value_loss_coef, entropy_coef=ppo_cfg.entropy_coef, lr=ppo_cfg.lr, eps=ppo_cfg.eps, max_grad_norm=ppo_cfg.max_grad_norm, use_normalized_advantage=ppo_cfg.use_normalized_advantage, ) def _init_envs(self, config=None): if config is None: config = self.config self.envs = construct_envs(config, get_env_class(config.ENV_NAME)) def train(self) -> None: r"""Main method for training PPO. Returns: None """ if self._is_distributed: raise RuntimeError("This trainer does not support distributed") self._init_train() count_checkpoints = 0 lr_scheduler = LambdaLR( optimizer=self.agent.optimizer, lr_lambda=lambda _: 1 - self.percent_done(), ) ppo_cfg = self.config.RL.PPO with TensorboardWriter( self.config.TENSORBOARD_DIR, flush_secs=self.flush_secs ) as writer: while not self.is_done(): if ppo_cfg.use_linear_clip_decay: self.agent.clip_param = ppo_cfg.clip_param * ( 1 - self.percent_done() ) count_steps_delta = 0 for _step in range(ppo_cfg.num_steps): count_steps_delta += self._collect_rollout_step() ( value_loss, action_loss, dist_entropy, ) = self._update_agent() if ppo_cfg.use_linear_lr_decay: lr_scheduler.step() # type: ignore losses = self._coalesce_post_step( dict(value_loss=value_loss, action_loss=action_loss), count_steps_delta, ) self.num_updates_done += 1 deltas = { k: ( (v[-1] - v[0]).sum().item() if len(v) > 1 else v[0].sum().item() ) for k, v in self.window_episode_stats.items() } deltas["count"] = max(deltas["count"], 1.0) writer.add_scalar( "reward", deltas["reward"] / 
deltas["count"], self.num_steps_done, ) # Check to see if there are any metrics # that haven't been logged yet for k, v in deltas.items(): if k not in {"reward", "count"}: writer.add_scalar( "metric/" + k, v / deltas["count"], self.num_steps_done, ) losses = [value_loss, action_loss] for l, k in zip(losses, ["value, policy"]): writer.add_scalar("losses/" + k, l, self.num_steps_done) # log stats if self.num_updates_done % self.config.LOG_INTERVAL == 0: logger.info( "update: {}\tfps: {:.3f}\t".format( self.num_updates_done, self.num_steps_done / (time.time() - self.t_start), ) ) logger.info( "update: {}\tenv-time: {:.3f}s\tpth-time: {:.3f}s\t" "frames: {}".format( self.num_updates_done, self.env_time, self.pth_time, self.num_steps_done, ) ) logger.info( "Average window size: {} {}".format( len(self.window_episode_stats["count"]), " ".join( "{}: {:.3f}".format(k, v / deltas["count"]) for k, v in deltas.items() if k != "count" ), ) ) # checkpoint model if self.should_checkpoint(): self.save_checkpoint( f"ckpt.{count_checkpoints}.pth", dict(step=self.num_steps_done), ) count_checkpoints += 1 self.envs.close() def eval(self) -> None: r"""Evaluates the current model Returns: None """ config = self.config.clone() if len(self.config.VIDEO_OPTION) > 0: config.defrost() config.NUM_ENVIRONMENTS = 1 config.freeze() logger.info(f"env config: {config}") with construct_envs(config, get_env_class(config.ENV_NAME)) as envs: observations = envs.reset() batch = batch_obs(observations, device=self.device) current_episode_reward = torch.zeros( envs.num_envs, 1, device=self.device ) ppo_cfg = self.config.RL.PPO test_recurrent_hidden_states = torch.zeros( config.NUM_ENVIRONMENTS, self.actor_critic.net.num_recurrent_layers, ppo_cfg.hidden_size, device=self.device, ) prev_actions = torch.zeros( config.NUM_ENVIRONMENTS, 1, device=self.device, dtype=torch.long, ) not_done_masks = torch.zeros( config.NUM_ENVIRONMENTS, 1, device=self.device, dtype=torch.bool, ) rgb_frames = [ [] for _ in 
range(self.config.NUM_ENVIRONMENTS) ] # type: List[List[np.ndarray]] if len(config.VIDEO_OPTION) > 0: os.makedirs(config.VIDEO_DIR, exist_ok=True) self.actor_critic.eval() for _i in range(config.TASK_CONFIG.ENVIRONMENT.MAX_EPISODE_STEPS): current_episodes = envs.current_episodes() with torch.no_grad(): ( _, actions, _, test_recurrent_hidden_states, ) = self.actor_critic.act( batch, test_recurrent_hidden_states, prev_actions, not_done_masks, deterministic=False, ) prev_actions.copy_(actions) outputs = envs.step([a[0].item() for a in actions]) observations, rewards, dones, infos = [ list(x) for x in zip(*outputs) ] batch = batch_obs(observations, device=self.device) not_done_masks = torch.tensor( [[not done] for done in dones], dtype=torch.bool, device="cpu", ) rewards = torch.tensor( rewards, dtype=torch.float, device=self.device ).unsqueeze(1) current_episode_reward += rewards # episode ended if not not_done_masks[0].item(): generate_video( video_option=self.config.VIDEO_OPTION, video_dir=self.config.VIDEO_DIR, images=rgb_frames[0], episode_id=current_episodes[0].episode_id, checkpoint_idx=0, metrics=self._extract_scalars_from_info(infos[0]), tb_writer=None, ) print("Evaluation Finished.") print("Success: {}".format(infos[0]["episode_success"])) print( "Reward: {}".format(current_episode_reward[0].item()) ) print( "Distance To Goal: {}".format( infos[0]["object_to_goal_distance"] ) ) return # episode continues elif len(self.config.VIDEO_OPTION) > 0: frame = observations_to_image(observations[0], infos[0]) rgb_frames[0].append(frame) not_done_masks = not_done_masks.to(device=self.device) %load_ext tensorboard %tensorboard --logdir data/tb # @title Train an RL agent on a single episode !if [ -d "data/tb" ]; then rm -r data/tb; fi import random import numpy as np import torch import habitat from habitat import Config from habitat_baselines.config.default import get_config as get_baseline_config baseline_config = get_baseline_config( 
"habitat_baselines/config/pointnav/ppo_pointnav.yaml" ) baseline_config.defrost() baseline_config.TASK_CONFIG = config baseline_config.TRAINER_NAME = "ddppo" baseline_config.ENV_NAME = "RearrangementRLEnv" baseline_config.SIMULATOR_GPU_ID = 0 baseline_config.TORCH_GPU_ID = 0 baseline_config.VIDEO_OPTION = ["disk"] baseline_config.TENSORBOARD_DIR = "data/tb" baseline_config.VIDEO_DIR = "data/videos" baseline_config.NUM_ENVIRONMENTS = 2 baseline_config.SENSORS = ["RGB_SENSOR", "DEPTH_SENSOR"] baseline_config.CHECKPOINT_FOLDER = "data/checkpoints" baseline_config.TOTAL_NUM_STEPS = -1.0 if vut.is_notebook(): baseline_config.NUM_UPDATES = 400 # @param {type:"number"} else: baseline_config.NUM_UPDATES = 1 baseline_config.LOG_INTERVAL = 10 baseline_config.NUM_CHECKPOINTS = 5 baseline_config.LOG_FILE = "data/checkpoints/train.log" baseline_config.EVAL.SPLIT = "train" baseline_config.RL.SUCCESS_REWARD = 2.5 # @param {type:"number"} baseline_config.RL.SUCCESS_MEASURE = "object_to_goal_distance" baseline_config.RL.REWARD_MEASURE = "object_to_goal_distance" baseline_config.RL.GRIPPED_SUCCESS_REWARD = 2.5 # @param {type:"number"} baseline_config.freeze() random.seed(baseline_config.TASK_CONFIG.SEED) np.random.seed(baseline_config.TASK_CONFIG.SEED) torch.manual_seed(baseline_config.TASK_CONFIG.SEED) if __name__ == "__main__": trainer = RearrangementTrainer(baseline_config) trainer.train() trainer.eval() if make_video: video_file = os.listdir("data/videos")[0] vut.display_video(os.path.join("data/videos", video_file)) ```
<a href="https://colab.research.google.com/github/gtbook/robotics/blob/main/S36_vacuum_RL.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` %pip install -q -U gtbook import numpy as np import gtsam import pandas as pd import gtbook import gtbook.display from gtbook import vacuum from gtbook.discrete import Variables VARIABLES = Variables() def pretty(obj): return gtbook.display.pretty(obj, VARIABLES) def show(obj, **kwargs): return gtbook.display.show(obj, VARIABLES, **kwargs) # From section 3.2: N = 5 X = VARIABLES.discrete_series("X", range(1, N+1), vacuum.rooms) A = VARIABLES.discrete_series("A", range(1, N), vacuum.action_space) # From section 3.5: conditional = gtsam.DiscreteConditional((2,5), [(0,5), (1,4)], vacuum.action_spec) R = np.empty((5, 4, 5), float) T = np.empty((5, 4, 5), float) for assignment, value in conditional.enumerate(): x, a, y = assignment[0], assignment[1], assignment[2] R[x, a, y] = 10.0 if y == vacuum.rooms.index("Living Room") else 0.0 T[x, a, y] = value ``` # Reinforcement Learning > We will talk about model-based and model-free learning. **This Section is still in draft mode and was released for adventurous spirits (and TAs) only.** ``` from gtbook.display import randomImages from IPython.display import display display(randomImages(3, 6, "steampunk", 1)) ``` ## Exploring to get Data > Where we gather experience. Let's adapt the `policy_rollout` code from the previous section to generate a whole lot of experiences of the form $(x,a,x',r)$. 
```
def explore_randomly(x1, horizon=N):
    """Roll out states given a random policy, for given horizon."""
    data = []
    x = x1
    for _ in range(1, horizon):
        a = np.random.choice(4)
        next_state_distribution = gtsam.DiscreteDistribution(X[1], T[x, a])
        x_prime = next_state_distribution.sample()
        data.append((x, a, x_prime, R[x, a, x_prime]))
        x = x_prime
    return data
```

Let us use it to create 499 experiences and show the first 10:

```
data = explore_randomly(vacuum.rooms.index("Living Room"), horizon=500)
print(data[:10])
```

## Model-based Reinforcement Learning

> Just count, then solve the MDP.

We can *estimate* the transition probabilities $T$ and reward table $R$ from the data, and then we can use the algorithms from before to calculate the value function and/or optimal policy. The math is just a variant of what we saw in the learning section of the last chapter. The reward is easiest:

$$ R(x,a,x') \approx \frac{1}{N(x,a,x')} \sum_{x,a,x'} r $$

where $N(x,a,x')$ counts how many times an experience $(x,a,x')$ was recorded. The transition probabilities are a bit trickier:

$$ P(x'|x,a) \approx \frac{N(x,a,x')}{N(x,a)} $$

where $N(x,a)=\sum_{x'} N(x,a,x')$ is the number of times we took action $a$ in state $x$. The code associated with that is fairly simple, modulo some numpy trickery to deal with division by zero and *broadcasting* the division:

```
R_sum = np.zeros((5, 4, 5), float)
T_count = np.zeros((5, 4, 5), float)
for x, a, x_prime, r in data:
    R_sum[x, a, x_prime] += r
    T_count[x, a, x_prime] += 1
R_estimate = np.divide(R_sum, T_count, where=T_count!=0)
xa_count = np.sum(T_count, axis=2)
T_estimate = T_count/np.expand_dims(xa_count, axis=-1)
```

Above `T_count` corresponds to $N(x,a,x')$, and the variable `xa_count` is $N(x,a)$. It is good to check the latter to see whether our experiences were more or less representative, i.e., visited all state-action pairs:

```
xa_count
```

This seems pretty good.
If not, we can always gather more data, which we encourage you to experiment with.

We can compare the ground truth transition probabilities $T$ with the estimated transition probabilities $\hat{T}$, e.g., for the living room:

```
print(f"ground truth:\n{T[0]}")
print(f"estimate:\n{np.round(T_estimate[0],2)}")
```

Not bad. And for the rewards:

```
print(f"ground truth:\n{R[0]}")
print(f"estimate:\n{np.round(R_estimate[0],2)}")
```

In summary, learning in this context can simply be done by gathering lots of experiences, and estimating models for how the world behaves.

## Model-free Reinforcement Learning

> All you need is Q, la la la la.

A different, model-free approach is **Q-learning**. In the above we tried to *model* the world by trying to estimate the (large) transition and reward tables. However, remember from the previous section that there is a much smaller table of Q-values $Q(x,a)$ that also allows us to act optimally, because we have

$$ \pi^*(x) = \arg \max_a Q^*(x,a) $$

where the Q-values are defined as

$$ Q^*(x,a) \doteq \bar{R}(x,a) + \gamma \sum_{x'} P(x'|x, a) V^*(x') $$

This begs the question whether we can simply learn the Q-values instead, which might be more *sample-efficient*, i.e., we would get more accurate values with less training data, as we have fewer quantities to estimate. To do this, remember that the Bellman equation can be written as

$$ V^*(x) = \max_a Q^*(x,a) $$

allowing us to rewrite the Q-values from above as

$$ Q^*(x,a) = \sum_{x'} P(x'|x, a) \{ R(x,a,x') + \gamma \max_{a'} Q^*(x',a') \} $$

This gives us a way to estimate the Q-values, as we can approximate the above using a Monte Carlo estimate, summing over our experiences:

$$ Q^*(x,a) \approx \frac{1}{N(x,a)} \sum_{x,a,x'} R(x,a,x') + \gamma \max_{a'} Q^*(x',a') $$

Unfortunately the estimate above *depends* on the optimal Q-values.
Hence, the final Q-learning algorithm applies this estimate gradually, by "alpha-blending" between old and new estimates, which also averages over the reward: $$ \hat{Q}(x,a) \leftarrow (1-\alpha) \hat{Q}(x,a) + \alpha \{R(x,a,x') + \gamma \max_{a'} \hat{Q}(x',a') \} $$ In code: ``` alpha = 0.5 # learning rate gamma = 0.9 # discount factor Q = np.zeros((5, 4), float) for x, a, x_prime, r in data: old_Q_estimate = Q[x,a] new_Q_estimate = r + gamma * np.max(Q[x_prime]) Q[x, a] = (1.0-alpha) * old_Q_estimate + alpha * new_Q_estimate print(Q) ``` These values are not yet quite accurate, as you can ascertain yourself by changing the number of experiences above, but note that an optimal policy can be achieved before we even converge.
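Since $\pi^*(x) = \arg \max_a Q^*(x,a)$, extracting a greedy policy from a Q-table is a single row-wise `argmax`. Here is a minimal sketch; the Q-table below is a made-up stand-in with the same (5 states × 4 actions) shape as above, so the values are purely illustrative — in the notebook you would pass the learned `Q` directly:

```python
import numpy as np

# Made-up stand-in Q-table (5 states x 4 actions); values are for illustration only.
Q = np.array([[0.0, 1.0, 0.5, 0.2],
              [2.0, 0.1, 0.3, 0.4],
              [0.2, 0.2, 3.0, 0.1],
              [1.5, 0.0, 0.0, 0.9],
              [0.1, 0.8, 0.2, 2.2]])

# The greedy policy is a row-wise argmax: one best action index per state.
greedy_policy = np.argmax(Q, axis=1)
print(greedy_policy)  # -> [1 0 2 0 3]
```

Note that this is why the policy can be optimal before the Q-values converge: only the *ordering* of the Q-values within each row matters, not their magnitudes.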
# Figure 3: iModulon Examples ## Setup ``` from os import path import seaborn as sns import matplotlib.pyplot as plt from pymodulon.io import load_json_model from pymodulon.plotting import * ``` ### Set plotting style ``` sns.set_style('ticks') plt.style.use('custom.mplstyle') ``` ### Load data ``` figure_dir = 'raw_figures' data_dir = path.join('..','data','processed_data') data_file = path.join(data_dir,'bsu.json.gz') ica_data = load_json_model(data_file) ``` # Panel A: Early Biofilm iModulon ``` plot_gene_weights(ica_data,'early-biofilm',show_labels=True,label_font_kwargs={'fontsize':6}) plt.savefig(path.join('raw_figures','Fig3a_biofilm_genes.pdf')) ``` # Panel B: Biofilm iModulon activities ``` fig,ax = plt.subplots(figsize=(4,3)) plot_activities(ica_data,'early-biofilm', ax=ax, projects=['biofilm_time','mk7','pamR'], highlight=['biofilm_time','mk7','pamR'], legend_kwargs={'ncol':1,'frameon':False}) plt.savefig(path.join('raw_figures','Fig3b_biofilm_activities.pdf')) ``` # Panel C: SP-beta iModulons ``` spb1 = set(ica_data.view_imodulon('SPbeta-1').index) spb2 = set(ica_data.view_imodulon('SPbeta-2').index) yono = set(ica_data.view_imodulon('YonO-1').index) from matplotlib_venn import venn3 spb = set(ica_data.gene_table.loc['BSU_19820':'BSU_21660'].index) venn3((spb2,yono,spb),set_labels=('SPbeta-2','YonO','SPB')) plt.figure() venn3((spb1,spb2,spb),set_labels=('SPbeta-1','SPbeta-2','SPB')) plt.figure() venn3((spb1,yono,spb),set_labels=('SPbeta-1','YonO','SPB')) fig, ax = plt.subplots() venn = venn3((spb1,spb2,yono),set_labels=('SPbeta-1 iModulon','SPbeta-2 iModulon','YonO-1 iModulon')) # Remove SPbeta-1 text venn.subset_labels[0].set_text('') # Add star to center venn.subset_labels[-1].set_text('5*') # Add SPbeta circle circle = plt.Circle((0.1,-0.015),.58,color='purple',zorder=0,alpha=0.25) ax.add_patch(circle) # Add SPbeta-1 texts ax.text(-0.6,0.2,str(len(spb1-spb-spb2-yono))) ax.text(-0.3,0.1,str(len((spb1&spb)-spb2-yono))) 
ax.text(0.42,-0.35,str(len(spb-spb1-spb2-yono))) plt.savefig(path.join('raw_figures','Fig3c_spbeta_genes.pdf')) spb1 & spb2 & yono - spb ``` # Panel D,E: SPbeta iModulon activities ``` groups = {} for i,row in ica_data.sample_table.iterrows(): if row.condition == 'wt_53C': groups[i] = 'Heatshock (53C)' elif row.condition in ['delyonO','delyonO_mmc']: groups[i] = 'YonO Mutant' elif row.strain_description == 'BEST7003 with phi3T': groups[i] = 'Phi3T Infection' elif row.strain_description == 'BEST7003 with spBeta': groups[i] = 'spBeta Infection' fig,ax = plt.subplots(figsize=(2.5,2.5)) compare_activities(ica_data,'SPbeta-1','SPbeta-2', ax=ax, line45=True, fit_metric=None, line45_margin=20, groups=groups, colors=['tab:green','tab:orange']) plt.savefig(path.join('raw_figures','Fig3d_spbeta_activities.pdf')) fig,ax = plt.subplots(figsize=(2.5,2.5)) compare_activities(ica_data,'YonO-1','SPbeta-2', ax=ax, line45=True, fit_metric=None, line45_margin=20, groups=groups, colors=['tab:red','tab:blue']) plt.savefig(path.join('raw_figures','Fig3e_spbeta_yonO_activities.pdf')) ``` # Supplemental Figure 3 ``` cmap,cg = cluster_activities(ica_data, show_best_clusters=True,show_thresholding=True, cluster_names={14:'CcpA',11:'SPbeta',1:'SigB',3:'Biofilm',9:'Carbon Sources',16:'Anaerobiosis',2:'Nutrient Availability'}, return_clustermap=True) cg.savefig(path.join('raw_figures','FigS3a_clustermap.png')) plt.savefig(path.join('raw_figures','FigS3b_clustermap.png')) ```
##### Copyright 2021 The TF-Agents Authors.

```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# Introduction to Reinforcement Learning (RL) and Deep Q-Networks

<table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/agents/tutorials/0_intro_rl"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/0_intro_rl.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/0_intro_rl.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/agents/tutorials/0_intro_rl.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> </table>

## Introduction

Reinforcement learning (RL) is a general framework in which an agent learns to perform actions in an environment so as to maximize reward. The two main components are the environment, which represents the problem to be solved, and the agent, which represents the learning algorithm.

The agent and the environment continuously interact with each other. At each time step, the agent takes an action on the environment based on its *policy* $\pi(a_t|s_t)$, where $s_t$ is the current observation from the environment, and then receives a reward $r_{t+1}$ and the next observation $s_{t+1}$ from the environment. The goal is to improve the policy so as to maximize the sum of rewards (the return).

Note: it is important to distinguish between the `state` of the environment and the `observation`, which is the part of the environment's `state` that the agent can see. For example, in a poker game, the state of the environment consists of the cards of all the players and the community cards, but the agent can observe only its own cards and a few community cards. In most literature these terms are used interchangeably, and the observation is also denoted as $s$.

![Agent-Environment Interaction Loop](images/rl_overview.png)

This is a very general framework and can model a variety of sequential decision-making problems such as games and robotics.

## The CartPole environment

The CartPole environment is one of the most well-known classic reinforcement learning problems (the *"Hello, World!"* of RL). A pole is attached to a cart, which can move along a frictionless track; the task is to control the cart so that the pole does not fall over.

- The observation from the environment $s_t$ is a 4D vector representing the position and velocity of the cart, and the angle and angular velocity of the pole.
- The agent can control the system by taking one of 2 actions $a_t$: push the cart right (+1) or left (-1).
- A reward $r_{t+1} = 1$ is provided for every time step that the pole remains upright. The episode ends when:
  - the pole tips over beyond some angle limit,
  - the cart moves outside of the defined edges, or
  - 200 time steps pass.

The goal of the agent is to learn a policy $\pi(a_t|s_t)$ that maximizes the sum of rewards in an episode $\sum_{t=0}^{T} \gamma^t r_t$. Here $\gamma$ is a discount factor in $[0, 1]$ that discounts future rewards relative to immediate rewards. This parameter helps produce policies that focus on obtaining rewards quickly.

## The DQN agent

The [DQN (Deep Q-Network) algorithm](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf) was developed by DeepMind in 2015. It was able to solve a wide range of Atari games (some to superhuman level) by combining reinforcement learning and deep neural networks at scale. The algorithm was developed by enhancing a classic RL algorithm called Q-learning with deep neural networks and a technique called *experience replay*.

### Q-learning

Q-learning is based on the notion of a Q-function. The Q-function (also known as the state-action value function) of a policy $\pi$, $Q^{\pi}(s, a)$, measures the expected return or discounted sum of rewards obtained from state $s$ by taking action $a$ first and following policy $\pi$ thereafter. We define the optimal Q-function $Q^*(s, a)$ as the maximum return that can be obtained starting from observation $s$, taking action $a$ and following the optimal policy thereafter. The optimal Q-function obeys the following *Bellman* optimality equation:

$\begin{equation}Q^\ast(s, a) = \mathbb{E}[ r + \gamma \max_{a'} Q^\ast(s', a') ]\end{equation}$

This means that the maximum return from state $s$ and action $a$ is the sum of the immediate reward $r$ and the return (discounted by $\gamma$) obtained by following the optimal policy thereafter until the end of the episode (i.e., the maximum reward from the next state $s'$). The expectation is computed both over the distribution of immediate rewards $r$ and the possible next states $s'$.

The basic idea behind Q-learning is to use the Bellman optimality equation as an iterative update $Q_{i+1}(s, a) \leftarrow \mathbb{E}\left[ r + \gamma \max_{a'} Q_{i}(s', a')\right]$, and it can be shown that this converges to the optimal $Q$-function, i.e. $Q_i \rightarrow Q^*$ as $i \rightarrow \infty$ (see the [DQN paper](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf) for details).

### Deep Q-learning

For most problems, it is impractical to represent the $Q$-function as a table containing values for each combination of $s$ and $a$. Instead, we train a function approximator, such as a neural network with parameters $\theta$, to estimate the Q-values, i.e. $Q(s, a; \theta) \approx Q^*(s, a)$. This is done by minimizing the following loss at each step $i$:

$\begin{equation}L_i(\theta_i) = \mathbb{E}_{s, a, r, s'\sim \rho(.)} \left[ (y_i - Q(s, a; \theta_i))^2 \right]\end{equation}$ where $y_i = r + \gamma \max_{a'} Q(s', a'; \theta_{i-1})$

Here, $y_i$ is called the TD (temporal difference) target, and $y_i - Q$ is called the TD error. $\rho$ represents the behaviour distribution, the distribution over transitions ${s, a, r, s'}$ collected from the environment.

Note that the parameters from the previous iteration $\theta_{i-1}$ are fixed and not updated. In practice, we use a snapshot of the network parameters from a few iterations ago instead of the last iteration. This copy is called the *target network*.

Q-learning is an <em>off-policy</em> algorithm that learns about the greedy policy $a = \max_{a} Q(s, a; \theta)$ while using a different behaviour policy for acting in the environment and collecting data. This behaviour policy is usually an $\epsilon$-greedy policy that selects the greedy action with probability $1-\epsilon$ and a random action with probability $\epsilon$ to ensure good coverage of the state-action space.

### Experience replay

To avoid computing the full expectation in the DQN loss, we can minimize it using stochastic gradient descent. If the loss is computed using just the last transition ${s, a, r, s'}$, this reduces to standard Q-learning.

When DQN was introduced to learn ATARI games, a technique called experience replay was used to make the network updates more stable. At each time step of data collection, the transitions are added to a circular buffer called the *replay buffer*. Then during training, instead of using just the latest transition to compute the loss and its gradient, a mini-batch of transitions sampled from the replay buffer is used. This has two advantages: better data efficiency by reusing each transition in many updates, and better stability by using uncorrelated transitions in a batch.

## DQN on the Cartpole environment with the TF-Agents library

TF-Agents provides all the components necessary to train a DQN agent, such as the agent itself, the environment, policies, networks, replay buffers, data collection loops, and metrics. These components are implemented as Python functions or TensorFlow graph ops, and wrappers are also provided for converting between them. Additionally, TF-Agents supports TensorFlow 2.0 mode, which enables us to use TF in imperative mode.

Next, take a look at the [tutorial for training a DQN agent on the Cartpole environment using TF-Agents](https://github.com/tensorflow/agents/blob/master/docs/tutorials/1_dqn_tutorial.ipynb).
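The replay buffer and $\epsilon$-greedy mechanics described above can be sketched in a few lines of plain Python. This is a toy illustration of the idea only — it is not the TF-Agents replay buffer API, and the names (`ReplayBuffer`, `capacity`, `epsilon_greedy`) are our own:

```python
import random
from collections import deque

class ReplayBuffer:
    """Toy circular buffer of (s, a, r, s') transitions."""
    def __init__(self, capacity):
        # A deque with maxlen drops the oldest transitions automatically.
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniformly sample a mini-batch of past (possibly old) transitions.
        return random.sample(self.buffer, batch_size)

def epsilon_greedy(q_row, epsilon=0.1):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_row))
    return max(range(len(q_row)), key=lambda a: q_row[a])

buf = ReplayBuffer(capacity=100)
for t in range(5):
    buf.add((t, 0, 1.0, t + 1))  # dummy (s, a, r, s') transitions

batch = buf.sample(3)  # mini-batch used to compute the loss and its gradient
print(len(batch))      # -> 3
```

In a real training loop, the sampled mini-batch would be used to compute the TD targets $y_i$ (via the target network) and the gradient of the DQN loss.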
# **Assignment - 2: Basic Data Understanding**

---

This assignment will get you familiarized with Python libraries and functions required for data visualization.

---

## Part 1 - Loading data

---

### Import the following libraries:

* ```numpy``` with an alias name ```np```,
* ```pandas``` with an alias name ```pd```,
* ```matplotlib.pyplot``` with an alias name ```plt```, and
* ```seaborn``` with an alias name ```sns```.

```
# Load the four libraries with their aliases
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```

### Using the files ```train.csv``` and ```moviesData.csv```, perform the following:

* Load these files as ```pandas``` dataframes and store them in variables named ```df``` and ```movies``` respectively.
* Print the first ten rows of ```df```.

```
# Load the files as dataframes
df = pd.read_csv("train.csv")
movies = pd.read_csv("moviesData.csv")

# Print the first ten rows of df
df.head(10)
```

### Using the dataframe ```df```, perform the following:

* Print the first five rows of the column ```MonthlyRate```.
* Find out the details of the column ```MonthlyRate``` like mean, maximum value, minimum value, etc.

```
# Print the first five rows of MonthlyRate
df["MonthlyRate"].head(5)

# Find the details of MonthlyRate
df["MonthlyRate"].describe()
```

---

## Part 2 - Cleaning and manipulating data

---

### Using the dataframe ```df```, perform the following:

* Check whether there are any missing values in ```df```.
* If yes, drop those values and print the size of ```df``` after dropping these.

```
# Count the missing values in each column
df.isna().sum()

# Drop the missing values (reassign, since dropna returns a new dataframe)
df = df.dropna()

# Print the size of df after dropping
df.shape
```

### Using the dataframe ```df```, perform the following:

* Add another column named ```MonthRateNew``` in ```df``` by subtracting the mean from ```MonthlyRate``` and dividing it by standard deviation.
``` # Add a column named MonthRateNew df["MonthRateNew"] = (df["MonthlyRate"] - df["MonthlyRate"].mean()) / df["MonthlyRate"].std() df ``` ### Using the dataframe ```movies```, perform the following: * Check whether there are any missing values in ```movies```. * Find out the number of observations/rows having any of their features/columns missing. * Drop the missing values and print the size of ```movies``` after dropping these. * Instead of dropping the missing values, replace the missing values by their mean (or some suitable value). ``` # Check for missing values movies.isna().sum() # Replace the missing values # You can use SimpleImputer of sklearn for this # Drop the missing values movies_new = movies.dropna() movies_new.shape from sklearn.impute import SimpleImputer imputer = SimpleImputer(missing_values=np.NaN, strategy="mean") movies["runtime"] = imputer.fit_transform(movies[["runtime"]]).ravel() movies["runtime"] ``` --- ## Part 3 - Visualizing data --- ### Visualize the ```df``` by drawing the following plots: * Plot a histogram of ```Age``` and find the range in which most people are there. * Modify the histogram of ```Age``` by adding 30 bins. * Draw a scatter plot between ```Age``` and ```Attrition``` and suitable labels to the axes. Find out whether people more than 50 years are more likely to leave the company. (```Attrition``` = 1 means people have left the company). ``` # Plot and modify the histogram of Age plt.hist(df.Age) df.hist(column="Age", bins=30, color="green", figsize=(10,10)) # Draw a scatter plot between Age and Attrition plt.scatter(df.Age, df.Attrition, c="pink") plt.xlim(10,70) plt.ylim(0,1) plt.title("Scatter Plot Example") plt.show() ``` ### Visualize the ```df``` by following the steps given below: * Get a series containing counts of unique values of ```Attrition```. * Draw a countplot for ```Attrition``` using ```sns.countplot()```. 
### Visualize the ```df``` by following the steps given below: * Draw a cross tabulation of ```Attrition``` and ```BusinessTravel``` as bar charts. Find which value of ```BusinessTravel``` has highest number of people. ``` # Get a series of counts of values of Attrition # Draw a countplot for Attrition # You may use countplot of seaborn for this df.Attrition.value_counts() sns.countplot(x="Attrition", data=df) plt.ylim(0,1000) plt.show() # Draw a cross tab of Attritiona and BusinessTravel # You may use crosstab of pandas for this pd.crosstab(df.BusinessTravel, df.Attrition).plot(kind="bar") plt.ylabel("Attrition") ``` ### Visualize the ```df``` by drawing the following plot: * Draw a stacked bar chart between ```Attrition``` and ```Gender``` columns. ``` # Draw a stacked bar chart between Attrition and Gender new_df = pd.crosstab(df.Gender, df.Attrition) new_df.plot(kind="bar", stacked=True) plt.ylabel("Attrition") ``` ### Visualize the ```df``` by drawing the following histogram: * Draw a histogram of ```TotalWorkingYears``` with 30 bins. * Draw a histogram of ```YearsAtCompany``` with 30 bins and find whether the values in ```YearsAtCompany``` are skewed. ``` # Draw a histogram of TotalWorkingYears with 30 bins df.hist(column="TotalWorkingYears", bins=30, color="red") plt.show() # Draw a histogram of YearsAtCompany df.hist(column="YearsAtCompany", figsize=(10,10), color="yellow") ``` ### Visualize the ```df``` by drawing the following boxplot: * Draw a boxplot of ```MonthlyIncome``` for each ```Department``` and report whether there is/are outlier(s). ``` # Draw a boxplot of MonthlyIncome for each Department and report outliers sns.boxplot('Department', 'MonthlyIncome', data=df) ``` ### Visualize the ```df``` by drawing the following piechart: * Create a pie chart of the values in ```JobRole``` with suitable label and report which role has highest number of persons. ``` # Create a piechart of JobRole # You will need to find the counts of unique values in JobRole. 
number_of_roles = df.JobRole.value_counts() number_of_roles plt.pie(number_of_roles) plt.pie(number_of_roles, labels=number_of_roles) plt.pie(number_of_roles, labels=number_of_roles.index.tolist()) plt.show() ```
![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true)

<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=TechnologyStudies/IntroductionToDataStructures/introduction-to-data-structures.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>

# Introduction to Lists and Dictionaries in Python

Hello! Today, we will be introducing powerful tools we can use when coding in Python: lists, dictionaries, and tuples. We will show you how to perform basic operations on these types, as well as explore their functions and methods.

To run all the cells, find the Cell tab at the top of the page, and then select Run All. To run individual cells, click the cell you want to run, and then find the Run button at the top of the page.

### Chapter Goals:

[Introduction to Lists](#Lists) <br />
[List Basics](#List_Basics) <br />
[List Slices](#List_Slicing) <br />
[List Functions](#List_Functions) <br />
[List Methods](#List_Methods) <br />
[String Splitting](#String_Splitting) <br />
[Introduction to Dictionaries](#Dictionaries) <br />
[Dictionary Functions](#Dictionary_Functions) <br />
[Dictionary Methods](#Dictionary_Methods) <br />
[Introduction to Python Tuples](#Python_Tuples) <br />
[Tuple Methods](#Tuple_Methods) <br />
[Dictionaries with Tuples](#Dictionaries_with_Tuples) <br />
[Tuple Unpacking](#Tuple_Unpacking)

<a id='Lists'></a>
## Lists

A **list** is a data type in Python. More specifically, a list is what is known as a **linear data structure**. That is, all elements in the list are organized in a linear order. Much like a string, you can access individual elements, add elements, remove elements, copy a list, etc.
However, the crucial difference between a string and a list is that lists can hold multiple data types at once, whereas strings can only hold characters. Items inside a list are called **elements**, and each element has an **index** starting from 0 up to the length of the list.

<a id='List_Basics'></a>
### List Basics:

To declare a list in Python, we simply enclose our items in square brackets. For example, if we type:

[1,2,3,4,5,6,7,8,9,10]

we will have created a list of elements numbered 1 to 10. Here is a list with multiple data types:

[True, False, "Hello", 1, 2, 3, 4, 5, 'a','b','c']

Like strings, we can pass lists to the print function as a whole list, or we can specify what elements to print by providing index locations.

```
# Declaring and Printing Lists

# List Declarations
list1 = ["physics", "chemistry", 1997, 2000]
list2 = [1, 2, 3, 4, 5, 6, 7]
list3 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

print(list1[0])   # This print function prints the first element in list1
print(list2[1:5]) # This will print elements starting at index 1, that is the second element, up to index 5
print(list3)      # This will print the entire list
```

### Creating new lists from old lists:

Like strings, we can create new lists by using the + operator and the * operator.

```
# Example
list1 = [1, 2, 3]
list2 = [4, 5, 6]
list3 = list1 + list2 # This will create a new list consisting of list1 and list2
print(list3)

list4 = ["Hello World! "] * 5 # This will create a new list, consisting of Hello World! repeated 5 times
print(list4)
```

We can also easily check for an element in a list by using the "in" keyword, just like we did when we were checking for characters in a string.
``` # Checking if a given element is in a list list1 = [1, 2, 3] # We would like to check that 3 is in this list, to do this, we say: print(3 in list1) # This will be true print(4 in list1) # This will be false ``` #### For Loops: On many occasions, we would like to traverse through a list or a string and perform an operation on each element. For loops allow us to manipulate each element in a list in a compact way, without us having to type out every individual index. ``` # Example list1 = [1, 2, 3, 4, 5, 6] # We can say this print(list1) # Or we can use a for loop # The loop starts at index 0 and goes to the end of the list, # changing the value of x is taken care of by the for loop for x in list1: print(x) # Notice how the outputs differ. If we want to print in a line instead of vertical column inside the for loop, # we have to specify what the print function puts after each x for x in list1: print(x, end = " ") ``` <a id='List_Slicing'></a> ### Introduction to List Slicing: **List slicing** is a method we can use to get subsets of elements from a list without using a for loop. Slicing can be applied to lists, strings, dictionaries or any other user defined data structure. This makes list slicing a very versatile tool that we can use. ``` # Example # We want to get the first 5 numbers from this list without using a for loop list1 = [1, 2, 3, 4, 5, 6, 7, 8] list2 = [1, 2, 3, 4, 5, 6, 7] # How do we do this? Well, we can use the syntax we used above: print(list2[1:5]) # That print function is actually an example of a list slice # The colon tells python that we want to preform a list slice # The 1 is the start index, and is included, the 5 is the end index and is not included # So we can now say: print(list1[0:5]) # Another Example # We can reverse a list quite easily with list slicing, without using a for loop: # Here, we used the fact that if there is no start and end position given, the slice starts at index 0, # and goes to the last index by default. 
This is nice, because we don't need to worry about the length.
# The -1 tells Python to traverse the list backwards
list1 = list1[::-1]
print(list1)
```

#### Practice:

Make three lists of length five, with names L1, L2, and L3. Compose L1 out of numbers, L2 out of strings, and L3 out of any types you choose, and do the following:

> Print the element at index 1 in L1 <br />
> Print the elements at indices 1 to 3 in L2 <br />
> Multiply L2 by 4 <br />
> Reverse L1 using list slicing <br />
> Create a new List, L4, by appending L2 and L3 <br />
> Check if 4 is in L2 <br />
> Check if "Hello" is in L2 <br />
> Print L4 on one line <br />
> Print L4 on separate lines <br />

```
# Code Goes Here
```

<a id='List_Functions'></a>
### List Functions:

Python lists have a wide variety of useful methods and functions:

```
# Length Function: This function returns the length of a list
list1 = [1, 2, 4, 11, 3, 6, 13, 5, 0, -1]
list2 = [1, 4, 4]
list3 = ["Hi", "Hey", "Howdy"]
print(len(list1))

# Max Function: This function returns the maximum value in a list
print(max(list1))

# Min Function: This function returns the minimum value in a list
print(min(list1))

# Sorted Function: This function sorts the list in ascending order (or alphabetical order)
print(sorted(list1))
```

<a id='List_Methods'></a>
### List Methods:

```
# Append Method: This is the method we use to add items to a list, and since it is a method, we use the .
operator
list2.append(7)
print(list2)

# Count Method: This method will return the number of times an item shows up in a list
print(list2.count(4))

# Extend Method: This method acts like the + operator, it appends the contents of one list to the other,
# then returns nothing
list2.extend(list3)
print(list2)

# Index Method: This method returns the first occurrence of the specified element
print(list2.index(4))

# Insert Method: This method will insert an object at the specified index
list2.insert(0,5)
print(list2)

# Remove Method: This method will remove a specified object from a list
list2.remove(4)
print(list2)

# Reverse Method: Reverses the elements in a given list
list1.reverse()
print(list1)

# Sort Method: This method will sort the elements in a given list
list1.sort()
print(list1)
```

### Practice:

Create three lists, each as long as you wish, with names L5, L6, L7. Compose L5 out of numbers, L6 out of strings, and L7 out of any types you choose, and do the following:

> Find the lengths of all three and print the lengths <br />
> Find the largest and smallest number in L5 and print them <br />
> Alphabetize L6, and print the result <br />
> Sort L5 from largest to smallest and print the result <br />

```
# Code Goes Here
```

<a id='String_Splitting'></a>
## String Splitting:

Suppose we want to create a list from a string. We can use a string method called split. Split takes a string, and returns a list of all the words in the string. The function is defined by:

str.split(sep = None, maxsplit = -1)

Where: sep is the separator your string is split on. By default, it is None, which splits on any run of whitespace. maxsplit is the maximum number of splits to do. By default, the function will split every word in the string apart.

```
# Example
string1 = "Hello World"
new_list = string1.split() # Default split function
print(new_list)

# More Complicated Example
string2 = "This is a string example...wow!!!"
print(string2.split())
print(string2.split("i", 1)) # This will find the i in the string, and split once, i will be the separator
print(string2.split("w")) # This will use w as a separator, and split the string multiple times
print(string2.split(" ", 2)) # This will split the string twice, and use whitespace as a separator

# Application
# We can determine how many words a string has by splitting it on whitespace and counting the pieces
# To do this, we use split on a given string, and then use the function len()
words = string2.split()
print(len(words))

# Another Application
# Suppose we want to sort a list of numbers in ascending order but the input is grabbed from input()
# We cannot sort the split pieces directly, because they would be compared as strings ("10" sorts before "2").
# So we need to convert our input to integers. We cannot say L = int(input()).split(), but we can use a for loop!
input_string = "8 3 5 1 9 2" # Pretend input from input()
numbers = [int(x) for x in input_string.split()] # This is a compact way of writing a for loop when doing operations on a list
print(sorted(numbers))
```

### Extra Resources:

[Introduction to lists with methods and functions](https://www.w3schools.com/python/python_lists.asp)<br />
[A more in-depth look at lists with applications](https://www.geeksforgeeks.org/python-list/)

<a id='Dictionaries'></a>
## Dictionaries

We now turn to **dictionaries**. A Python **dictionary**, like a list, is a very powerful tool in Python, which we can apply in many situations and algorithms. Dictionaries are composed of **indices**, which are called **keys**, and a collection of values, where each value is associated with one key. We call this relationship a **Key-Value Pair**. A couple of things to note:

> 1. Keys must be an unchanging data type such as strings or numbers (later, we will see that Tuples also work)
> 2. The values can be of any type
> 3. Keys must be unique in a dictionary, however, values do not need to be unique
> 4.
When naming dictionaries, do not use "dict", that is reserved for the function dict()

```
# Examples of Dictionaries

# A dictionary is declared using the following syntax
# Each Key-Value Pair is separated by a comma
d1 = {"Name": "Zara", "Age": 7, "Class": "First"}
d2 = {"one": "uno", "two": "dos", "three": "tres"}
d3 = {} # Empty Dictionary

# Accessing Values
# We can use square brackets to access elements like lists
# The difference is that we need to specify a key to access a value
print(d1["Name"]) # Prints the value associated with Name
print(d1["Age"]) # Prints the value associated with Age

# Updating Dictionaries
# To update individual values for a key-value pair, we do the following:
d1["Age"] = 8 # This updates the value for the key "Age"
print(d1)

# Adding a new entry
d1["School"] = "St. Peter's" # This will append a key-value pair to the current dictionary
print(d1)

# Changing Key Names
# To change a key name, we have to do two things:
# 1. Copy the value over to the new key: dictionary[newkey] = dictionary[oldkey]
# 2.
Delete the old key: del dictionary[oldkey]
d1["Category"] = d1["Class"] # updates "Class" to "Category"
del d1["Class"]
print(d1)

# Deleting individual elements
# We can remove individual key-value pairs from dictionaries
# Suppose we wish to delete the pair with key Name:
del d1["Name"] # This deletes the key-value pair Name: Zara
print(d1)

# Removing all Key-Value Pairs
# To remove all Key-Value pairs without deleting the dictionary itself we use the dictionary method clear():
d1.clear()
print(d1)

# Deleting an entire dictionary
del d1 # Removes the dictionary from memory
```

### Practice

Make a dictionary D1 with four key-value pairs, then output the following:

> Find the values at the first and second key-value pairs and print them <br />
> Update the value at the first key-value pair <br />
> Add two new key-value pairs and print D1 <br />
> Delete the last key-value pair <br />
> Clear D1 and print <br />

```
# Code Goes Here
```

<a id='Dictionary_Functions'></a>
### Dictionary Functions:

Like lists, dictionaries have a variety of functions that we can use:

```
# Finding the length of a dictionary
dict1 = {"Name": "Zara", "Age": 7, "Class": "First"}
print(len(dict1))

# Printing a string representation of a dictionary
print(str(dict1))

# This function determines the type of object you give it, works with dictionaries
print(type(dict1))
```

<a id='Dictionary_Methods'></a>
### Dictionary Methods:

Dictionaries also have many methods that we can use:

```
# Clear Method: Removes all elements in a dictionary, returns None
dict1.clear()
print(dict1)

# From Keys Method: This method will create a new dictionary from a list of keys, and a set of values
list1 = ["Age", "Height", "Weight"]
dict2 = dict.fromkeys(list1, 10)
print(dict2)

# Items Method: This method will return a view of the key-value pairs in a dictionary
dict1 = {"Name": "Zara", "Age": 7, "Class": "First"}
key_value_list = dict1.items()
print(key_value_list)

# Keys Method: Produces a view of a given dictionary's
keys key_list = dict1.keys() print(key_list) # Values Method: Produces a list of the values in a given dictionary value_list = dict1.values() print(value_list) # Set Default Method: This will create a default value for a given key dict1.setdefault("Age", 2) # already in dict1 dict1.setdefault("Gender", None) # new to dict1 print(dict1) # Update Method: This takes the key-value pairs from one dictionary and adds them to another dict2 = {"Gender": "female"} dict1.update(dict2) print(dict1) # Dict and Zip Functions # Suppose we want to accept a dictionary as user input, one way we can do this is to have the user enter values # for two lists. One list is a keys list, the other is a values list. keys = ["a", "b", "c"] values = [1, 2, 3] # We can then apply the function zip, which takes two lists and creates key-value pairs out of them # and pass the result to the dict function, which turns these pairs into dictionary pairs new_dict = dict(zip(keys,values)) print(new_dict) ``` ### Practice: Create a new dictionary D2, composed of five key-value pairs, and output the following: > Print D2 <br /> > Print all values in D2 <br /> > Print all keys in D2 <br /> > Add a key-value pair using any method you wish, and print D2 <br /> ``` # Code Goes Here ``` ### Extra Resources: [Tutorial on Dictionaries](https://www.tutorialspoint.com/python/python_dictionary.htm) <br /> [Introduction to Dictionaries with applications](https://www.geeksforgeeks.org/python-dictionary/) <br /> <a id='Python_Tuples'></a> ## Introduction to Python Tuples: A **Tuple** is another data type in Python. Tuples are similar to lists, as they can hold a sequence of values. However Tuples differ from lists in the following ways: > 1. Tuples are defined using round brackets (), and not square brackets. <br /> > 2. Elements inside a tuple cannot be changed <br /> > 3. Elements inside a tuple cannot be removed <br /> > 4. 
Tuples can be used as keys inside a dictionary <br />

Operations such as concatenation and slicing can still be performed.

```
# Basic Tuple Syntax and Operations

# Declare an empty tuple
tuple1 = ()

# Initialize a tuple
tuple2 = (1,2,3,4,5,6)

# Tuple with one element, requires a comma
tuple3 = (1,)

# Printing tuples
print(tuple2)

# Concatenating tuples
tuple4 = tuple3 + tuple2
print(tuple4)

# Getting elements from a tuple
print(tuple4[4])
```

<a id='Tuple_Methods'></a>
### Tuple Methods:

Because Python tuples cannot be changed once declared, they have only two methods.

```
# Count Method: Returns the number of occurrences of a given element
print(tuple4.count(1))

# Index Method: Finds the first occurrence of a given element, and returns its position
print(tuple4.index(3))
```

<a id='Dictionaries_with_Tuples'></a>
### Dictionaries with Tuples:

As we have seen above, tuples cannot be changed. Therefore, we can use tuples as keys in a dictionary.

```
# Example of a dictionary with tuples
dict_with_tuples = {('a', 'b'): 1, (1, 2, 3, 4): 2, ("Hello", 6, 7, 8): 3}

key_list = dict_with_tuples.keys()
print(key_list)

value_list = dict_with_tuples.values()
print(value_list)
```

<a id='Tuple_Unpacking'></a>
### Tuple Unpacking:

**Tuple Unpacking** is a way of assigning individual elements inside a tuple to their own unique variables. We will illustrate with an example:

```
# Tuple unpacking
person = ("James", 23, 1995)

# We can take all three elements, and assign them to unique variables like this:
(name, age, birth_year) = person
print(name, '\n')
print(age, '\n')
print(birth_year, '\n')
```

### Practice:

> 1. Create two tuples of length four called T1 and T2, with whatever types you want, and combine them to create a third tuple T3. <br />
> 2. Print all three tuples <br />
> 3. Create a dictionary with the tuples you created. Then create a key list and a value list. <br />
> 4. Unpack tuples T1 and T2 using the method we saw above, and print the results.
<br /> ``` # Code Goes Here ``` ### Extra Resources: [Basic Introduction with some neat applications](http://openbookproject.net/thinkcs/python/english3e/tuples.html) <br /> [Another introduction with tuple functions](https://www.geeksforgeeks.org/tuples-in-python/) ## Conclusion To end this notebook, we will recap what we have seen: > 1. Basic list operations, methods and functions > 2. Basic dictionary operations, methods and functions > 3. A brief introduction to tuples Now that we have these tools at our disposal, we are now able to create more powerful and robust programs. Our expectation is that you are comfortable with the basics of lists, dictionaries, and to some extent, tuples. [![Callysto.ca License](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-bottom.jpg?raw=true)](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# Minimum inter-class distances of all points of different dataset in different norms in table form ``` import os os.chdir("../") import sys import json import math import numpy as np import pickle from PIL import Image from sklearn import metrics from sklearn.metrics import pairwise_distances as dist import matplotlib.pyplot as plt import seaborn as sns sns.set(context='paper') import provable_robustness_max_linear_regions.data as dt from utils import NumpyEncoder, normalize_per_feature_0_1, har, tinyimagenet ``` ## Plot settings: ``` SMALL_SIZE = 14 MEDIUM_SIZE = 18 BIGGER_SIZE = 26 plt.rc('font', size=SMALL_SIZE) # controls default text sizes plt.rc('axes', titlesize=MEDIUM_SIZE) # fontsize of the axes title plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title plt.rc('text', usetex=True) # dictionary that maps color string to 'good looking' seaborn colors that are easily distinguishable colors = { "orange": sns.xkcd_rgb["yellowish orange"], "red": sns.xkcd_rgb["pale red"], "green": sns.xkcd_rgb["medium green"], "blue": sns.xkcd_rgb["denim blue"], "yellow": sns.xkcd_rgb["amber"], "purple": sns.xkcd_rgb["dusty purple"], "cyan": sns.xkcd_rgb["cyan"] } ``` ## Calculate distances: Estimated runtime (if no file with data is present): 3 days Note: The dataset HAR is not included in this repository because of storage issues. You can download the dataset from https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones. After downloading, create a folder 'har' in the root folder of the repository and extract the dataset into the folder. Note: The dataset TINY-IMAGENET-200 is not included in this repository because of storage issues. 
You can download the dataset from https://tiny-imagenet.herokuapp.com/. After downloading, create a folder 'tiny-imagenet-200' in the root folder of the repository and extract the dataset into the folder. Without downloading the two datasets, the following code will not be executable. ``` def load_from_json(file_name): if not os.path.exists("res/" + file_name + ".json"): return None else: with open("res/" + file_name + ".json", 'r') as fp: return json.load(fp) def save_to_json(dictionary, file_name): if not os.path.exists("res"): os.makedirs("res") with open("res/" + file_name + ".json", 'w') as fp: json.dump(dictionary, fp, cls = NumpyEncoder) dataset_to_n_points = {"mnist": 10000, "fmnist": 10000, "cifar10": 10000, "gts": 10000, "tinyimagenet": 98179, "har": 2947} minimum_distances = dict() for dataset in ["mnist", "fmnist", "gts", "har", "tinyimagenet", "cifar10"]: minimum_distances[dataset] = load_from_json("min_distances_dataset={}_n_points={}".format(dataset, dataset_to_n_points[dataset])) if not minimum_distances[dataset]: if dataset in ["mnist", "fmnist"]: _, x_test, _, y_test = dt.get_dataset(dataset) sample_inputs = x_test[:dataset_to_n_points[dataset]] sample_labels = y_test[:dataset_to_n_points[dataset]] sample_inputs = sample_inputs.reshape(sample_inputs.shape[0], 784) elif dataset in ["gts", "cifar10"]: _, x_test, _, y_test = dt.get_dataset(dataset) sample_inputs = x_test[:dataset_to_n_points[dataset]] sample_labels = y_test[:dataset_to_n_points[dataset]] sample_inputs = sample_inputs.reshape(sample_inputs.shape[0], 3072) elif dataset == "har": _, _, x_test, y_test, _ = har() sample_inputs = x_test[:dataset_to_n_points[dataset]] sample_labels = y_test[:dataset_to_n_points[dataset]] elif dataset == "tinyimagenet": x_train, y_train = tinyimagenet() sample_inputs = x_train[:dataset_to_n_points[dataset]] sample_labels = y_train[:dataset_to_n_points[dataset]] sample_inputs = sample_inputs.reshape(sample_inputs.shape[0], 12288) minimum_distances[dataset] = 
{"inner": {"inf": [], "2": [], "1": []}, "outer": {"inf": [], "2": [], "1": []}} scipy_norm_to_key = {"chebyshev": "inf", "l2": "2", "l1": "1"} for norm in ['chebyshev', 'l2', 'l1']: pairwise_distances = dist(sample_inputs, sample_inputs, norm) np.fill_diagonal(pairwise_distances, np.inf) for i, sample_input in enumerate(sample_inputs): row = pairwise_distances[i] label = sample_labels[i].argmax() inner_class_row = [x if sample_labels[j].argmax() == label else np.inf for j, x in enumerate(row)] minimum_distances[dataset]["inner"][scipy_norm_to_key[norm]].append(np.min(inner_class_row)) minimum_distances[dataset]["inner"][scipy_norm_to_key[norm]] = np.sort(minimum_distances[dataset]["inner"][scipy_norm_to_key[norm]]) for i, sample_input in enumerate(sample_inputs): row = pairwise_distances[i] label = sample_labels[i].argmax() inner_class_row = [x if sample_labels[j].argmax() != label else np.inf for j, x in enumerate(row)] minimum_distances[dataset]["outer"][scipy_norm_to_key[norm]].append(np.min(inner_class_row)) minimum_distances[dataset]["outer"][scipy_norm_to_key[norm]] = np.sort(minimum_distances[dataset]["outer"][scipy_norm_to_key[norm]]) save_to_json(minimum_distances[dataset], "min_distances_dataset={}_n_points={}".format(dataset, dataset_to_n_points[dataset])) ``` ## Table: ``` dataset_to_name = {"mnist": "MNIST", "fmnist": "FMNIST", "cifar10": "CIFAR10", "gts": "GTS", "tinyimagenet": "TINY-IMG", "har": "HAR"} dataset_to_n_points = {"mnist": 10000, "fmnist": 10000, "cifar10": 10000, "gts": 10000, "tinyimagenet": 98179, "har": 2947} dataset_to_n_classes = {"mnist": 10, "fmnist": 10, "cifar10": 10, "gts": 43, "tinyimagenet": 200, "har": 6} dataset_to_dim = {"mnist": "$28 \\times 28 \\times 1$", "fmnist": "$28 \\times 28 \\times 1$", "cifar10": "$32 \\times 32 \\times 3$", "gts": "$32 \\times 32 \\times 3$", "tinyimagenet": "$64 \\times 64 \\times 3$", "har": "$561$"} print("Dataset & Samples & Classes & Dimensionality & \ell_\infty & \ell_2 & \ell_1 & 
\ell_\infty & \ell_2 & \ell_1") for dataset in ["mnist", "tinyimagenet", "fmnist", "gts", "cifar10", "har"]: for norm in ["inf", "2", "1"]: minimum_distances[dataset]["outer"][norm] = [value for value in minimum_distances[dataset]["outer"][norm] if value >= 0.0001] print("{} & {} & {} & {} & {:.2f} & {:.2f} & {:.2f} & {:.2f} & {:.2f} & {:.2f}".format(dataset_to_name[dataset], dataset_to_n_points[dataset], dataset_to_n_classes[dataset], dataset_to_dim[dataset], np.min(minimum_distances[dataset]["outer"]["inf"]), np.min(minimum_distances[dataset]["outer"]["2"]), np.min(minimum_distances[dataset]["outer"]["1"]), np.max(minimum_distances[dataset]["outer"]["inf"]), np.max(minimum_distances[dataset]["outer"]["2"]), np.max(minimum_distances[dataset]["outer"]["1"]))) ```
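The masking step used above — replacing same-class (or different-class) entries of the pairwise-distance matrix with `np.inf` before taking the row minimum — can be sketched on a tiny, purely hypothetical example with plain NumPy (no sklearn needed):

```python
import numpy as np

# Hypothetical toy data: four 2-D points with one-hot labels for two classes
points = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0], [6.0, 0.0]])
labels = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])

# Pairwise Euclidean distances, with self-distances masked out
diff = points[:, None, :] - points[None, :, :]
pairwise = np.sqrt((diff ** 2).sum(axis=-1))
np.fill_diagonal(pairwise, np.inf)

# Minimum distance from each point to a point of a *different* class:
# mask same-class entries with inf, then take the row minimum
same_class = labels.argmax(1)[:, None] == labels.argmax(1)[None, :]
outer = np.where(same_class, np.inf, pairwise).min(axis=1)
print(outer)  # -> [5. 4. 4. 5.]
```

The full computation above does exactly this per norm, just with `sklearn.metrics.pairwise_distances` and a Python list comprehension in place of the vectorized mask.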
Conditional Generative Adversarial Network ---------------------------------------- *Note: This example implements a GAN from scratch. The same model could be implemented much more easily with the `dc.models.GAN` class. See the MNIST GAN notebook for an example of using that class. It can still be useful to know how to implement a GAN from scratch for advanced situations that are beyond the scope of what the standard GAN class supports.* A Generative Adversarial Network (GAN) is a type of generative model. It consists of two parts called the "generator" and the "discriminator". The generator takes random values as input and transforms them into an output that (hopefully) resembles the training data. The discriminator takes a set of samples as input and tries to distinguish the real training samples from the ones created by the generator. Both of them are trained together. The discriminator tries to get better and better at telling real from false data, while the generator tries to get better and better at fooling the discriminator. A Conditional GAN (CGAN) allows additional inputs to the generator and discriminator that their output is conditioned on. For example, this might be a class label, and the GAN tries to learn how the data distribution varies between classes. For this example, we will create a data distribution consisting of a set of ellipses in 2D, each with a random position, shape, and orientation. Each class corresponds to a different ellipse. Let's randomly generate the ellipses. 
``` import deepchem as dc import numpy as np import tensorflow as tf n_classes = 4 class_centers = np.random.uniform(-4, 4, (n_classes, 2)) class_transforms = [] for i in range(n_classes): xscale = np.random.uniform(0.5, 2) yscale = np.random.uniform(0.5, 2) angle = np.random.uniform(0, np.pi) m = [[xscale*np.cos(angle), -yscale*np.sin(angle)], [xscale*np.sin(angle), yscale*np.cos(angle)]] class_transforms.append(m) class_transforms = np.array(class_transforms) ``` This function generates random data from the distribution. For each point it chooses a random class, then a random position in that class' ellipse. ``` def generate_data(n_points): classes = np.random.randint(n_classes, size=n_points) r = np.random.random(n_points) angle = 2*np.pi*np.random.random(n_points) points = (r*np.array([np.cos(angle), np.sin(angle)])).T points = np.einsum('ijk,ik->ij', class_transforms[classes], points) points += class_centers[classes] return classes, points ``` Let's plot a bunch of random points drawn from this distribution to see what it looks like. Points are colored based on their class label. ``` %matplotlib inline import matplotlib.pyplot as plot classes, points = generate_data(1000) plot.scatter(x=points[:,0], y=points[:,1], c=classes) ``` Now let's create the model for our CGAN. 
``` import deepchem.models.tensorgraph.layers as layers model = dc.models.TensorGraph(learning_rate=1e-4, use_queue=False) # Inputs to the model random_in = layers.Feature(shape=(None, 10)) # Random input to the generator generator_classes = layers.Feature(shape=(None, n_classes)) # The classes of the generated samples real_data_points = layers.Feature(shape=(None, 2)) # The training samples real_data_classes = layers.Feature(shape=(None, n_classes)) # The classes of the training samples is_real = layers.Weights(shape=(None, 1)) # Flags to distinguish real from generated samples # The generator gen_in = layers.Concat([random_in, generator_classes]) gen_dense1 = layers.Dense(30, in_layers=gen_in, activation_fn=tf.nn.relu) gen_dense2 = layers.Dense(30, in_layers=gen_dense1, activation_fn=tf.nn.relu) generator_points = layers.Dense(2, in_layers=gen_dense2) model.add_output(generator_points) # The discriminator all_points = layers.Concat([generator_points, real_data_points], axis=0) all_classes = layers.Concat([generator_classes, real_data_classes], axis=0) discrim_in = layers.Concat([all_points, all_classes]) discrim_dense1 = layers.Dense(30, in_layers=discrim_in, activation_fn=tf.nn.relu) discrim_dense2 = layers.Dense(30, in_layers=discrim_dense1, activation_fn=tf.nn.relu) discrim_prob = layers.Dense(1, in_layers=discrim_dense2, activation_fn=tf.sigmoid) ``` We'll use different loss functions for training the generator and discriminator. The discriminator outputs its predictions in the form of a probability that each sample is a real sample (that is, that it came from the training set rather than the generator). Its loss consists of two terms. The first term tries to maximize the output probability for real data, and the second term tries to minimize the output probability for generated samples. The loss function for the generator is just a single term: it tries to maximize the discriminator's output probability for generated samples. 
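The two losses just described can be restated compactly in the standard (non-saturating) GAN form, where $D(\cdot)$ is the discriminator's output probability and $G(z)$ is a generated sample:

$$L_D = -\,\mathbb{E}_{x \sim p_\text{data}}\big[\log D(x)\big] \;-\; \mathbb{E}_{z}\big[\log\big(1 - D(G(z))\big)\big]$$

$$L_G = -\,\mathbb{E}_{z}\big[\log D(G(z))\big]$$

(In practice a small constant such as 1e-10 is added inside each logarithm for numerical stability.)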
For each one, we create a "submodel" specifying a set of layers that will be optimized based on a loss function.

```
# Discriminator
discrim_real_data_loss = -layers.Log(discrim_prob+1e-10) * is_real
discrim_gen_data_loss = -layers.Log(1-discrim_prob+1e-10) * (1-is_real)
discrim_loss = layers.ReduceMean(discrim_real_data_loss + discrim_gen_data_loss)
discrim_submodel = model.create_submodel(layers=[discrim_dense1, discrim_dense2, discrim_prob], loss=discrim_loss)

# Generator
gen_loss = -layers.ReduceMean(layers.Log(discrim_prob+1e-10) * (1-is_real))
gen_submodel = model.create_submodel(layers=[gen_dense1, gen_dense2, generator_points], loss=gen_loss)
```

Now to fit the model. Here are some important points to notice about the code.

- We use `fit_generator()` to train only a single batch at a time, and we alternate between the discriminator and the generator. That way, both parts of the model improve together.
- We only train the generator half as often as the discriminator. On this particular model, that gives much better results. You will often need to adjust `(# of discriminator steps)/(# of generator steps)` to get good results on a given problem.
- We disable checkpointing by specifying `checkpoint_interval=0`. Since each call to `fit_generator()` includes only a single batch, it would otherwise save a checkpoint to disk after every batch, which would be very slow. If this were a real project and not just an example, we would want to occasionally call `model.save_checkpoint()` to write checkpoints at a reasonable interval.
``` batch_size = model.batch_size discrim_error = [] gen_error = [] for step in range(20000): classes, points = generate_data(batch_size) class_flags = dc.metrics.to_one_hot(classes, n_classes) feed_dict={random_in: np.random.random((batch_size, 10)), generator_classes: class_flags, real_data_points: points, real_data_classes: class_flags, is_real: np.concatenate([np.zeros((batch_size,1)), np.ones((batch_size,1))])} discrim_error.append(model.fit_generator([feed_dict], submodel=discrim_submodel, checkpoint_interval=0)) if step%2 == 0: gen_error.append(model.fit_generator([feed_dict], submodel=gen_submodel, checkpoint_interval=0)) if step%1000 == 999: print(step, np.mean(discrim_error), np.mean(gen_error)) discrim_error = [] gen_error = [] ``` Have the trained model generate some data, and see how well it matches the training distribution we plotted before. ``` classes, points = generate_data(1000) feed_dict = {random_in: np.random.random((1000, 10)), generator_classes: dc.metrics.to_one_hot(classes, n_classes)} gen_points = model.predict_on_generator([feed_dict]) plot.scatter(x=gen_points[:,0], y=gen_points[:,1], c=classes) ```
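Beyond eyeballing the scatter plots, one simple, hypothetical way to quantify the match is to compare per-class centroids of the real and generated points. This is a sketch with synthetic stand-in arrays, not the model's actual output:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 2

# Stand-in point clouds: two classes of "real" and "generated" 2-D points
real_pts = np.vstack([rng.normal(c, 0.1, size=(50, 2)) for c in ([0.0, 0.0], [3.0, 3.0])])
gen_pts = np.vstack([rng.normal(c, 0.1, size=(50, 2)) for c in ([0.1, 0.0], [2.9, 3.1])])
labels = np.repeat(np.arange(n_classes), 50)

# Distance between per-class centroids: small values mean the generator
# places each class's mass roughly where the real data sits
for c in range(n_classes):
    real_centroid = real_pts[labels == c].mean(axis=0)
    gen_centroid = gen_pts[labels == c].mean(axis=0)
    print(c, np.linalg.norm(real_centroid - gen_centroid))
```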
<a href="https://colab.research.google.com/github/mashyko/Caffe2_Detectron2/blob/master/Caffe2_Quickload.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Tutorials

Installation: https://caffe2.ai/docs/tutorials.html

First download the tutorials source.

```
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My Drive/
!git clone --recursive https://github.com/caffe2/tutorials caffe2_tutorials
```

# Model Quickload

This notebook will show you how to quickly load a pretrained SqueezeNet model and test it on images of your choice in four main steps.

1. Load the model
2. Format the input
3. Run the test
4. Process the results

The model used in this tutorial has been pretrained on the full 1000 class ImageNet dataset, and is downloaded from Caffe2's [Model Zoo](https://github.com/caffe2/caffe2/wiki/Model-Zoo). For an all around more in-depth tutorial on using pretrained models check out the [Loading Pretrained Models](https://github.com/caffe2/caffe2/blob/master/caffe2/python/tutorials/Loading_Pretrained_Models.ipynb) tutorial.

Before this script will work, you need to download the model and install it. You can do this by running:

```
sudo python -m caffe2.python.models.download -i squeezenet
```

Or make a folder named `squeezenet`, download each file listed below to it, and place it in the `/caffe2/python/models/` directory:

* [predict_net.pb](https://download.caffe2.ai/models/squeezenet/predict_net.pb)
* [init_net.pb](https://download.caffe2.ai/models/squeezenet/init_net.pb)

Notice, the helper function *parseResults* will translate the integer class label of the top result to an English label by searching through the [inference codes file](inference_codes.txt). If you want to really test the model's capabilities, pick a code from the file, find an image representing that code, and test the model with it!
``` from google.colab import drive drive.mount('/content/drive') !git clone --recursive https://github.com/caffe2/tutorials caffe2_tutorials %cd /content/drive/My Drive/caffe2_tutorials !pip3 install torch torchvision !python -m caffe2.python.models.download -i squeezenet from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import numpy as np import operator # load up the caffe2 workspace from caffe2.python import workspace # choose your model here (use the downloader first) from caffe2.python.models import squeezenet as mynet # helper image processing functions import helpers ##### Load the Model # Load the pre-trained model init_net = mynet.init_net predict_net = mynet.predict_net # Initialize the predictor with SqueezeNet's init_net and predict_net p = workspace.Predictor(init_net, predict_net) ##### Select and format the input image # use whatever image you want (urls work too) # img = "https://upload.wikimedia.org/wikipedia/commons/a/ac/Pretzel.jpg" img = "images/cat.jpg" # img = "images/cowboy-hat.jpg" # img = "images/cell-tower.jpg" # img = "images/Ducreux.jpg" # img = "images/pretzel.jpg" # img = "images/orangutan.jpg" # img = "images/aircraft-carrier.jpg" #img = "images/flower.jpg" # average mean to subtract from the image mean = 128 # the size of images that the model was trained with input_size = 227 # use the image helper to load the image and convert it to NCHW img = helpers.loadToNCHW(img, mean, input_size) ##### Run the test # submit the image to net and get a tensor of results results = p.run({'data': img}) ##### Process the results # Quick way to get the top-1 prediction result # Squeeze out the unnecessary axis. 
This returns a 1-D array of length 1000 preds = np.squeeze(results) # Get the prediction and the confidence by finding the maximum value and index of maximum value in preds array curr_pred, curr_conf = max(enumerate(preds), key=operator.itemgetter(1)) print("Top-1 Prediction: {}".format(curr_pred)) print("Top-1 Confidence: {}\n".format(curr_conf)) # Lookup our result from the inference list response = helpers.parseResults(results) print(response) %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg img=mpimg.imread('images/cat.jpg') #image to array # show the original image plt.figure() plt.imshow(img) plt.axis('on') plt.title('Original image = RGB') plt.show() ```
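Beyond the top-1 result, the same `preds` array yields a top-k ranking with `np.argsort`. Here is a minimal sketch using a hypothetical 6-class probability vector in place of SqueezeNet's 1000-entry output:

```python
import numpy as np

# Hypothetical class probabilities (stand-in for the real 1000-entry preds)
preds = np.array([0.02, 0.10, 0.55, 0.03, 0.25, 0.05])

# Indices of the top-3 classes, highest confidence first
top3 = np.argsort(preds)[::-1][:3]
for idx in top3:
    print(idx, preds[idx])  # class index and its confidence
```

With the real model you would apply the same slicing to `np.squeeze(results)` and look each index up in the inference codes file.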
# Text Classification with BERT

In this notebook, we build a classifier using [BERT](https://arxiv.org/abs/1810.04805). BERT is a pre-trained NLP model that Google released in 2018. As our dataset, we use the IMDB review dataset.

Note that training takes time, so using a GPU is recommended.

## Setup

### Installing packages

```
!pip install tensorflow-text==2.6.0 tf-models-official==2.6.0
```

### Imports

```
import os
import re
import string

import numpy as np
import tensorflow as tf
import tensorflow_text as text
import tensorflow_datasets as tfds
import tensorflow_hub as hub
from official.nlp import optimization
```

### Loading the dataset

```
train_data, validation_data, test_data = tfds.load(
    name="imdb_reviews",
    split=('train[:80%]', 'train[80%:]', 'test'),
    as_supervised=True
)
```

## Preprocessing

We perform the following three preprocessing steps:

- Lowercasing
- Removing HTML tags (the `<br />` tag)
- Removing punctuation

```
def preprocessing(input_data, label):
    lowercase = tf.strings.lower(input_data)
    stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
    cleaned_html = tf.strings.regex_replace(
        stripped_html,
        '[%s]' % re.escape(string.punctuation),
        ''
    )
    return cleaned_html, label

AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_data.batch(32).map(preprocessing).cache().prefetch(buffer_size=AUTOTUNE)
val_ds = validation_data.batch(32).map(preprocessing).cache().prefetch(buffer_size=AUTOTUNE)
test_ds = test_data.batch(32).map(preprocessing).cache().prefetch(buffer_size=AUTOTUNE)
```

## Building the model

This time we build a BERT-based model using [TensorFlow Hub](https://www.tensorflow.org/hub). TensorFlow Hub is a repository of trained machine-learning models. It hosts a large number of models, including BERT, and by fine-tuning them you can build a model quickly. Besides BERT, models such as the following are also available:

- ALBERT
- Electra
- Universal Sentence Encoder

Now let's use TensorFlow Hub.

### The preprocessing model

Before text is fed into BERT, it needs to be converted into numeric token IDs. TensorFlow Hub provides a preprocessing model matched to each BERT model, which you can use to convert the text. There is therefore no need to write lengthy preprocessing code; you simply specify and load the preprocessing model as follows.

```
tfhub_handle_preprocess = 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3'
preprocess_model = hub.KerasLayer(tfhub_handle_preprocess)
```

Let's check the output of the preprocessing model.

```
text_test = ['this is such an
amazing movie!'] text_preprocessed = preprocess_model(text_test) print(f'Keys : {list(text_preprocessed.keys())}') print(f'Shape : {text_preprocessed["input_word_ids"].shape}') print(f'Word Ids : {text_preprocessed["input_word_ids"][0, :12]}') print(f'Input Mask : {text_preprocessed["input_mask"][0, :12]}') print(f'Type Ids : {text_preprocessed["input_type_ids"][0, :12]}') ``` ご覧のとおり、前処理モデルは以下の3つの出力をします。 - input_words_id: 入力系列のトークンID - input_mask: パディングされたトークンには0、それ以外は1 - input_type_ids: 入力セグメントのインデックス。複数の文を入力する場合に関係する。 その他、入力が128トークンに切り詰められていることがわかります。ちなみに、トークン数はオプション引数でカスタマイズできます。詳細は、[前処理モデルのドキュメント](https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3)をご覧ください。 ### BERTモデル モデルを構築する前に、BERTモデルの出力を確認してみましょう。 ``` tfhub_handle_encoder = 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3' bert_model = hub.KerasLayer(tfhub_handle_encoder) bert_results = bert_model(text_preprocessed) print(f'Pooled Outputs Shape:{bert_results["pooled_output"].shape}') print(f'Pooled Outputs Values:{bert_results["pooled_output"][0, :12]}') print(f'Sequence Outputs Shape:{bert_results["sequence_output"].shape}') print(f'Sequence Outputs Values:{bert_results["sequence_output"][0, :12]}') ``` `pooled_output`と`sequence_output`の説明は以下の通りです。 - pooled_output: 入力全体を表しているベクトルです。レビュー文全体の埋め込みと考えられます。今回のモデルの場合、形は`[batch_size, 768]`になります。上の例では入力は1つだけなので`[1, 768]`になります。 - sequence_output: 各入力トークンを表すベクトルです。各トークンの文脈を考慮した埋め込みと考えられます。形は、`[batch_size, seq_length, 768]`です。 今回は、レビューを分類すればいいので、`pooled_output`を使います。 ### モデルの定義 ``` def build_classifier_model(): text_input = tf.keras.layers.Input(shape=(), dtype=tf.string) preprocessing_layer = hub.KerasLayer(tfhub_handle_preprocess) encoder_inputs = preprocessing_layer(text_input) encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True) outputs = encoder(encoder_inputs) net = outputs['pooled_output'] net = tf.keras.layers.Dropout(0.1)(net) net = tf.keras.layers.Dense(1, activation='sigmoid')(net) return tf.keras.Model(text_input, net) ``` 
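Before training, it may help to see the three cleaning steps in plain Python. The sketch below mirrors the `tf.strings` pipeline defined earlier using the standard `re` and `string` modules (the helper name `preprocess_text` is ours, not part of the notebook):

```python
import re
import string

def preprocess_text(text):
    """Lowercase, replace <br /> tags with spaces, and strip punctuation,
    mirroring the tf.strings preprocessing pipeline above."""
    lowered = text.lower()
    no_html = lowered.replace('<br />', ' ')
    # Drop every character listed in string.punctuation
    return re.sub('[%s]' % re.escape(string.punctuation), '', no_html)

print(preprocess_text('Hello,<br />World!'))  # hello world
```

The same regex pattern, `'[%s]' % re.escape(string.punctuation)`, is exactly what the notebook passes to `tf.strings.regex_replace`.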
## Training the model

```
model = build_classifier_model()

epochs = 2
steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = int(0.1*num_train_steps)
init_lr = 3e-5

optimizer = optimization.create_optimizer(
    init_lr=init_lr,
    num_train_steps=num_train_steps,
    num_warmup_steps=num_warmup_steps,
    optimizer_type='adamw'
)

model.compile(
    optimizer=optimizer,
    loss='binary_crossentropy',
    metrics=['acc']
)

model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs,
)

loss, accuracy = model.evaluate(test_ds)
print(f'Loss: {loss}')
print(f'Accuracy: {accuracy}')
```
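`create_optimizer` builds AdamW with a learning rate that warms up linearly over the first 10% of steps and then decays toward zero. The helper below is a plain-Python sketch of that shape for intuition only (an assumption on our part; the exact decay curve is defined inside the `official.nlp` package):

```python
def lr_at_step(step, init_lr, num_train_steps, num_warmup_steps):
    """Linear warmup to init_lr, then linear decay to zero."""
    if step < num_warmup_steps:
        return init_lr * step / max(1, num_warmup_steps)
    remaining = num_train_steps - step
    return init_lr * max(0.0, remaining / max(1, num_train_steps - num_warmup_steps))

# The peak rate is reached exactly at the end of warmup:
print(lr_at_step(100, 3e-5, 1000, 100))  # 3e-05
```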
```
from PyQt4 import QtGui
import os, sys
import pandas as pd
import pandas_datareader.data as web
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.backends.backend_qt4agg import NavigationToolbar2QT as NavigationToolbar

# Note: the 'google' data source has since been discontinued in
# pandas-datareader; this call worked at the time of writing.
google = web.DataReader('GOOG', data_source='google', start='3/14/2009', end='4/14/2016')
google.head()
google = google.drop('Volume', axis=1)
google.head()


class PrettyWidget(QtGui.QWidget):

    def __init__(self):
        super(PrettyWidget, self).__init__()
        self.initUI()

    def initUI(self):
        self.setGeometry(600, 300, 1000, 600)
        self.center()
        self.setWindowTitle('Revision on Plots, Tables and File Browser')

        # Grid Layout
        grid = QtGui.QGridLayout()
        self.setLayout(grid)

        # Canvas and Toolbar
        self.figure = plt.figure(figsize=(15, 5))
        self.canvas = FigureCanvas(self.figure)
        self.toolbar = NavigationToolbar(self.canvas, self)
        grid.addWidget(self.canvas, 2, 0, 1, 2)
        grid.addWidget(self.toolbar, 1, 0, 1, 2)

        # Table holding the first CSV row (one row, nine columns to match
        # plot()); without this, getCSV and plot raise AttributeError
        self.table = QtGui.QTableWidget(1, 9, self)
        grid.addWidget(self.table, 3, 0, 1, 2)

        # Import CSV Button
        btn1 = QtGui.QPushButton('Import CSV', self)
        btn1.resize(btn1.sizeHint())
        btn1.clicked.connect(self.getCSV)
        grid.addWidget(btn1, 0, 0)

        # Plot Button
        btn2 = QtGui.QPushButton('Plot', self)
        btn2.resize(btn2.sizeHint())
        btn2.clicked.connect(self.plot)
        grid.addWidget(btn2, 0, 1)

        self.show()

    def getCSV(self):
        filePath = QtGui.QFileDialog.getOpenFileName(self, 'Single File',
                                                     '~/Desktop/PyRevolution/PyQt4', '*.csv')
        with open(filePath, 'r') as fileHandle:
            line = fileHandle.readline()[:-1].split(',')
        for n, val in enumerate(line):
            newitem = QtGui.QTableWidgetItem(val)
            self.table.setItem(0, n, newitem)
        self.table.resizeColumnsToContents()
        self.table.resizeRowsToContents()

    def plot(self):
        y = []
        for n in range(9):
            try:
                y.append(float(self.table.item(0, n).text()))
            except (AttributeError, ValueError):
                y.append(np.nan)
        plt.cla()
        ax = self.figure.add_subplot(111)
        ax.plot(y, 'r.-')
        ax.set_title('Table Plot')
        self.canvas.draw()

    def center(self):
        qr = self.frameGeometry()
        cp = QtGui.QDesktopWidget().availableGeometry().center()
        qr.moveCenter(cp)
        self.move(qr.topLeft())


def main():
    app = QtGui.QApplication(sys.argv)
    w = PrettyWidget()
    app.exec_()

if __name__ == '__main__':
    main()
```
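One detail worth isolating from `plot` above is the float-or-NaN fallback used when a table cell is empty or non-numeric. A Qt-independent sketch of the same pattern (the helper name `cell_to_float` is ours):

```python
def cell_to_float(text):
    """Parse a table cell to float, falling back to NaN for empty or
    non-numeric cells so matplotlib simply leaves a gap in the line."""
    try:
        return float(text)
    except (TypeError, ValueError):
        return float('nan')

values = [cell_to_float(v) for v in ['1.5', 'abc', None, '3']]
print(values)  # [1.5, nan, nan, 3.0]
```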
# Demystifying Approximate Bayesian Computation #### Brett Morris ### In this tutorial We will write our own rejection sampling algorithm to approximate the posterior distributions for some fitting parameters. ``` %matplotlib inline import matplotlib.pyplot as plt import numpy as np from scipy.stats import anderson_ksamp from corner import corner # The Anderson-Darling statistic often throws a harmless # UserWarning which we will ignore in this example # to avoid distractions: import warnings warnings.filterwarnings("ignore", category=UserWarning) ``` ### Generate a set of observations First, let's generate a series of observations $y_\mathrm{obs}$, taken at times $x$. The observations will be drawn from one of two Gaussian distributions with a fixed standard deviation, separated by $3\sigma$ from one another. There will be a fraction $f$ of the total samples in the second mode of the distribution. In the plots that follow, blue represents the observations or the true input parameters, and shades of gray or black represent samples from the posterior distributions. 
``` # Set a random seed for reproducibility np.random.seed(42) # Standard deviation of both normal distributions true_std = 1 # Mean of the first normal distribution true_mean1 = np.pi # Mean of the second normal distribution true_mean2 = 3 * np.pi # Fraction of samples in second mode: this # algorithm works best when the fraction # is between [0.2, 0.8] true_fraction = 0.3 # Third number below is the number of samples to draw: x = np.linspace(0, 1, 500) # Generate a series of observations, drawn from # two normal distributions: y_obs = np.concatenate([true_mean1 + true_std * np.random.randn(int((1-true_fraction) * len(x))), true_mean2 + true_std * np.random.randn(int(true_fraction * len(x)))]) # Plot the observations: plt.hist(y_obs, bins=50, density=True, color='#4682b4', histtype='step', lw=3) plt.xlabel('$y_\mathrm{obs}$', fontsize=20) ax = plt.gca() ax2 = ax.twiny() ax2.set_xlim(ax.get_xlim()) ax2.set_xticks([true_mean1, true_mean2]) ax2.set_xticklabels(['$\mu_1$', '$\mu_2$'], fontsize=20) plt.show() ``` So how does one fit for the means and standard deviations of the bimodal distribution? Since this example is a mixture of normal distributions, one way is to use [Gaussian mixture models](https://dfm.io/posts/mixture-models/), but we're going to take a different approach, which we'll see is more general later. ## Approximate Bayesian Computation For this particular dataset, it's easy to construct a model $\mathcal{M}$ which reproduces the observations $y_\mathrm{obs}$ – the model is simply the concatenation of two normal distributions $\mathcal{M} \sim \left[\mathcal{N} \left(\mu_1, \sigma, \textrm{size=(1-f)N}\right), \mathcal{N}\left(\mu_2, \sigma, \textrm{size=}fN\right)\right]$, where the `size` argument determines the number of samples to draw from the distribution, $N$ is the total number of draws, and $f$ is the fraction of draws in the second mode.
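The model $\mathcal{M}$ just described can be written as a small standalone helper, restating the notebook's own simulation code with the two sample counts made explicit:

```python
import numpy as np

def simulate_mixture(mean1, mean2, std, fraction, n):
    """Draw n samples from a two-component Gaussian mixture,
    placing a fraction `fraction` of them in the second mode."""
    n2 = int(fraction * n)
    n1 = n - n2
    return np.concatenate([mean1 + std * np.random.randn(n1),
                           mean2 + std * np.random.randn(n2)])

np.random.seed(42)
y = simulate_mixture(np.pi, 3 * np.pi, 1.0, 0.3, 500)
print(y.shape)  # (500,)
```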
One way to *approximate* the posterior distributions of $\theta = \{\mu_1, \mu_2, \sigma, f\}$ would be to propose new parameters $\theta^*$, and only keep a running list of the parameter combinations which produce a simulated dataset $y_\mathrm{sim}$ which very closely reproduces the observations $y_\mathrm{obs}$. *** ### Summary statistic: the Anderson-Darling statistic In practice, this requires a *summary statistic*, which measures the "distance" between the simulated dataset $y_\mathrm{sim}$ and the observations $y_\mathrm{obs}$. In this example we need a metric which measures the probability that two randomly-drawn samples $y$ are drawn from the same distribution. One such metric is the [Anderson-Darling statistic](https://en.wikipedia.org/wiki/Anderson–Darling_test), which approaches a minimum near $A^2=-1.3$ for two sets $y$ that are drawn from indistinguishable distributions, and grows to $A^2 > 10^5$ for easily distinguishable distributions. We can see how the Anderson-Darling statistic behaves in this simple example below: ``` n_samples = 10000 # Generate a bimodal distribution a = np.concatenate([np.random.randn(n_samples), 3 + np.random.randn(n_samples//2)]) # Plot the bimodal distribution fig, ax = plt.subplots(1, 2, figsize=(7, 3)) ax[0].hist(a, color='silver', range=[-4, 11], bins=50, lw=2, histtype='stepfilled') # For a set of bimodal distributions with varying means: for mean in [0, 1.2, 5]: # Generate a new bimodal distribution c = mean + np.concatenate([np.random.randn(n_samples), 3 + np.random.randn(n_samples//2)]) # Measure, plot the Anderson-Darling statistic a2 = anderson_ksamp([a, c]).statistic ax[0].hist(c, histtype='step', range=[-4, 11], bins=50, lw=2) ax[1].plot(mean, a2, 'o') ax[0].set(xlabel='Samples', ylabel='Frequency') ax[1].set(xlabel='Mean', ylabel='$A^2$') fig.tight_layout() ``` In the figure above, we have a set of observations $y_\mathrm{obs}$ (left, gray) which we're comparing to the set of simulated observations
$y_\mathrm{sim}$ (left, colors). The Anderson-Darling statistic $A^2$ is plotted for each pair of the observations and the simulations (right). You can see that the minimum of $A^2$ is near -1.3, and it grows very large when $y_\mathrm{obs}$ and $y_\mathrm{sim}$ distributions are significantly different. In order to make our distance function approach zero when the Anderson-Darling statistic is at its minimum, we're going to rescale the outputs of the Anderson-Darling statistic a bit: ``` def distance(y_obs, y_sim): """ Our distance metric between the observations y_obs and the simulation y_sim will be the Anderson-Darling Statistic A^2 + 1.31, so that its minimum value is approximately 0 and its maximum value is >10^5. """ return anderson_ksamp([y_sim, y_obs]).statistic + 1.31 ``` *** ### The rejection sampler We now have the ingredients we need to create a *rejection sampler*, which will follow this algorithm: 1. Perturb initial/previous parameters $\theta$ by a small amount to generate new trial parameters $\theta^*$ 2. If the trial parameters $\theta^*$ are drawn from within the prior, continue, else return to (1) 3. Generate an example dataset $y_\mathrm{sim}$ using your model $\mathcal{M}$ 4. Compute _distance_ between the simulated and observed datasets $\rho(y_\mathrm{obs}, y_\mathrm{sim})$ 5. For some tolerance $h$, accept the step ($\theta = \theta^*$) if distance $\rho(y_\mathrm{obs}, y_\mathrm{sim}) \leq h$ 6. Return to step (1) In the limit $h \rightarrow 0$, the posterior samples are no longer an approximation. ``` def lnprior(theta): """ Define a prior probability, which simply requires that -10 < mu_1, mu_2 < 20 and 0 < sigma < 10 and 0 < fraction < 1.
""" mean1, mean2, std, fraction = theta if -10 < mean1 < 20 and -10 < mean2 < 20 and 0 < std < 10 and 0 <= fraction <= 1: return 0 return -np.inf def propose_step(theta, scale): """ Propose new step: perturb the previous step by adding random-normal values to the previous step """ return theta + scale * np.random.randn(len(theta)) def simulate_dataset(theta): """ Simulate a dataset by generating a bimodal distribution with means mu_1, mu_2 and standard deviation sigma """ mean1, mean2, std, fraction = theta return np.concatenate([mean1 + std * np.random.randn(int((1-fraction) * len(x))), mean2 + std * np.random.randn(int(fraction * len(x)))]) def rejection_sampler(theta, h, n_steps, scale=0.1, quiet=False, y_obs=y_obs, prior=lnprior, simulate_y=simulate_dataset): """ Follow algorithm written above for a simple rejection sampler. """ # Some bookkeeping variables: accepted_steps = 0 total_steps = 0 samples = np.zeros((n_steps, len(theta))) printed = set() while accepted_steps < n_steps: # Make a simple "progress bar": if not quiet: if accepted_steps % 1000 == 0 and accepted_steps not in printed: printed.add(accepted_steps) print(f'Sample {accepted_steps} of {n_steps}') # Propose a new step: new_theta = propose_step(theta, scale) # If proposed step is within prior: if np.isfinite(prior(new_theta)): # Generate a simulated dataset from new parameters y_sim = simulate_y(new_theta) # Compute distance between simulated dataset # and the observations dist = distance(y_obs, y_sim) total_steps += 1 # If distance is less than tolerance `h`, accept step: if dist <= h: theta = new_theta samples[accepted_steps, :] = new_theta accepted_steps += 1 print(f'Acceptance rate: {accepted_steps/total_steps}') return samples ``` We can now run our rejection sampler for a given value of the tolerance $h$. 
``` # Initial step parameters for the mean and std: theta = [true_mean1, true_mean2, true_std, true_fraction] # Number of posterior samples to compute n_steps = 5000 # `h` is the distance metric threshold for acceptance; # try values of h between -0.5 and 5 h = 5 samples = rejection_sampler(theta, h, n_steps) ``` `samples` now contains `n_steps` approximate posterior samples. Let's make a corner plot which shows the results: ``` labels = ['$\mu_1$', '$\mu_2$', '$\sigma$', '$f$'] truths = [true_mean1, true_mean2, true_std, true_fraction] corner(samples, truths=truths, levels=[0.6], labels=labels, show_titles=True); ``` You can experiment with the above example by changing the value of $h$: from $h=2$, for a more precise and more computationally expensive approximation to the posterior distribution, to $h=10$ for a faster but less precise estimate of the posterior distribution. In practice, a significant fraction of your effort when applying ABC is spent balancing the computational expense of a small $h$ with the precision you need on your posterior approximation. We can see how the posterior distribution for the standard deviation $\sigma$ changes as we vary $h$, from a small value to a larger value: ``` samples_i = [] h_range = [3, 5, 8] for h_i in h_range: samples_i.append(rejection_sampler(truths, h_i, n_steps, quiet=True)) ``` Let's plot the results: ``` fig, ax = plt.subplots(1, 4, figsize=(12, 3)) for s_i, h_i in zip(samples_i, h_range): for j, axis in enumerate(ax): axis.hist(s_i[len(s_i)//2:, j], histtype='step', lw=2, label=f"h={h_i}", density=True, bins=30) axis.set_xlabel(labels[j]) axis.axvline(truths[j], ls='--', color='#4682b4') ax[0].set_ylabel('Posterior PDF') plt.legend() plt.show() ``` In the plot above, blue histograms are for the smallest $h$, then orange, then green.
You can see that the posterior distribution for the standard deviation is largest for the largest $h$, and converges to a narrower distribution centered on the correct value as $h$ decreases. Now let's inspect how the simulated distributions look, generated using the posterior samples for our input parameters $\theta$: ``` props = dict(bins=25, range=[0, 12], histtype='step', density=True) for i in np.random.randint(0, len(samples_i), size=50): plt.hist(simulate_dataset(samples_i[0][i, :]), alpha=0.3, color='silver', **props) plt.hist(y_obs, color='#4682b4', lw=3, **props) plt.xlabel('$y_\mathrm{obs}, y_\mathrm{sim}$', fontsize=20) plt.show() ``` The blue histogram is the set of observations $y_\mathrm{obs}$. Shown in silver are various draws from the simulated distributions with the parameters $\theta$ drawn randomly from the posterior distributions from the previous rejection sampling. You can see that the simulated (silver) histograms are "non-rejectable approximations" to the observations (blue). *** ## A non-Gaussian example Now let's do an example where things are less Gaussian. Our data will be distributed with a _beta distribution_, according to $$f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1} (1 - x)^{\beta - 1},$$ where $$B(\alpha, \beta) = \int_0^1 t^{\alpha - 1} (1 - t)^{\beta - 1} dt$$ This new distribution has positive parameters $\theta = \{\alpha, \beta\}$ which we can use ABC to infer: ``` from numpy.random import beta np.random.seed(2019) # The alpha and beta parameters are the tuning parameter # for beta distributions. true_a = 15 true_b = 2 y_obs_beta = beta(true_a, true_b, size=len(x)); plt.hist(y_obs_beta, density=True, histtype='step', color='#4682b4', lw=3) plt.xlabel('$y_\mathrm{obs}$', fontsize=20) plt.show() ``` In this example, we'll sample the logarithm of the $\alpha$ and $\beta$ parameters. 
``` def lnprior_beta(theta): lna, lnb = theta if -100 < lna < 100 and -100 < lnb < 100: return 0 return -np.inf def simulate_dataset_beta(theta): """ Simulate a dataset by drawing samples from a beta distribution with parameters a = exp(lna), b = exp(lnb) """ a, b = np.exp(theta) return beta(a, b, size=len(x)) ``` We'll keep the Anderson-Darling statistic as our summary statistic, which is non-parametric and agnostic about the distributions of the two samples it is comparing. We will swap in our new observations, prior, and simulation function, but nothing else changes in the rejection sampling algorithm: ``` # `h` is the distance metric threshold for acceptance; # try values of h between 1 and 5 h = 1 samples = rejection_sampler([np.log(true_a), np.log(true_b)], h, n_steps, y_obs=y_obs_beta, prior=lnprior_beta, simulate_y=simulate_dataset_beta) labels_beta = [r'$\ln\alpha$', r'$\ln\beta$'] truths_beta = [np.log(true_a), np.log(true_b)] corner(samples, labels=labels_beta, truths=truths_beta, levels=[0.6]) plt.show() ``` Let's see how random draws from the posterior distributions for $\alpha$ and $\beta$ compare with the observations: ``` props = dict(bins=25, range=[0.5, 1], histtype='step', density=True) for i in np.random.randint(0, len(samples), size=100): lna, lnb = samples[i, :] a = np.exp(lna) b = np.exp(lnb) plt.hist(beta(a, b, size=len(x)), alpha=0.3, color='silver', **props) plt.hist(y_obs_beta, color='#4682b4', lw=3, **props) plt.xlabel('$y_\mathrm{obs}, y_\mathrm{sim}$', fontsize=20) plt.show() ``` Again, the blue histogram is the set of observations $y_\mathrm{obs}$. Shown in silver are various draws from beta distributions with the parameters $\alpha$ and $\beta$ drawn randomly from the posterior distributions from the previous rejection sampling chain. You can see that the simulated (silver) histograms are "non-rejectable approximations" to the observations (blue).
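As a sanity check on the recovered parameters, the beta distribution's analytic mean $\alpha/(\alpha+\beta)$ can be compared against the empirical mean of draws (a quick sketch; the helper `beta_mean` is ours, not part of the notebook):

```python
import numpy as np

def beta_mean(a, b):
    """Analytic mean of a Beta(a, b) distribution: a / (a + b)."""
    return a / (a + b)

np.random.seed(2019)
draws = np.random.beta(15, 2, size=500)

print(round(beta_mean(15, 2), 3))     # 0.882
print(round(float(draws.mean()), 3))  # close to the analytic value
```

With the true values $\alpha=15$, $\beta=2$ used above, any posterior sample far from this mean would be a red flag.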
## Contoso ISD solution package This notebook is for creating a consolidated view over the data from each of the source systems. ``` storage_account = 'steduanalytics__update_this' use_test_env = True if use_test_env: stage1 = 'abfss://test-env@' + storage_account + '.dfs.core.windows.net/stage1' stage2 = 'abfss://test-env@' + storage_account + '.dfs.core.windows.net/stage2' stage3 = 'abfss://test-env@' + storage_account + '.dfs.core.windows.net/stage3' else: stage1 = 'abfss://stage1@' + storage_account + '.dfs.core.windows.net' stage2 = 'abfss://stage2@' + storage_account + '.dfs.core.windows.net' stage3 = 'abfss://stage3@' + storage_account + '.dfs.core.windows.net' # Process sectionmark data # Convert id values to use the Person.Id and Section.Id values set in the Education Data Platform. from pyspark.sql.functions import sha2, lit sqlContext.registerDataFrameAsTable(spark.read.format('parquet').load(stage2 + '/contoso_sis/studentsectionmark'), 'SectionMark') sqlContext.registerDataFrameAsTable(spark.read.format('parquet').load(stage2 + '/m365/Person'), 'Person') sqlContext.registerDataFrameAsTable(spark.read.format('parquet').load(stage2 + '/m365/Section'), 'Section') df = spark.sql("select sm.id Id, p.Id PersonId, s.Id SectionId, cast(sm.numeric_grade_earned as int) NumericGrade, \ sm.alpha_grade_earned AlphaGrade, sm.is_final_grade IsFinalGrade, cast(sm.credits_attempted as int) CreditsAttempted, cast(sm.credits_earned as int) CreditsEarned, \ sm.grad_credit_type GraduationCreditType, sm.id ExternalId, CURRENT_TIMESTAMP CreateDate, CURRENT_TIMESTAMP LastModifiedDate, true IsActive \ from SectionMark sm, Person p, Section s \ where sm.student_id = p.ExternalId \ and sm.section_id = s.ExternalId") df.write.format('parquet').mode('overwrite').save(stage2 + '/ContosoISD/SectionMark') df.write.format('parquet').mode('overwrite').save(stage2 + '/ContosoISD/SectionMark2') # Add SectionMark data to stage3 (anonymized parquet lake) df = df.withColumn('PersonId', 
sha2(df.PersonId, 256)) df.write.format('parquet').mode('overwrite').save(stage3 + '/ContosoISD/SectionMark') df.write.format('parquet').mode('overwrite').save(stage3 + '/ContosoISD/SectionMark2') # Repeat the above process, this time for student attendance # Convert id values to use the Person.Id, Org.Id and Section.Id values sqlContext.registerDataFrameAsTable(spark.read.format('parquet').load(stage2 + '/contoso_sis/studentattendance'), 'Attendance') sqlContext.registerDataFrameAsTable(spark.read.format('parquet').load(stage2 + '/m365/Org'), 'Org') df = spark.sql("select att.id Id, p.Id PersonId, att.school_year SchoolYear, o.Id OrgId, to_date(att.attendance_date,'MM/dd/yyyy') AttendanceDate, \ att.all_day AllDay, att.Period Period, s.Id SectionId, att.AttendanceCode AttendanceCode, att.PresenceFlag PresenceFlag, \ att.attendance_status AttendanceStatus, att.attendance_type AttendanceType, att.attendance_sequence AttendanceSequence \ from Attendance att, Org o, Person p, Section s \ where att.student_id = p.ExternalId \ and att.school_id = o.ExternalId \ and att.section_id = s.ExternalId") df.write.format('parquet').mode('overwrite').save(stage2 +'/ContosoISD/Attendance') # Add Attendance data to stage3 (anonymized parquet lake) df = df.withColumn('PersonId', sha2(df.PersonId, 256)) df.write.format('parquet').mode('overwrite').save(stage3 + '/ContosoISD/Attendance') # Add 'Department' column to Course (hardcoded to "Math" for this Contoso example) sqlContext.registerDataFrameAsTable(spark.read.format('parquet').load(stage2 + '/m365/Course'), 'Course') df = spark.sql("select Id, Name, Code, Description, ExternalId, CreateDate, LastModifiedDate, IsActive, CalendarId, 'Math' Department from Course") df.write.format('parquet').mode('overwrite').save(stage2 + '/ContosoISD/Course') df.write.format('parquet').mode('overwrite').save(stage3 + '/ContosoISD/Course') # Create spark db to allow for access to the data in the delta-lake via SQL on-demand. 
# This is only creating metadata for SQL on-demand, pointing to the data in the delta-lake. # This also makes it possible to connect in Power BI via the azure sql data source connector. def create_spark_db(db_name, source_path): spark.sql('CREATE DATABASE IF NOT EXISTS ' + db_name) spark.sql(f"create table if not exists " + db_name + ".Activity using PARQUET location '" + source_path + "/m365/Activity0p2'") spark.sql(f"create table if not exists " + db_name + ".Calendar using PARQUET location '" + source_path + "/m365/Calendar'") spark.sql(f"create table if not exists " + db_name + ".Org using PARQUET location '" + source_path + "/m365/Org'") spark.sql(f"create table if not exists " + db_name + ".Person using PARQUET location '" + source_path + "/m365/Person'") spark.sql(f"create table if not exists " + db_name + ".PersonIdentifier using PARQUET location '" + source_path + "/m365/PersonIdentifier'") spark.sql(f"create table if not exists " + db_name + ".RefDefinition using PARQUET location '" + source_path + "/m365/RefDefinition'") spark.sql(f"create table if not exists " + db_name + ".Section using PARQUET location '" + source_path + "/m365/Section'") spark.sql(f"create table if not exists " + db_name + ".Session using PARQUET location '" + source_path + "/m365/Session'") spark.sql(f"create table if not exists " + db_name + ".StaffOrgAffiliation using PARQUET location '" + source_path + "/m365/StaffOrgAffiliation'") spark.sql(f"create table if not exists " + db_name + ".StaffSectionMembership using PARQUET location '" + source_path + "/m365/StaffSectionMembership'") spark.sql(f"create table if not exists " + db_name + ".StudentOrgAffiliation using PARQUET location '" + source_path + "/m365/StudentOrgAffiliation'") spark.sql(f"create table if not exists " + db_name + ".StudentSectionMembership using PARQUET location '" + source_path + "/m365/StudentSectionMembership'") spark.sql(f"create table if not exists " + db_name + ".Course using PARQUET location '" + 
source_path + "/ContosoISD/Course'") spark.sql(f"create table if not exists " + db_name + ".Attendance using PARQUET location '" + source_path + "/ContosoISD/Attendance'") spark.sql(f"create table if not exists " + db_name + ".SectionMark using PARQUET location '" + source_path + "/ContosoISD/SectionMark'") spark.sql(f"create table if not exists " + db_name + ".SectionMark2 using PARQUET location '" + source_path + "/ContosoISD/SectionMark2'") db_prefix = 'test_' if use_test_env else '' create_spark_db(db_prefix + 's2_ContosoISD', stage2) create_spark_db(db_prefix + 's3_ContosoISD', stage3) ```
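The stage3 tables above anonymize `PersonId` with Spark's `sha2(col, 256)`. The same digest can be reproduced outside Spark with `hashlib`, which is handy for spot-checking that the anonymized tables still join on the hashed ids (a sketch, assuming the ids are UTF-8 strings, which matches Spark's hex output for string columns):

```python
import hashlib

def anonymize_id(person_id):
    """Hex SHA-256 digest of the id, matching Spark's sha2(col, 256)."""
    return hashlib.sha256(person_id.encode('utf-8')).hexdigest()

# The mapping is deterministic: the same source id always maps to the
# same digest, so joins across anonymized stage3 tables still line up.
print(anonymize_id('abc'))  # ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```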
``` import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torch.autograd import Variable from torch.utils.data import TensorDataset, Dataset, DataLoader, random_split from torch.nn.utils.rnn import pack_padded_sequence, pack_sequence, pad_packed_sequence, pad_sequence import os import sys import pickle import logging import random from pathlib import Path from math import log, ceil from typing import List, Tuple, Set, Dict import numpy as np import pandas as pd from sklearn import metrics import seaborn as sns import matplotlib.pyplot as plt sys.path.append('..') from src.data import prepare_data, prepare_heatmap_data, SOURCE_ASSIST0910_SELF, SOURCE_ASSIST0910_ORIG sns.set() sns.set_style('whitegrid') sns.set_palette('Set1') # ========================= # PyTorch version & GPU setup # ========================= print('PyTorch:', torch.__version__) dev = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print('Using Device:', dev) # ========================= # Seed # ========================= SEED = 0 random.seed(SEED) np.random.seed(SEED) torch.manual_seed(SEED) # torch.backends.cudnn.deterministic = True # torch.backends.cudnn.benchmark = False # ========================= # Parameters # ========================= model_name = 'LSTM' sequence_size = 20 epoch_size = 1000 lr = 0.01 batch_size, n_hidden, n_skills, n_layers = 100, 200, 124, 2 n_output = n_skills PRESERVED_TOKENS = 2 # PAD, SOS onehot_size = n_skills * 2 + PRESERVED_TOKENS n_input = ceil(log(2 * n_skills)) # n_input = onehot_size # ========================= # Data # ========================= train_dl, eval_dl = prepare_data( SOURCE_ASSIST0910_ORIG, 'base', n_skills, preserved_tokens='?', min_n=3, max_n=sequence_size, batch_size=batch_size, device=dev, sliding_window=0) print(train_dl.dataset.tensors[2].size(), eval_dl.dataset.tensors[2].size()) # ========================= # Model # ========================= class DKT(nn.Module): ''' The original DKT model '''
def __init__(self, dev, model_name, n_input, n_hidden, n_output, n_layers, batch_size, dropout=0.6, bidirectional=False): super(DKT, self).__init__() self.dev = dev self.model_name = model_name self.n_input = n_input self.n_hidden = n_hidden self.n_output = n_output self.n_layers = n_layers self.batch_size = batch_size self.bidirectional = bidirectional self.directions = 2 if self.bidirectional else 1 nonlinearity = 'tanh' # https://pytorch.org/docs/stable/nn.html#rnn if model_name == 'RNN': self.rnn = nn.RNN(n_input, n_hidden, n_layers, nonlinearity=nonlinearity, dropout=dropout, bidirectional=self.bidirectional) elif model_name == 'LSTM': self.lstm = nn.LSTM(n_input, n_hidden, n_layers, dropout=dropout, bidirectional=self.bidirectional) else: raise ValueError('Model name not supported') self.decoder = nn.Linear(n_hidden * self.directions, n_output) # self.sigmoid = nn.Sigmoid() def forward(self, input): if self.model_name == 'RNN': h0 = self.initHidden0() out, _hn = self.rnn(input, h0) elif self.model_name == 'LSTM': h0 = self.initHidden0() c0 = self.initC0() out, (_hn, _cn) = self.lstm(input, (h0, c0)) # top_n, top_i = out.topk(1) # decoded = self.decoder(out.contiguous().view(out.size(0) * out.size(1), out.size(2))) out = self.decoder(out) # decoded = self.sigmoid(decoded) return out def initHidden0(self): return torch.zeros(self.n_layers * self.directions, self.batch_size, self.n_hidden).to(self.dev) def initC0(self): return torch.zeros(self.n_layers * self.directions, self.batch_size, self.n_hidden).to(self.dev) # ========================= # Prepare and Train # ========================= assert model_name in {'LSTM', 'RNN'} model = DKT(dev, model_name, n_input, n_hidden, n_output, n_layers, batch_size) model.to(dev) loss_func = nn.BCELoss() opt = optim.SGD(model.parameters(), lr=lr) def loss_batch(model, loss_func, *args, opt=None): # Unpack data from DataLoader xs, yq, ya = args input = xs compressed_sensing = True if compressed_sensing and onehot_size != 
n_input: torch.manual_seed(SEED) cs_basis = torch.randn(onehot_size, n_input).to(dev) input = torch.mm( input.contiguous().view(-1, onehot_size), cs_basis) # https://pytorch.org/docs/stable/nn.html?highlight=rnn#rnn # Per the RNN docs, input has shape (seq_len, batch, input_size) input = input.view(batch_size, sequence_size, n_input) input = input.permute(1, 0, 2) target = ya out = model(input) pred = torch.sigmoid(out[-1]) # squash into the [0, 1] interval prob = torch.max(pred * yq, 1)[0] predicted = prob actual = target loss = loss_func(prob, target) # TODO: use the other time steps in the loss as well, not just the last one? predicted_ks = pred if opt: # backpropagation opt.zero_grad() loss.backward() opt.step() # print(predicted_ks.shape) return loss.item(), len(ya), predicted, actual, predicted_ks ``` ## Main ``` def main(): debug = False logging.basicConfig() logger = logging.getLogger('dkt log') logger.setLevel(logging.INFO + 1) train_loss_list = [] train_auc_list = [] eval_loss_list = [] eval_auc_list = [] eval_recall_list = [] eval_f1_list = [] x = [] for epoch in range(1, epoch_size + 1): print_train = epoch % 10 == 0 print_eval = epoch % 10 == 0 print_auc = epoch % 10 == 0 # ====== # TRAIN # ====== model.train() val_prob = [] val_targ = [] current_epoch_train_loss = [] for args in train_dl: loss_item, length, predicted, actual, predicted_ks = loss_batch(model, loss_func, *args, opt=opt) val_prob.append(predicted) val_targ.append(actual) current_epoch_train_loss.append(loss_item) # stop at first batch if debug if debug: break if print_train: loss = np.array(current_epoch_train_loss) logger.log(logging.INFO + (5 if epoch % 100 == 0 else 0), 'TRAIN Epoch: {} Loss: {}'.format(epoch, loss.mean())) train_loss_list.append(loss.mean()) # AUC, Recall, F1 # During training the tensors carry gradients, so detach before converting to numpy y = torch.cat(val_targ).cpu().detach().numpy() pred = torch.cat(val_prob).cpu().detach().numpy() # AUC fpr, tpr, thresholds = metrics.roc_curve(y, pred, pos_label=1) logger.log(logging.INFO + (5 if epoch % 100 == 0 else 0), 'TRAIN Epoch: {} AUC:
{}'.format(epoch, metrics.auc(fpr, tpr))) train_auc_list.append(metrics.auc(fpr, tpr)) # ====== # EVAL # ====== if print_eval: with torch.no_grad(): model.eval() val_prob = [] val_targ = [] current_eval_loss = [] for args in eval_dl: loss_item, length, predicted, actual, predicted_ks = loss_batch(model, loss_func, *args, opt=None) val_prob.append(predicted) val_targ.append(actual) current_eval_loss.append(loss_item) # stop at first batch if debug if debug: break loss = np.array(current_eval_loss) logger.log(logging.INFO + (5 if epoch % 100 == 0 else 0), 'EVAL Epoch: {} Loss: {}'.format(epoch, loss.mean())) eval_loss_list.append(loss.mean()) # AUC, Recall, F1 if print_auc: y = torch.cat(val_targ).cpu() pred = torch.cat(val_prob).cpu() # AUC fpr, tpr, thresholds = metrics.roc_curve(y, pred, pos_label=1) logger.log(logging.INFO + (5 if epoch % 100 == 0 else 0), 'EVAL Epoch: {} AUC: {}'.format(epoch, metrics.auc(fpr, tpr))) eval_auc_list.append(metrics.auc(fpr, tpr)) # Recall logger.debug('EVAL Epoch: {} Recall: {}'.format(epoch, metrics.recall_score(y, pred.round()))) # F1 score logger.debug('EVAL Epoch: {} F1 score: {}'.format(epoch, metrics.f1_score(y, pred.round()))) if epoch % 10 == 0: x.append(epoch) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x, train_loss_list, label='train loss') ax.plot(x, train_auc_list, label='train auc') ax.plot(x, eval_loss_list, label='eval loss') ax.plot(x, eval_auc_list, label='eval auc') ax.legend() print(len(train_loss_list), len(eval_loss_list), len(eval_auc_list)) plt.show() print(f'Max Eval AUC: {max(eval_auc_list)}') if __name__ == '__main__': print('starting') main() f'{0.7785775736240735:<8.4f}' import datetime now = datetime.datetime.now().strftime('%Y_%m%d_%H%M') torch.save(model.state_dict(), '/home/qqhann/qqhann-paper/ECML2019/dkt_neo/models/rnn_' + now + '.' 
+ str(1000)) ``` # Heatmap ``` heat_dl = prepare_heatmap_data( SOURCE_ASSIST0910_ORIG, 'base', n_skills, PRESERVED_TOKENS, min_n=3, max_n=sequence_size, batch_size=batch_size, device=dev, sliding_window=0) heat_dl.dataset.tensors model # if model_name in {'LSTM', 'RNN'}: # model = DKT(dev, model_name, n_input, n_hidden, n_output, n_layers, batch_size) # elif model_name in {'AttentionalDKT'}: # model = AttentionalDKT(dev, model_name, n_input, n_hidden, n_output, n_layers, batch_size) # model.to(dev) # # Load model # # ---------- # load_model = None # load_model = '/home/qqhann/qqhann-paper/ECML2019/dkt_neo/models/rnn_2019_0405_1618.1000' # if load_model: # model.load_state_dict(torch.load(load_model)) # model = model.to(dev) # # ---------- def heatmap(): logging.basicConfig() logger = logging.getLogger('dkt log') logger.setLevel(logging.INFO + 1) train_loss_list = [] train_auc_list = [] eval_loss_list = [] eval_auc_list = [] eval_recall_list = [] eval_f1_list = [] x = [] # ====== # HEATMAP # ====== with torch.no_grad(): model.eval() all_out_prob = [] val_prob = [] val_targ = [] current_eval_loss = [] for args in eval_dl: loss_item, length, predicted, actual, predicted_ks = loss_batch(model, loss_func, *args, opt=None) val_prob.append(predicted) val_targ.append(actual) current_eval_loss.append(loss_item) all_out_prob.append(predicted_ks) loss = np.array(current_eval_loss) # logger.log(logging.INFO + (5 if epoch % 100 == 0 else 0), # 'EVAL Epoch: {} Loss: {}'.format(epoch, loss.mean())) eval_loss_list.append(loss.mean()) # AUC, Recall, F1 y = torch.cat(val_targ).cpu() pred = torch.cat(val_prob).cpu() # AUC fpr, tpr, thresholds = metrics.roc_curve(y, pred, pos_label=1) eval_auc_list.append(metrics.auc(fpr, tpr)) # # Recall # logger.debug('EVAL Epoch: {} Recall: {}'.format(epoch, metrics.recall_score(y, pred.round()))) # # F1 score # logger.debug('EVAL Epoch: {} F1 score: {}'.format(epoch, metrics.f1_score(y, pred.round()))) print(eval_auc_list) return all_out_prob if 
__name__ == '__main__': all_out_prob = heatmap() print('finish') _d = all_out_prob[-1].squeeze(1).t().cpu() fig, ax = plt.subplots(figsize=(20, 10)) sns.heatmap(_d, vmin=0, vmax=1, ax=ax) ```
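The `prob = torch.max(pred * yq, 1)[0]` step inside `loss_batch` above selects, for each sample, the predicted probability of the queried skill via a one-hot mask. A minimal NumPy sketch of the same selection (the array values here are purely illustrative):

```python
import numpy as np

# pred: model outputs for 3 samples over 4 skills (illustrative values)
pred = np.array([[0.1, 0.9, 0.3, 0.2],
                 [0.7, 0.2, 0.6, 0.4],
                 [0.5, 0.5, 0.8, 0.1]])
# yq: one-hot mask marking which skill is queried for each sample
yq = np.array([[0, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 1, 0]])
# multiplying zeroes out every column except the queried one,
# so the row-wise max recovers exactly the queried skill's probability
prob = (pred * yq).max(axis=1)
print(prob)  # [0.9 0.7 0.8]
```

This works because the sigmoid outputs are non-negative, so the masked maximum is always the queried entry.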
## Analysis of the UK's Trade for the 2014 Trading Year Task: A country's economy depends, sometimes heavily, on its exports and imports. The United Nations Comtrade database provides data on global trade. It will be used to analyse the UK's imports and exports of milk and cream in 2014: - How much does the UK export and import and is the balance positive (more exports than imports)? - Which are the main trading partners, i.e. from/to which countries does the UK import/export the most? - Which are the regular customers, i.e. which countries buy milk from the UK every month? - Which countries does the UK both import from and export to? ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt from pandas import * %matplotlib inline data=pd.read_csv('comtrade_milk_uk_monthly_14.csv',dtype={'Commodity Code':str}) pd.options.display.max_columns=35 display(data.head(2)) data['Commodity Code'].value_counts() data.describe().T def milk_type(code): if code=='0401': return 'unprocessed' if code=='0402': return 'processed' return 'unknown' commodity= 'Milk and Cream' data[commodity]=data['Commodity Code'].apply(milk_type) data['Milk and Cream'].value_counts() data_new=pd.DataFrame(data,columns=['Period','Partner','Milk and Cream','Trade Flow','Trade Value (US$)']) data_new.tail(5) ``` ## Question 1 ### How much does the UK export and import and is the balance positive (more exports than imports)? 
``` data_new['Trade Flow'].value_counts() print(data_new.shape) data_new.head() # data_new.Partner.value_counts() data_new=data_new[data_new['Partner']!='World'] data_new.shape grouped_data=data_new.groupby("Trade Flow") export_import=pd.DataFrame(grouped_data['Trade Value (US$)'].aggregate(sum)) export_import difference=export_import['Trade Value (US$)'][0]-export_import['Trade Value (US$)'][1] print(f'The difference between exports and imports is ${difference}') ``` We see here that exports exceed imports by $334,766,993. <b><i>Answer to question 1:</i><br> Hence, the UK exports <i>$898,651,935 and imports $563,884,942</i>, a positive difference of $334,766,993</b> ## Question 2 #### Which are the main trading partners, i.e. from/to which countries does the UK import/export the most? ##### Imports ``` imports=data_new[data_new['Trade Flow']=='Imports'] print(imports.shape) imports.head() grouped_import=imports.groupby('Partner') grouped_import.head() total_imports=grouped_import['Trade Value (US$)'].aggregate(sum).sort_values(inplace=False,ascending=False) total_imports.head() total_imports.head(8).plot(kind='barh') plt.title("Top Countries the UK Imports From") plt.xlabel("Trade Value (US$)") plt.savefig("Top Countries Importing from the UK") plt.show() ``` We see here that Ireland, France and Germany are the top three countries the UK imports from. ##### Exports ``` exports_data=data_new[data_new['Trade Flow']=="Exports"] print(exports_data.shape) exports_data.head() grouped_export=exports_data.groupby("Partner") total_exports=grouped_export['Trade Value (US$)'].aggregate(sum).sort_values(inplace=False,ascending=False) total_exports.head() total_exports.head(8).plot(kind='barh') plt.title("UK's Exporting Destinations") plt.xlabel("Trade Value (US$)") plt.savefig("UK's Exporting Destinations") plt.show() ``` Here, we see that the UK's top three export destinations are Ireland, Algeria and the Netherlands. 
## Question 3 ### Which are the regular customers, i.e. which countries buy milk from the UK every month? ``` data['Period Desc.'].value_counts() ``` We see that there are 12 months listed in this data. A regular customer therefore buys both commodities every month, giving 12 months × 2 commodities = 24 records per partner. ``` def regular_customer(group): return len(group)==24 grouped=exports_data.groupby('Partner') regular=grouped.filter(regular_customer) regular[(regular['Period']==201405)&(regular['Milk and Cream']=='unprocessed')] percentage_volume=np.round((regular['Trade Value (US$)'].sum() / exports_data['Trade Value (US$)'].sum())*100) print(f"Regular customers account for {percentage_volume}% of the UK's export value") ``` Filtering the regular customers down to any single month and commodity returns the same set of partners.<br> We also see that these regular customers account for about 72% of the UK's export value. ## Question 4 ### Which countries does the UK both import from and export to? We use a pivot table to find the partners with which the UK both imports and exports ``` trading_countries=pivot_table(data_new, index=['Partner'],columns=['Trade Flow'],values='Trade Value (US$)', aggfunc=sum) print(trading_countries.shape) trading_countries.head() trading_countries.isnull().sum() trading_countries.dropna(inplace=True) print(trading_countries.shape) trading_countries.head() ``` Here, we see that there are 25 countries with which the UK shares a mutual trading relationship. ## CONCLUSION After analysing the data, we come to the following conclusions about the UK's 2014 trading year. - The UK trades at a surplus, recording a positive difference of over $334 million. - Ireland, France and Germany are the top three countries the UK imports from. - The UK's top three export destinations are Ireland, Algeria and the Netherlands. - Regular customers account for about 72% of the UK's export value. - The UK shared mutual trading relationships with 25 countries in the 2014 trading year.
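As an aside, the `groupby(...).filter(...)` pattern used in Question 3 above keeps only the groups satisfying a predicate. A toy sketch (the frame and the 3-month threshold here are illustrative; the notebook uses 12 months × 2 commodities = 24 rows):

```python
import pandas as pd

# Toy stand-in for the exports data: Ireland appears in all 3 months
# for both commodities (6 rows), Malta only in month 1 (2 rows).
df = pd.DataFrame({
    'Partner': ['Ireland'] * 6 + ['Malta'] * 2,
    'Period': [1, 1, 2, 2, 3, 3, 1, 1],
    'Value': [10, 20, 30, 40, 50, 60, 5, 5],
})

# keep only partners present in every month for both commodities
regular = df.groupby('Partner').filter(lambda g: len(g) == 6)
print(regular['Partner'].unique())  # ['Ireland']
```

`filter` returns the original rows (not an aggregate) for every group that passes, which is why the notebook can then slice `regular` by month and commodity.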
# Spark Lab This lab will demonstrate how to perform web server log analysis with Spark. Log data is a very large, common data source and contains a rich set of information. It comes from many sources, such as web, file, and compute servers, application logs, user-generated content, and can be used for monitoring servers, improving business and customer intelligence, building recommendation systems, fraud detection, and much more. This lab will show you how to use Spark on real-world text-based production logs and fully harness the power of that data. ### Apache Web Server Log file format The log files that we use for this assignment are in the [Apache Common Log Format (CLF)](http://httpd.apache.org/docs/1.3/logs.html#common) format. The log file entries produced in CLF will look something like this: `127.0.0.1 - - [01/Aug/1995:00:00:01 -0400] "GET /images/launch-logo.gif HTTP/1.0" 200 1839` Each part of this log entry is described below. * **`127.0.0.1`:** this is the IP address (or host name, if available) of the client (remote host) which made the request to the server. * **`-`:** the "hyphen" in the output indicates that the requested piece of information (user identity from remote machine) is not available. * **`-`:** the "hyphen" in the output indicates that the requested piece of information (user identity from local logon) is not available. * **`[01/Aug/1995:00:00:01 -0400]`:** the time that the server finished processing the request. The format is: `[day/month/year:hour:minute:second timezone]`. * **`"GET /images/launch-logo.gif HTTP/1.0"`:** this is the first line of the request string from the client. It consists of a three components: the request method (e.g., `GET`, `POST`, etc.), the endpoint, and the client protocol version. * **`200`:** this is the status code that the server sends back to the client. 
This information is very valuable, because it reveals whether the request resulted in a successful response (codes beginning in 2), a redirection (codes beginning in 3), an error caused by the client (codes beginning in 4), or an error in the server (codes beginning in 5). The full list of possible status codes can be found in the HTTP specification ([RFC 2616](https://www.ietf.org/rfc/rfc2616.txt) section 10). * **`1839`:** the last entry indicates the size of the object returned to the client, not including the response headers. If no content was returned to the client, this value will be "-" (or sometimes 0). Using the CLF as defined above, we create a regular expression pattern to extract the nine fields of the log line. The function returns a pair consisting of a Row object and 1. If the log line fails to match the regular expression, the function returns a pair consisting of the log line string and 0. A '-' value in the content size field is cleaned up by substituting it with 0. The function converts the log line's date string into a `Cal` object using the given `parseApacheTime` function. We, then, create the primary RDD and we'll use in the rest of this assignment. We first load the text file and convert each line of the file into an element in an RDD. Next, we use `map(parseApacheLogLine)` to apply the parse function to each element and turn each line into a pair `Row` object. Finally, we cache the RDD in memory since we'll use it throughout this notebook. The log file is available at `data/apache/apache.log`. 
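Before the Scala implementation below, the field extraction can be sketched in Python; the regular expression mirrors the lab's `APACHE_ACCESS_LOG_PATTERN`, and the variable names are illustrative:

```python
import re

# Regex with one capture group per CLF field, as described above
CLF_PATTERN = re.compile(
    r'^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)\s*" (\d{3}) (\S+)')

line = ('127.0.0.1 - - [01/Aug/1995:00:00:01 -0400] '
        '"GET /images/launch-logo.gif HTTP/1.0" 200 1839')
m = CLF_PATTERN.match(line)
host, _, _, ts, method, endpoint, protocol, status, size = m.groups()
# a "-" content size is cleaned up to 0, as the lab does
content_size = 0 if size == '-' else int(size)
print(host, method, endpoint, status, content_size)
# 127.0.0.1 GET /images/launch-logo.gif 200 1839
```

A line that fails to match yields `m is None`, which corresponds to the parse function returning the raw line with a 0 flag.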
``` import scala.util.matching import org.apache.spark.rdd.RDD case class Cal(year: Int, month: Int, day: Int, hour: Int, minute: Int, second: Int) case class Row(host: String, clientID: String, userID: String, dateTime: Cal, method: String, endpoint: String, protocol: String, responseCode: Int, contentSize: Long) val month_map = Map("Jan" -> 1, "Feb" -> 2, "Mar" -> 3, "Apr" -> 4, "May" -> 5, "Jun" -> 6, "Jul" -> 7, "Aug" -> 8, "Sep" -> 9, "Oct" -> 10, "Nov" -> 11, "Dec" -> 12) //------------------------------------------------ // A regular expression pattern to extract fields from the log line val APACHE_ACCESS_LOG_PATTERN = """^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)\s*" (\d{3}) (\S+)""".r //------------------------------------------------ def parseApacheTime(s: String): Cal = { return Cal(s.substring(7, 11).toInt, month_map(s.substring(3, 6)), s.substring(0, 2).toInt, s.substring(12, 14).toInt, s.substring(15, 17).toInt, s.substring(18, 20).toInt) } //------------------------------------------------ def parseApacheLogLine(logline: String): (Either[Row, String], Int) = { val ret = APACHE_ACCESS_LOG_PATTERN.findAllIn(logline).matchData.toList if (ret.isEmpty) return (Right(logline), 0) val r = ret(0) val sizeField = r.group(9) var size: Long = 0 if (sizeField != "-") size = sizeField.toLong return (Left(Row(r.group(1), r.group(2), r.group(3), parseApacheTime(r.group(4)), r.group(5), r.group(6), r.group(7), r.group(8).toInt, size)), 1) } //------------------------------------------------ def parseLogs(): (RDD[(Either[Row, String], Int)], RDD[Row], RDD[String]) = { val fileName = "data/apache/apache.log" val parsedLogs = sc.textFile(fileName).map(parseApacheLogLine).cache() val accessLogs = parsedLogs.filter(x => x._2 == 1).map(x => x._1.left.get) val failedLogs = parsedLogs.filter(x => x._2 == 0).map(x => x._1.right.get) val failedLogsCount = failedLogs.count() if (failedLogsCount > 0) { println(s"Number of invalid logline: 
${failedLogsCount}") failedLogs.take(20).foreach(println) } println(s"Read ${parsedLogs.count()} lines, successfully parsed ${accessLogs.count()} lines, and failed to parse ${failedLogs.count()}") return (parsedLogs, accessLogs, failedLogs) } val (parsedLogs, accessLogs, failedLogs) = parseLogs() ``` ### Sample Analyses on the Web Server Log File Let's compute some statistics about the sizes of content being returned by the web server. In particular, we'd like to know the average, minimum, and maximum content sizes. We can compute the statistics by applying a `map` to the `accessLogs` RDD. The function given to `map` should extract the `contentSize` field from the RDD. The `map` produces a new RDD, called `contentSizes`, containing only the content sizes. To compute the minimum and maximum statistics, we can use the `min()` and `max()` functions on the new RDD. We can compute the average statistic by using the `reduce` function with a function that sums the two inputs, which represent two elements from the new RDD that are being reduced together. The result of the `reduce()` is the total content size from the log and it is to be divided by the number of requests as determined using the `count()` function on the new RDD. As the result of executing the following box, you should get the below result: ``` Content Size Avg: 17531, Min: 0, Max: 3421948 ``` ``` // Calculate statistics based on the content size. val contentSizes = accessLogs.map(_.contentSize).cache() println("Content Size Avg: " + contentSizes.reduce(_ + _) / contentSizes.count() + ", Min: " + contentSizes.min() + ", Max: " + contentSizes.max()) ``` Next, let's look at the "response codes" that appear in the log. As with the content size analysis, first we create a new RDD that contains the `responseCode` field from the `accessLogs` RDD. The difference here is that we will use a *pair tuple* instead of just the field itself (i.e., (response code, 1)). 
Using a pair tuple consisting of the response code and 1 will let us count the number of records with a particular response code. Using the new RDD `responseCodes`, we perform a `reduceByKey` operation that applies a given function pairwise to the values sharing the same key. Then, we cache the resulting RDD and create a list by using the `take` function. Once you run the code below, you should receive the following results: ``` Found 7 response codes Response Code Counts: (404,6185) (200,940847) (304,79824) (500,2) (501,17) (302,16244) (403,58) ``` ``` // extract the response code for each record and make pair of (response code, 1) val responseCodes = accessLogs.map(x => (x.responseCode, 1)) // count the number of records for each key val responseCodesCount = responseCodes.reduceByKey(_ + _).cache() // collect up to the first 100 records val responseCodesCountList = responseCodesCount.take(100) println("Found " + responseCodesCountList.length + " response codes") print("Response Code Counts: ") responseCodesCountList.foreach(x => print(x + " ")) ``` Let's look at "hosts" that have accessed the server multiple times (e.g., more than 10 times). First we create a new RDD to keep the `host` field from the `accessLogs` RDD using a pair tuple consisting of the host and 1 (i.e., (host, 1)), which will let us count how many records were created by a particular host's request. Using the new RDD, we perform a `reduceByKey` function with a given function to add the two values. We then filter the result based on the count of accesses by each host (the second element of each pair) being greater than 10. Next, we extract the host name by performing a `map` to return the first element of each pair. Finally, we extract 20 elements from the resulting RDD. 
The result should be as below: ``` Any 20 hosts that have accessed more than 10 times: ix-aug-ga1-13.ix.netcom.com n1043347.ksc.nasa.gov d02.as1.nisiq.net 192.112.22.82 anx3p4.trib.com 198.215.127.2 198.77.113.34 crc182.cac.washington.edu telford-107.salford.ac.uk universe6.barint.on.ca gatekeeper.homecare.com 157.208.11.7 unknown.edsa.co.za onyx.southwind.net ppp-hck-2-12.ios.com ix-lv5-04.ix.netcom.com f-umbc7.umbc.edu cs006p09.nam.micron.net dd22-025.compuserve.com hak-lin-kim.utm.edu ``` ``` // extract the host field for each record and make pair of (host, 1) val hosts = accessLogs.map(x => (x.host, 1)) // count the number of records for each key val hostsCount = hosts.reduceByKey(_+_) // keep the records with the count greater than 10 val hostMoreThan10 = hostsCount.filter(x => (x._2 > 10)) // take any 20 of them val hostsPick20 = hostMoreThan10.map(_._1).take(20) println("Any 20 hosts that have accessed more than 10 times: ") hostsPick20.foreach(println) ``` For the final example, we'll look at the top endpoints (URIs) in the log. To determine them, we first create a new RDD to extract the `endpoint` field from the `accessLogs` RDD using a pair tuple consisting of the endpoint and 1 (i.e., (endpoint, 1)), which will let us count how many requests were made to a particular endpoint. Using the new RDD, we perform a `reduceByKey` to add the two values. We then extract the top 10 endpoints by performing a `takeOrdered` with a value of 10 and a reversed ordering on the count (the second element of each pair), so the most frequently accessed endpoints come first. 
Here is the result: ``` Top ten endpoints: (/images/NASA-logosmall.gif,59737) (/images/KSC-logosmall.gif,50452) (/images/MOSAIC-logosmall.gif,43890) (/images/USA-logosmall.gif,43664) (/images/WORLD-logosmall.gif,43277) (/images/ksclogo-medium.gif,41336) (/ksc.html,28582) (/history/apollo/images/apollo-logo1.gif,26778) (/images/launch-logo.gif,24755) (/,20292) ``` ``` // extract the endpoint for each record and make pair of (endpoint, 1) val endpoints = accessLogs.map(x => (x.endpoint, 1)) // count the number of records for each key val endpointCounts = endpoints.reduceByKey(_ + _) // extract the top 10 val topEndpoints = endpointCounts.takeOrdered(10)(Ordering[Int].reverse.on(_._2)) println("Top ten endpoints: ") topEndpoints.foreach(println) ``` ### Analyzing Web Server Log File What are the top ten endpoints which did not have return code 200? Create a sorted list containing top ten endpoints and the number of times that they were accessed with non-200 return code. Think about the steps that you need to perform to determine which endpoints did not have a 200 return code, how you will uniquely count those endpoints, and sort the list. 
You should receive the following result: ``` Top ten failed URLs: (/images/NASA-logosmall.gif,8761) (/images/KSC-logosmall.gif,7236) (/images/MOSAIC-logosmall.gif,5197) (/images/USA-logosmall.gif,5157) (/images/WORLD-logosmall.gif,5020) (/images/ksclogo-medium.gif,4728) (/history/apollo/images/apollo-logo1.gif,2907) (/images/launch-logo.gif,2811) (/,2199) (/images/ksclogosmall.gif,1622) ``` ``` // keep the logs with error code not 200 val not200 = accessLogs.filter(x => x.responseCode != 200) // make a pair of (x, 1) val endpointCountPairTuple = not200.map(x => (x.endpoint, 1)) // count the number of records for each key x val endpointSum = endpointCountPairTuple.reduceByKey(_+_) // take the top 10 val topTenErrURLs = endpointSum.takeOrdered(10)(Ordering[Int].reverse.on(_._2)) println("Top ten failed URLs: ") topTenErrURLs.foreach(println) ``` Let's count the number of unique hosts in the entire log. Think about the steps that you need to perform to count the number of different hosts in the log. The result should be as below: ``` Unique hosts: 54507 ``` ``` // extract the host field for each record val hosts = accessLogs.map(x => x.host) // keep the uniqe hosts val uniqueHosts = hosts.distinct() // count them val uniqueHostCount = uniqueHosts.count() println("Unique hosts: " + uniqueHostCount) ``` For an advanced exercise, let's determine the number of unique hosts in the entire log on a day-by-day basis. This computation will give us counts of the number of unique daily hosts. We'd like a list sorted by increasing day of the month, which includes the day of the month and the associated number of unique hosts for that day. Make sure you cache the resulting RDD `dailyHosts`, so that we can reuse it in the next exercise. Think about the steps that you need to perform to count the number of different hosts that make requests *each* day. Since the log only covers a single month, you can ignore the month. 
Here is the output you should receive: ``` Unique hosts per day: (1,2582) (3,3222) (4,4190) (5,2502) (6,2537) (7,4106) (8,4406) (9,4317) (10,4523) (11,4346) (12,2864) (13,2650) (14,4454) (15,4214) (16,4340) (17,4385) (18,4168) (19,2550) (20,2560) (21,4134) (22,4456) ``` ``` // make pairs of (day, host) val dayToHostPairTuple = accessLogs.map(x => (x.dateTime.day, x.host)) // group by day val dayGroupedHosts = dayToHostPairTuple.groupByKey() // make pairs of (day, number of host in that day) val dayHostCount = dayGroupedHosts.map(x => (x._1, x._2.toSet.size)) // sort by day val dailyHosts = dayHostCount.sortByKey().cache() // return the records as a list val dailyHostsList = dailyHosts.take(30) println("Unique hosts per day: ") dailyHostsList.foreach(println) ``` Next, let's determine the average number of requests on a day-by-day basis. We'd like a list by increasing day of the month and the associated average number of requests per host for that day. Make sure you cache the resulting RDD `avgDailyReqPerHost` so that we can reuse it in the next exercise. To compute the average number of requests per host, get the total number of request across all hosts and divide that by the number of unique hosts. Since the log only covers a single month, you can skip checking for the month. Also to keep it simple, when calculating the approximate average use the integer value. 
The result should be as below: ``` Average number of daily requests per Host is: (1,13) (3,12) (4,14) (5,12) (6,12) (7,13) (8,13) (9,14) (10,13) (11,14) (12,13) (13,13) (14,13) (15,13) (16,13) (17,13) (18,13) (19,12) (20,12) (21,13) (22,12) ``` ``` // make pairs of (day, host) val dayAndHostTuple = accessLogs.map(x => (x.dateTime.day, x.host)) // group by day val groupedByDay = dayAndHostTuple.groupByKey() // sort by day val sortedByDay = groupedByDay.sortByKey() // calculate the average requests per host for each day val avgDailyReqPerHost = sortedByDay.map(x=>(x._1, x._2.size / x._2.toSet.size)) // return the records as a list val avgDailyReqPerHostList = avgDailyReqPerHost.take(30) println("Average number of daily requests per Host is: ") avgDailyReqPerHostList.foreach(println) ``` ### Exploring 404 Response Codes Let's count the 404 response codes. Create an RDD containing only log records with a 404 response code. Make sure you `cache()` the RDD `badRecords` as we will use it in the rest of this exercise. How many 404 records are in the log? Here is the result: ``` Found 6185 404 URLs. ``` ``` val badRecords = accessLogs.filter(x => x.responseCode == 404).cache() println("Found " + badRecords.count() + " 404 URLs.") ``` Now, let's list the 404 response code records. Using the RDD containing only log records with a 404 response code that you cached in the previous part, print out a list of up to 10 distinct endpoints that generate 404 errors - no endpoint should appear more than once in your list. You should receive the following output as your result: ``` 404 URLS: /SHUTTLE/COUNTDOWN /shuttle/missions/sts-71/images/www.acm.uiuc.edu/rml/Gifs /shuttle/technology/stsnewsrof/stsref-toc.html /de/systems.html /ksc.htnl /~pccomp/graphics/sinsght.gif /PERSONS/NASA-CM. 
/shuttle/missions/sts-1/sts-1-mission.html /history/apollo/sa-1/sa-1-patch-small.gif /images/sts-63-Imax ``` ``` val badEndpoints = badRecords.map(x => x.endpoint) val badUniqueEndpoints = badEndpoints.distinct() val badUniqueEndpointsPick10 = badUniqueEndpoints.take(10) println("404 URLS: ") badUniqueEndpointsPick10.foreach(println) ``` Using the RDD containing only log records with a 404 response code that you cached before, print out a list of the top 10 endpoints that generate the most 404 errors. Remember, top endpoints should be in sorted order. The result would be as below: ``` Top ten 404 URLs: (/pub/winvn/readme.txt,633) (/pub/winvn/release.txt,494) (/shuttle/missions/STS-69/mission-STS-69.html,431) (/images/nasa-logo.gif,319) (/elv/DELTA/uncons.htm,178) (/shuttle/missions/sts-68/ksc-upclose.gif,156) (/history/apollo/sa-1/sa-1-patch-small.gif,146) (/images/crawlerway-logo.gif,120) (/://spacelink.msfc.nasa.gov,117) (/history/apollo/pad-abort-test-1/pad-abort-test-1-patch-small.gif,100) ``` ``` val badEndpointsCountPairTuple = badRecords.map(x => (x.endpoint, 1)) val badEndpointsSum = badEndpointsCountPairTuple.reduceByKey(_+_) val badEndpointsTop10 = badEndpointsSum.takeOrdered(10)(Ordering[Int].reverse.on[(String, Int)](_._2)) println("Top ten 404 URLs: ") badEndpointsTop10.foreach(println) ``` Instead of looking at the endpoints that generated 404 errors, now let's look at the hosts that encountered 404 errors. Using the RDD containing only log records with a 404 response code that you cached before, print out a list of the top 10 hosts that generate the most 404 errors. 
Here is the result: ``` Top ten hosts that generated errors: (piweba3y.prodigy.com,39) (maz3.maz.net,39) (gate.barr.com,38) (m38-370-9.mit.edu,37) (ts8-1.westwood.ts.ucla.edu,37) (nexus.mlckew.edu.au,37) (204.62.245.32,33) (spica.sci.isas.ac.jp,27) (163.206.104.34,27) (www-d4.proxy.aol.com,26) ``` ``` val errHostsCountPairTuple = badRecords.map(x => (x.host,1)) val errHostsSum = errHostsCountPairTuple.reduceByKey(_+_) val errHostsTop10 = errHostsSum.takeOrdered(10)(Ordering[Int].reverse.on[(String, Int)](_._2)) println("Top ten hosts that generated errors: ") errHostsTop10.foreach(println) ``` Let's explore the 404 records temporally. Break down the 404 requests by day and get the daily counts sorted by day as a list. Since the log only covers a single month, you can ignore the month in your checks. Cache the `errDateSorted` at the end. The output should be as below: ``` 404 errors by day: (1,243) (3,303) (4,346) (5,234) (6,372) (7,532) (8,381) (9,279) (10,314) (11,263) (12,195) (13,216) (14,287) (15,326) (16,258) (17,269) (18,255) (19,207) (20,312) (21,305) (22,288) ``` ``` // count the 404 records for each day val errDateCountPairTuple = badRecords.map(x => (x.dateTime.day, 1)) val errDateSum = errDateCountPairTuple.reduceByKey(_+_) // sort by day and cache for reuse in the next exercise val errDateSorted = errDateSum.sortByKey().cache() val errByDate = errDateSorted.take(30) println("404 errors by day: ") errByDate.foreach(println) ``` Using the RDD `errDateSorted` you cached before, what are the top five days for 404 response codes and the corresponding counts of 404 response codes? ``` Top five dates for 404 requests: (7,532) (8,381) (6,372) (4,346) (15,326) ``` ``` val topErrDate = errDateSorted.takeOrdered(5)(Ordering[Int].reverse.on[(Int, Int)](_._2)) print("Top five dates for 404 requests: ") topErrDate.foreach(x => print(x + " ")) ``` Using the RDD `badRecords` you cached before, and by hour of the day and in increasing order, create an RDD containing how many requests had a 404 return code for each hour of the day (midnight starts at 0). 
``` 404 requests by hour of day: (0,175) (1,171) (2,422) (3,272) (4,102) (5,95) (6,93) (7,122) (8,199) (9,185) (10,329) (11,263) (12,438) (13,397) (14,318) (15,347) (16,373) (17,330) (18,268) (19,269) (20,270) (21,241) (22,234) (23,272) ``` ``` val hourCountPairTuple = badRecords.map(x => (x.dateTime.hour, 1)) val hourRecordsSum = hourCountPairTuple.reduceByKey(_+_) val hourRecordsSorted = hourRecordsSum.sortByKey() val errHourList = hourRecordsSorted.collect() println("404 requests by hour of day: ") errHourList.foreach(println) ```
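The map-to-`(key, 1)` then `reduceByKey` pattern used throughout this lab is the distributed analogue of a plain local counter. A minimal Python sketch of the same counting and top-k extraction on an in-memory list (the host names are illustrative):

```python
from collections import Counter

hosts = ["a.com", "b.com", "a.com", "c.com", "a.com", "b.com"]

# equivalent of hosts.map(h => (h, 1)).reduceByKey(_ + _)
counts = Counter(hosts)

# equivalent of takeOrdered(2)(Ordering[Int].reverse.on(_._2))
top2 = counts.most_common(2)
print(top2)  # [('a.com', 3), ('b.com', 2)]
```

The Spark version exists precisely because `hosts` may be too large for one machine: `reduceByKey` combines partial counts per partition before shuffling, much like `Counter` merges increments locally.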
## Dependencies ``` import json, warnings, shutil from tweet_utility_scripts import * from tweet_utility_preprocess_roberta_scripts import * from transformers import TFRobertaModel, RobertaConfig from tokenizers import ByteLevelBPETokenizer from tensorflow.keras.models import Model from tensorflow.keras import optimizers, metrics, losses, layers from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, LearningRateScheduler SEED = 0 seed_everything(SEED) warnings.filterwarnings("ignore") ``` # Load data ``` database_base_path = '/kaggle/input/tweet-dataset-split-roberta-base-96/' k_fold = pd.read_csv(database_base_path + '5-fold.csv') display(k_fold.head()) # Unzip files !tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_1.tar.gz !tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_2.tar.gz !tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_3.tar.gz # !tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_4.tar.gz # !tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_5.tar.gz ``` # Model parameters ``` vocab_path = database_base_path + 'vocab.json' merges_path = database_base_path + 'merges.txt' base_path = '/kaggle/input/qa-transformers/roberta/' config = { "MAX_LEN": 96, "BATCH_SIZE": 32, "EPOCHS": 5, "LEARNING_RATE": 3e-5, "ES_PATIENCE": 1, "question_size": 4, "N_FOLDS": 3, "base_model_path": base_path + 'roberta-base-tf_model.h5', "config_path": base_path + 'roberta-base-config.json' } with open('config.json', 'w') as json_file: json.dump(json.loads(json.dumps(config)), json_file) ``` # Model ``` module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False) def model_fn(MAX_LEN): input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids') attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask') base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model") 
sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask}) last_state = sequence_output[0] x_start = layers.Dropout(0.1)(last_state) x_start = layers.Conv1D(128, 2, padding='same')(x_start) x_start = layers.LeakyReLU()(x_start) x_start = layers.Conv1D(64, 2, padding='same')(x_start) x_start = layers.Dense(1)(x_start) x_start = layers.Flatten()(x_start) y_start = layers.Activation('softmax', name='y_start')(x_start) x_end = layers.Dropout(0.1)(last_state) x_end = layers.Conv1D(128, 2, padding='same')(x_end) x_end = layers.LeakyReLU()(x_end) x_end = layers.Conv1D(64, 2, padding='same')(x_end) x_end = layers.Dense(1)(x_end) x_end = layers.Flatten()(x_end) y_end = layers.Activation('softmax', name='y_end')(x_end) model = Model(inputs=[input_ids, attention_mask], outputs=[y_start, y_end]) optimizer = optimizers.Adam(lr=config['LEARNING_RATE']) model.compile(optimizer, loss={'y_start': losses.CategoricalCrossentropy(label_smoothing=0.2), 'y_end': losses.CategoricalCrossentropy(label_smoothing=0.2)}, metrics={'y_start': metrics.CategoricalAccuracy(), 'y_end': metrics.CategoricalAccuracy()}) return model ``` # Tokenizer ``` tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path, lowercase=True, add_prefix_space=True) tokenizer.save('./') ``` ## Learning rate schedule ``` LR_MIN = 1e-6 LR_MAX = config['LEARNING_RATE'] LR_EXP_DECAY = .5 @tf.function def lrfn(epoch): lr = LR_MAX * LR_EXP_DECAY**epoch if lr < LR_MIN: lr = LR_MIN return lr rng = [i for i in range(config['EPOCHS'])] y = [lrfn(x) for x in rng] fig, ax = plt.subplots(figsize=(20, 6)) plt.plot(rng, y) print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1])) ``` # Train ``` history_list = [] for n_fold in range(config['N_FOLDS']): n_fold +=1 print('\nFOLD: %d' % (n_fold)) # Load data base_data_path = 'fold_%d/' % (n_fold) x_train = np.load(base_data_path + 'x_train.npy') y_train = np.load(base_data_path + 'y_train.npy') x_valid = 
np.load(base_data_path + 'x_valid.npy') y_valid = np.load(base_data_path + 'y_valid.npy') ### Delete data dir shutil.rmtree(base_data_path) # Train model model_path = 'model_fold_%d.h5' % (n_fold) model = model_fn(config['MAX_LEN']) es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'], restore_best_weights=True, verbose=1) checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min', save_best_only=True, save_weights_only=True) lr_schedule = LearningRateScheduler(lrfn) history = model.fit(list(x_train), list(y_train), validation_data=(list(x_valid), list(y_valid)), batch_size=config['BATCH_SIZE'], callbacks=[checkpoint, es, lr_schedule], epochs=config['EPOCHS'], verbose=1).history history_list.append(history) # Make predictions train_preds = model.predict(list(x_train)) valid_preds = model.predict(list(x_valid)) k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'train', 'start_fold_%d' % (n_fold)] = train_preds[0].argmax(axis=-1) k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'train', 'end_fold_%d' % (n_fold)] = train_preds[1].argmax(axis=-1) k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'validation', 'start_fold_%d' % (n_fold)] = valid_preds[0].argmax(axis=-1) k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'validation', 'end_fold_%d' % (n_fold)] = valid_preds[1].argmax(axis=-1) k_fold['end_fold_%d' % (n_fold)] = k_fold['end_fold_%d' % (n_fold)].astype(int) k_fold['start_fold_%d' % (n_fold)] = k_fold['start_fold_%d' % (n_fold)].astype(int) k_fold['end_fold_%d' % (n_fold)].clip(0, k_fold['text_len'], inplace=True) k_fold['start_fold_%d' % (n_fold)].clip(0, k_fold['end_fold_%d' % (n_fold)], inplace=True) k_fold['prediction_fold_%d' % (n_fold)] = k_fold.apply(lambda x: decode(x['start_fold_%d' % (n_fold)], x['end_fold_%d' % (n_fold)], x['text'], config['question_size'], tokenizer), axis=1) k_fold['prediction_fold_%d' % (n_fold)].fillna(k_fold["text"], inplace=True) k_fold['jaccard_fold_%d' % (n_fold)] = k_fold.apply(lambda x: 
jaccard(x['selected_text'], x['prediction_fold_%d' % (n_fold)]), axis=1) ``` # Model loss graph ``` sns.set(style="whitegrid") for n_fold in range(config['N_FOLDS']): print('Fold: %d' % (n_fold+1)) plot_metrics(history_list[n_fold]) ``` # Model evaluation ``` display(evaluate_model_kfold(k_fold, config['N_FOLDS']).style.applymap(color_map)) ``` # Visualize predictions ``` display(k_fold[[c for c in k_fold.columns if not (c.startswith('textID') or c.startswith('text_len') or c.startswith('selected_text_len') or c.startswith('text_wordCnt') or c.startswith('selected_text_wordCnt') or c.startswith('fold_') or c.startswith('start_fold_') or c.startswith('end_fold_'))]].head(15)) ```
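The `jaccard` helper called in the training loop above is not defined in this excerpt. A common word-level implementation (the metric used in Kaggle's Tweet Sentiment Extraction competition; this is an assumption about what the missing helper does) would be:

```python
def jaccard(str1, str2):
    # Word-level Jaccard similarity between two strings, as defined
    # in the Tweet Sentiment Extraction competition metric
    a = set(str1.lower().split())
    b = set(str2.lower().split())
    if len(a) == 0 and len(b) == 0:
        return 0.5  # convention for two empty strings
    c = a.intersection(b)
    return float(len(c)) / (len(a) + len(b) - len(c))
```

It compares the sets of lowercased words, so word order and repetitions are ignored.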
# Surface ``` import matplotlib.pylab as plt from mpl_toolkits.mplot3d import Axes3D import numpy as np fig = plt.figure() ax = fig.add_subplot(111, projection='3d') x = y = np.linspace(-3,3,100) X, Y = np.meshgrid(x, y) Z = X**4+Y**4-16*X*Y ax.plot_surface(X, Y, Z) ax.set_zlim3d(0,120) ax.set_xlabel('X Axis') ax.set_ylabel('Y Axis') ax.set_zlabel('Z Axis') plt.show() ``` ## Gradient Descent ``` def E(u,v): return u**4+v**4-16*u*v eta=0.01 x=1.2; y=1.2 print (0,'\t','x=', x,'\t','y=',y,'\t', 'E=',E(x,y)) for i in range(0,30): g=4*x**3-16*y h=4*y**3-16*x x=x-eta*g y=y-eta*h print (i+1,'\t','x=',round(x,3),'\t','y=',round(y,3),'\t','E=',round(E(x,y),3)) ``` ## Linear Regression Revisited We will redo the example of multivariate data in linear regression using gradient descent. ``` data = np.genfromtxt('multivar_simulated.csv',skip_header=1,delimiter=',') Y = data[:,1] X1 = data[:,2:] O = np.ones(shape=(X1.shape[0],1)) X = np.concatenate([X1,O],axis=1) X.shape ``` The error function is given by $$ E = \sum_{j=1}^{N} (y_j-\sum_{s=1}^{k+1} x_{js}m_{s})^2 .$$ Write a function for $E$. ``` #def Er(M): # formula here # return the result ``` The gradient of $E$ is given by $$ \nabla E = -2 X^{\intercal}Y + 2 X^{\intercal}XM. $$ Write a function for $\nabla E$. ``` #def GE(M): # return formula here ``` Choose initial values. ``` #eta= #iter_num= #M=np.array([?,?,?]) ``` Calculate the initial error. ``` Er(M) ``` Run a loop for gradient descent and print the values of M and Er(M). 
``` #Write a loop here # #print M and Er(M) ``` Compare the result with the previous result from Linear Regression, which was [ 1.78777492, -3.47899986, 6.0608333 ]. ## Newton's Method ``` def E(u,v): return u**4+v**4-16*u*v eta=1 x=1.2; y=1.2 print (0,'\t','x=', x,'\t','y=',y, '\t','E=',E(x,y)) for i in range(0,10): d=9*x**2*y**2-16 g=(3*x**3*y**2 -8*y**3 -16*x)/d h=(3*x**2*y**3 -8*x**3 -16*y)/d x=x-eta*g y=y-eta*h print (i+1,'\t','x=', round(x,3),'\t','y=',round(y,3), '\t','E=',round(E(x,y),3)) ``` ### Haberman's Survival Data Set https://archive.ics.uci.edu/ml/datasets/Haberman%27s+Survival ``` import numpy as np import pandas as pd from sklearn import datasets, linear_model from sklearn.model_selection import train_test_split from matplotlib import pyplot as plt df=pd.read_csv("haberman.data",header=None) X=df.iloc[:,:-1].values y=df.iloc[:,-1].values X ``` ### Scaling Features As you can see, the columns of the data have very different value ranges. Scaling the features to a common range makes the algorithm converge faster than it would with unscaled features. ``` from sklearn.preprocessing import MinMaxScaler numcols=[0,1,2] mms=MinMaxScaler() X=mms.fit_transform(df[numcols]) X ``` ### Splitting the data We split the data set into two parts: one for training and the other for testing. ``` X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) print(X_train.shape, y_train.shape) print(X_test.shape, y_test.shape) ``` We add one column consisting of ones. ``` n_train=len(y_train) new_col=np.ones((n_train,1)) X_train_modified=np.append(X_train,new_col,1) X_train_modified n_test=len(y_test) new_col=np.ones((n_test,1)) X_test_modified=np.append(X_test,new_col,1) ``` Define the function $\sigma(x) = \dfrac {e^x}{e^x+1}= \dfrac 1 {1+e^{-x}}$. 
``` #def sigmoid(x): # return the function ``` Define the error function $$ E (\mathbf{w}) = - \frac 1 N \sum_{n=1}^N \{ t_n \ln y_n + (1-t_n) \ln (1-y_n)\}, $$ where $y_n=\sigma(w_1 x_{n1}+ w_2 x_{n2} + \cdots + w_k x_{nk}+w_{k+1} )$. This is the error function minimized in Logistic Regression. ``` #def Er(w): # yn= # return ??? ``` Define the gradient of $E$. ``` #def gradE(w): # return the function ``` Set the initial values. ``` #w=[[?],[?],[?],[?]] #eta= #iter_num= ``` Run a loop for gradient descent. ``` #for i in range(iter_num): # print(w) ``` We compute the accuracy of the trained model on the test set. ``` y_pred=(sigmoid(X_test_modified@w).round()).reshape([1,n_test]) y_test=y_test%2 print("Test Accuracy:", sum(y_test==y_pred[0])*100/n_test,"%") ```
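For reference, here is one possible way to fill in the stubs above. It is shown on a tiny synthetic dataset so it is self-contained; the function names `sigmoid`, `Er` and `gradE` follow the stubs, while the data and the `eta`/`iter_num` values are illustrative, not the notebook's:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def Er(w, X, t):
    # Mean cross-entropy error; eps guards against log(0)
    eps = 1e-12
    y = sigmoid(X @ w)
    return -np.mean(t * np.log(y + eps) + (1 - t) * np.log(1 - y + eps))

def gradE(w, X, t):
    # Gradient of the mean cross-entropy error with respect to w
    N = X.shape[0]
    y = sigmoid(X @ w)
    return X.T @ (y - t) / N

# Tiny synthetic example: 4 samples, 2 features plus a bias column of ones
X = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
t = np.array([[0.0], [0.0], [1.0], [1.0]])  # label equals the first feature

w = np.zeros((3, 1))
eta, iter_num = 1.0, 500
for _ in range(iter_num):
    w = w - eta * gradE(w, X, t)
```

After the loop, `sigmoid(X @ w).round()` reproduces the labels `t` on this separable toy data.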
# Arrays, lists and tuples In python, there are variables which can contain multiple entries of different kinds. In programming, we call them arrays; arrays of values. We already know one kind of array: strings. Strings are arrays of characters. See also * [Arrays](https://physics.nyu.edu/pine/pymanual/html/chap3/chap3_arrays.html) * [Averaging a list](https://www.geeksforgeeks.org/find-average-list-python/) You can access elements in an array using square brackets `[]` which allow you to access an element at a given index. Indexing starts at 0. Thus, the first element of an array is element number 0. The following string contains 5 characters and thus, elements with index 0, 1, 2, 3 and 4 can be accessed (accessing `word[5]` raises an `IndexError`): ``` word = "Hello" word[0] word[1] word[2] word[3] word[4] word[5] ``` # Numeric lists Another type of array is the numeric list. They are commonly used to store measurements from experiments, for example: ``` measurements = [5.5, 6.3, 7.2, 8.0, 8.8] measurements[0] measurements[1] ``` Changing entries in lists works like this: ``` measurements[1] = 25 measurements[1] ``` You can also append entries to lists: ``` measurements.append(10.2) ``` Lists can also be reversed: ``` measurements measurements.reverse() measurements ``` Just like strings, you can also concatenate arrays: ``` more_measurements = [12.3, 14.5, 28.3] measurements + more_measurements ``` When working with numeric lists, you can use some of python's built-in functions to do basic statistics on your measurements. ``` # minimum value in the list min(measurements) # maximum value in the list max(measurements) # sum of all elements in the list sum(measurements) # number of elements in the list len(measurements) # average of all elements in the list sum(measurements) / len(measurements) ``` # Mixed type lists You can also store values of different types in a list ``` mixed_list = [22, 5.6, "Cat", 'Dog'] mixed_list[0] mixed_list[3] type(mixed_list[3]) ``` # Tuples Tuples are lists which cannot be changed: ``` immutable = 
(4, 3, 7.8) immutable[1] immutable[1] = 5 # raises a TypeError - tuples are immutable ``` You can convert tuples to lists and lists to tuples: ``` type(immutable) mutable = list(immutable) type(mutable) again_immutable = tuple(mutable) type(again_immutable) ``` # Exercise Assume you did measurements on multiple days. Compute the average measurement of this week. ``` measurements_monday = [2.3, 3.1, 5.6] measurements_tuesday = [1.8, 7.0] measurements_wednesday = [4.5, 1.5, 6.4, 3.2] measurements_thursday = [1.9, 2.0] measurements_friday = [4.4] ```
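One possible solution to the exercise (not the only one): concatenate the daily lists into a single list, then divide the sum by the number of measurements:

```python
measurements_monday = [2.3, 3.1, 5.6]
measurements_tuesday = [1.8, 7.0]
measurements_wednesday = [4.5, 1.5, 6.4, 3.2]
measurements_thursday = [1.9, 2.0]
measurements_friday = [4.4]

# Concatenate all days into a single list with +
all_measurements = (measurements_monday + measurements_tuesday +
                    measurements_wednesday + measurements_thursday +
                    measurements_friday)

# Average over all 12 measurements of the week
average = sum(all_measurements) / len(all_measurements)
```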
# Regression with Amazon SageMaker XGBoost (Parquet input) This notebook demonstrates the use of a Parquet dataset with the SageMaker XGBoost algorithm. The example here is almost the same as [Regression with Amazon SageMaker XGBoost algorithm](xgboost_abalone.ipynb). This notebook tackles the exact same problem with the same solution, but has been modified for a Parquet input. The original notebook provides details of the dataset and the machine learning use-case. ``` import os import boto3 import re import sagemaker from sagemaker import get_execution_role role = get_execution_role() region = boto3.Session().region_name # S3 bucket for saving code and model artifacts. # Feel free to specify a different bucket here if you wish. bucket = sagemaker.Session().default_bucket() prefix = 'sagemaker/DEMO-xgboost-parquet' bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region, bucket) ``` We will use the [PyArrow](https://arrow.apache.org/docs/python/) library to store the Abalone dataset in the Parquet format. 
``` !python -m pip install pyarrow==0.15 %%time import numpy as np import pandas as pd import urllib.request from sklearn.datasets import load_svmlight_file # Download the dataset and load into a pandas dataframe FILE_NAME = 'abalone.csv' urllib.request.urlretrieve("https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data", FILE_NAME) feature_names=['Sex', 'Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'] data = pd.read_csv(FILE_NAME, header=None, names=feature_names) # SageMaker XGBoost has the convention of label in the first column data = data[feature_names[-1:] + feature_names[:-1]] data["Sex"] = data["Sex"].astype("category").cat.codes # Split the downloaded data into train/test dataframes train, test = np.split(data.sample(frac=1), [int(.8*len(data))]) # requires PyArrow installed train.to_parquet('abalone_train.parquet') test.to_parquet('abalone_test.parquet') %%time sagemaker.Session().upload_data('abalone_train.parquet', bucket=bucket, key_prefix=prefix+'/'+'train') sagemaker.Session().upload_data('abalone_test.parquet', bucket=bucket, key_prefix=prefix+'/'+'test') ``` We obtain the new container by specifying the framework version (0.90-1). This version specifies the upstream XGBoost framework version (0.90) and an additional SageMaker version (1). If you have an existing XGBoost workflow based on the previous (0.72) container, this would be the only change necessary to get the same workflow working with the new container. ``` from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(region, 'xgboost', '0.90-1') ``` After setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes. 
``` %%time import time from time import gmtime, strftime job_name = 'xgboost-parquet-example-training-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) print("Training job", job_name) #Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below. create_training_params = { "AlgorithmSpecification": { "TrainingImage": container, "TrainingInputMode": "Pipe" }, "RoleArn": role, "OutputDataConfig": { "S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost" }, "ResourceConfig": { "InstanceCount": 1, "InstanceType": "ml.m5.24xlarge", "VolumeSizeInGB": 20 }, "TrainingJobName": job_name, "HyperParameters": { "max_depth":"5", "eta":"0.2", "gamma":"4", "min_child_weight":"6", "subsample":"0.7", "silent":"0", "objective":"reg:linear", "num_round":"10" }, "StoppingCondition": { "MaxRuntimeInSeconds": 3600 }, "InputDataConfig": [ { "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": bucket_path + "/" + prefix + "/train", "S3DataDistributionType": "FullyReplicated" } }, "ContentType": "application/x-parquet", "CompressionType": "None" }, { "ChannelName": "validation", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": bucket_path + "/" + prefix + "/test", "S3DataDistributionType": "FullyReplicated" } }, "ContentType": "application/x-parquet", "CompressionType": "None" } ] } client = boto3.client('sagemaker', region_name=region) client.create_training_job(**create_training_params) status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus'] print(status) while status !='Completed' and status!='Failed': time.sleep(60) status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus'] print(status) %matplotlib inline from sagemaker.analytics import TrainingJobAnalytics metric_name = 'validation:rmse' metrics_dataframe = TrainingJobAnalytics(training_job_name=job_name, metric_names=[metric_name]).dataframe() plt = 
metrics_dataframe.plot(kind='line', figsize=(12,5), x='timestamp', y='value', style='b.', legend=False) plt.set_ylabel(metric_name); ```
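The label-first reordering used earlier (`data[feature_names[-1:] + feature_names[:-1]]`) and the categorical encoding of `Sex` can be illustrated on a tiny stand-in frame (the column values here are hypothetical, not the real Abalone data):

```python
import pandas as pd

# Miniature stand-in for the Abalone frame used above
feature_names = ['Sex', 'Length', 'Rings']
df = pd.DataFrame({'Sex': ['M', 'F'], 'Length': [0.455, 0.35], 'Rings': [15, 7]})

# Move the label column ('Rings') to the front - SageMaker XGBoost
# expects the label in the first column
df = df[feature_names[-1:] + feature_names[:-1]].copy()

# Encode the categorical column as integer codes, as done for 'Sex' above
df['Sex'] = df['Sex'].astype('category').cat.codes
```

Note that `cat.codes` assigns codes by the sorted order of the categories, so `'F'` becomes 0 and `'M'` becomes 1 here.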
# BLU15 - Model CSI ## Part 1 of 2 - When to train your model In this notebook we will be covering the following: - 1. The need for retraining - 1.1 Data drift - 1.2 Robustness - 1.3 When ground truth is not available at the time of model training - 1.4 Concept drift - 2. How to measure the decline in model performance? - 2.1 Histogram - 2.2 K-S Statistic - 2.3 Target distribution - 2.4 Correlation - 3. Retraining strategy - 4. How much data is needed for retraining? - 4.1 Fixed window size - 4.2 Dynamic window size - 4.3 Combining all of the data - 5. Final considerations ## 1. The need for retraining *Train, test and deploy* – that’s it, right? Is your work done? **Not quite!** So far this is how your process has been going: 1. Assuming sufficient historical data is available, model building starts by learning the dependencies between a set of independent features and the target variable. 2. The best learnt dependency is selected based on some evaluation metric that minimizes the prediction error on the validation dataset. 3. This best learnt model is then deployed in production with the **expectation** that it keeps making accurate predictions on incoming unseen data **for as long as possible**. One of the biggest mistakes a data scientist can make is to assume their models will keep working properly forever after deployment. *But what about the data, which will inevitably keep changing?* A model deployed in production and left to itself won’t be able to adapt to changes in the data by itself. Let's look at the following example: <img src="https://i1.wp.com/neptune.ai/wp-content/uploads/Retraining-models-impact-of-changes.png?resize=829%2C354&ssl=1" alt="drift" width="800"/> >In a [UK bank survey from August 2020](https://www.bankofengland.co.uk/bank-overground/2021/how-has-covid-affected-the-performance-of-machine-learning-models-used-by-uk-banks), 35% of the bankers asked reported a negative impact on ML model performance because of the pandemic. 
Unpredictable events like this are a great example of why continuous training and monitoring of ML models in production is important compared to static validation and testing techniques. Ideally, **retraining involves running the entire existing pipeline with new data**. That’s it. It does not involve any code changes or re-building the pipeline. However, if you end up exploring a new algorithm or a feature which might not have been available at the time of the previous model training, then incorporating these changes while deploying the retrained model will further improve the model accuracy. But what exactly can cause this decrease in performance? ### 1.1 Data drift To understand this, let us recall one of the most critical assumptions in ML modelling: > The train and test datasets should belong to similar distributions. The model will perform well if the new data is similar to the data observed in the past, on which the model was trained. Therefore, it's understandable that **if the test data distribution deviates from that of the train data, the model will not hold up well**. There are many factors that can cause such a deviation, depending on the business case, e.g. changes in consumer preferences, a fast-moving competitive space, geographic shifts, economic conditions, **a pandemic**, etc. Hence, the **drifting data distribution calls for an ongoing process of periodically checking the validity of the old model**. In short, it is critical to keep your machine learning model updated; the key question is when. We will discuss this in a bit... ### 1.2 Robustness As you remember from [SLU17 - Ethics and Fairness](https://github.com/LDSSA/batch5-students/tree/main/S01%20-%20Bootcamp%20and%20Binary%20Classification/SLU17%20-%20Ethics%20and%20Fairness), a model has an impact in the world that it learned from. And that impact can change the *a priori* assumptions that once were true. 
People/entities that get affected by the outcome of ML models may deliberately **alter their response** in order to send spurious input to the model, thereby **escaping the impact of the model predictions**. For example, models such as *fraud detection* and *cyber-security* systems receive manipulated and distorted inputs which cause the model to output misclassified predictions. Such adversaries also drive down model performance. ### 1.3 When ground truth is not available at the time of model training In many ML models, **the ground truth labels are not available to train the model**. For example, the target variable which captures the response of the end user is not known. In that case, your best bet could be to **mock the user action based on a certain set of rules coming from business understanding** or to **leverage an open source dataset** to initiate model training. But this model might not necessarily represent the actual data and hence will not perform well until after a burn-in period, during which it starts picking up (i.e. learning) the true actions of the end user. ### 1.4 Concept drift Concept drift is a phenomenon where **the meaning of the labels of the target variable you’re trying to predict changes over time. This means that the concept has changed but the model doesn’t know about the change**. Concept drift happens when **the original idea your model had about the target class changes**. For example, you build a model to classify positive and negative sentiment of tweets around certain topics, and over time people’s sentiment about these topics changes. Tweets belonging to positive sentiment may evolve over time to be negative. In simple terms, the concept of the sentiment has drifted. Unfortunately, your model will keep classifying these now-negative tweets as positive. ## 2. How to measure the decline in model performance? 
If **the ground truth values are stored alongside the predictions**, such as with the success of a search, the decline (or not) is calculated on a **continuous basis to assess the drift**. <img src="https://static.tildacdn.com/tild3462-6534-4732-a462-643534313536/model_decay_retraini.png" alt="retraining" width="500"/> **But what if the prediction horizon is farther into the future and we can’t wait till the ground truth label is observed to assess the model goodness?** Well, in that case, **we can roughly estimate the retraining window from back-testing**. This involves using the ground truth labels and predictions from the historical data to estimate the time frame around which the accuracy begins to taper off. >Effectively, the whole exercise of finding the model drift boils down to inferring whether the two data sets (training and test) are coming from the same distribution, or whether the performance has fallen below an acceptable range. Let's look at some of the ways to assess the distribution drift: ### 2.1 Histogram A quick way to visualize the comparison is to draw the histograms — the degree of overlap between the two histograms gives a measure of similarity. <img src="https://miro.medium.com/max/1400/1*Q4tXoLAbIRonpGNxdADVlA.png" alt="histogram" width="700"/> ### 2.2 K-S statistic The [Kolmogorov–Smirnov test](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) is a useful tool to check if the upcoming new data belongs to the same distribution as the training data. In short, this test quantifies a distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, or between the empirical distribution functions of two samples. Python has an implementation of this test provided by [SciPy](https://scipy.org/) in its statistical functions module ([scipy.stats](https://docs.scipy.org/doc/scipy/reference/stats.html)). 
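As a sketch of how the SciPy implementation mentioned above can be applied, here is a two-sample K-S check on synthetic "training" and "incoming" samples (the 0.01 threshold is an illustrative choice, not a universal rule):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_sample = rng.normal(loc=0.0, scale=1.0, size=1000)     # distribution the model was trained on
incoming_sample = rng.normal(loc=1.0, scale=1.0, size=1000)  # drifted "production" data

stat, p_value = ks_2samp(train_sample, incoming_sample)
# A very small p-value is evidence that the two samples do not come
# from the same distribution, i.e. a possible data-drift alert
drift_detected = p_value < 0.01
```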
<img src="https://upload.wikimedia.org/wikipedia/commons/c/cf/KS_Example.png" alt="ks" width="500"/> <center>Illustration of the Kolmogorov–Smirnov statistic. The red line is a model cumulative distribution function, the blue line is an empirical cumulative distribution function, and the black arrow is the K–S statistic.</center> ### 2.3 Target distribution One quick way to check the consistent predictive power of the ML model is to **examine the distribution of the target variable**. For example, suppose your training dataset is imbalanced, with 99% of the data belonging to class 1 and the remaining 1% to class 0. If the predictions reflect this distribution as being around 90%-10%, it should be treated as an alert for further investigation. <img src="https://www.guru99.com/images/r_programming/032918_0752_BarChartHis7.png" alt="histogram" width="500"/> ### 2.4 Correlation Additionally, monitoring pairwise correlations between individual predictors will help bring out an underlying drift. ## 3. Retraining strategy There are two approaches to handling model retraining: - **Model is retrained at a fixed periodic interval** The model is retrained at a set interval. If the incoming data is changing frequently, the model retraining can happen even daily! A fixed period to retrain doesn't necessarily mean a less frequent one. - **Model is continuously retrained** - **Trigger based on performance metrics** The model is retrained when a trigger is activated by monitoring the performance metrics. This approach is more effective than the above, but the threshold specifying the acceptable level of performance divergence needs to be decided to initiate retraining. 
The following factors need to be considered while deciding the threshold: - Too low a threshold will lead to frequent retraining, which will lead to increased overhead in terms of compute cost - Too high a threshold will output “strayed predictions” <img src="https://i1.wp.com/neptune.ai/wp-content/uploads/Retraining-models-graph.jpg?resize=768%2C645&ssl=1" alt="histogram" width="500"/> - **Trigger based on data changes** By monitoring your upstream data in production, you can identify changes in the distribution of your data. This can indicate that your model is outdated or that you’re in a dynamic environment. It’s a good approach to consider when you don’t get quick feedback or ground truth from your model in production. <img src="https://miro.medium.com/max/1248/1*aAR12f8rwroVf0O6Z1ohcw.png" alt="histogram" width="500"/> <center>Example of a retraining strategy based on data changes identified by monitoring upstream data in production.</center> - **Retraining on demand** Of all the options, this is the least efficient as it does not rely on automation, but it's the simplest to implement and therefore sometimes favoured over the others. This is the approach that we will follow in the next `Learning notebook`. ## 4. How much data is needed for retraining? In addition to knowing why and when you need to retrain your models, it’s also important to know how to select the right data for retraining, and whether or not to drop the old data. Three things to consider when choosing the right size of data: - What is the size of your data? - Is your data drifting? - How often do you get new data? ### 4.1 Fixed window size This is a straightforward approach to selecting the training data. **Selecting the right window size** is a major drawback to using this approach: - If the **window size is too large, we may introduce noise into the data**. - If it’s **too narrow, it might lead to underfitting**. 
Overall, this is a simple heuristic approach that will work well in some cases, but will fail in a dynamic environment where data is constantly changing. ### 4.2 Dynamic window size This is an alternative to the fixed window size approach. This approach **helps to determine how much historical data should be used to retrain your model by iterating through the window size to determine the optimal value to use**. It’s an approach to consider if your data is large and you also get new data frequently. <img src="https://i2.wp.com/neptune.ai/wp-content/uploads/Training-data-vs-test-data.png?resize=900%2C420&ssl=1" alt="histogram" width="700"/> ### 4.3 Combining all of the data The simplest way, resources permitting, to handle this problem is simply to combine all of the data and retrain your model. As you're keeping data where a change has been detected, this is more sensitive to another drift and will need to be monitored closely. In production this is usually not a viable option due to the computational requirements as data continues to grow. ## 5. Final considerations Before we move on to a more practical demonstration, I hope you're now aware that retraining and redeployment are a constant need for any ML model. The *when* and the *how* are key questions that rely on the sensitivity not only of the methods, but of the Data Scientists themselves. As *Scientists*, a critical view of any result is a fundamental skill. Now let's get our hands dirty in Part 2!
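To make Section 4.2's idea concrete, here is an illustrative sketch (not from this notebook) that iterates over candidate window sizes and picks the one that best predicts a held-out tail of the series. A naive mean forecast stands in for retraining a real model:

```python
import numpy as np

def choose_window(series, candidate_sizes, horizon=5):
    # Evaluate each candidate training-window size by how well a naive
    # mean forecast over that window predicts the last `horizon` points.
    train, test = series[:-horizon], series[-horizon:]
    best_size, best_err = None, np.inf
    for size in candidate_sizes:
        window = train[-size:]                      # most recent `size` points
        forecast = np.full(horizon, window.mean())  # naive stand-in for a model
        err = np.mean((test - forecast) ** 2)
        if err < best_err:
            best_size, best_err = size, err
    return best_size

# Data whose level shifted recently: a small, recent window wins
series = np.concatenate([np.zeros(100), np.ones(20)])
best = choose_window(series, candidate_sizes=[10, 50, 100])
```

In a real pipeline, the naive forecast would be replaced by refitting the actual model on each candidate window.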
# Paper Figure Creation - Created on a cloudly London Saturday morning, April 3rd 2021 - Revised versions of the figures ``` import climlab import numpy as np import xarray as xr import matplotlib.pyplot as plt import matplotlib.ticker as mticker import xarray as xr import pandas as pd import cartopy.crs as ccrs from cartopy.util import add_cyclic_point from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter from IPython.display import clear_output import time from mpl_toolkits.axes_grid1 import make_axes_locatable import matplotlib.patches as patches from matplotlib.colors import LogNorm import matplotlib.colors import matplotlib as mpl ``` # Fig. 1 ``` values = xr.open_dataset('../../Data/CERES/clear_sky_ceres.nc') landmask = xr.open_dataset('../../Data/Other/landsea.nc') lats = values.lat.values lons = values.lon.values # Variables that you want to plot plotvar1 = values.r2.values # Adding a cyclic point to the two variables # This removes a white line at lon = 0 lon_long = values.lon.values plotvar_cyc1 = np.zeros((len(lats), len(lons))) plotvar_cyc1, lon_long = add_cyclic_point(plotvar_cyc1, coord=lon_long) for i in range(len(lats)): for j in range(len(lons)): plotvar_cyc1[i, j] = plotvar1[i, j] plotvar_cyc1[:, len(lons)] = plotvar_cyc1[:, 0] # Plotting fig = plt.figure(figsize=(6, 2.7), constrained_layout=True) width_vals = [2, 1] gs = fig.add_gridspec(ncols=2, nrows=1, width_ratios=width_vals) SIZE = 8 plt.rc('font', size=SIZE) # controls default text sizes plt.rc('axes', titlesize=SIZE) # fontsize of the axes title plt.rc('axes', labelsize=SIZE) # fontsize of the x and y labels plt.rc('xtick', labelsize=SIZE) # fontsize of the tick labels plt.rc('ytick', labelsize=SIZE) # fontsize of the tick labels plt.rc('legend', fontsize=SIZE) # legend fontsize plt.rc('figure', titlesize=SIZE) # fontsize of the figure title # Upper left map ax1 = fig.add_subplot(gs[0], projection=ccrs.PlateCarree()) ax1.coastlines() ax1.set_title("a) Map of R$^2$ for Linear 
Fit of Monthly OLR to Monthly T$_S$") C1 = ax1.pcolor( lon_long, lats, plotvar_cyc1, transform=ccrs.PlateCarree(), cmap='RdYlGn', rasterized=True ) ax1.set_xticks([-180, -90, 0, 90, 180], crs=ccrs.PlateCarree()) ax1.set_yticks([-90, 0, 90], crs=ccrs.PlateCarree()) lon_formatter = LongitudeFormatter(number_format='.0f', dateline_direction_label=True) lat_formatter = LatitudeFormatter(number_format='.0f') ax1.xaxis.set_major_formatter(lon_formatter) ax1.yaxis.set_major_formatter(lat_formatter) # Colourbars cbar = fig.colorbar( C1, ax=ax1, label=r"$R^2$", fraction=0.1, orientation="horizontal", ticks=[0.001, 0.2, 0.4, 0.6, 0.8, 0.999] ) cbar.ax.set_xticklabels(['0', '0.2', '0.4', '0.6', '0.8', '1']) ax1.text(110, 70, 'a', horizontalalignment='center', verticalalignment='center', color='white', fontsize=8, fontweight='bold', bbox={'facecolor': 'black', 'edgecolor': 'none', 'alpha': 0.5, 'pad': 5}) ax1.text(-110, -65, 'b', horizontalalignment='center', verticalalignment='center', color='white', fontsize=8, fontweight='bold', bbox={'facecolor': 'black', 'edgecolor': 'none', 'alpha': 0.5, 'pad': 5}) ax1.text(27, -2, 'c', horizontalalignment='center', verticalalignment='center', color='white', fontsize=8, fontweight='bold', bbox={'facecolor': 'black', 'edgecolor': 'none', 'alpha': 0.5, 'pad': 5}) ax1.text(0, -10, 'd', horizontalalignment='center', verticalalignment='center', color='white', fontsize=8, fontweight='bold', bbox={'facecolor': 'black', 'edgecolor': 'none', 'alpha': 0.5, 'pad': 5}) ax1.text(80, 15, 'e', horizontalalignment='center', verticalalignment='center', color='white', fontsize=8, fontweight='bold', bbox={'facecolor': 'black', 'edgecolor': 'none', 'alpha': 0.5, 'pad': 5}) # Upper right map ax2 = fig.add_subplot(gs[1]) extra_tropics = list(np.arange(-90, -29, 1)) extra_tropics += list(np.arange(30, 91, 1)) tropics = list(np.arange(-30, 31, 1)) mask_sea_t = landmask.interp_like(values, method='nearest').sel( lat=tropics).LSMASK.values == 0 mask_land_t = 
landmask.interp_like( values, method='nearest').sel(lat=tropics).LSMASK.values == 1 mask_sea_et = landmask.interp_like(values, method='nearest').sel( lat=extra_tropics).LSMASK.values == 0 mask_land_et = landmask.interp_like(values, method='nearest').sel( lat=extra_tropics).LSMASK.values == 1 bnum = np.arange(-3, 5, 0.2) lw = 2 ax2.hist(values.sel(lat=extra_tropics).grad.values[mask_land_et].flatten( ), bins=bnum, density=True, histtype='step', linewidth=lw, label='Extratropics:\nLand', color='C1') ax2.hist(values.sel(lat=extra_tropics).grad.values[mask_sea_et].flatten( ), bins=bnum, density=True, histtype='step', linewidth=lw, label='Extratropics:\nOcean', color='C0') ax2.hist(values.sel(lat=tropics).grad.values[mask_land_t].flatten( ), bins=bnum, density=True, histtype='step', linewidth=lw, label='Tropics:\nLand', color='red') ax2.hist(values.sel(lat=tropics).grad.values[mask_sea_t].flatten( ), bins=bnum, density=True, histtype='step', linewidth=lw, label='Tropics:\nOcean', color='navy') ax2.set_xlim(-3, 5) ax2.set_title("b) Histogram of Slope $\partial$OLR/$\partial$T$_S$") ax2.set_xlabel(r'Linear Slope $\partial$OLR/$\partial$T$_S$ (Wm$^{-2}$/K)') ax2.set_ylabel('Probability Density') ax2.legend(loc='upper left', handlelength=0.1) ax1.set_anchor('N') ax2.set_anchor('N') path = "../../Figures/After first review/" plt.savefig(path + 'Fig 1 CERES NEW.pdf', bbox_inches='tight', dpi=300) plt.close() ``` ### Calculating Mean Values for the Figure Caption ``` area_factors = np.zeros_like(values.mean(dim='month').t2m.values) lats = values.lat.values lons = values.lon.values for i in range(len(lats)): area_factors[i, :] = np.cos(lats[i]*2*np.pi/360) values['area_factors'] = (('lat', 'lon'), area_factors) grad_area_scaled = np.zeros_like(values.mean(dim='month').t2m.values) for i in range(len(lats)): for j in range(len(lons)): grad_area_scaled[i,j] = values.sel(lat=lats[i], lon=lons[j]).area_factors.values[()] * values.sel(lat=lats[i], lon=lons[j]).grad.values[()] 
values['grad_area_scaled'] = (('lat', 'lon'), grad_area_scaled)
glbl = np.sum(values.grad_area_scaled.values.flatten()) / np.sum(values.area_factors.values.flatten())
print('Global area weighted mean is', glbl)
```

# Fig. 2

```
values_meas = xr.open_dataset('../../Data/CERES/clear_sky_ceres.nc')
lats = values_meas.lat.values
lons = values_meas.lon.values

fig = plt.figure(figsize=(6, 4.6), constrained_layout=True)
height_vals = [1, 4]
gs = fig.add_gridspec(ncols=5, nrows=2, height_ratios=height_vals)
month_list = np.arange(1, 13)
lc1 = "#7d7d7d"
cvals = [1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12.]
colors = ["#f1423f", "#f1613e", "#f79a33", "#feba28", "#efe720", "#b6d434",
          "#00b34e", "#0098d1", "#0365b0", "#3e3f9b", "#83459b", "#bd2755"]
norm = plt.Normalize(min(cvals), max(cvals))
tuples = list(zip(map(norm, cvals), colors))
cmap_new = matplotlib.colors.LinearSegmentedColormap.from_list("", tuples)

SIZE = 8
plt.rc('font', size=SIZE)          # controls default text sizes
plt.rc('axes', titlesize=SIZE)     # fontsize of the axes title
plt.rc('axes', labelsize=SIZE)     # fontsize of the x and y labels
plt.rc('xtick', labelsize=SIZE)    # fontsize of the tick labels
plt.rc('ytick', labelsize=SIZE)    # fontsize of the tick labels
plt.rc('legend', fontsize=SIZE)    # legend fontsize
plt.rc('figure', titlesize=SIZE)   # fontsize of the figure title

ax1 = fig.add_subplot(gs[0, 0])
ax2 = fig.add_subplot(gs[0, 1])
ax3 = fig.add_subplot(gs[0, 2])
ax4 = fig.add_subplot(gs[0, 3])
ax5 = fig.add_subplot(gs[0, 4])
top_axs = [ax1, ax2, ax3, ax4, ax5]
axs = fig.add_subplot(gs[1, :])
axs.scatter(
    values_meas.sel(lat=lats[:-1]).t2m.values.flatten(),
    values_meas.sel(lat=lats[:-1]).toa_lw_clr_c_mon.values.flatten(),
    c=lc1, s=1, rasterized=True, alpha=0.03
)

latvals = [70, -65, -2, -10, 15]
lonvals = [110, 250, 27, 0, 80]
titles = ['a) ', 'b) ', 'c) ', 'd) ', 'e) ']
style = "Simple, tail_width=0.3, head_width=3, head_length=3"
kw = dict(arrowstyle=style, color="k")
a1 = patches.FancyArrowPatch(
    (284, 221),
(259, 171), connectionstyle="arc3,rad=-0.3", **kw) a2 = patches.FancyArrowPatch( (274.5, 229), (271.5, 218), connectionstyle="arc3,rad=-0.3", **kw) a3 = patches.FancyArrowPatch( (297, 272), (297.7, 274.5), connectionstyle="arc3,rad=0.3", **kw) a4 = patches.FancyArrowPatch( (294.9, 294), (298, 294), connectionstyle="arc3,rad=0.4", **kw) a5 = patches.FancyArrowPatch( (301, 284), (297.5, 295), connectionstyle="arc3,rad=-0.4", **kw) arrs = [a1, a2, a3, a4, a5] for ind in range(len(latvals)): latval = latvals[ind] lonval = lonvals[ind] i_raw = latval j_raw = lonval # Bottom loop plots axs.plot( values_meas.t2m.sel(lat=i_raw, lon=j_raw).values, values_meas.toa_lw_clr_c_mon.sel(lat=i_raw, lon=j_raw).values, c="k", # label="Calculated OLR", ) axs.plot( values_meas.t2m.sel(lat=i_raw, lon=j_raw).values[0::11], values_meas.toa_lw_clr_c_mon.sel(lat=i_raw, lon=j_raw).values[0::11], c="k", ) cplot = axs.scatter( values_meas.t2m.sel(lat=i_raw, lon=j_raw).values, values_meas.toa_lw_clr_c_mon.sel(lat=i_raw, lon=j_raw).values, c=month_list, cmap=cmap_new, s=30, ) axs.set_title('f$\,$)') # Top plots ax = top_axs[ind] ax.plot( values_meas.t2m.sel(lat=i_raw, lon=j_raw).values, values_meas.toa_lw_clr_c_mon.sel(lat=i_raw, lon=j_raw).values, c="k", # label="Calculated OLR", ) ax.plot( values_meas.t2m.sel(lat=i_raw, lon=j_raw).values[0::11], values_meas.toa_lw_clr_c_mon.sel(lat=i_raw, lon=j_raw).values[0::11], c="k", ) cplot = ax.scatter( values_meas.t2m.sel(lat=i_raw, lon=j_raw).values, values_meas.toa_lw_clr_c_mon.sel(lat=i_raw, lon=j_raw).values, c=month_list, cmap=cmap_new, s=30, ) ax.set_title(titles[ind]+str(i_raw)+', '+str(j_raw)) ax.add_patch(arrs[ind]) ax.margins(x=0.2,y=0.2) ax3.set_xticks([296.5, 297.5]) top_axs[0].set_ylabel(r'OLR (Wm$^{-2}$)') axs.set_xlabel(r'T$_s$ (K)') axs.set_ylabel(r'OLR (Wm$^{-2}$)') fig.suptitle('Latitude, Longitude') cbar = fig.colorbar( cplot, ax=axs, # label='Month', fraction=0.1, orientation="vertical", aspect=40, pad=0, shrink=0.8, ticks=[1, 2, 3, 
4, 5, 6, 7, 8, 9, 10, 11, 12], ) cbar.ax.set_title(r'Month', y=1.02, x=2.5) plt.savefig('../../Figures/After first review/Fig 2 CERES.pdf', format='pdf', bbox_inches='tight', dpi=300) plt.close() ``` # Fig. 3 ``` # CERES values_meas = xr.open_dataset('../../Data/CERES/clear_sky_ceres.nc') # Offline calculations from cluster values_calc = xr.open_dataset('../../Data/Cluster/clear_sky_calculated.nc') lats = values_meas.lat.values lons = values_meas.lon.values plotvar = values_meas.hyst.values lon_long = values_meas.lon.values plotvar_cyc = np.zeros((len(lats), len(lons))) plotvar_cyc, lon_long = add_cyclic_point(plotvar_cyc, coord=lon_long) for i in range(len(lats)): for j in range(len(lons)): plotvar_cyc[i, j] = plotvar[i, j] plotvar_cyc[:, len(lons)] = plotvar_cyc[:, 0] fig = plt.figure(figsize=(6, 5.6), constrained_layout=True) widths = [1, 1] heights = [2, 0.9] gs = fig.add_gridspec( ncols=2, nrows=2, width_ratios=widths, height_ratios=heights) SIZE = 8 plt.rc('font', size=SIZE) # controls default text sizes plt.rc('axes', titlesize=SIZE) # fontsize of the axes title plt.rc('axes', labelsize=SIZE) # fontsize of the x and y labels plt.rc('xtick', labelsize=SIZE) # fontsize of the tick labels plt.rc('ytick', labelsize=SIZE) # fontsize of the tick labels plt.rc('legend', fontsize=SIZE) # legend fontsize plt.rc('figure', titlesize=SIZE) # fontsize of the figure title ax1 = fig.add_subplot(gs[0, :], projection=ccrs.PlateCarree()) ax1.coastlines() C1 = ax1.pcolor( lon_long, lats, plotvar_cyc, transform=ccrs.PlateCarree(), cmap='coolwarm', rasterized=True ) C1.set_clim(vmin=-20, vmax=20) ax1.set_title('a)') ax1.set_xticks([-180, -90, 0, 90, 180], crs=ccrs.PlateCarree()) ax1.set_yticks([-90, 0, 90], crs=ccrs.PlateCarree()) lon_formatter = LongitudeFormatter(number_format='.0f', dateline_direction_label=True) lat_formatter = LatitudeFormatter(number_format='.0f') ax1.xaxis.set_major_formatter(lon_formatter) ax1.yaxis.set_major_formatter(lat_formatter) # Colourbars 
fig.colorbar( C1, ax=ax1, label="OLR Loopiness, $\mathcal{O}$ (Wm$^{-2}$)", pad=-0.005, aspect=40, fraction=0.1, # shrink=0.97, orientation="horizontal", ) xvals_l = values_meas.mean( dim='month').toa_lw_clr_c_mon.values.flatten() yvals_l = values_calc.mean(dim='month').olr_calc.values.flatten() xvals_r = values_meas.mean( dim='month').hyst.values.flatten() yvals_r = values_calc.mean(dim='month').hyst.values.flatten() # Bottom left plot ax2 = fig.add_subplot(gs[1, 0]) hist1 = ax2.hist2d(xvals_l, yvals_l, 100, cmap='Greys', norm=LogNorm(), vmin=0.2) ax2.plot([np.amin(xvals_l), np.amax(xvals_l)], [np.amin(xvals_l), np.amax( xvals_l)], c='C1', label='1:1 Line', linestyle='--', linewidth=2, rasterized=True) ax2.set_xlabel(r'CERES (Wm$^{-2}$)') ax2.set_ylabel(r'Offline Calculations (Wm$^{-2}$)') ax2.set_title('b) Annual Mean OLR') ax2.legend() cbar1 = fig.colorbar(hist1[-1], ax=ax2, fraction=0, pad=-0.005, shrink=0.7, ticks=[1000, 100, 10, 1]) cbar1.ax.set_title(r'$\frac{\#}{(W m^{-2})^2}$', fontsize=10, y=1.15, x=4) # Bottom right plot ax3 = fig.add_subplot(gs[1, 1]) ax3.hist2d(xvals_r, yvals_r, 100, cmap='Greys', norm=LogNorm(), vmin=0.2) ax3.plot([np.amin(xvals_r), np.amax(xvals_r)], [np.amin(xvals_r), np.amax( xvals_r)], c='C1', label='1:1 Line', linestyle='--', linewidth=2, rasterized=True) ax3.set_xlabel(r'CERES (Wm$^{-2}$)') ax3.set_ylabel(r'Offline Calculations (Wm$^{-2}$)') ax3.set_title("c) OLR Loopiness, $\mathcal{O}$") ax3.legend() cbar2 = fig.colorbar(hist1[-1], ax=ax3, fraction=0, pad=-0.005, shrink=0.7, ticks=[1000, 100, 10, 1]) cbar2.ax.set_title(r'$\frac{\#}{(W m^{-2})^2}$', fontsize=10, y=1.15, x=4) plt.savefig('../../Figures/After first review/Fig 3 CERES.pdf', format='pdf', bbox_inches='tight', dpi=300) plt.close() ``` ### Mean Absolute Error ``` xvals_l = values_meas.mean( dim='month').toa_lw_clr_c_mon.values.flatten() yvals_l = values_calc.mean(dim='month').olr_calc.values.flatten() xvals_r = values_meas.mean( dim='month').hyst.values.flatten() 
yvals_r = values_calc.mean(dim='month').hyst.values.flatten()

diff_l = []
diff_r = []
for i in range(len(xvals_l)):
    diff_l.append(np.abs(yvals_l[i]-xvals_l[i]))
    diff_r.append(np.abs(yvals_r[i]-xvals_r[i]))
mae_l = np.mean(diff_l)
mae_r = np.mean(diff_r)
print('MAE Annual Mean:', mae_l)
print('MAE Loopiness :', mae_r)
```

### Calculating the Bias

```
# Calculate bias of cluster - CERES
bias = values_calc.hyst - values_meas.hyst
bias.mean(dim=('lat','lon')).values[()]
```

# Fig. 4

```
values_meas = xr.open_dataset('../../Data/CERES/clear_sky_ceres.nc')
values_meas_base = xr.open_dataset('../../Data/Cluster/Combined_data_ceres_base.nc')
values_meas_const_r = xr.open_dataset('../../Data/Cluster/clear_sky_calculated_const_rh.nc')
values_meas_const_t = xr.open_dataset('../../Data/Cluster/clear_sky_calculated_const_t.nc')
values_meas_atm = xr.open_dataset('../../Data/Cluster/data_atm.nc')
lats = values_meas.lat.values
lons = values_meas.lon.values
levels = values_meas_atm.level.values
month_val = 0
month_list = np.arange(1, 13, 1)

fig, axs = plt.subplots(nrows=4, ncols=3, figsize=(6, 6), constrained_layout=True)

# Location, off of the coast of California (latitude value, longitude value)
lav = 32
lov = 237
month_list = np.arange(1, 13)
line_colour = "#7d7d7d"

SIZE = 8
plt.rc('font', size=SIZE)          # controls default text sizes
plt.rc('axes', titlesize=SIZE)     # fontsize of the axes title
plt.rc('axes', labelsize=SIZE)     # fontsize of the x and y labels
plt.rc('xtick', labelsize=SIZE)    # fontsize of the tick labels
plt.rc('ytick', labelsize=SIZE)    # fontsize of the tick labels
plt.rc('legend', fontsize=SIZE)    # legend fontsize
plt.rc('figure', titlesize=SIZE)   # fontsize of the figure title

lcmap1 = mpl.cm.get_cmap('Blues')
lcmap2 = mpl.cm.get_cmap('Oranges')
lcmap3 = mpl.cm.get_cmap('Greens')
mean_t2m = np.mean(values_meas.sel(lat=lav, lon=lov).t2m.values)
for i in range(12):
    lc1 = lcmap1(i/11)
    lc2 = lcmap2(i/11)
    a_val = 1  # * i/11
    axs[0, 0].plot(
        [j + (values_meas.sel(lat=lav,
lon=lov).t2m.values[i] - mean_t2m) for j in values_meas_atm.mean(dim="month").t.values], levels, c=lc1, alpha=a_val, ) axs[1, 0].plot( values_meas_atm.sel(month=month_list[i]).t.values, levels, c=lc1, alpha=a_val, ) axs[2, 0].plot( [j + (values_meas.sel(lat=lav, lon=lov).t2m.values[i] - mean_t2m) for j in values_meas_atm.mean(dim="month").t.values], levels, c=lc1, alpha=a_val, ) axs[3, 0].plot( values_meas_atm.sel(month=month_list[i]).t.values, levels, c=lc1, alpha=a_val, ) axs[2, 1].plot( values_meas_atm.sel(month=month_list[i]).r.values, levels, c=lc2, alpha=a_val, ) axs[3, 1].plot( values_meas_atm.sel(month=month_list[i]).r.values, levels, c=lc2, alpha=a_val, ) # Top two RH axs[0, 1].plot( values_meas_atm.mean(dim="month").r.values, levels, c=lcmap2(1.0), ) axs[1, 1].plot( values_meas_atm.mean(dim="month").r.values, levels, c=lcmap2(1.0), ) # Base Case axs[0, 2].plot( values_meas_base.sel(lat=lav, lon=lov).ts.values, values_meas_base.sel(lat=lav, lon=lov).olr_calc.values, c=line_colour ) axs[0, 2].plot( values_meas_base.sel(lat=lav, lon=lov).ts.values[0::11], values_meas_base.sel(lat=lav, lon=lov).olr_calc.values[0::11], c=line_colour ) axs[0, 2].scatter( values_meas_base.sel(lat=lav, lon=lov).ts.values, values_meas_base.sel(lat=lav, lon=lov).olr_calc.values, c=month_list, cmap=lcmap3, s=30, ) axs[0, 2].margins(x=0.2,y=0.2) # Temperature Variation Only Case axs[1, 2].plot( values_meas_const_r.sel(lat=lav, lon=lov).ts.values, values_meas_const_r.sel(lat=lav, lon=lov).olr_calc.values, c=line_colour ) axs[1, 2].plot( values_meas_const_r.sel(lat=lav, lon=lov).ts.values[0::11], values_meas_const_r.sel(lat=lav, lon=lov).olr_calc.values[0::11], c=line_colour ) axs[1, 2].scatter( values_meas_const_r.sel(lat=lav, lon=lov).ts.values, values_meas_const_r.sel(lat=lav, lon=lov).olr_calc.values, c=month_list, cmap=lcmap3, s=30, ) axs[1, 2].margins(x=0.2,y=0.2) # Moisture Variation Only Case axs[2, 2].plot( values_meas_const_t.sel(lat=lav, lon=lov).ts.values, 
values_meas_const_t.sel(lat=lav, lon=lov).olr_calc.values, c=line_colour ) axs[2, 2].plot( values_meas_const_t.sel(lat=lav, lon=lov).ts.values[0::11], values_meas_const_t.sel(lat=lav, lon=lov).olr_calc.values[0::11], c=line_colour ) axs[2, 2].scatter( values_meas_const_t.sel(lat=lav, lon=lov).ts.values, values_meas_const_t.sel(lat=lav, lon=lov).olr_calc.values, c=month_list, cmap=lcmap3, s=30, ) axs[2, 2].margins(x=0.2,y=0.2) # Full Case axs[3, 2].plot( values_meas.sel(lat=lav, lon=lov).t2m.values, values_meas.sel(lat=lav, lon=lov).toa_lw_clr_c_mon.values, c=line_colour ) axs[3, 2].plot( values_meas.sel(lat=lav, lon=lov).t2m.values[0::11], values_meas.sel(lat=lav, lon=lov).toa_lw_clr_c_mon.values[0::11], c=line_colour ) axs[3, 2].scatter( values_meas.sel(lat=lav, lon=lov).t2m.values, values_meas.sel(lat=lav, lon=lov).toa_lw_clr_c_mon.values, c=month_list, cmap=lcmap3, s=30, ) axs[3, 2].margins(x=0.2,y=0.2) # Formatting for i1 in range(3): for i2 in range(4): if i1 == 0: axs[i2, i1].set_ylabel("Pressure (mBar)") axs[0, 0].invert_yaxis() axs[1, 0].invert_yaxis() axs[2, 0].invert_yaxis() axs[3, 0].invert_yaxis() axs[0, 1].invert_yaxis() axs[1, 1].invert_yaxis() axs[2, 1].invert_yaxis() axs[3, 1].invert_yaxis() labels = [''] axs[0,1].set_yticklabels(labels) axs[1,1].set_yticklabels(labels) axs[2,1].set_yticklabels(labels) axs[3,1].set_yticklabels(labels) axs[0,0].set_xticklabels(labels) axs[1,0].set_xticklabels(labels) axs[2,0].set_xticklabels(labels) axs[0,1].set_xticklabels(labels) axs[1,1].set_xticklabels(labels) axs[2,1].set_xticklabels(labels) axs[0,2].set_xticklabels(labels) axs[1,2].set_xticklabels(labels) axs[2,2].set_xticklabels(labels) temp_ticks = [200, 230, 260, 290] axs[0,0].set_xticks(temp_ticks) axs[1,0].set_xticks(temp_ticks) axs[2,0].set_xticks(temp_ticks) axs[3,0].set_xticks(temp_ticks) rh_ticks = [0, 30, 60, 90] axs[0,1].set_xticks(rh_ticks) axs[1,1].set_xticks(rh_ticks) axs[2,1].set_xticks(rh_ticks) axs[3,1].set_xticks(rh_ticks) 
axs[0,2].yaxis.tick_right() axs[1,2].yaxis.tick_right() axs[2,2].yaxis.tick_right() axs[3,2].yaxis.tick_right() axs[0,2].yaxis.set_label_position("right") axs[1,2].yaxis.set_label_position("right") axs[2,2].yaxis.set_label_position("right") axs[3,2].yaxis.set_label_position("right") temp_xlim = axs[1,0].get_xlim() axs[0,0].set_xlim(temp_xlim) axs[1,0].set_xlim(temp_xlim) axs[2,0].set_xlim(temp_xlim) axs[3,0].set_xlim(temp_xlim) rh_xlim = axs[2,1].get_xlim() axs[0,1].set_xlim(rh_xlim) axs[1,1].set_xlim(rh_xlim) axs[2,1].set_xlim(rh_xlim) axs[3,1].set_xlim(rh_xlim) axs[0,2].set_ylabel('OLR (Wm$^{-2}$)', rotation=-90, labelpad=15) axs[1,2].set_ylabel('OLR (Wm$^{-2}$)', rotation=-90, labelpad=15) axs[2,2].set_ylabel('OLR (Wm$^{-2}$)', rotation=-90, labelpad=15) axs[3,2].set_ylabel('OLR (Wm$^{-2}$)', rotation=-90, labelpad=15) axs[3, 0].set_xlabel("Temperature (K)") axs[3, 1].set_xlabel("Relative Humidity (%)") axs[3, 2].set_xlabel('T$_s$ (K)') xv = 1.55 yv = 1.08 fs = 8 axs[0,1].set_title('Base Case (Zero Loopiness)') axs[1,1].set_title('Seasonal Temperature Variation Only') axs[2,1].set_title('Seasonal Moisture Variation Only') axs[3,1].set_title('Full Case (Full Loopiness)') axs[0, 0].arrow(250, 550, 2.5, 0, head_width=40, head_length=2.5, fc='k') axs[0, 0].arrow(250, 550, -2.5, 0, head_width=40, head_length=2.5, fc='k') axs[2, 0].arrow(250, 550, 2.5, 0, head_width=40, head_length=2.5, fc='k') axs[2, 0].arrow(250, 550, -2.5, 0, head_width=40, head_length=2.5, fc='k') axs[1, 0].arrow(220, 350, 2.5, 0, head_width=40, head_length=2.5, fc='k') axs[1, 0].arrow(220, 350, -2.5, 0, head_width=40, head_length=2.5, fc='k') axs[1, 0].arrow(265, 850, 7.5, 0, head_width=40, head_length=2.5, fc='k') axs[1, 0].arrow(265, 850, -7.5, 0, head_width=40, head_length=2.5, fc='k') axs[3, 0].arrow(220, 350, 2.5, 0, head_width=40, head_length=2.5, fc='k') axs[3, 0].arrow(220, 350, -2.5, 0, head_width=40, head_length=2.5, fc='k') axs[3, 0].arrow(265, 850, 7.5, 0, head_width=40, 
head_length=2.5, fc='k') axs[3, 0].arrow(265, 850, -7.5, 0, head_width=40, head_length=2.5, fc='k') axs[2, 1].arrow(72, 830, 2.5, 0, head_width=40, head_length=2.5, fc='k') axs[2, 1].arrow(72, 830, -2.5, 0, head_width=40, head_length=2.5, fc='k') axs[2, 1].arrow(72, 250, 7.5, 0, head_width=40, head_length=2.5, fc='k') axs[2, 1].arrow(72, 250, -7.5, 0, head_width=40, head_length=2.5, fc='k') axs[3, 1].arrow(72, 830, 2.5, 0, head_width=40, head_length=2.5, fc='k') axs[3, 1].arrow(72, 830, -2.5, 0, head_width=40, head_length=2.5, fc='k') axs[3, 1].arrow(72, 250, 7.5, 0, head_width=40, head_length=2.5, fc='k') axs[3, 1].arrow(72, 250, -7.5, 0, head_width=40, head_length=2.5, fc='k') plt.savefig('../../Figures/After first review/Fig 4 CERES.pdf', bbox_inches='tight', format='pdf') # Save the figure plt.close() ``` # Fig. 5 ``` values_t = xr.open_dataset('../../Data/Cluster/clear_sky_calculated_const_t.nc') values_r = xr.open_dataset('../../Data/Cluster/clear_sky_calculated_const_rh.nc') lats = values_t.lat.values lons = values_t.lon.values # Variables that you want to plot plotvar1 = values_r.hyst.values plotvar2 = values_t.hyst.values # Adding a cyclic point to the two variables # This removes a white line at lon = 0 lon_long = values_r.lon.values plotvar_cyc1 = np.zeros((len(lats), len(lons))) plotvar_cyc1, lon_long = add_cyclic_point(plotvar_cyc1, coord=lon_long) for i in range(len(lats)): for j in range(len(lons)): plotvar_cyc1[i, j] = plotvar1[i, j] plotvar_cyc1[:, len(lons)] = plotvar_cyc1[:, 0] lon_long = values_r.lon.values plotvar_cyc2 = np.zeros((len(lats), len(lons))) plotvar_cyc2, lon_long = add_cyclic_point(plotvar_cyc2, coord=lon_long) for i in range(len(lats)): for j in range(len(lons)): plotvar_cyc2[i, j] = plotvar2[i, j] plotvar_cyc2[:, len(lons)] = plotvar_cyc2[:, 0] # Plotting fig = plt.figure(figsize=(6, 2.4), constrained_layout=True) gs = fig.add_gridspec(ncols=2, nrows=1) SIZE = 8 plt.rc('font', size=SIZE) # controls default text sizes 
plt.rc('axes', titlesize=SIZE) # fontsize of the axes title plt.rc('axes', labelsize=SIZE) # fontsize of the x and y labels plt.rc('xtick', labelsize=SIZE) # fontsize of the tick labels plt.rc('ytick', labelsize=SIZE) # fontsize of the tick labels plt.rc('legend', fontsize=SIZE) # legend fontsize plt.rc('figure', titlesize=SIZE) # fontsize of the figure title # Left map ax1 = fig.add_subplot(gs[0], projection=ccrs.PlateCarree()) ax1.coastlines() C1 = ax1.pcolor( lon_long, lats, plotvar_cyc1, transform=ccrs.PlateCarree(), cmap='coolwarm', rasterized=True ) C1.set_clim(vmin=-20, vmax=20) ax1.set_title("a) Temperature Variation Only") ax1.set_xticks([-180, -90, 0, 90, 180], crs=ccrs.PlateCarree()) ax1.set_yticks([-90, 0, 90], crs=ccrs.PlateCarree()) lon_formatter = LongitudeFormatter(number_format='.0f', dateline_direction_label=True) lat_formatter = LatitudeFormatter(number_format='.0f') ax1.xaxis.set_major_formatter(lon_formatter) ax1.yaxis.set_major_formatter(lat_formatter) # Colourbars fig.colorbar( C1, ax=ax1, label="OLR Loopiness, $\mathcal{O}$ (Wm$^{-2}$)", pad=0, aspect=20, fraction=0.1, # shrink=0.95, orientation="horizontal", ) # Right map ax2 = fig.add_subplot(gs[1], projection=ccrs.PlateCarree()) ax2.coastlines() C2 = ax2.pcolor( lon_long, lats, plotvar_cyc2, transform=ccrs.PlateCarree(), cmap='coolwarm', rasterized=True ) C2.set_clim(vmin=-20, vmax=20) ax2.set_title("b) Moisture Variation Only") ax2.set_xticks([-180, -90, 0, 90, 180], crs=ccrs.PlateCarree()) ax2.set_yticks([-90, 0, 90], crs=ccrs.PlateCarree()) lon_formatter = LongitudeFormatter(number_format='.0f', dateline_direction_label=True) lat_formatter = LatitudeFormatter(number_format='.0f') ax2.xaxis.set_major_formatter(lon_formatter) ax2.yaxis.set_major_formatter(lat_formatter) # Colourbars fig.colorbar( C2, ax=ax2, label="OLR Loopiness, $\mathcal{O}$ (Wm$^{-2}$)", pad=0, aspect=20, fraction=0.1, # shrink=0.95, orientation="horizontal", ) plt.savefig('../../Figures/After first review/Fig 5 
CERES.pdf', format='pdf', dpi=300, bbox_inches='tight') plt.close() ``` # Fig. 6 ``` values_as = xr.open_dataset('../../Data/CERES/all_sky_ceres.nc') values_cs = xr.open_dataset('../../Data/CERES/clear_sky_ceres.nc') lats = values_cs.lat.values lons = values_cs.lon.values # Variables that you want to plot plotvar1 = values_cs.hyst_over_olr_range.values plotvar2 = values_as.hyst_over_olr_range.values # Adding a cyclic point to the two variables # This removes a white line at lon = 0 lon_long = values_cs.lon.values plotvar_cyc1 = np.zeros((len(lats), len(lons))) plotvar_cyc1, lon_long = add_cyclic_point(plotvar_cyc1, coord=lon_long) for i in range(len(lats)): for j in range(len(lons)): plotvar_cyc1[i, j] = plotvar1[i, j] plotvar_cyc1[:, len(lons)] = plotvar_cyc1[:, 0] lon_long = values_cs.lon.values plotvar_cyc2 = np.zeros((len(lats), len(lons))) plotvar_cyc2, lon_long = add_cyclic_point(plotvar_cyc2, coord=lon_long) for i in range(len(lats)): for j in range(len(lons)): plotvar_cyc2[i, j] = plotvar2[i, j] plotvar_cyc2[:, len(lons)] = plotvar_cyc2[:, 0] # Plotting fig = plt.figure(figsize=(6, 2.4), constrained_layout=True) gs = fig.add_gridspec(ncols=2, nrows=1) SIZE = 8 plt.rc('font', size=SIZE) # controls default text sizes plt.rc('axes', titlesize=SIZE) # fontsize of the axes title plt.rc('axes', labelsize=SIZE) # fontsize of the x and y labels plt.rc('xtick', labelsize=SIZE) # fontsize of the tick labels plt.rc('ytick', labelsize=SIZE) # fontsize of the tick labels plt.rc('legend', fontsize=SIZE) # legend fontsize plt.rc('figure', titlesize=SIZE) # fontsize of the figure title # Left map ax1 = fig.add_subplot(gs[0], projection=ccrs.PlateCarree()) ax1.coastlines() C1 = ax1.pcolor( lon_long, lats, plotvar_cyc1, transform=ccrs.PlateCarree(), cmap='coolwarm', rasterized=True ) C1.set_clim(vmin=-100, vmax=100) ax1.set_title("a) Clear Sky") ax1.set_xticks([-180, -90, 0, 90, 180], crs=ccrs.PlateCarree()) ax1.set_yticks([-90, 0, 90], crs=ccrs.PlateCarree()) lon_formatter 
= LongitudeFormatter(number_format='.0f', dateline_direction_label=True) lat_formatter = LatitudeFormatter(number_format='.0f') ax1.xaxis.set_major_formatter(lon_formatter) ax1.yaxis.set_major_formatter(lat_formatter) # Colourbars cbar1 = fig.colorbar( C1, ax=ax1, label=" $\mathcal{O}$ / OLR Range (%)", pad=0, aspect=20, fraction=0.1, # shrink=0.95, orientation="horizontal", ) cbar1.ax.set_xticklabels( ['-100%', '-50%', '0%', '50%', '100%']) # Right map ax2 = fig.add_subplot(gs[1], projection=ccrs.PlateCarree()) ax2.coastlines() C2 = ax2.pcolor( lon_long, lats, plotvar_cyc2, transform=ccrs.PlateCarree(), cmap='coolwarm', rasterized=True ) C2.set_clim(vmin=-100, vmax=100) ax2.set_title("b) All Sky") ax2.set_xticks([-180, -90, 0, 90, 180], crs=ccrs.PlateCarree()) ax2.set_yticks([-90, 0, 90], crs=ccrs.PlateCarree()) lon_formatter = LongitudeFormatter(number_format='.0f', dateline_direction_label=True) lat_formatter = LatitudeFormatter(number_format='.0f') ax2.xaxis.set_major_formatter(lon_formatter) ax2.yaxis.set_major_formatter(lat_formatter) # Colourbars cbar2 = fig.colorbar( C2, ax=ax2, label="$\mathcal{O}$ / OLR Range (%)", pad=0, aspect=20, fraction=0.1, # shrink=0.95, orientation="horizontal", ) cbar2.ax.set_xticklabels( ['-100%', '-50%', '0%', '50%', '100%']) plt.savefig('../../Figures/After first review/Fig 6 CERES.pdf', format='pdf', dpi=300, bbox_inches='tight') plt.close() ``` ### Global normalised values ``` area_factors = np.zeros_like(values_cs.mean(dim='month').t2m.values) lats = values_cs.lat.values lons = values_cs.lon.values for i in range(len(lats)): area_factors[i, :] = np.cos(lats[i]*2*np.pi/360) values_cs['area_factors'] = (('lat', 'lon'), area_factors) hyst_over_olr_area_scaled_cs = np.zeros_like(values_cs.mean(dim='month').t2m.values) hyst_over_olr_area_scaled_as = np.zeros_like(values_as.mean(dim='month').t2m.values) for i in range(len(lats)): for j in range(len(lons)): hyst_over_olr_area_scaled_cs[i,j] = values_cs.sel(lat=lats[i], 
lon=lons[j]).area_factors.values[()] * np.abs(values_cs.sel(lat=lats[i], lon=lons[j]).hyst_over_olr_range.values[()]) hyst_over_olr_area_scaled_as[i,j] = values_cs.sel(lat=lats[i], lon=lons[j]).area_factors.values[()] * np.abs(values_as.sel(lat=lats[i], lon=lons[j]).hyst_over_olr_range.values[()]) values_cs['hyst_over_olr_area_scaled'] = (('lat', 'lon'), hyst_over_olr_area_scaled_cs) values_as['hyst_over_olr_area_scaled'] = (('lat', 'lon'), hyst_over_olr_area_scaled_as) glbl_cs = np.sum(values_cs.hyst_over_olr_area_scaled.values.flatten()) / np.sum(values_cs.area_factors.values.flatten()) glbl_as = np.sum(values_as.hyst_over_olr_area_scaled.values.flatten()) / np.sum(values_cs.area_factors.values.flatten()) print('Global area weighted clear sky mean is', '%.2f' % glbl_cs, '%') print('Global area weighted all sky mean is', '%.2f' % glbl_as, '%') ``` # Supplementary Information ``` # ERA5 values_meas_era5 = xr.open_dataset('../../Data/Other/values_meas_dir_int_hyst.nc') # CERES values_meas_ceres = xr.open_dataset('../../Data/CERES/clear_sky_ceres.nc') lats = values_meas_ceres.lat.values lons = values_meas_ceres.lon.values plotvar1 = values_meas_era5.directional_int_hyst.values plotvar2 = values_meas_ceres.hyst.values lon_long1 = values_meas_era5.lon.values plotvar_cyc1 = np.zeros((len(lats), len(lons))) plotvar_cyc1, lon_long1 = add_cyclic_point(plotvar_cyc1, coord=lon_long1) for i in range(len(lats)): for j in range(len(lons)): plotvar_cyc1[i, j] = plotvar1[i, j] plotvar_cyc1[:, len(lons)] = plotvar_cyc1[:, 0] lon_long2 = values_meas_ceres.lon.values plotvar_cyc2 = np.zeros((len(lats), len(lons))) plotvar_cyc2, lon_long2 = add_cyclic_point(plotvar_cyc2, coord=lon_long2) for i in range(len(lats)): for j in range(len(lons)): plotvar_cyc2[i, j] = plotvar2[i, j] plotvar_cyc2[:, len(lons)] = plotvar_cyc2[:, 0] fig = plt.figure(figsize=(6, 7), constrained_layout=True) gs = fig.add_gridspec( ncols=1, nrows=2) SIZE = 8 plt.rc('font', size=SIZE) # controls default text sizes 
plt.rc('axes', titlesize=SIZE)     # fontsize of the axes title
plt.rc('axes', labelsize=SIZE)     # fontsize of the x and y labels
plt.rc('xtick', labelsize=SIZE)    # fontsize of the tick labels
plt.rc('ytick', labelsize=SIZE)    # fontsize of the tick labels
plt.rc('legend', fontsize=SIZE)    # legend fontsize
plt.rc('figure', titlesize=SIZE)   # fontsize of the figure title

ax1 = fig.add_subplot(gs[0], projection=ccrs.PlateCarree())
ax1.coastlines()
C1 = ax1.pcolor(
    lon_long1, lats, plotvar_cyc1,
    transform=ccrs.PlateCarree(), cmap='coolwarm', rasterized=True
)
C1.set_clim(vmin=-20, vmax=20)
ax1.set_title('a) ERA5')
ax1.set_xticks([-180, -90, 0, 90, 180], crs=ccrs.PlateCarree())
ax1.set_yticks([-90, 0, 90], crs=ccrs.PlateCarree())
lon_formatter = LongitudeFormatter(number_format='.0f', dateline_direction_label=True)
lat_formatter = LatitudeFormatter(number_format='.0f')
ax1.xaxis.set_major_formatter(lon_formatter)
ax1.yaxis.set_major_formatter(lat_formatter)

ax2 = fig.add_subplot(gs[1], projection=ccrs.PlateCarree())
ax2.coastlines()
C2 = ax2.pcolor(
    lon_long2, lats, plotvar_cyc2,
    transform=ccrs.PlateCarree(), cmap='coolwarm', rasterized=True
)
C2.set_clim(vmin=-20, vmax=20)
ax2.set_title('b) CERES')
ax2.set_xticks([-180, -90, 0, 90, 180], crs=ccrs.PlateCarree())
ax2.set_yticks([-90, 0, 90], crs=ccrs.PlateCarree())
lon_formatter = LongitudeFormatter(number_format='.0f', dateline_direction_label=True)
lat_formatter = LatitudeFormatter(number_format='.0f')
ax2.xaxis.set_major_formatter(lon_formatter)
ax2.yaxis.set_major_formatter(lat_formatter)

# Colourbar
fig.colorbar(
    C2,
    ax=ax2,
    label="OLR Loopiness, $\mathcal{O}$ (Wm$^{-2}$)",
    pad=-0.005,
    aspect=40,
    fraction=0.1,
    # shrink=0.97,
    orientation="horizontal",
)

plt.savefig('../../Figures/After first review/SI.pdf', format='pdf', bbox_inches='tight', dpi=300)
plt.close()
```
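The area-weighted global means in this notebook are computed with explicit loops over `lat` and `lon`; numpy broadcasting gives the same result in one step. A minimal sketch on synthetic data — the `lats` and `field` arrays here are stand-ins for illustration, not the notebook's xarray variables:

```python
import numpy as np

# Stand-in (lat, lon) field; the notebook uses xarray DataArrays instead
rng = np.random.default_rng(0)
lats = np.arange(-89.5, 90.5, 1.0)          # latitude centres in degrees
field = rng.normal(size=(lats.size, 360))   # one value per grid cell

# cos(latitude) area weights, broadcast over the longitude dimension
weights = np.cos(np.deg2rad(lats))[:, None] * np.ones((1, field.shape[1]))

# Vectorised area-weighted global mean
glbl = np.sum(field * weights) / np.sum(weights)

# Cross-check against the loop-based approach used in the notebook
num, den = 0.0, 0.0
for i in range(field.shape[0]):
    for j in range(field.shape[1]):
        w = np.cos(np.deg2rad(lats[i]))
        num += w * field[i, j]
        den += w
print(np.isclose(glbl, num / den))
```

With xarray, the `weighted` accessor (`da.weighted(weights).mean()`) expresses the same computation directly on the datasets loaded above.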
# 2A.data - Matplotlib

Tutorial on [matplotlib](https://matplotlib.org/).

```
from jyquickhelper import add_notebook_menu
add_notebook_menu()
```

*An aside*: visualization libraries in Python have developed considerably ([10 plotting libraries](http://www.xavierdupre.fr/app/jupytalk/helpsphinx/2016/pydata2016.html)). The reference is still [matplotlib](http://matplotlib.org/), and most of the others are designed to integrate with its objects (this is the case, for example, of [seaborn](https://stanford.edu/~mwaskom/software/seaborn/introduction.html), [mpld3](http://mpld3.github.io/), [plotly](https://plot.ly/) and [bokeh](http://bokeh.pydata.org/en/latest/)). It is therefore useful to start by getting familiar with matplotlib. In the words of its developers: *"matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell (ala MatLab or mathematica), web application servers, and six graphical user interface toolkits."*

The underlying structure of matplotlib is very general and customizable (user-interface management, integration into web applications, etc.). Fortunately, you do not need to master all of these methods to produce a plot (there are no fewer than 2840 pages of [documentation](http://matplotlib.org/Matplotlib.pdf)). To generate and modify plots, the pyplot interface is enough. The pyplot interface is inspired by MATLAB's; those who know MATLAB will find their way around quickly. To sum up:

- matplotlib: low-level access to the visualization library. Useful if you want to build your own Python visualization library or do very custom things.
- matplotlib.pyplot: an interface close to MATLAB's for producing your plots
- pylab: matplotlib.pyplot + numpy

```
# To embed the plots in your notebook, simply run
%matplotlib inline
# or
%pylab inline
# pylab also loads numpy. It is the command for scientific computing in Python.
```

The structure of the objects described by the API is very hierarchical, as illustrated by this diagram:

- "Figure" contains the whole visual representation. It is this meta-structure, for example, that lets you easily add a title to a representation containing several plots;
- "Axes" (or "Subplots") describes a container holding one or more plots (it corresponds to the subplot object and the add_subplot methods);
- "Axis" corresponds to the axes of a given plot (or subplot instance).

<img src="http://matplotlib.org/_images/fig_map.png" />

One last general remark: [pyplot is a state machine](https://en.wikipedia.org/wiki/Matplotlib). This means that the methods for drawing a plot or editing a label apply by default to the latest state in progress (the latest subplot instance or the latest axis instance, for example). Consequence: design your code as a sequence of instructions (for example, do not split instructions relating to the same plot across two different cells of the notebook).

### Figures and Subplots

```
from matplotlib import pyplot as plt

plt.figure(figsize=(10,8))
plt.subplot(111)
# subplot method: defines the plots belonging to the figure object, here 1 x 1, index 1
# plt.subplot(1,1,1) works as well
# careful: keep all the instructions of a given plot in the same block
# no need for plt.show() in a notebook; otherwise it is required
```

A (very) simple plot with the plot instruction.
```
from numpy import random
import numpy as np
import pandas as p

plt.figure(figsize=(10,8))
plt.subplot(111)
plt.plot([random.random_sample(1) for i in range(5)])
# You can pass lists, numpy arrays, and pandas Series and DataFrames
plt.plot(np.array([random.random_sample(1) for i in range(5)]))
plt.plot(p.DataFrame(np.array([random.random_sample(1) for i in range(5)])))
# To draw several curves, simply stack plt.plot instructions
#plt.show()
```

To draw several subplots, just change the parameter values of the subplot object.

```
fig = plt.figure(figsize=(15,10))
ax1 = fig.add_subplot(2,2,1)
# modifies the fig object and creates a new subplot instance, called ax1
# you will often see the convention ax for a subplot instance: that is because it is
# also called an "Axes" object, not to be confused with the "Axis" object
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
```

If no Axes instance is specified, the plot method is applied to the most recently created instance.

```
from numpy.random import randn

fig = plt.figure(figsize=(10,8))
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
plt.plot(randn(50).cumsum(),'k--')
# plt.show()

fig = plt.figure(figsize=(15,10))
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
# The subplot instances can then be filled with content.
# In passing, a few other examples of plots:
ax1.hist(randn(100),bins=20,color='k',alpha=0.3)
ax2.scatter(np.arange(30),np.arange(30)+3*randn(30))
ax3.plot(randn(50).cumsum(),'k--')
```

To explore all the available categories of plots: [Gallery](http://matplotlib.org/gallery.html).
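The state-machine behavior described above can be checked directly: with no explicit Axes, plt.plot targets the most recently created subplot. A small sketch (the Agg backend is forced so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, no display needed
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure()
ax1 = fig.add_subplot(2, 2, 1)
ax2 = fig.add_subplot(2, 2, 2)
ax3 = fig.add_subplot(2, 2, 3)

# No Axes specified: pyplot draws on the current axes, i.e. the last one created
plt.plot(np.arange(10))

print(plt.gca() is ax3)              # the "current" axes is ax3
print(len(ax3.lines), len(ax1.lines))
```

Only ax3 receives the line, which is exactly why a plot's instructions should stay together in one cell.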
The most useful ones for data analysis: [scatter](http://matplotlib.org/examples/lines_bars_and_markers/scatter_with_legend.html), [scatterhist](http://matplotlib.org/examples/axes_grid/scatter_hist.html), [barchart](http://matplotlib.org/examples/pylab_examples/barchart_demo.html), [stackplot](http://matplotlib.org/examples/pylab_examples/stackplot_demo.html), [histogram](http://matplotlib.org/examples/statistics/histogram_demo_features.html), [cumulative distribution function](http://matplotlib.org/examples/statistics/histogram_demo_cumulative.html), [boxplot](http://matplotlib.org/examples/statistics/boxplot_vs_violin_demo.html), [radarchart](http://matplotlib.org/examples/api/radar_chart.html).

### Adjusting the spacing between plots

```
fig,axes = plt.subplots(2,2,sharex=True,sharey=True)
# sharex and sharey are well named: if True, they indicate that the subplots
# share the same axis settings
for i in range(2):
    for j in range(2):
        axes[i,j].hist(randn(500),bins=50,color='k',alpha=0.5)
# The "axes" object is a 2darray, easy to index and to loop over
print(type(axes))
# Feel free to experiment with any parameter you wonder about. For instance, what is alpha for?
plt.subplots_adjust(wspace=0,hspace=0)
# This last method removes the spaces between the subplots.
```

No choice but to adjust things by hand to fix the overlapping numbers.

### Colors, markers and line styles

Matplotlib offers two possible styles of writing: a condensed character string, or explicit parameterization via a key-value system.
```
from numpy.random import randn

fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(111)
ax1.plot(randn(50).cumsum(),color='g',marker='o',linestyle='dashed')
# plt.show()

from numpy.random import randn
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(111)
ax1.plot(randn(50).cumsum(),'og--') #the order of the characters in the format string does not matter
```

More details in the matplotlib API documentation on setting the <a href="http://matplotlib.org/api/colors_api.html"> color </a>, the <a href="http://matplotlib.org/api/markers_api.html"> markers </a>, and the <a href="http://matplotlib.org/api/lines_api.html#matplotlib.lines.Line2D.set_linestyle"> line style </a>.

Matplotlib is compatible with several color standards:

- as a letter: 'b' = blue, 'g' = green, 'r' = red, 'c' = cyan, 'm' = magenta, 'y' = yellow, 'k' = black, 'w' = white.
- as a number between 0 and 1 inside quotes, giving the gray level: for example '0.70' ('1' = white, '0' = black).
- as a name: for example 'red'.
- in html form with the respective levels of red (R), green (G) and blue (B): '#ffee00'. Here is a handy site to pick a color in [hexadecimal RGB](http://www.proftnj.com/RGB3.htm).
- as a triplet of values between 0 and 1 giving the R, G and B levels: (0.2, 0.9, 0.1).

```
from numpy.random import randn

fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(1,1,1)
#with the RGB standard
ax1.plot(randn(50).cumsum(),color='#D0BBFF',marker='o',linestyle='-.')
ax1.plot(randn(50).cumsum(),color=(0.8156862745098039, 0.7333333333333333, 1.0),marker='o',linestyle='-.')
```

### Tick labels and legends

3 key methods:

- xlim(): to bound the range of values on the axis
- xticks(): to set the tick positions on the axis
- set_xticklabels(): to set the tick labels

For the y axis these are ylim, yticks, set_yticklabels.
To read the current values:

- plt.xlim() or ax.get_xlim()
- plt.xticks() or ax.get_xticks()
- ax.get_xticklabels()

To set them:

- plt.xlim([start,end]) or ax.set_xlim([start,end])
- plt.xticks(my_ticks_list) or ax.set_xticks(my_ticks_list)
- ax.set_xticklabels(my_labels_list)

If you want to customize the axes of several subplots, go through an [axes instance](http://matplotlib.org/users/artists.html) rather than through pyplot.

```
from numpy.random import randn

fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(1,1,1)

serie1=randn(50).cumsum()
serie2=randn(50).cumsum()
serie3=randn(50).cumsum()

ax1.plot(serie1,color='#33CCFF',marker='o',linestyle='-.',label='one')
ax1.plot(serie2,color='#FF33CC',marker='o',linestyle='-.',label='two')
ax1.plot(serie3,color='#FFCC99',marker='o',linestyle='-.',label='three')

#on the previous plot, shorten the range
ax1.set_xlim([0,21])
ax1.set_ylim([-20,20])

#use ticks with a step of 2 (instead of 5)
ax1.set_xticks(range(0,21,2))

#change the labels on the ticks
ax1.set_xticklabels(["day +" + str(l) for l in range(0,21,2)])
ax1.set_xlabel('Days after treatment')

ax1.legend(loc='best') #picks the least crowded spot
```

### Adding annotations and text, title and axis labels

```
from numpy.random import randn

fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(1,1,1)

ax1.plot(serie1,color='#33CCFF',marker='o',linestyle='-.',label='one')
ax1.plot(serie2,color='#FF33CC',marker='o',linestyle='-.',label='two')
ax1.plot(serie3,color='#FFCC99',marker='o',linestyle='-.',label='three')

ax1.set_xlim([0,21])
ax1.set_ylim([-20,20])
ax1.set_xticks(range(0,21,2))
ax1.set_xticklabels(["day +" + str(l) for l in range(0,21,2)])
ax1.set_xlabel('Days after treatment')

ax1.annotate("You're here", xy=(7, 7), #the annotated point (arrow head)
             xytext=(10, 10), #text position
             arrowprops=dict(facecolor='#000000', shrink=0.10),
             )
ax1.legend(loc='best')

plt.xlabel("X-axis label")
plt.ylabel("Y-axis label")
plt.title("Any idea for a title?")
plt.text(5, -10, r'$\mu=100,\ \sigma=15$')
# plt.show()
```

### matplotlib and styles

It is possible to define your own style. This is an interesting option if you regularly produce the same plots and want to define templates (rather than always copy/pasting the same lines of code). Everything is described in [style_sheets](http://matplotlib.org/users/style_sheets.html).

```
from numpy.random import randn

#so that the style definition only applies within this notebook cell
with plt.style.context('ggplot'):
    fig = plt.figure(figsize=(8,6))
    ax1 = fig.add_subplot(1,1,1)
    ax1.plot(serie1,color='#33CCFF',marker='o',linestyle='-.',label='one')
    ax1.plot(serie2,color='#FF33CC',marker='o',linestyle='-.',label='two')
    ax1.plot(serie3,color='#FFCC99',marker='o',linestyle='-.',label='three')
    ax1.set_xlim([0,21])
    ax1.set_ylim([-20,20])
    ax1.set_xticks(range(0,21,2))
    ax1.set_xticklabels(["day +" + str(l) for l in range(0,21,2)])
    ax1.set_xlabel('Days after treatment')
    ax1.annotate("You're here", xy=(7, 7), #the annotated point (arrow head)
                 xytext=(10, 10), #text position
                 arrowprops=dict(facecolor='#000000', shrink=0.10),
                 )
    ax1.legend(loc='best')
    plt.xlabel("X-axis label")
    plt.ylabel("Y-axis label")
    plt.title("Any idea for a title?")
    plt.text(5, -10, r'$\mu=100,\ \sigma=15$')
    #plt.show()

import numpy as np
import matplotlib.pyplot as plt

print("Many other styles are available, pick your choice! ", plt.style.available)

with plt.style.context('dark_background'):
    plt.plot(serie1, 'r-o')
    # plt.show()
```

As the names of the styles available in matplotlib suggest, the seaborn library, a sort of layer on top of matplotlib, is a very convenient way to access styles designed for highlighting patterns in data.
Here are a few examples, still on the same data series. I also invite you to explore the [color palettes](https://stanford.edu/~mwaskom/software/seaborn/tutorial/color_palettes.html).

```
#note that the ggplot style has persisted.
import seaborn as sns

#5 available styles
#sns.set_style("whitegrid")
#sns.set_style("darkgrid")
#sns.set_style("white")
#sns.set_style("dark")
#sns.set_style("ticks")

#if you want to set a style temporarily
with sns.axes_style("ticks"):
    fig = plt.figure(figsize=(8,6))
    ax1 = fig.add_subplot(1,1,1)
    plt.plot(serie1)
```

Beyond styles and colors, Seaborn puts the emphasis on:

- distribution plots ([univariate](https://stanford.edu/~mwaskom/software/seaborn/examples/distplot_options.html#distplot-options) / [bivariate](https://stanford.edu/~mwaskom/software/seaborn/examples/joint_kde.html#joint-kde)). Particularly useful and convenient: the [pairwise plots](https://stanford.edu/~mwaskom/software/seaborn/tutorial/distributions.html#visualizing-pairwise-relationships-in-a-dataset)
- [regression](https://stanford.edu/~mwaskom/software/seaborn/tutorial/regression.html) plots
- plots of [categorical variables](https://stanford.edu/~mwaskom/software/seaborn/tutorial/categorical.html)
- [heatmaps](https://stanford.edu/~mwaskom/software/seaborn/examples/heatmap_annotation.html) of data matrices

Seaborn offers plots designed for data analysis and for presenting reports to colleagues or clients. It may be somewhat less customizable than matplotlib, but it will be a while before you feel limited by it.

# Matplotlib and pandas, interactions with seaborn

As seen above, matplotlib can handle and plot all sorts of objects: lists, numpy arrays, pandas Series and DataFrames.
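To illustrate this interchangeability, here is a minimal sketch (not part of the original notebook; the data is made up) that feeds the same values to `plot` as a list, a numpy array and a pandas Series. The `Agg` backend is used so the code also runs outside a notebook:

```python
# A minimal sketch: the same plot call accepts a list, an ndarray and a Series.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe outside a notebook
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

values = [1.0, 3.0, 2.0, 5.0, 4.0]  # made-up data

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(values, label="list")                    # plain Python list
ax.plot(np.array(values) + 1, label="ndarray")   # numpy array
ax.plot(pd.Series(values) + 2, label="Series")   # pandas Series
ax.legend(loc="best")

# each call added one Line2D object to the axes
print(len(ax.lines))  # -> 3
```

Whatever the container, matplotlib ends up with the same `Line2D` artists; the pandas/seaborn wrappers discussed next simply build these calls for you.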
Conversely, pandas provides methods that wrap the matplotlib objects most useful for plotting. We are going to experiment a bit with the [pandas/matplotlib](http://pandas.pydata.org/pandas-docs/stable/visualization.html) integration. More generally, a whole visualization [ecosystem](http://pandas.pydata.org/pandas-docs/stable/ecosystem.html#ecosystem-visualization) has grown up around pandas. We will try out the various libraries mentioned there.

Download the data of exercise 4 of the pandas tutorial, available on the INSEE website: [Naissances, décès et mariages de 1998 à 2013](https://www.insee.fr/fr/statistiques/2407910?sommaire=2117120#titre-bloc-3).

```
import urllib.request
import zipfile

def download_and_save(name, root_url):
    if root_url == 'xd':
        from pyensae.datasource import download_data
        download_data(name)
    else:
        response = urllib.request.urlopen(root_url+name)
        with open(name, "wb") as outfile:
            outfile.write(response.read())

def unzip(name):
    with zipfile.ZipFile(name, "r") as z:
        z.extractall(".")

filenames = ["etatcivil2012_mar2012_dbase.zip",
             "etatcivil2012_nais2012_dbase.zip",
             "etatcivil2012_dec2012_dbase.zip", ]

# A copy of the files has been posted on www.xavierdupre.fr
# to make testing the notebook easier.
root_url = 'xd'  # http://telechargement.insee.fr/fichiersdetail/etatcivil2012/dbase/'

for filename in filenames:
    download_and_save(filename, root_url)
    unzip(filename)
    print("Download of {}: DONE!".format(filename))
```

Remember to install the [dbfread](https://github.com/olemb/dbfread/) module if that is not done yet.
```
import pandas

try:
    from dbfread import DBF
    use_dbfread = True
except ImportError as e :
    use_dbfread = False

if use_dbfread:
    print("use of dbfread")
    def dBase2df(dbase_filename):
        table = DBF(dbase_filename, load=True, encoding="cp437")
        return pandas.DataFrame(table.records)
    df = dBase2df('mar2012.dbf')
    #df.to_csv("mar2012.txt", sep="\t", encoding="utf8", index=False)
else :
    print("use of zipped version")
    import pyensae.datasource
    data = pyensae.datasource.download_data("mar2012.zip")
    df = pandas.read_csv(data[0], sep="\t", encoding="utf8", low_memory = False)

df.shape, df.columns
```

The variables dictionary.

```
vardf = dBase2df("varlist_mariages.dbf")
print(vardf.shape, vardf.columns)
vardf
```

Plot the age of the women against the age of the men at the time of marriage.

```
#Computing the age (at the time of marriage)
df.head()

#converting the years to integers
for c in ['AMAR','ANAISF','ANAISH']:
    df[c]=df[c].apply(lambda x: int(x))

#computing the age
df['AGEF'] = df['AMAR'] - df['ANAISF']
df['AGEH'] = df['AMAR'] - df['ANAISH']
```

pandas provides a matplotlib [wrapper](http://pandas.pydata.org/pandas-docs/stable/visualization.html)

```
#pandas version: df.plot()
#two options: the kind option of df.plot()
df.plot(x='AGEH',y='AGEF',kind='scatter')
#or the scatter() method
#df.plot.scatter(x='AGEH',y='AGEF')
#all the plots available through the pandas plot method: df.plot.<TAB>

#matplotlib version
from matplotlib import pyplot as plt
plt.style.use('seaborn-whitegrid')
fig = plt.figure(figsize=(8.5,5))
ax = fig.add_subplot(1,1,1)
ax.scatter(df['AGEH'],df['AGEF'], color="#3333FF", edgecolors='#FFFFFF')
plt.xlabel('AGEH')
plt.ylabel('AGEF')

#If you want both plots in a single figure, just reuse the matplotlib structure
#(notably the subplot object) and see how it can be passed to
#each plotting method (pandas df.plot and seaborn sns.plot)
from matplotlib import pyplot as plt
plt.style.use('seaborn-whitegrid')
fig = plt.figure(figsize=(8.5,5))
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)
ax1.scatter(df['AGEH'],df['AGEF'], color="#3333FF", edgecolors='#FFFFFF')
df.plot(x='AGEH',y='AGEF',kind='scatter',ax=ax2)
plt.xlabel('AGEH')
plt.ylabel('AGEF')
```

### Exercise 1: analyze the age of the men as a function of the age of the women

Add a title, change the plot style, vary the colors (with a gradient), build a [heatmap](https://en.wikipedia.org/wiki/Heat_map) with the pandas [hexbin](http://pandas.pydata.org/pandas-docs/stable/visualization.html#visualization-hexbin) wrapper and with [seaborn](https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.heatmap.html).

```
df.plot.hexbin(x='AGEH', y='AGEF', gridsize=100)
```

With seaborn

```
import seaborn as sns
sns.set_style('white')
sns.set_context('paper')

#we need to build the AGEH x AGEF matrix
df["nb"] = 1

#to use heatmap, df must be put in wide format (instead of long) => df.pivot(...)
matrice = df[['nb','AGEH','AGEF']].groupby(['AGEH','AGEF'],as_index=False).count()
matrice = matrice.pivot(index='AGEH',columns='AGEF',values='nb')
matrice = matrice.sort_index(axis=0,ascending=False)

fig = plt.figure(figsize=(8.5,5))
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)

df.plot.hexbin(x='AGEH', y='AGEF', gridsize=100, ax=ax1)

cmap = sns.blend_palette(["#CCFFFF", "#006666"], as_cmap=True)
#In every plot that provides a cmap argument you can plug in your own color palette
sns.heatmap(matrice,annot=False, xticklabels=10,yticklabels=10,cmap=cmap,ax=ax2)

sample = df.sample(100)
sns.kdeplot(sample['AGEH'],sample['AGEF'],cmap=cmap,ax=ax3)
```

Seaborn is well thought out for [colors](https://seaborn.pydata.org/tutorial/color_palettes.html). You can use sequential or diverging palettes.
Try building a gradient between two colors as age increases, to bring out the contrasts.

### Exercise 2: plot the distribution of the age difference of married couples

```
df["differenceHF"] = df["ANAISH"] - df["ANAISF"]
df["nb"] = 1
dist = df[["nb","differenceHF"]].groupby("differenceHF", as_index=False).count()
dist.tail()

#pandas version
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('paper')

fig = plt.figure(figsize=(8.5,5))
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)

df["differenceHF"].hist(figsize=(16,6), bins=50, ax=ax1)
ax1.set_title('Plot with pandas', fontsize=15)

sns.distplot(df["differenceHF"], kde=True,ax=ax2) #see what the kde option does
ax2.set_title('Plot with seaborn', fontsize=15)
```

### Exercise 3: analyze the number of marriages per département

```
df["nb"] = 1
dep = df[["DEPMAR","nb"]].groupby("DEPMAR", as_index=False).sum().sort_values("nb",ascending=False)

ax = dep.plot(x="DEPMAR", y="nb", kind="bar", figsize=(18,6))
ax.set_xlabel("départements", fontsize=16)
ax.set_title("number of marriages per département", fontsize=16)
ax.legend().set_visible(False)  # remove the legend

# change the font size of some labels
for i,tick in enumerate(ax.xaxis.get_major_ticks()):
    if i > 10 :
        tick.label.set_fontsize(8)
```

### Exercise 4: distribution of the number of marriages per day of the week

```
df["nb"] = 1
dissem = df[["JSEMAINE","nb"]].groupby("JSEMAINE",as_index=False).sum()
total = dissem["nb"].sum()
repsem = dissem.cumsum()
repsem["nb"] /= total

sns.set_style('whitegrid')
ax = dissem["nb"].plot(kind="bar")
repsem["nb"].plot(ax=ax, secondary_y=True)
ax.set_title("Distribution of marriages by day of the week",fontsize=16)
df.head()
```

# Interactive plots: bokeh, altair, bqplot

Put simply, it is possible to inject JavaScript into the local web application created by jupyter. That is what D3.js does.
Interactive libraries such as [bokeh](http://bokeh.pydata.org/en/latest/) or [altair](https://altair-viz.github.io/) have combined the design of [matplotlib](https://matplotlib.org/) with javascript libraries such as [vega-lite](https://vega.github.io/vega-lite/). The following example uses [bokeh](http://bokeh.pydata.org/en/latest/).

```
from bokeh.plotting import figure, show, output_notebook
output_notebook()

fig = figure()
sample = df.sample(500)
fig.scatter(sample['AGEH'],sample['AGEF'])
fig.xaxis.axis_label = 'AGEH'
fig.yaxis.axis_label = 'AGEF'
show(fig)
```

The [callbacks](https://bokeh.pydata.org/en/latest/docs/user_guide/interaction/callbacks.html) page shows how to use user interactions. The only drawback: you need to know javascript.

```
from bokeh.plotting import figure, output_file, show
from bokeh.models import ColumnDataSource, HoverTool, CustomJS

# define some points and a little graph between them
x = [2, 3, 5, 6, 8, 7]
y = [6, 4, 3, 8, 7, 5]
links = {
    0: [1, 2],
    1: [0, 3, 4],
    2: [0, 5],
    3: [1, 4],
    4: [1, 3],
    5: [2, 3, 4]
}

p = figure(plot_width=400, plot_height=400, tools="", toolbar_location=None, title='Hover over points')

source = ColumnDataSource({'x0': [], 'y0': [], 'x1': [], 'y1': []})
sr = p.segment(x0='x0', y0='y0', x1='x1', y1='y1', color='olive', alpha=0.6, line_width=3, source=source, )
cr = p.circle(x, y, color='olive', size=30, alpha=0.4, hover_color='olive', hover_alpha=1.0)

# Add a hover tool, that sets the link data for a hovered circle
code = """
var links = %s;
var data = {'x0': [], 'y0': [], 'x1': [], 'y1': []};
var cdata = circle.data;
var indices = cb_data.index['1d'].indices;
for (i=0; i < indices.length; i++) {
    ind0 = indices[i]
    for (j=0; j < links[ind0].length; j++) {
        ind1 = links[ind0][j];
        data['x0'].push(cdata.x[ind0]);
        data['y0'].push(cdata.y[ind0]);
        data['x1'].push(cdata.x[ind1]);
        data['y1'].push(cdata.y[ind1]);
    }
}
segment.data = data;
""" % links

callback = CustomJS(args={'circle':
cr.data_source, 'segment': sr.data_source}, code=code)
p.add_tools(HoverTool(tooltips=None, callback=callback, renderers=[cr]))
show(p)
```

The [bqplot](https://github.com/bloomberg/bqplot/blob/master/examples/Interactions/Mark%20Interactions.ipynb) module makes it possible to define *callbacks* in Python. The drawback is that this only works from a notebook, and it is better not to mix javascript libraries that cannot always work together.

# Plotly

- Plotly: https://plot.ly/python/
- Doc: https://plot.ly/python/reference/
- Colors: http://www.cssportal.com/css3-rgba-generator/

```
import pandas as pd
import numpy as np
```

# Creating the dataframe

```
indx = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
value1 = [0,1,2,3,4,5,6,7,8,9]
value2 = [1,5,2,3,7,5,1,8,9,1]

df = {'indx': indx, 'value1': value1, 'value2': value2}
df = pd.DataFrame(df)

df['rate1'] = df.value1 / 100
df['rate2'] = df.value2 / 100

df = df.set_index('indx')
df.head()
```

# Bars and Scatter

```
# install plotly
import plotly.plotly as py
import os
from pyquickhelper.loghelper import get_password
user = get_password("plotly", "ensae_teaching_cs,login")
pwd = get_password("plotly", "ensae_teaching_cs,pwd")
try:
    py.sign_in(user, pwd)
except Exception as e:
    print(e)

import plotly
from plotly.graph_objs import Bar, Scatter, Figure, Layout
import plotly.plotly as py
import plotly.graph_objs as go

# BARS
trace1 = go.Bar(
    x = df.index,
    y = df.value1,
    name='Value1', # Bar legend
    #orientation = 'h',
    marker = dict( # Colors
        color = 'rgba(237, 74, 51, 0.6)',
        line = dict(
            color = 'rgba(237, 74, 51, 0.6)',
            width = 3)
    ))

trace2 = go.Bar(
    x = df.index,
    y = df.value2,
    name='Value 2',
    #orientation = 'h', # Uncomment to have horizontal bars
    marker = dict(
        color = 'rgba(0, 74, 240, 0.4)',
        line = dict(
            color = 'rgba(0, 74, 240, 0.4)',
            width = 3)
    ))

# SCATTER
trace3 = go.Scatter(
    x = df.index,
    y = df.rate1,
    name='Rate',
    yaxis='y2', # Use the second axis
    marker = dict( # Colors
        color =
'rgba(187, 0, 0, 1)',
    ))

trace4 = go.Scatter(
    x = df.index,
    y = df.rate2,
    name='Rate2',
    yaxis='y2', # Use the second axis
    marker = dict( # Colors
        color = 'rgba(0, 74, 240, 0.4)',
    ))

data = [trace2, trace1, trace3, trace4]

layout = go.Layout(
    title='Stack bars and scatter',
    barmode ='stack', # Takes the value 'stack' or 'group'
    xaxis=dict(
        autorange=True,
        showgrid=False,
        zeroline=False,
        showline=True,
        autotick=True,
        ticks='',
        showticklabels=True
    ),
    yaxis=dict( # Params of the 1st axis
        #range=[0,1200000], # Set range
        autorange=True,
        showgrid=False,
        zeroline=False,
        showline=True,
        autotick=True,
        ticks='',
        showticklabels=True
    ),
    yaxis2=dict( # Params of the 2nd axis
        overlaying='y',
        autorange=True,
        showgrid=False,
        zeroline=False,
        showline=True,
        autotick=True,
        ticks='',
        side='right'
    ))

fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='marker-h-bar')

trace5 = go.Scatter(
    x = ['h', 'h'],
    y = [0,0.09],
    yaxis='y2', # Use the second axis
    showlegend = False, # Hide the legend for this trace
    marker = dict( # Colors
        color = 'rgba(46, 138, 24, 1)',
    )
)

from plotly import tools
import plotly.plotly as py
import plotly.graph_objs as go

fig = tools.make_subplots(rows=1, cols=2)

# 1st subplot
fig.append_trace(trace1, 1, 1)
fig.append_trace(trace2, 1, 1)

# 2nd subplot
fig.append_trace(trace3, 1, 2)
fig.append_trace(trace4, 1, 2)
fig.append_trace(trace5, 1, 2) # Vertical line here

fig['layout'].update(height=600, width=1000, title='Two in One & Vertical line')
py.iplot(fig, filename='make-subplots')
```

### Exercise: plot the number of marriages per département with plotly or any other javascript library

[Bokeh](https://bokeh.pydata.org/en/latest/), [altair](https://altair-viz.github.io/), ...
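Whichever library you pick, the first step is the same aggregation as in exercise 3. Here is a hedged sketch of that step on synthetic data (the `DEPMAR` codes and counts below are made up, not taken from the INSEE file); the bokeh calls at the end are left as comments and are only one possible way to draw the result:

```python
import numpy as np
import pandas as pd

# hypothetical stand-in for the mar2012 dataframe
rng = np.random.default_rng(0)
df = pd.DataFrame({"DEPMAR": rng.choice(["75", "13", "69", "33"], size=1000)})
df["nb"] = 1

# marriages per département, most frequent first
dep = (df.groupby("DEPMAR", as_index=False)["nb"].sum()
         .sort_values("nb", ascending=False))
print(dep)

# one possible bokeh rendering (untested sketch):
# from bokeh.plotting import figure, show
# p = figure(x_range=list(dep["DEPMAR"]))
# p.vbar(x=dep["DEPMAR"], top=dep["nb"], width=0.8)
# show(p)
```

With the real data, replace the synthetic dataframe with the `mar2012` dataframe loaded earlier.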
<a href="https://colab.research.google.com/github/ShreyasJothish/ai-platform/blob/master/tasks/methodology/word-embeddings/Word_Embeddings.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Word Embeddings using Word2Vec.

### Procedure

1) I shall be working with the [Fake News data](https://www.kaggle.com/mrisdal/fake-news) from Kaggle as an example for word embeddings. This data set contains enough documents to train the model on.
2) Clean/tokenize the documents in the data set.
3) Vectorize the model using Word2Vec and explore the results, e.g. finding the most similar words and computing similarities and differences. The [gensim](https://radimrehurek.com/gensim/) package is used for the Word2Vec functionality.

```
# Basic imports
import pandas as pd
import numpy as np

!pip install -U gensim
import gensim
```

### Downloading the Kaggle data set

1. You will have to sign up for Kaggle and [authorize](https://github.com/Kaggle/kaggle-api#api-credentials) the API.
2. Specify the path for accessing the kaggle.json file. For Colab we can store kaggle.json on Google Drive.
3. Download the Fake News data.
4. The data comes in compressed form and needs to be unzipped.

```
!pip install kaggle

from google.colab import drive
drive.mount('/content/drive')

%env KAGGLE_CONFIG_DIR=/content/drive/My Drive/

!kaggle datasets download -d mrisdal/fake-news
!unzip fake-news.zip

df = pd.read_csv("fake.csv")
df['title_text'] = df['title'] + df['text']
df.drop(columns=['uuid', 'ord_in_thread', 'author', 'published', 'title', 'text', 'language', 'crawled', 'site_url', 'country', 'domain_rank', 'thread_title', 'spam_score', 'main_img_url', 'replies_count', 'participants_count', 'likes', 'comments', 'shares', 'type'], inplace=True)
df.dropna(inplace=True)
df.title_text = df.title_text.str.lower()
```

### Data cleaning

1. The information about each document is contained in the **title** and **text** columns.
So I shall be using only these two columns.
2. Turn each document into clean tokens.
3. Build the model using gensim.

```
df.head()

import string

def clean_doc(doc):
    # split into tokens by white space
    tokens = doc.split()
    # remove punctuation from each token
    table = str.maketrans('', '', string.punctuation)
    tokens = [w.translate(table) for w in tokens]
    # remove remaining tokens that are not alphabetic
    tokens = [word for word in tokens if word.isalpha()]
    tokens = [word for word in tokens if len(word) > 1]
    return tokens

df['cleaned'] = df.title_text.apply(clean_doc)
print(df.shape)
df.head()

from gensim.models import Word2Vec
w2v = Word2Vec(df.cleaned, min_count=20, window=3, size=300, negative=20)
words = list(w2v.wv.vocab)
print(f'Vocabulary Size: {len(words)}')
```

### Verification

Explore the results: find the most similar words, compute similarities, and spot the odd one out.

```
w2v.wv.most_similar('trump', topn=15)
w2v.wv.most_similar(positive=["fbi"], topn=15)
w2v.wv.doesnt_match(['fbi', 'cat', 'nypd'])
w2v.wv.similarity("fbi","nypd")
w2v.wv.similarity("fbi","trump")
```
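Under the hood, `wv.similarity` is the cosine similarity between the two learned word vectors. The following sketch makes the formula explicit with toy, hand-written 3-dimensional vectors (not trained embeddings — real Word2Vec vectors here have 300 dimensions):

```python
import numpy as np

def cosine_similarity(u, v):
    # what wv.similarity computes between two word vectors:
    # the cosine of the angle between them
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# toy vectors standing in for trained embeddings (made-up values)
fbi  = np.array([0.9, 0.1, 0.3])
nypd = np.array([0.8, 0.2, 0.4])
cat  = np.array([-0.1, 0.9, -0.5])

print(cosine_similarity(fbi, nypd))  # close to 1: similar directions
print(cosine_similarity(fbi, cat))   # much lower: dissimilar directions
```

`doesnt_match` works on the same principle: it flags the word whose vector is farthest, in cosine terms, from the mean of the group.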
<p><font size="6"><b>05 - Pandas: "Group by" operations</b></font></p>

> *© 2016-2018, Joris Van den Bossche and Stijn Van Hoey (<mailto:jorisvandenbossche@gmail.com>, <mailto:stijnvanhoey@gmail.com>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*

---

```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
```

# Some 'theory': the groupby operation (split-apply-combine)

```
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
                   'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
```

### Recap: aggregating functions

When analyzing data, you often calculate summary statistics (aggregations like the mean, max, ...). As we have seen before, we can easily calculate such a statistic for a Series or column using one of the many available methods. For example:

```
df['data'].sum()
```

However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups. For example, in the above dataframe `df`, there is a column 'key' which has three possible values: 'A', 'B' and 'C'. When we want to calculate the sum for each of those groups, we could do the following:

```
for key in ['A', 'B', 'C']:
    print(key, df[df['key'] == key]['data'].sum())
```

This becomes very verbose when there are many groups, and even with the loop over the different values it is not very convenient to work with. What we did above, applying a function on different groups, is a "groupby operation", and pandas provides some convenient functionality for this.
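To make the split-apply-combine idea explicit, the loop above can be written once in plain Python (a sketch of the concept, not of how pandas implements it):

```python
# what df.groupby('key')['data'].sum() does, spelled out in plain Python
rows = [('A', 0), ('B', 5), ('C', 10), ('A', 5), ('B', 10),
        ('C', 15), ('A', 10), ('B', 15), ('C', 20)]

sums = {}
for key, value in rows:                   # split: route each row to its group
    sums[key] = sums.get(key, 0) + value  # apply + combine: accumulate per group

print(sums)  # -> {'A': 15, 'B': 30, 'C': 45}
```

pandas hides this bookkeeping behind a single method call, which is what the next section shows.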
### Groupby: applying functions per group

The "group by" concept: we want to **apply the same function on subsets of your dataframe, based on some key to split the dataframe in subsets**

This operation is also referred to as the "split-apply-combine" operation, involving the following steps:

* **Splitting** the data into groups based on some criteria
* **Applying** a function to each group independently
* **Combining** the results into a data structure

<img src="../img/splitApplyCombine.png">

Similar to SQL `GROUP BY`

Instead of doing the manual filtering as above

    df[df['key'] == "A"].sum()
    df[df['key'] == "B"].sum()
    ...

pandas provides the `groupby` method to do exactly this:

```
df.groupby('key').sum()
df.groupby('key').aggregate(np.sum)  # 'sum'
```

And many more methods are available.

```
df.groupby('key')['data'].sum()
```

# Application of the groupby concept on the titanic data

We go back to the titanic passengers survival data:

```
df = pd.read_csv("../data/titanic.csv")
df.head()
```

<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Using groupby(), calculate the average age for each sex.</li>
</ul>
</div>

```
# %load _solutions/pandas_06_groupby_operations1.py
df.groupby('Sex')['Age'].mean()
```

<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Calculate the average survival ratio for all passengers.</li>
</ul>
</div>

```
# %load _solutions/pandas_06_groupby_operations2.py
# df['Survived'].sum() / len(df['Survived'])
df['Survived'].mean()
```

<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Calculate this survival ratio for all passengers younger than 25 (remember: filtering/boolean indexing).</li>
</ul>
</div>

```
# %load _solutions/pandas_06_groupby_operations3.py
df25 = df[df['Age'] < 25]
df25['Survived'].sum() / len(df25['Survived'])
```

<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>What is the difference in the survival ratio between the sexes?</li>
</ul>
</div>

```
# %load _solutions/pandas_06_groupby_operations4.py
df.groupby('Sex')['Survived'].mean()
```

<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a bar plot of the survival ratio for the different classes ('Pclass' column).</li>
</ul>
</div>

```
# %load _solutions/pandas_06_groupby_operations5.py
```

<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a bar plot to visualize the average Fare paid by people depending on their age. The age column is divided into separate classes using the `pd.cut` function as provided below.</li>
</ul>
</div>

```
df['AgeClass'] = pd.cut(df['Age'], bins=np.arange(0,90,10))
df

# %load _solutions/pandas_06_groupby_operations6.py
```

If you are ready, more groupby exercises can be found below.

# Some more theory

## Specifying the grouper

In the previous example and exercises, we always grouped by a single column by passing its name. But, a column name is not the only value you can pass as the grouper in `df.groupby(grouper)`. Other possibilities for `grouper` are:

- a list of column names as strings (to group by multiple columns)
- a Series (with the same index) or array
- a function (to be applied on the index)

```
df['Age'] < 18
df.groupby(df['Age'] < 18)['Survived'].mean()
df.groupby(['Pclass', 'Sex'])['Survived'].mean()
```

## The size of groups - value counts

Oftentimes you want to know how many elements there are in a certain group (or in other words: the number of occurrences of the different values of a column). To get the size of the groups, we can use `size`:

```
df.groupby('Pclass').size()
df.groupby('Embarked').size()
```

Another way to obtain such counts is to use the Series `value_counts` method:

```
df['Embarked'].value_counts()
```

# [OPTIONAL] Additional exercises using the movie data

These exercises are based on the [PyCon tutorial of Brandon Rhodes](https://github.com/brandon-rhodes/pycon-pandas-tutorial/) (so credit to him!) and the datasets he prepared for that.
You can download these data from here: [`titles.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKajNMa1pfSzN6Q3M) and [`cast.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKal9UYTJSR2ZhSW8) and put them in the `/data` folder.

`cast` dataset: different roles played by actors/actresses in films

- title: title of the movie
- year: year it was released
- name: name of the actor/actress
- type: actor/actress
- n: the order of the role (n=1: leading role)

```
cast = pd.read_csv('../data/cast.csv')
cast.head()
```

`titles` dataset:

* title: title of the movie
* year: year of release

```
titles = pd.read_csv('../data/titles.csv')
titles.head()
```

<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Using `groupby()`, plot the number of films that have been released each decade in the history of cinema.</li>
</ul>
</div>

```
# %load _solutions/pandas_06_groupby_operations7.py
# %load _solutions/pandas_06_groupby_operations8.py
```

<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Use `groupby()` to plot the number of 'Hamlet' movies made each decade.</li>
</ul>
</div>

```
# %load _solutions/pandas_06_groupby_operations9.py
```

<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>For each decade, plot all movies of which the title contains "Hamlet".</li>
</ul>
</div>

```
# %load _solutions/pandas_06_groupby_operations10.py
```

<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>List the 10 actors/actresses that have the most leading roles (n=1) since the 1990s.</li>
</ul>
</div>

```
# %load _solutions/pandas_06_groupby_operations11.py
# %load _solutions/pandas_06_groupby_operations12.py
```

<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>In a previous exercise, the number of 'Hamlet' films released each decade was checked. Not all titles are exactly called 'Hamlet'.
Give an overview of the titles that contain 'Hamlet' and an overview of the titles that start with 'Hamlet', each time providing the amount of occurrences in the data set for each of the movies</li> </ul> </div> ``` # %load _solutions/pandas_06_groupby_operations13.py # %load _solutions/pandas_06_groupby_operations14.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>List the 10 movie titles with the longest name.</li> </ul> </div> ``` # %load _solutions/pandas_06_groupby_operations15.py # %load _solutions/pandas_06_groupby_operations16.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s?</li> </ul> </div> ``` # %load _solutions/pandas_06_groupby_operations17.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>What are the 11 most common character names in movie history?</li> </ul> </div> ``` # %load _solutions/pandas_06_groupby_operations18.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Plot how many roles Brad Pitt has played in each year of his career.</li> </ul> </div> ``` # %load _solutions/pandas_06_groupby_operations19.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>What are the 10 most occurring movie titles that start with the words 'The Life'?</li> </ul> </div> ``` # %load _solutions/pandas_06_groupby_operations20.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Which actors or actresses were most active in the year 2010 (i.e. 
appeared in the most movies)?</li> </ul> </div> ``` # %load _solutions/pandas_06_groupby_operations21.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Determine how many roles are listed for each of 'The Pink Panther' movies.</li> </ul> </div> ``` # %load _solutions/pandas_06_groupby_operations22.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li> List, in order by year, each of the movies in which 'Frank Oz' has played more than 1 role.</li> </ul> </div> ``` # %load _solutions/pandas_06_groupby_operations23.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li> List each of the characters that Frank Oz has portrayed at least twice.</li> </ul> </div> ``` # %load _solutions/pandas_06_groupby_operations24.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li> Add a new column to the `cast` DataFrame that indicates the number of roles for each movie. [Hint](http://pandas.pydata.org/pandas-docs/stable/groupby.html#transformation)</li> </ul> </div> ``` # %load _solutions/pandas_06_groupby_operations25.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li> Calculate the ratio of leading actor and actress roles to the total number of leading roles per decade. </li> </ul><br> **Tip**: you can do a groupby twice in two steps, first calculating the numbers, and secondly, the ratios. </div> ``` # %load _solutions/pandas_06_groupby_operations26.py # %load _solutions/pandas_06_groupby_operations27.py # %load _solutions/pandas_06_groupby_operations28.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li> In which years the most films were released?</li> </ul><br> </div> ``` # %load _solutions/pandas_06_groupby_operations29.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>How many leading (n=1) roles were available to actors, and how many to actresses, in the 1950s? 
And in the 2000s?</li> </ul><br> </div> ``` # %load _solutions/pandas_06_groupby_operations30.py # %load _solutions/pandas_06_groupby_operations31.py ```
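To recap the grouper options and group-size counting discussed above, here is a small self-contained sketch. The toy frame below is a made-up stand-in for the titanic data (same column names, invented values), so the exact numbers are for illustration only:

```python
import pandas as pd

# Toy stand-in for the titanic frame used in this notebook
df = pd.DataFrame({
    'Pclass': [1, 1, 2, 3, 3, 3],
    'Sex': ['male', 'female', 'female', 'male', 'male', 'female'],
    'Age': [40, 8, 30, 25, 12, 60],
    'Survived': [1, 1, 1, 0, 1, 0],
})

# Grouper as a list of column names
by_class_sex = df.groupby(['Pclass', 'Sex'])['Survived'].mean()

# Grouper as a boolean Series derived from a column
by_minor = df.groupby(df['Age'] < 18)['Survived'].mean()

# Group sizes vs. value_counts on a column (same counts, possibly different order)
sizes = df.groupby('Pclass').size()
counts = df['Pclass'].value_counts()
print(sizes.to_dict())  # {1: 2, 2: 1, 3: 3}
```

The boolean-Series grouper produces groups labelled `False`/`True`, which is a handy way to compare a subset against its complement without creating a new column.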
``` import tensorflow as tf hello_constant = tf.constant('Hello Tensor Constant') with tf.Session() as sess: output = sess.run(hello_constant) print(output) x = tf.placeholder(tf.string) y = tf.placeholder(tf.float32) z = tf.placeholder(tf.int32) with tf.Session() as sess: output = sess.run(x, feed_dict={x: 'string', y: 123.22, z: 123}) print(output) x = tf.constant(10) y = tf.constant(2) z = tf.subtract(tf.divide(x, y), tf.cast(tf.constant(1), tf.float64)) with tf.Session() as sess: output = sess.run(z) print(output) from tensorflow.examples.tutorials.mnist import input_data def get_weights(n_features, n_labels): return tf.Variable(tf.truncated_normal((n_features, n_labels))) def get_biases(n_labels): return tf.Variable(tf.zeros(n_labels)) def linear(inputs, W, b): return tf.add(tf.matmul(inputs, W), b) def mnist_features_labels(n_labels): """ Gets the first <n> labels from the MNIST dataset :param n_labels: Number of labels to use :return: Tuple of feature list and label list """ mnist_features = [] mnist_labels = [] mnist = input_data.read_data_sets('MNIST_data/', one_hot=True) # In order to make quizzes run faster, we're only looking at 10000 images for mnist_feature, mnist_label in zip(*mnist.train.next_batch(10000)): # Add features and labels if it's for the first <n>th labels if mnist_label[:n_labels].any(): mnist_features.append(mnist_feature) mnist_labels.append(mnist_label[:n_labels]) return mnist_features, mnist_labels # Number of features (28*28 image is 784 features) n_features = 784 # Number of labels n_labels = 3 # Features and Labels features = tf.placeholder(tf.float32) labels = tf.placeholder(tf.float32) # Weights and Biases w = get_weights(n_features, n_labels) b = get_biases(n_labels) # Linear Function xW + b logits = linear(features, w, b) # Training data train_features, train_labels = mnist_features_labels(n_labels) with tf.Session() as session: # Initialize session variables session.run(tf.global_variables_initializer()) # Softmax prediction prediction = tf.nn.softmax(logits) # Cross entropy # This quantifies how far off the predictions were. # You'll learn more about this in future lessons. cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1) # Training loss # You'll learn more about this in future lessons. loss = tf.reduce_mean(cross_entropy) # Rate at which the weights are changed # You'll learn more about this in future lessons. learning_rate = 0.08 # Gradient Descent # This is the method used to train the model # You'll learn more about this in future lessons. optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # Run optimizer and get loss _, l = session.run( [optimizer, loss], feed_dict={features: train_features, labels: train_labels}) # Print loss print('Loss: {}'.format(l)) ```
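The softmax and cross-entropy lines in the cell above are easier to reason about outside of the graph. Here is a plain NumPy sketch (not part of the TensorFlow code) of what `tf.nn.softmax` and the `cross_entropy` expression compute for a single one-hot-labelled sample:

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability; this doesn't change the result
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
labels = np.array([1.0, 0.0, 0.0])  # one-hot target: class 0

prediction = softmax(logits)
# -sum(labels * log(prediction)) reduces to -log(p_correct) for one-hot labels
cross_entropy = -np.sum(labels * np.log(prediction))
```

Because the labels are one-hot, the loss penalizes only the probability assigned to the correct class: it is zero when that probability is 1 and grows without bound as it approaches 0.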
``` import os from glob import glob import pandas as pd import numpy as np from scipy import stats from matplotlib import pyplot as plt from matplotlib.colors import Normalize from matplotlib.backends.backend_pdf import PdfPages from matplotlib import gridspec import json import torch import gpytorch import h5py import collections import scipy import torch import math import seaborn as sns from bnn_priors import prior from bnn_priors.exp_utils import load_samples %matplotlib inline %config InlineBackend.print_figure_kwargs = {'bbox_inches':None} mean_covs = pd.read_pickle("Plot_MNIST_convnet_covariances_data/mean_covs.pkl.gz") ``` # Plot Figure 4 ``` sns.set(context="paper", style="white", font_scale=1.0) plt.rcParams["font.sans-serif"].insert(0, "DejaVu Sans") plt.rcParams.update({ "font.family": "sans-serif", # use serif/main font for text elements "text.usetex": False, # use inline math for ticks "pgf.rcfonts": True, # don't setup fonts from rc parameters "font.size": 10, "axes.linewidth": 0.5, 'ytick.major.width': 0.5, 'ytick.major.size': 0, 'xtick.major.width': 0.5, 'xtick.major.size': 0, "figure.dpi": 300, }) fig_width_pt = 234.8775 inches_per_pt = 1.0/72.27 # Convert pt to inches fig_width = fig_width_pt*inches_per_pt # width in inches norm = Normalize(-0.27, 0.27) margins = dict( left=0.04, right=0.1, top=0.08, bottom=0.05) plots_x = 2 wsep = hsep = 0.015 w_cov_sep = 0.04 cbar_width = 0.03 cbar_wsep = 0.01 height = width = (1 - w_cov_sep*plots_x - wsep*3*plots_x - cbar_wsep - cbar_width - margins['left'] - margins['right'])/plots_x / 3 ttl_marg=10 fig_height_mult = (margins['bottom'] + height*3 + hsep*2 + margins['top']) # make figure rectangular and correct vertical sizes hsep /= fig_height_mult height /= fig_height_mult margins['bottom'] /= fig_height_mult margins['top'] /= fig_height_mult fig = plt.figure(figsize=(fig_width, fig_width *fig_height_mult)) cbar_height = height*3 + hsep*2 key = "net.module.1.weight_prior.p" for y in range(3): for x in 
range(3): bottom = margins['bottom'] + (height + hsep) * (2-y) left = margins['left'] + (width +wsep) * x if x == 0: yticks = [1, 2, 3] else: yticks = [] if y == 2: xticks = [1, 2, 3] else: xticks = [] ax = fig.add_axes([left, bottom, width, height], xticks=xticks, yticks=yticks) #title=f"cov. w/ ({x + 1}, {y +1})") ax.imshow(mean_covs[key][1][y*3+x, :].reshape((3, 3)), cmap=plt.get_cmap('RdBu'), extent=[0.5, 3.5, 3.5, 0.5], norm=norm) ax.plot([x+1], [y+1], marker='x', ls='none', color='white', ms=3) if y==0 and x==1: ttl = ax.set_title("Layer 1 covariance", pad=ttl_marg) key = "net.module.4.weight_prior.p" for y in range(3): for x in range(3): bottom = margins['bottom'] + (height + hsep) * (2-y) left = margins['left'] + (width+wsep)*3 + w_cov_sep + (width +wsep) * x yticks = [] if y == 2: xticks = [1, 2, 3] else: xticks = [] ax = fig.add_axes([left, bottom, width, height], xticks=xticks, yticks=yticks) #title=f"cov. w/ ({x + 1}, {y +1})") mappable = ax.imshow(mean_covs[key][1][y*3+x, :].reshape((3, 3))*64, cmap=plt.get_cmap('RdBu'), extent=[0.5, 3.5, 3.5, 0.5], norm=norm) ax.plot([x+1], [y+1], marker='x', ls='none', color='white', markersize=3) if y==0 and x==1: ttl = ax.set_title("Layer 2 covariance", pad=ttl_marg) cbar_ax = fig.add_axes([margins['left'] + (width+wsep)*3*2 + w_cov_sep + cbar_wsep, margins['bottom'], cbar_width, cbar_height]) fig.colorbar(mappable, cax=cbar_ax, ticks=[-0.27, -0.15, 0, 0.15, 0.27]) fig.savefig("../figures/210126-mnist-covariances-all.pdf") ``` # Load weights of the MNIST network, that doesn't have batchnorm ``` directories = [*map(str, range(8))] samples = collections.defaultdict(lambda: [], {}) param_keys = None for d in directories: with h5py.File(f"../logs/sgd-no-weight-decay/mnist_classificationconvnet/{d}/samples.pt", "r") as f: if param_keys is None: param_keys = [k for k in f.keys() if k.endswith(".p")] for key in param_keys: samples[key].append(f[key][-1]) for k in samples.keys(): samples[k] = np.stack(samples[k]) 
samples.keys() samples_reshaped = {} mean_covs = {} for k in samples.keys(): if k in ["net.module.1.weight_prior.p", "net.module.4.weight_prior.p"]: #if k == "net.module.8.weight_prior.p": # samples_reshaped[k] = samples[k].transpose((0, 2, 1)).reshape((-1, 10)) #else: samples_reshaped[k] = samples[k].reshape((-1, 9)) mean_covs[k] = (np.mean(samples_reshaped[k], 0), np.cov(samples_reshaped[k], rowvar=False)) else: samples_reshaped[k] = samples[k] mean_covs[k] = (np.mean(samples[k]), np.var(samples[k])) pd.to_pickle(mean_covs, "3.4.1_mean_covs.pkl.gz") ```
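The `np.cov(..., rowvar=False)` call above treats rows as samples and columns as variables. A quick sketch of that convention (toy numbers, not the notebook's weight samples):

```python
import numpy as np

# Three samples of two perfectly correlated variables
samples = np.array([[1.0, 2.0],
                    [2.0, 4.0],
                    [3.0, 6.0]])

# rowvar=False: rows are observations, columns are variables,
# so the result is an (n_variables, n_variables) covariance matrix
cov = np.cov(samples, rowvar=False)
print(cov.shape)  # (2, 2)
```

With the default `rowvar=True` the axes are swapped, so for an array of stacked weight samples shaped `(n_samples, n_weights)` the `rowvar=False` form is the one that yields a weight-by-weight covariance.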
# Scripting `bettermoments` In this Notebook, we will step through how to integrate the moment map making process (in this case, a zeroth moment map, or integrated intensity map), into your workflow. This should elucidate the steps that are taken automatically when using the [command line interface](https://bettermoments.readthedocs.io/en/latest/user/command_line.html). ### Standard Imports ``` import bettermoments as bm ``` ### Load Up the Data Here the `load_cube` function will return a 3D array for the data and a 1D array of the velocity axis (this should automatically convert any frequency axis to a velocity axis). Note that as we are collapsing along the velocity axis, we have no need for spatial axes, so we do not bother creating them. ``` path = '../../../gofish/docs/user/TWHya_CS_32.fits' data, velax = bm.load_cube(path) ``` ### Spectrally Smooth the Data If you have relatively noisy data, a low level of smoothing along the spectral axis can be useful. `bettermoments` allows for two different methods: a convolution with a simple top-hat function, or the use of a [Savitzky-Golay filter](https://en.wikipedia.org/wiki/Savitzky-Golay_filter). For a top-hat convolution, you need only specify `smooth`, which describes the kernel size in the number of channels. For a Savitzky-Golay filter, you must also provide `polyorder` which describes the polynomial order which is used for the fitting. Note that `smooth` must always be larger than `polyorder`. It is important to remember that while a small level of smoothing can help with certain aspects of moment map creation, it also distorts the line profile (for example broadening the line in the case of a simple top-hat convolution). Such systematic effects must be considered when analysing the resulting moment maps. Here we just consider a smoothing with a top-hat kernel that is 3 channels wide. 
``` smoothed_data = bm.smooth_data(data=data, smooth=3, polyorder=0) ``` ### Estimate the Noise of the Data We require an estimate of the noise of the data for two reasons: 1. For the estimation of the uncertainties of the moment maps. 2. For applying any threshold clipping. To make this estimate, we assume that the noise in the image is constant both spatially (such that the primary beam correction is minimal) and spectrally. To avoid line emission, we consider the RMS of the line-free channels, defined as the first `N` and last `N` channels in the data cube. ``` rms = bm.estimate_RMS(data=data, N=5) ``` Note that the noise estimated this way will differ depending on whether you use the `smoothed_data` or the original `data` array. When using the command line interface for `bettermoments`, the RMS will be estimated on the _smoothed_ data. ``` rms_smoothed = bm.estimate_RMS(data=smoothed_data, N=5) print('RMS = {:.1f} mJy/beam (original)'.format(rms * 1e3)) print('RMS = {:.1f} mJy/beam (smoothed)'.format(rms_smoothed * 1e3)) ``` ### User-Defined Mask Sometimes you will want to mask particular regions within your PPV cube in order to disentangle various components, for example if you have multiple hyperfine components that you want to distinguish. Often the easiest way to do this is to define a mask elsewhere and apply it to the data you are collapsing (see for example the [keplerian_mask.py](https://github.com/richteague/keplerian_mask) routine to generate a Keplerian mask). Through the `get_user_mask` function, you can load a mask (a 3D array of 1s and 0s) saved as a FITS file, and apply that to the data. If no `user_mask_path` is provided, then this simply returns an array with the same shape as `data` filled with 1s. Note that the user-defined mask _must_ share the same pixel and channel resolution, and be the same shape as the data. No aligning or reshaping is done internally with `bettermoments`.
``` user_mask = bm.get_user_mask(data=data, user_mask_path=None) ``` ### Threshold Mask A threshold mask, or a 'sigma-clip', is one of the most common approaches to masking used in moment map creation. The `get_threshold_mask` function provides several features which will help you optimize your threshold masking. The `clip` argument takes a tuple of values, `clip=(-3.0, 3.0)`, describing the minimum and maximum SNR of the pixels that will be removed (this is very similar to the `excludepix` argument in [CASA's immoments task](https://casa.nrao.edu/casadocs/casa-6.1.0/global-task-list/task_immoments/about), but with values given in units of sigma, the noise, rather than flux units). `clip` also accepts just a single value, and will convert that to a symmetric clip as above; for example, `clip=(-2.0, 2.0)` and `clip=2.0` are equivalent. The option to provide a tuple allows for asymmetric clip ranges, for example `clip=(-np.inf, 3.0)`, to remove all pixels below 3 sigma, including high-significance but negative pixels. It has been found that threshold masks can lead to large artifacts in the resulting moment map if there are large intensity gradients in low-SNR regions of the PPV cube. To combat this, users have the option to first smooth the data (only temporarily, to generate the threshold mask), which will allow for more conservative contours in the threshold mask. This can be achieved by providing the FWHM of the Gaussian kernel used for this spatial smooth as `smooth_threshold_mask` in number of pixels. Note that because the data is smoothed, the effective RMS will drop and so the RMS is re-estimated internally on the smoothed image. Here we mask all pixels with an SNR less than 2 sigma, i.e., $|I \, / \, \sigma| < 2$. ``` threshold_mask = bm.get_threshold_mask(data=data, clip=2.0, smooth_threshold_mask=0.0) ``` ### Channel Mask For many PPV cubes, the line emission of interest only spans a small range of the velocity axis.
This region can be easily selected using the `firstchannel` and `lastchannel` arguments in `get_channel_mask`. Note that the `lastchannel` argument also accepts negative values, following the standard Python indexing convention, i.e., `lastchannel=-1` results in the final channel being the last. `get_channel_mask` also accepts a `user_mask` argument, which is an array the same size as the velocity axis of the data, specifying which channels to include. This may be useful if you want to integrate over several hyperfine components while excluding the line-free regions between them. ``` channel_mask = bm.get_channel_mask(data=data, firstchannel=0, lastchannel=-1) ``` ### Mask Combination All the masks can be easily combined, either with `AND` or `OR`, with the `get_combined_mask` function. This can then be applied to the data used for the moment map creation through a simple multiplication. Note that for all collapse functions, pixels with a value of 0 will be ignored. ``` mask = bm.get_combined_mask(user_mask=user_mask, threshold_mask=threshold_mask, channel_mask=channel_mask, combine='and') masked_data = smoothed_data * mask ``` ### Collapse the Data Now that we have a smoothed and masked dataset, we can collapse it along the velocity axis through several different methods (see the [collapse functions documentation](https://bettermoments.readthedocs.io/en/latest/user/collapse_cube.html)). In general, most functions require the velocity axis, `velax`, the masked data, `data`, and the RMS of the data, `rms`. The available functions can be checked through the `available_collapse_methods` function such that the desired function is `collapse_{methodname}`. ``` bm.available_collapse_methods() ``` Each function will return `moments`, an `(N, Y, X)` shaped array, where `(Y, X)` is the shape of a single channel of the data and `N` is twice the number of statistics (with the uncertainty of each value interleaved).
To see which parameters are returned for each `collapse_method`, we can use the `collapse_method_products` function. For the `'zeroth'` method: ``` bm.collapse_method_products('zeroth') ``` So we have the zeroth moment, `M0`, and its associated uncertainty, `dM0`. Here we will collapse the cube to a zeroth moment (integrated intensity) map. ``` moments = bm.collapse_zeroth(velax=velax, data=masked_data, rms=rms) ``` ### Save the Data to FITS It is possible to work with the data directly; however, it is often useful to save the maps for later. The `save_to_FITS` function will split up the `moments` array and save each one as a new FITS file, replacing the `.fits` extension with `_{moment_name}.fits` for easy identification. The header will be copied from the original file. ``` bm.save_to_FITS(moments=moments, method='zeroth', path=path) ```
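Conceptually, the zeroth moment produced above is just the intensity integrated over the velocity axis. A minimal NumPy illustration of that idea on a synthetic spectrum (this is a sketch of the concept, not the `bettermoments` implementation):

```python
import numpy as np

# A synthetic spectrum: unit-amplitude Gaussian line, sigma = 1 km/s
velax = np.linspace(-5.0, 5.0, 101)   # velocity axis [km/s]
spectrum = np.exp(-0.5 * velax**2)    # intensity in each channel

# Zeroth moment: integrate intensity over velocity, M0 = int I dv
M0 = np.trapz(spectrum, velax)

# Analytic value for this line profile is sqrt(2 * pi) ~ 2.507
print(round(M0, 3))
```

Applied per pixel of a masked PPV cube, this integral gives the integrated-intensity map; the masking steps above simply control which channels and pixels contribute to the sum.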
## Classes for callback implementors ``` from fastai.gen_doc.nbdoc import * from fastai.callback import * from fastai.basics import * ``` fastai provides a powerful *callback* system, which is documented on the [`callbacks`](/callbacks.html#callbacks) page; look on that page if you're just looking for how to use existing callbacks. If you want to create your own, you'll need to use the classes discussed below. A key motivation for the callback system is that additional functionality can be entirely implemented in a single callback, so that it's easily read. By using this trick, we will have different methods categorized in different callbacks where we will find clearly stated all the interventions the method makes in training. For instance in the [`LRFinder`](/callbacks.lr_finder.html#LRFinder) callback, on top of running the fit function with exponentially growing LRs, it needs to handle some preparation and clean-up, and all this code can be in the same callback so we know exactly what it is doing and where to look if we need to change something. In addition, it allows our [`fit`](/basic_train.html#fit) function to be very clean and simple, yet still easily extended. So far in implementing a number of recent papers, we haven't yet come across any situation where we had to modify our training loop source code - we've been able to use callbacks every time. ``` show_doc(Callback) ``` To create a new type of callback, you'll need to inherit from this class, and implement one or more methods as required for your purposes. Perhaps the easiest way to get started is to look at the source code for some of the pre-defined fastai callbacks. You might be surprised at how simple they are! 
For instance, here is the **entire** source code for [`GradientClipping`](/train.html#GradientClipping): ```python @dataclass class GradientClipping(LearnerCallback): clip:float def on_backward_end(self, **kwargs): if self.clip: nn.utils.clip_grad_norm_(self.learn.model.parameters(), self.clip) ``` You generally want your custom callback constructor to take a [`Learner`](/basic_train.html#Learner) parameter, e.g.: ```python @dataclass class MyCallback(Callback): learn:Learner ``` Note that this allows the callback user to just pass your callback name to `callback_fns` when constructing their [`Learner`](/basic_train.html#Learner), since that always passes `self` when constructing callbacks from `callback_fns`. In addition, by passing the learner, this callback will have access to everything: e.g. all the inputs/outputs as they are calculated, the losses, and also the data loaders, the optimizer, etc. At any time: - Changing self.learn.data.train_dl or self.learn.data.valid_dl will change them inside the fit function (we just need to pass the [`DataBunch`](/basic_data.html#DataBunch) object to the fit function and not data.train_dl/data.valid_dl) - Changing self.learn.opt.opt (we have an [`OptimWrapper`](/callback.html#OptimWrapper) on top of the actual optimizer) will change it inside the fit function. - Changing self.learn.data or self.learn.opt directly WILL NOT change the data or the optimizer inside the fit function.
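To see why a callback like `GradientClipping` can stay so small, it helps to sketch the dispatch pattern with no library at all. The hook names below mirror the fastai ones, but this is a standalone, simplified illustration rather than fastai code:

```python
# Minimal callback protocol: the trainer calls hook methods at fixed
# points, and each callback overrides only the hooks it cares about.
class Callback:
    def on_train_begin(self, **kwargs): pass
    def on_batch_end(self, **kwargs): pass

class LossLogger(Callback):
    """Records the loss reported after every batch."""
    def __init__(self):
        self.losses = []
    def on_batch_end(self, last_loss=None, **kwargs):
        self.losses.append(last_loss)

def fit(n_batches, callbacks):
    for cb in callbacks:
        cb.on_train_begin(n_batches=n_batches)
    for i in range(n_batches):
        loss = 1.0 / (i + 1)  # stand-in for a real training loss
        for cb in callbacks:
            cb.on_batch_end(last_loss=loss, num_batch=i)

logger = LossLogger()
fit(3, [logger])
print(len(logger.losses))  # one entry per batch
```

The `**kwargs` in every hook is what lets each callback declare only the state it needs while the trainer passes everything, which is exactly the unpacking pattern described next.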
In any of the callbacks you can unpack in the kwargs: - `n_epochs`, contains the number of epochs the training will take in total - `epoch`, contains the number of the current epoch - `iteration`, contains the number of iterations done since the beginning of training - `num_batch`, contains the number of the batch we're at in the dataloader - `last_input`, contains the last input that got through the model (possibly updated by a callback) - `last_target`, contains the last target that got through the model (possibly updated by a callback) - `last_output`, contains the last output produced by the model (possibly updated by a callback) - `last_loss`, contains the last loss computed (possibly updated by a callback) - `smooth_loss`, contains the smoothed version of the loss - `last_metrics`, contains the last validation loss and metrics computed - `pbar`, the progress bar - [`train`](/train.html#train), flag to know if we're in training mode or not ### Methods your subclass can implement All of these methods are optional; your subclass can handle as many or as few as you require. ``` show_doc(Callback.on_train_begin) ``` Here we can initialize anything we need. The optimizer has now been initialized. We can change any hyper-parameters by typing, for instance: ``` self.opt.lr = new_lr self.opt.mom = new_mom self.opt.wd = new_wd self.opt.beta = new_beta ``` ``` show_doc(Callback.on_epoch_begin) ``` This is not technically required since we have `on_train_begin` for epoch 0 and `on_epoch_end` for all the other epochs, yet it makes writing code that needs to be done at the beginning of every epoch easier and more readable. ``` show_doc(Callback.on_batch_begin) ``` Here is the perfect place to prepare everything before the model is called. Example: change the values of the hyperparameters (if we don't do it on_batch_end instead). If we return something, that will be the new value for `xb`, `yb`.
``` show_doc(Callback.on_loss_begin) ``` Here is the place to run some code that needs to be executed after the output has been computed but before the loss computation. Example: putting the output back in FP32 when training in mixed precision. If we return something, that will be the new value for the output. ``` show_doc(Callback.on_backward_begin) ``` Here is the place to run some code that needs to be executed after the loss has been computed but before the gradient computation. Example: `reg_fn` in RNNs. If we return something, that will be the new value for loss. Since the recorder is always called first, it will have the raw loss. ``` show_doc(Callback.on_backward_end) ``` Here is the place to run some code that needs to be executed after the gradients have been computed but before the optimizer is called. ``` show_doc(Callback.on_step_end) ``` Here is the place to run some code that needs to be executed after the optimizer step but before the gradients are zeroed ``` show_doc(Callback.on_batch_end) ``` Here is the place to run some code that needs to be executed after a batch is fully done. Example: change the values of the hyperparameters (if we don't do it on_batch_begin instead) If we return true, the current epoch is interrupted (example: lr_finder stops the training when the loss explodes) ``` show_doc(Callback.on_epoch_end) ``` Here is the place to run some code that needs to be executed at the end of an epoch. Example: Save the model if we have a new best validation loss/metric. If we return true, the training stops (example: early stopping) ``` show_doc(Callback.on_train_end) ``` Here is the place to tidy everything. It's always executed even if there was an error during the training loop, and has an extra kwarg named exception to check if there was an exception or not. 
Examples: save log files, load the best model found during training ``` show_doc(Callback.get_state) ``` This is used internally when trying to export a [`Learner`](/basic_train.html#Learner). You won't need to subclass this function but you can add attribute names to the lists `exclude` or `not_min` of the [`Callback`](/callback.html#Callback) you are designing. Attributes in `exclude` are never saved, attributes in `not_min` only if `minimal=False`. ## Annealing functions The following functions provide different annealing schedules. You probably won't need to call them directly, but would instead use them as part of a callback. Here's what each one looks like: ``` annealings = "NO LINEAR COS EXP POLY".split() fns = [annealing_no, annealing_linear, annealing_cos, annealing_exp, annealing_poly(0.8)] for fn, t in zip(fns, annealings): plt.plot(np.arange(0, 100), [fn(2, 1e-2, o) for o in np.linspace(0.01,1,100)], label=t) plt.legend(); show_doc(annealing_cos) show_doc(annealing_exp) show_doc(annealing_linear) show_doc(annealing_no) show_doc(annealing_poly) show_doc(CallbackHandler) ``` You probably won't need to use this class yourself. It's used by fastai to combine all the callbacks together and call any relevant callback functions for each training stage. The methods below simply call the equivalent method in each callback function in [`self.callbacks`](/callbacks.html#callbacks). ``` show_doc(CallbackHandler.on_backward_begin) show_doc(CallbackHandler.on_backward_end) show_doc(CallbackHandler.on_batch_begin) show_doc(CallbackHandler.on_batch_end) show_doc(CallbackHandler.on_epoch_begin) show_doc(CallbackHandler.on_epoch_end) show_doc(CallbackHandler.on_loss_begin) show_doc(CallbackHandler.on_step_end) show_doc(CallbackHandler.on_train_begin) show_doc(CallbackHandler.on_train_end) show_doc(CallbackHandler.set_dl) show_doc(OptimWrapper) ``` This is a convenience class that provides a consistent API for getting and setting optimizer hyperparameters.
For instance, for [`optim.Adam`](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) the momentum parameter is actually `betas[0]`, whereas for [`optim.SGD`](https://pytorch.org/docs/stable/optim.html#torch.optim.SGD) it's simply `momentum`. As another example, the details of handling weight decay depend on whether you are using `true_wd` or the traditional L2 regularization approach. This class also handles setting different WD and LR for each layer group, for discriminative layer training. ``` show_doc(OptimWrapper.clear) show_doc(OptimWrapper.create) show_doc(OptimWrapper.new) show_doc(OptimWrapper.read_defaults) show_doc(OptimWrapper.read_val) show_doc(OptimWrapper.set_val) show_doc(OptimWrapper.step) show_doc(OptimWrapper.zero_grad) show_doc(SmoothenValue) ``` Used for smoothing loss in [`Recorder`](/basic_train.html#Recorder). ``` show_doc(SmoothenValue.add_value) show_doc(Stepper) ``` Used for creating annealing schedules, mainly for [`OneCycleScheduler`](/callbacks.one_cycle.html#OneCycleScheduler). ``` show_doc(Stepper.step) show_doc(AverageMetric) ``` See the documentation on [`metrics`](/metrics.html#metrics) for more information. ### Callback methods You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality. ``` show_doc(AverageMetric.on_epoch_begin) show_doc(AverageMetric.on_batch_end) show_doc(AverageMetric.on_epoch_end) ``` ## Undocumented Methods - Methods moved below this line will intentionally be hidden ## New Methods - Please document or move to the undocumented section
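The annealing schedules plotted in the section above are tiny pure functions of `(start, end, pct)`. Library-free sketches of the linear and cosine variants, written to match the shapes in the plot (an illustration, not copied from fastai source):

```python
import math

def annealing_linear(start, end, pct):
    # Straight line from start (pct = 0) to end (pct = 1)
    return start + pct * (end - start)

def annealing_cos(start, end, pct):
    # Half-cosine from start (pct = 0) to end (pct = 1)
    cos_out = math.cos(math.pi * pct) + 1  # runs from 2 down to 0
    return end + (start - end) / 2 * cos_out

# Both schedules share their endpoints; they differ only in the path between
print(annealing_linear(2, 1e-2, 0.5), annealing_cos(2, 1e-2, 0.5))
```

A `Stepper` then only needs to track `pct` (the fraction of the schedule completed) and call one of these functions each iteration.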
# What is PyTorch? It’s a Python-based scientific computing package targeted at two sets of audiences: - A replacement for NumPy to use the power of GPUs - a deep learning research platform that provides maximum flexibility and speed ## Getting Started ### Tensors Tensors are similar to NumPy’s ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing. **Here are some high-frequency operations you should get used to.** ``` from __future__ import print_function import cv2 import numpy as np %matplotlib inline # The line above is necessary to show Matplotlib's plots inside a Jupyter Notebook from matplotlib import pyplot as plt import torch ``` ### Construct a 5x3 matrix, uninitialized, using `torch.empty` ``` x = torch.empty(5, 3) print(x) # other examples torch.normal(0,1,[2,2]) torch.randperm(10) torch.linspace(1,10,10) ``` ### Print out the size of a tensor You will be doing this frequently when developing/debugging a neural network ``` x.size() ``` ### Construct a matrix filled with zeros, of dtype floating point 16.
Here is a link to available [types](https://pytorch.org/docs/stable/tensor_attributes.html#torch.torch.dtype). Can you change long to floating point 16 below? <details> <summary>Hint</summary> <p> torch.zeros(5, 3, dtype=torch.float16) </p></details> ``` x = torch.zeros(5, 3, dtype=torch.long) print(x) ``` ### Element operations Examples of element operations ![image.png](attachment:image.png) Do an element-wise add of A and B ``` A = torch.rand(5, 3) B = torch.rand(5, 3) print(A) print(B) print(A + B) ``` ### Alternate method using torch.add ``` # more than one way to do it [operator overloading] torch.add(A, B) A.add(B) ``` ### Addition: providing an output tensor as argument ``` result = torch.empty(5, 3) torch.add(A, B, out=result) print(result) ``` ### Addition: in-place ``` #### adds A to B in place B.add_(A) print(B) ``` <div class="alert alert-info"><h4>Note</h4><p>Any operation that mutates a tensor in-place is post-fixed with an ``_``. For example: ``x.copy_(y)``, ``x.t_()``, will change ``x``.</p></div> ### Linear Alg operations - Matrix Multiply Example ![image.png](attachment:image.png) ``` a = torch.randint(4,(2,3)) b = torch.randint(4,(3,2)) print(a) print(b) # all equivalent!
# 2x3 @ 3x2 -> 2x2
a.mm(b)
torch.matmul(a, b)
torch.mm(a, b)
# note: a.T.mm(a) is (3,2) @ (2,3) -> (3,3), which is NOT the same product
```

### Create a one-hot vector

```
batch_size = 5
nb_digits = 10
# Dummy input that HAS to be 2D for the scatter (you can use view(-1,1) if needed)
y = torch.LongTensor(batch_size, 1).random_() % nb_digits
# One-hot encoding buffer that you create out of the loop and just keep reusing
y_onehot = torch.FloatTensor(batch_size, nb_digits)

# In your for loop
y_onehot.zero_()
y_onehot.scatter_(1, y, 1)
print(y)
print(y_onehot)
```

### Use argmax to grab the index of the highest value

```
A = torch.rand(3, 4, 5)
print(A)
A.argmax(dim=2)
```

### Aggregation over a dimension

```
x = torch.ones([2, 3, 4])
# in-place multiply a selected column
x[0, :, 0].mul_(30)
x

# Suppose the shape of the input is (m, n, k)
# If dim=0 is specified, the shape of the output is (1, n, k) or (n, k)
# If dim=1 is specified, the shape of the output is (m, 1, k) or (m, k)
# If dim=2 is specified, the shape of the output is (m, n, 1) or (m, n)
x.sum(dim=1)
```

### Broadcasting

```
x = torch.ones([10, 10])
y = torch.linspace(1, 10, 10)
print(x.size())
print(y.size())
z = x + y

### Masking
mask = z > 4
print(mask.size())
mask

# Apply mask, but observe dim change
new = z[z > 4]
print(new.size())
new
```

### You can use standard NumPy-like indexing with all bells and whistles!

Example: grab the middle column of A (index = 1)

```
A = torch.rand(3, 3)
print(A)
print(A[:, 1])
```

### Resizing: If you want to resize/reshape a tensor, you can use ``torch.view``:

```
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
```

### If you have a one-element tensor, use ``.item()`` to get the value as a Python number

```
x = torch.randn(1)
print(x)
print(x.item())
```

**Read later:** 100+ Tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random numbers, etc., are described `here <http://pytorch.org/docs/torch>`_.
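The broadcasting example above follows the same right-aligned shape rules as NumPy. As a small pure-Python sketch of just the shape arithmetic (the helper name `broadcast_shape` is ours, not a torch API):

```python
def broadcast_shape(a, b):
    """Compute the broadcast result shape of two shapes.

    Shapes are aligned on their trailing dimensions; two dimensions are
    compatible when they are equal or one of them is 1 (the NumPy/PyTorch rule).
    """
    result = []
    # walk both shapes from the last dimension backwards
    for i in range(1, max(len(a), len(b)) + 1):
        da = a[-i] if i <= len(a) else 1
        db = b[-i] if i <= len(b) else 1
        if da != db and da != 1 and db != 1:
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
        result.append(max(da, db))
    return tuple(reversed(result))

# the (10, 10) + (10,) example from the cell above broadcasts to (10, 10)
print(broadcast_shape((10, 10), (10,)))   # (10, 10)
print(broadcast_shape((2, 1, 4), (3, 1))) # (2, 3, 4)
```

This is only the shape bookkeeping; the actual libraries additionally stride over the data without copying it.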
NumPy Bridge ------------ Converting a Torch Tensor to a NumPy array and vice versa is a breeze. The Torch Tensor and NumPy array will share their underlying memory locations, and changing one will change the other. ### Converting a Torch Tensor to a NumPy Array ``` a = torch.ones(5) print(a) b = a.numpy() print(b) ``` ### See how the numpy array changed in value. ``` a.add_(1) print(a) print(b) ``` Converting NumPy Array to Torch Tensor ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ See how changing the np array changed the Torch Tensor automatically ``` import numpy as np a = np.ones(5) b = torch.from_numpy(a) np.add(a, 1, out=a) print(a) print(b) ``` All the Tensors on the CPU except a CharTensor support converting to NumPy and back. CUDA Tensors ------------ Tensors can be moved onto any device using the ``.to`` method. ``` # let us run this cell only if CUDA is available # We will use ``torch.device`` objects to move tensors in and out of GPU x = torch.rand(2,2,2) if torch.cuda.is_available(): device = torch.device("cuda") # a CUDA device object y = torch.ones_like(x, device=device) # directly create a tensor on GPU x = x.to(device) # or just use strings ``.to("cuda")`` z = x + y print(z) print(z.to("cpu", torch.double)) # ``.to`` can also change dtype together! ``` ### ND Tensors When working with neural networks, you are always dealing with multidimensional arrays. Here are some quick tricks #### Assume A is a 32x32 RGB image ``` ## 3D Tensors import torch A = torch.rand(32,32,3) plt.imshow(A) ``` ### Slicing Tensors - grab 'RED' dimension ``` red_data = A[:,:,0] #0 represents the first channel of RGB red_data.size() ``` ### Swap the RGB dimension and make the tensor a 3x32x32 tensor ``` A_rgb_first = A.permute(2,0,1) print(A_rgb_first.size()) ``` ### Add a BatchSize to our Image Tensor Usually you need to do this to run inference on your trained model ``` Anew = A.unsqueeze(0) print(Anew.size()) ``` ### Drop the tensor dimension. 
Sometimes, as in the example above, you might have a tensor with one of the dimensions equal to one. Use **squeeze()** to drop that dimension.

```
print(Anew.squeeze(0).size())
```
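The shape bookkeeping that `unsqueeze`/`squeeze` perform can be illustrated without torch at all. A hedged pure-Python sketch (the helper names are ours, not PyTorch's):

```python
def unsqueeze_shape(shape, dim):
    """Insert a size-1 axis at position `dim`, like Tensor.unsqueeze(dim)."""
    s = list(shape)
    s.insert(dim, 1)
    return tuple(s)

def squeeze_shape(shape, dim):
    """Drop axis `dim` if (and only if) it has size 1, like Tensor.squeeze(dim)."""
    s = list(shape)
    if s[dim] == 1:
        del s[dim]
    return tuple(s)

# mirrors the image example: (32, 32, 3) -> add batch axis -> drop it again
batched = unsqueeze_shape((32, 32, 3), 0)
print(batched)                    # (1, 32, 32, 3)
print(squeeze_shape(batched, 0))  # (32, 32, 3)
```

Note that, like `Tensor.squeeze(dim)`, the sketch silently leaves the shape alone when the axis is not of size 1.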
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109B Introduction to Data Science ## Lab 8: Recurrent Neural Networks and Introduction to Natural Language Processing **Harvard University**<br/> **Spring 2022**<br/> **Instructors**: Mark Glickman & Pavlos Protopapas<br/> **Lab Leaders**: Marios Mattheakis & Chris Gumb <br/> ## Learning Objectives By the end of this Lab, you should understand how to: - use `keras` for constructing a simple RNN for time-series prediction - perform basic preprocessing on text data (stemming, tokenization, padding, one-hot encoding) - Feed Forward NNs for NLP tasks - add embedding layers to improve the performance - use `keras` simple RNNs for NLP - inspect the embedding space <a id="contents"></a> ## Notebook Contents - [**Simple RNNs**](#rnn_intro) - [Time-series prediction](#timeSeries) - [Activity 1: Forecasting timeseries](#act1) - [**Introduction to NLP**](#NLP_intro) - [Case Study: IMDB Review Dataset](#imdb) - [**Preprocessing Text Data**](#prep) - [Tokenization](#token) - [Stemming](#stem) - [Padding](#pad) - [Numerical Encoding](#encode) - [**Neural Networks for NLP**](#NN) - [Feed Forward Neural Networks](#FFNN) - [Embedding layer](#embedding) - [Activity 2: Recurrent Neural Networks with embeddings](#act2) - [**Extra Material: Inspecting the embedding space**](#SM) ``` import tensorflow as tf from tensorflow.keras.datasets import imdb from tensorflow.keras.models import Sequential, Model from tensorflow.keras.layers import Dense, Embedding, SimpleRNN, Flatten #GRU, LSTM from sklearn.model_selection import train_test_split import tensorflow_datasets from matplotlib import pyplot as plt import numpy as np import pandas as pd # fix random seed for reproducibility np.random.seed(109) import warnings warnings.filterwarnings('ignore') ``` # Simple Recurrent Neural Networks (RNNs) <div id='rnn_intro'> An RNN is similar to 
a FFNN in that there is an input layer, a hidden layer, and an output layer. The input layer is fully connected to the hidden layer, and the hidden layer is fully connected to the output layer. However, the crux of what makes it a **recurrent** neural network is that the hidden layer for a given time _t_ is not only based on the input layer at time _t_ but also the hidden layer from time _t-1_.

Here's a popular blog post on [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).

In Keras, the vanilla RNN unit is implemented in the `SimpleRNN` layer:

```
tf.keras.layers.SimpleRNN(
    units,
    activation='tanh',
    use_bias=True,
    kernel_initializer='glorot_uniform',
    recurrent_initializer='orthogonal',
    bias_initializer='zeros',
    kernel_regularizer=None,
    recurrent_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    recurrent_constraint=None,
    bias_constraint=None,
    dropout=0.0,
    recurrent_dropout=0.0,
    return_sequences=False,
    return_state=False,
    go_backwards=False,
    stateful=False,
    unroll=False,
    **kwargs
)
```

For more details check Keras' documentation: https://www.tensorflow.org/api_docs/python/tf/keras/layers/SimpleRNN.

As you can see, recurrent layers in Keras take many arguments. We only need to be concerned with `units`, which specifies the size of the hidden state, and `return_sequences`, which will be discussed shortly. For the moment it is fine to leave it set to the default of `False`.

As you will see next week, simple RNNs have some serious problems and limitations, like the vanishing/exploding gradient issue. Due to these limitations, the simple RNN unit tends not to be used much in practice. For this reason it seems that the Keras developers neglected to implement GPU acceleration for this layer! Later in the lab, you will notice that training an RNN is slower than training an FFNN even when the RNN has fewer parameters.
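The recurrence described above (the hidden state at time _t_ mixing the input at _t_ with the hidden state at _t-1_) can be sketched in a few lines of NumPy. This is an illustrative toy forward pass, not Keras' actual `SimpleRNN` internals, and the weight names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

units, n_features, timesteps = 4, 3, 5
W_x = rng.normal(size=(n_features, units))  # input -> hidden weights
W_h = rng.normal(size=(units, units))       # hidden -> hidden (recurrent) weights
b = np.zeros(units)

x_seq = rng.normal(size=(timesteps, n_features))
h = np.zeros(units)  # initial hidden state

for x_t in x_seq:
    # the recurrence: the new state mixes the current input and the previous state
    h = np.tanh(x_t @ W_x + h @ W_h + b)

print(h.shape)  # final hidden state: one vector of size `units`
```

With `return_sequences=True` Keras would instead keep every intermediate `h`, one per timestep, rather than only the final one.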
## Time-series prediction <div id = 'timeSeries'>

RNNs are effective at learning from sequential data like time series and text. Let's start this journey into RNNs by predicting a noisy time series.

Generate some synthetic sequential noisy data:

```
N = 1000
Tp = 800

t = np.arange(0, N)
x = np.sin(0.02*t)*np.sin(0.05*t) + 2*np.exp(-(t-500)**2/1000)

# Add gaussian (white) noise
x += np.random.rand(N)
df = pd.DataFrame(x)

plt.plot(t, x, 'k')
plt.xlabel('Time'); plt.ylabel('Series')
```

#### Split data into training and testing sets

Note, this is forecasting, so we do not know the future.

```
values = df.values
train, test = values[0:Tp,:], values[Tp:N,:]

plt.plot(df[0:Tp], 'b', label='training')
plt.plot(df[Tp:N], 'g', label='testing')
plt.axvline(df.index[Tp], c="r")
plt.xlabel('Time'); plt.ylabel('Series')
plt.legend()
```

#### Prepare the data

RNNs require an input sequence containing `n` elements; here we call this length the `step`. Let's understand this concept through two simple cases. Consider the input `x` and the output `y`:

- For step=1:
    - x=[1,2,3,4,5]
    - y=[2,3,4,5,6]
- For step=2:
    - x=[ (1,2), (2,3), (3,4), (4,5) ]
    - y=[3,4,5,6]

The sizes of `x` and `y` are different. We can fix this by appending `step` elements to the training and test data.

```
print(train.shape, test.shape)

step = 4

# add step elements into train and test
test = np.append(test, np.repeat(test[-1,], step))
train = np.append(train, np.repeat(train[-1,], step))

print(train.shape, test.shape)
```

Convert the datasets into matrices using the step value, as shown in the explanation above.
```
def convertToMatrix(data, step):
    X, Y = [], []
    for i in range(len(data)-step):
        d = i+step
        X.append(data[i:d,])
        Y.append(data[d,])
    return np.array(X), np.array(Y)

trainX, trainY = convertToMatrix(train, step)
testX, testY = convertToMatrix(test, step)
print('Shapes of the training dataset for (x,y): ', trainX.shape, trainY.shape)
print('Shapes of the testing dataset for (x,y) : ', testX.shape, testY.shape)
```

Finally, we reshape `trainX` and `testX` to fit with the Keras RNN model, which requires three-dimensional input data.

```
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
print(trainX.shape, testX.shape)

model = Sequential()
# Here, we add the RNN unit. Keras makes it easy for us
model.add(SimpleRNN(units=32, input_shape=(1,step), activation="relu"))
# model.add(Dense(8, activation="relu"))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()

model.fit(trainX, trainY, epochs=100, batch_size=32, verbose=0)

trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
# concatenate train and test predictions for plotting purposes
predicted = np.concatenate((trainPredict, testPredict), axis=0)

trainScore = model.evaluate(trainX, trainY, verbose=0)
testScore = model.evaluate(testX, testY, verbose=0)
print('Train score: ', trainScore)
print('Test score: ', testScore)

index = df.index.values
plt.plot(df[0:Tp], 'b', label='training')
plt.plot(df[Tp:N], 'g', label='testing')
# plt.plot(index, predicted)
plt.plot(predicted, 'm', label='network')
plt.axvline(df.index[Tp], c="r")
plt.xlabel('Time'); plt.ylabel('Series')
plt.legend()
```

# Activity 1 <div id='act1'></div>

- Repeat the above experiment for different steps in the range [1, 10, 100].
- Does the step affect the performance?
Make some comments.

```
# your code here
plt.figure(figsize=[16,6])

## define step range
step_range = [1, 10, 100]

i = 0
for step in step_range:
    ## repeat this step because we change train/test in the loop
    values = df.values
    train, test = values[0:Tp,:], values[Tp:N,:]

    ## add step elements into train and test
    test = np.append(test, np.repeat(test[-1,], step))
    train = np.append(train, np.repeat(train[-1,], step))

    ## Convert to matrix (uses the helper function 'convertToMatrix' defined earlier)
    trainX, trainY = convertToMatrix(train, step)
    testX, testY = convertToMatrix(test, step)

    ## reshape X sets to be 3-dimensional as the RNN unit expects
    trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
    testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))

    ## Construct the network architecture
    model = Sequential()
    model.add(SimpleRNN(units=32, input_shape=(1,step), activation="relu"))
    model.add(Dense(8, activation="relu"))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')

    ## Evaluate the model
    model.fit(trainX, trainY, epochs=100, batch_size=32, verbose=0)
    trainPredict = model.predict(trainX)
    testPredict = model.predict(testX)

    ## Concatenate train and test predictions and calculate the scores
    predicted = np.concatenate((trainPredict, testPredict), axis=0)
    trainScore = model.evaluate(trainX, trainY, verbose=0)
    testScore = model.evaluate(testX, testY, verbose=0)

    ## make some subplots
    ax = plt.subplot(1, 3, i + 1)
    index = df.index.values
    ax.plot(df[0:Tp], 'b', label='training')
    ax.plot(df[Tp:N], 'g', label='testing')
    ax.plot(predicted, 'm', label='network')
    ax.axvline(df.index[Tp], c="r")
    ax.set_xlabel('Time')
    ax.set_title('Step = ' + str(step) + '\n Train score = ' + str(round(trainScore,2)) + '; Test score = ' + str(round(testScore,2)))
    ax.legend()
    i += 1
```

# Introduction to NLP <div id='NLP_intro'></div>

## Case Study: IMDB Review Classifier <div id='imdb'></div>

<!-- <img src='fig/manyto1.png'
width='300px'> -->

Let's frame our introduction to NLP around the example of a text classifier. Specifically, we'll build and evaluate various models that all attempt to discriminate between positive and negative reviews from the Internet Movie Database (IMDB). The dataset is again made available to us through the tensorflow datasets API.

```
(train, test), info = tensorflow_datasets.load('imdb_reviews', split=['train', 'test'], with_info=True)
```

The helpful `info` object provides details about the dataset.

```
info
```

We see that the dataset consists of text reviews and binary good/bad labels. Here are two examples:

```
labels = {0: 'bad', 1: 'good'}
seen = {'bad': False, 'good': False}
for review in train:
    label = review['label'].numpy()
    if not seen[labels[label]]:
        print(f"text:\n{review['text'].numpy().decode()}\n")
        print(f"label: {labels[label]}\n")
        seen[labels[label]] = True
    if all(val == True for val in seen.values()):
        break
```

# Preprocessing Text Data <div id='prep'></div>

Computers have no built-in knowledge of language and cannot understand text data in any rich way that humans do -- at least not without some help! The first crucial step in natural language processing is to clean and preprocess your data so that your algorithms and models can make use of it.

We'll look at a few preprocessing steps:
- Tokenization
- Stemming
- Padding
- Numerical encoding

Depending on your NLP task, you may (or may not) want to take additional preprocessing steps which we will not cover here. These can include:
- converting all characters to lowercase
- treating each punctuation mark as a token (e.g., , . ! ? are each separate tokens)
- removing punctuation altogether
- separating each sentence with a unique symbol (e.g., <S> and </S>)
- removing words that are incredibly common (e.g., function words, (in)definite articles). These are referred to as 'stopwords'.
- Lemmatizing (replacing words with their 'dictionary entry form')

Useful NLP Python libraries such as [NLTK](https://www.nltk.org/) and [spaCy](https://spacy.io/) provide built-in methods for many of these preprocessing steps.

<!-- <div class='exercise' id='token'><b>Tokenization</b></div></br> -->

## Tokenization <div id='token'></div>

**Tokenization** is the process of breaking a document down into words, punctuation marks, numeric digits, etc. **Tokens** are the atomic units of meaning which our model will be working with. What should these units be? They could be characters, words, or even sentences. For our movie review classifier we will be working at the word level.

For this example we will process just a subset of the original dataset.

```
SAMPLE_SIZE = 10  # number of reviews to be considered
subset = list(train.take(SAMPLE_SIZE))
subset[5]
```

TFDS processes datasets into a standard format and therefore allows for the construction of efficient preprocessing pipelines. But for our own preprocessing example we will be primarily working with Python `list` objects. This gives us a chance to practice the Python **list comprehension**, which is a powerful tool to have at your disposal. It will serve you well when processing arbitrary text which may not already be in a nice TFDS format (such as in the HW 😉).

We'll convert our data subset into X and y lists.

```
X = [x['text'].numpy().decode() for x in subset]
y = [x['label'].numpy() for x in subset]
print(f'X has {len(X)} reviews')
print(f'y has {len(y)} labels')

N_CHARS = 20
print(f'First {N_CHARS} characters of all reviews:\n{[x[:N_CHARS]+"..." for x in X]}\n')
print(f'All labels:\n{y}')
```

Each observation in `X` is a review. A review is a `str` object which we can think of as a sequence of characters. This is indeed how Python treats strings, as made clear by how we are printing 'slices' of each review in the code cell above.<br> In this example, we will work at a word level.
This means that our observations should be organized as **sequences of words** rather than sequences of characters. In general, we could prepare our data in other ways, e.g., at a character level.

```
# list comprehensions again to the rescue!
X_ = [x.split() for x in X]  # keep this temporary object for comparison purposes; we will see why shortly
X = [x.split() for x in X]
```

Now let's look at the first 10 **tokens** in the first 2 reviews.

```
print('Review 1: ', X[0][:10])
print('Review 2: ', X[1][:10])
```

## Stemming <div id='stem'></div>

**Stemming** is the process of reducing morphological variants to their root/base word. For example, a stemming algorithm reduces the words "chocolates", "chocolatey", "choco" to the root word "chocolate", or the words "likes", "liked", "likely", "liking" to "like". Stemming is desirable as it may reduce redundancy, since most of the time the word stem and its inflected/derived words mean the same thing.

Here, we use the package **Natural Language Tool Kit (NLTK)**; for more information check [here](https://www.nltk.org/api/nltk.stem.html).

```
from nltk.stem import PorterStemmer

# object for stemming
ps = PorterStemmer()

# perform stemming on all the sentences
X = [[ps.stem(w) for w in x] for x in X]
```

The nested list comprehension above is equivalent to this explicit loop:

```
i = 0
for words in X:
    j = 0
    for w in words:
        X[i][j] = ps.stem(w)
        j += 1
    i += 1
```

Let's compare the words before and after stemming.

```
for i in range(20):
    print(X_[0][i], ' --> ', X[0][i])
```

<div style="background-color:#b3e6ff"> <b>Q</b>: Should we always use stemming? </div>

In classification tasks (like sentiment analysis) stemming is fine. But what about in a text generation task?

## Padding <div id='pad'></div>

Let's take a look at the lengths of the reviews in our subset.

```
[len(x) for x in X]
```

If we were training our RNN one sentence at a time, it would be okay to have sentences of varying lengths.
However, as with any neural network, it can sometimes be advantageous to train inputs in batches. When doing so with RNNs, our input tensors need to be of the same length/dimensions.

Here are two examples of tokenized reviews padded to have a length of 5.

```
['I', 'loved', 'it', '<PAD>', '<PAD>']
['It', 'stinks', '<PAD>', '<PAD>', '<PAD>']
```

Now let's pad our own examples. Note that 'padding' in this context also means truncating sequences that are longer than our specified max length.

```
MAX_LEN = 500
PAD = '<PAD>'

# truncate
X = [x[:MAX_LEN] for x in X]

# pad
for x in X:
    while len(x) < MAX_LEN:
        x.append(PAD)

[len(x) for x in X]
```

Now all reviews have the same length!

## Numerical Encoding <div id='encode'></div>

If each review in our dataset is an observation, then the features of each observation are the tokens, in this case, words. But these words are still **strings**. Our machine learning methods require us to be able to multiply our features by weights. If we want to use these words as inputs for a neural network we'll have to convert them into some **numerical representation**.

One solution is to create a **one-to-one mapping** between unique words and integers. If the six sentences below were our entire corpus, our conversion would look like this:

1. i have books - [1, 4, 2]
2. interesting books are useful - [6, 2, 5, 8]
3. i have computers - [1, 4, 3]
4. computers are interesting and useful - [3, 5, 6, 9, 8]
5. books and computers are both valuable - [2, 9, 3, 5, 11, 10]
6. bye bye - [7, 7]

i-1, books-2, computers-3, have-4, are-5, interesting-6, bye-7, useful-8, and-9, valuable-10, both-11

To accomplish this we'll first need to know what all the unique words are in our dataset.

```
all_tokens = [word for review in X for word in review]

# sanity check
len(all_tokens), sum([len(x) for x in X])
```

Casting our `list` of words into a `set` is a great way to get all the *unique* words in the data. This is how we build our **vocabulary**.
```
vocab = sorted(set(all_tokens))
print('Unique words in our vocabulary:', len(vocab))
```

You can easily check that the vocabulary will be larger if stemming is not applied. Check it for yourself.

Now we need to create a mapping from words to integers. For this we will use a **dictionary comprehension**.

```
word2idx = {word: idx for idx, word in enumerate(vocab)}
word2idx
```

We repeat the process, this time mapping integers to words.

```
idx2word = {idx: word for idx, word in enumerate(vocab)}
idx2word
```

Now, perform the mapping to encode the observations in our subset. One more **nested list comprehension**!

```
X_proc = [[word2idx[word] for word in review] for review in X]
X_proc[0][:10], X_proc[1][:10]
```

# Neural Networks for NLP <div id='NN'></div>

`X_proc` is a list of lists, but if we are going to feed it into a `keras` model we should convert both it and `y` into `numpy` arrays.

Just a reminder that `y` is the response variable:

```
X = [x['text'].numpy().decode() for x in subset]
y = [x['label'].numpy() for x in subset]
```

```
X_proc = np.hstack(X_proc).reshape(-1, MAX_LEN)
y = np.array(y)
print(X_proc.shape, y.shape)
X_proc, y
```

## Feed Forward Neural Network <div id='FFNN'></div>

Now, just to show that we've successfully processed the data, we perform a train-test split and feed it into an FFNN.

```
X_train, X_test, y_train, y_test = train_test_split(X_proc, y, test_size=0.2, stratify=y)

model = Sequential()
model.add(Dense(250, activation='relu', input_dim=MAX_LEN))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=5, batch_size=2, verbose=2)
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
```

It worked! Is this a good performance? Well, our subset is balanced and very small, so we shouldn't get excited about these results.
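The encode/decode pair built above is a lossless round trip as long as every token is in the vocabulary. A tiny self-contained sketch with a toy token list (not the IMDB data):

```python
# toy corpus standing in for the stemmed review tokens
tokens = ['i', 'have', 'books', 'and', 'computers']

# same construction as in the notebook: sorted unique words, then two mappings
vocab = sorted(set(tokens))
word2idx = {word: idx for idx, word in enumerate(vocab)}
idx2word = {idx: word for word, idx in word2idx.items()}

encoded = [word2idx[w] for w in tokens]
decoded = [idx2word[i] for i in encoded]

print(encoded)
print(decoded)  # identical to `tokens`: the mapping round-trips losslessly
```

Real text breaks this guarantee as soon as an unseen word appears, which is exactly why the Keras loader below reserves a special `<UNK>` index.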
Note that adding more layers or neurons does not improve the performance; check it on your own! <br>

### Load more clean data

The IMDB dataset is very popular, so `keras` also includes an alternative method for loading the data. This method can save us a lot of time for many reasons:
- Cleaned text with less meaningless punctuation
- Pre-tokenized and numerically encoded
- Allows us to specify maximum vocabulary size
- more ...

```
from tensorflow.keras.datasets import imdb

# We want to have a finite vocabulary to make sure that our word matrices are not arbitrarily large
MAX_VOCAB = 10000
INDEX_FROM = 3  # word index offset

(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=MAX_VOCAB, index_from=INDEX_FROM)
```

`get_word_index` will load a JSON object we can store in a dictionary. This gives us the word-to-integer mapping.

```
word2idx = imdb.get_word_index(path='imdb_word_index.json')
word2idx = {k: (v + INDEX_FROM) for k, v in word2idx.items()}
word2idx["<PAD>"] = 0
word2idx["<START>"] = 1
word2idx["<UNK>"] = 2
word2idx["<UNUSED>"] = 3
word2idx

idx2word = {v: k for k, v in word2idx.items()}
idx2word
```

We can see that the text data is already preprocessed for us.

```
print('Number of reviews', len(X_train))
print('Length of first and fifth review before padding', len(X_train[0]), len(X_train[4]), '\n')
print('First review: ', X_train[0], '\n')
print('First label: ', y_train[0], '\n')
```

Here we use the index-to-word mapping we created from the loaded JSON file to view a review in its original form.

```
def show_review(x):
    review = ' '.join([idx2word[idx] for idx in x])
    print(review)

show_review(X_train[0])
```

NOTE: This text does not come with **padding** or **stemming**.

Looking at the distribution of lengths will help us determine a reasonable length to pad to.
```
plt.hist([len(x) for x in X_train])
plt.title('review lengths');
```

We saw one way of doing this earlier, but Keras actually has a built-in `pad_sequences` helper function. This handles both padding and truncating. By default, padding is added to the *beginning* of a sequence.

<div class="exercise" style="background-color:#b3e6ff"> <b>Q</b>: Why might we want to truncate? Why might we want to pad from the beginning of a sequence (sentence in this case)? </div>

- Unless we truncate, we need to pad every sentence to the length of the longest sentence. That would require too much padding, introducing a lot of useless information and long vectors that can be computationally costly.
- Padding at the beginning of a sentence keeps the most important information at the end of the sequence, which sometimes enhances performance since it keeps the 'short' memory more informative.

```
from tensorflow.keras.preprocessing.sequence import pad_sequences

MAX_LEN = 500
X_train = pad_sequences(X_train, maxlen=MAX_LEN, padding='pre')  # padding='post' will pad at the end of a sequence
X_test = pad_sequences(X_test, maxlen=MAX_LEN, padding='pre')

print('Length of first and fifth review after padding', len(X_train[0]), len(X_train[4]))
print("Note that earlier the lengths were 218 and 147.")

print(X_train.shape)
X_train[0]
```

## Model 1: Naive Feed-Forward Network <div id='FFNN'></div>

```
model = Sequential(name='Naive_FFNN')
model.add(Dense(250, activation='relu', input_dim=MAX_LEN))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=128, verbose=2)
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
```

<div class="exercise" style="background-color:#b3e6ff"> <b>Q</b>: Why was the performance so poor? How could we improve our encoding?
<b>A</b>: The 'magic' Embedding Layer </div>

## Model 2: Feed-Forward Network with Embeddings <div id='embedding'></div>

<img src='wordembedding2.png' width=450px>

An embedding is a linear projection from one vector space to another. In NLP, we usually use embeddings to project the **sparse one-hot encodings** of words onto **a more compact, lower-dimensional** continuous space. We can view this embedding layer process as a transformation from $\mathbb{R}^\text{inp} \rightarrow$ $\mathbb{R}^\text{emb}$.

This **not only reduces dimensionality** but also **allows semantic similarities** between tokens to be captured by 'similarities' between the embedding vectors. This was not possible with one-hot encoding, as all vectors there were orthogonal to one another.

<img src='wordembedding.png' width=450px>

It is also possible to load pretrained embeddings that were learned from giant corpora. This would be an instance of transfer learning. If you are interested in learning more, start with the astronomically impactful papers of [word2vec](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) and [GloVe](https://www.aclweb.org/anthology/D14-1162.pdf). The next **Advanced Section** will focus on *word2vec*.

In Keras we use the [`Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer:

```
tf.keras.layers.Embedding(
    input_dim,
    output_dim,
    embeddings_initializer='uniform',
    embeddings_regularizer=None,
    activity_regularizer=None,
    embeddings_constraint=None,
    mask_zero=False,
    input_length=None,
    **kwargs
)
```

We'll need to specify the `input_dim` and `output_dim`. Since we are working with sequences, we also need to set the `input_length`.
Let's implement this.

```
EMBED_DIM = 2

model.reset_states()
model = Sequential(name='embedding_FFNN')

## EMBEDDING AND FLATTEN LAYERS
model.add(Embedding(MAX_VOCAB, EMBED_DIM, input_length=MAX_LEN))
model.add(Flatten())
#-
model.add(Dense(250, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=128, verbose=2)
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
```

Wow! Notice the huge improvement in performance. The embedding layer really helps!

NOTE: We need a flatten layer to correct the dimensions. The embedding layer returns a matrix where each column corresponds to a word encoding, but the next `Dense` layer is expecting a vector instead of a matrix.

# Activity 2: RNN with embedding for NLP <div id='act2'></div>

<img src='simplernn.png' width=300px>

- Construct a network architecture with:
    - an embedding layer
    - a SimpleRNN unit of 250 neurons
    - a Dense layer
- Train this network on the data used in the previous example, namely `X_train`, `y_train`
- Train for 3 epochs with a batch_size=128. It is slow because it is not run on GPUs.
- Accordingly, evaluate on the `X_test`, `y_test` datasets
- Report the accuracy score on the testing set
- Can you see any improvement compared to the FFNN model?
Make some comments.

```
# Your code here
model = Sequential(name='SimpleRNN')
model.add(Embedding(MAX_VOCAB, EMBED_DIM, input_length=MAX_LEN))
# model.add(tf.keras.layers.GRU(250))  # for a GRU implementation
model.add(SimpleRNN(250))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

model.fit(X_train, y_train,
          validation_data=(X_test, y_test),
          # validation_split=0.2,
          epochs=3, batch_size=128)

scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
```

Notice that we do not get any improvement compared to FFNNs. What is going on here? Why does the FFNN perform better than the RNN?

It is because this task is extremely easy, and the network does not really need memory to make a good prediction. Just a few key words appearing in the text, like "terrible" or "amazing", can determine the prediction. In more challenging tasks, like multi-class classification and text generation, memory is crucial, and recurrence is one way to provide it. Next week you will see some more efficient RNN architectures like **LSTM** and **GRU**. These are much more efficient RNNs and can also be implemented on GPUs.
# Extra Material <div id='SM'></div>

## Inspecting the embedding space

Let's train the FFNN with the embedding layer again.

```
EMBED_DIM = 2

model.reset_states()
model = Sequential(name='embedding_FFNN')
model.add(Embedding(MAX_VOCAB, EMBED_DIM, input_length=MAX_LEN))
model.add(Flatten())
model.add(Dense(250, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=128, verbose=0)
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
```

#### Get access to the embeddings, i.e., the embedding (latent) space

```
from tensorflow.keras import backend

# with a Sequential model
get_embed_out = backend.function(
    [model.layers[0].input],
    [model.layers[1].output])

layer_output = get_embed_out([X_test[0]])
# layer_output = get_embed_out([X_train[0]])
print(type(layer_output), len(layer_output), layer_output[0].shape)

words = layer_output[0]
plt.plot(words[:,0], words[:,1], 'bo')
```

#### Create a list of some representative words and check where they live in the embedding space. Can you see any meaningful pattern?

```
review = ['great', 'pleasure', 'good', 'awesome', 'movie', 'and', 'was', 'bad', 'boring', 'crap']
enc_review = tf.constant([word2idx[word] for word in review])
enc_review

words = get_embed_out([enc_review])[0]

plt.figure(figsize=[10,10])
plt.plot(words[:,0], words[:,1], 'ob')
for i, txt in enumerate(review):
    plt.annotate(txt, (words[i,0], words[i,1]), size=18)
```
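A common way to quantify patterns like the good/bad clusters in the plot above is the cosine similarity between embedding vectors. Here is a minimal pure-Python sketch using made-up 2-D vectors (illustrative values, not the trained embeddings):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1 = same direction, -1 = opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# hypothetical 2-D embeddings: 'great' and 'awesome' point one way, 'bad' another
embeddings = {'great': (0.9, 0.8), 'awesome': (0.8, 0.9), 'bad': (-0.7, -0.9)}

print(cosine_similarity(embeddings['great'], embeddings['awesome']))  # close to 1
print(cosine_similarity(embeddings['great'], embeddings['bad']))      # close to -1
```

In a trained model you would apply the same function to rows of `words` above; unlike one-hot vectors, learned embeddings can have similarities anywhere in [-1, 1].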
``` !wget www.di.ens.fr/~lelarge/MNIST.tar.gz !tar -zxvf MNIST.tar.gz import torch import torchvision from torchvision.datasets import MNIST from torch.utils.data import random_split, DataLoader import torch.nn.functional as F from collections import namedtuple import matplotlib.pyplot as plt Dataset = MNIST(root="./", transform=torchvision.transforms.ToTensor()) ValSplit = 0.2 TrainSZ, ValSZ = (int(len(Dataset) *( 1-ValSplit )), int(len(Dataset) * ValSplit)) TrainData, ValData = random_split(Dataset, (TrainSZ, ValSZ)) BatchSZ = 100 TrainLoader = DataLoader(TrainData, BatchSZ, shuffle= True) ValLoader = DataLoader(ValData, BatchSZ) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') device ModelStats = namedtuple('ModelStats', ['Loss', 'Accuracy']) class CNNModel(torch.nn.Module): def __init__(self, InputSZ, NClasses): super().__init__() self.InputSize = InputSZ self.NumClasses = NClasses self.conv1 = torch.nn.Conv2d(1, 32, kernel_size=5) self.conv2 = torch.nn.Conv2d(32, 32, kernel_size=5) self.conv3 = torch.nn.Conv2d(32,64, kernel_size=5) self.fc1 = torch.nn.Linear(3*3*64, 256) self.fc2 = torch.nn.Linear(256, NClasses) def forward(self, x): x = F.relu(self.conv1(x)) x = F.relu(F.max_pool2d(self.conv2(x), 2)) x = F.dropout(x, p=0.5, training=self.training) x = F.relu(F.max_pool2d(self.conv3(x),2)) x = F.dropout(x, p=0.5, training=self.training) x = x.view(-1,3*3*64 ) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) return x def Accuracy(self, Out, lable): val, idx = torch.max(Out, dim = 1) return torch.tensor(torch.sum(idx == lable).item() / len(idx)) def Step(self, Batch, Validation: bool): img, lbl = Batch img = img.to(device) lbl = lbl.to(device) out = self(img) loss = F.cross_entropy(out, lbl) if Validation: accuracy = self.Accuracy(out, lbl) return ModelStats(loss, accuracy) else: return loss def EndValidationEpoch(self, outputs): b_loss = [x.Loss for x in outputs] b_acc = [x.Accuracy for x in outputs] e_loss = 
torch.mean(torch.stack(b_loss)) e_acc = torch.mean(torch.stack(b_acc)) return ModelStats(e_loss.item(), e_acc.item()) def EndEpoch(self, e, res): print(f"Epoch [{e}] Finished with Loss = {res.Loss:.4}, Accuracy = {(res.Accuracy * 100):.4}%") def EvaluateModel(model, Loader): out = [model.Step(b, True) for b in Loader] return model.EndValidationEpoch(out) def Fit(epochs, model, TrainLoader, ValLoader, opt): History = [] for epoch in range(epochs): for batch in TrainLoader: loss = model.Step(batch, False) loss.backward() opt.step() opt.zero_grad() res = EvaluateModel(model, ValLoader) model.EndEpoch(epoch, res) History.append(res) return History MODEL = CNNModel(28*28, 10) MODEL.to(device) LR = 0.001 MOMENTUM = 0.9 EPOCHS = 15 Optimizer = torch.optim.Adam(MODEL.parameters(), lr= LR) Hist = Fit(EPOCHS, MODEL, TrainLoader, ValLoader, Optimizer) Losses = [x.Loss for x in Hist] Accs = [x.Accuracy for x in Hist] plt.plot(Losses, label= "Losses") plt.plot(Accs, label= "Accuracy") plt.title("Loss vs Accuracy") plt.legend() plt.show() TestDataset= MNIST(root="./", train=False,transform=torchvision.transforms.ToTensor()) def Predict_Image(img, model, lbl): plt.imshow(img[0], cmap="gray") img = torch.unsqueeze(img, 0) img = img.to(device) out = model(img) _, prediction = torch.max(out, dim=1) print(f"Model predicts {prediction[0].item()}, Truth = {lbl}") img, lbl = TestDataset[9870] Predict_Image(img, MODEL, lbl) TestLoader = DataLoader(TestDataset, 10, shuffle=True) res = EvaluateModel(MODEL, TestLoader) res import torch.onnx as onnx torch.save(MODEL.state_dict(), "CNNModel.pt") input_image = torch.zeros((1,1,28,28)).to(device) onnx.export(MODEL, input_image, 'CNNModel.onnx') ```
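The hard-coded `3*3*64` in `forward`'s `x.view(-1, 3*3*64)` follows from how each layer shrinks the 28×28 MNIST input. A quick pure-Python sanity check of that arithmetic (a sketch, independent of PyTorch):

```python
def conv_out(size, kernel):   # valid convolution, stride 1, no padding
    return size - kernel + 1

def pool_out(size, window):   # max pooling with stride equal to the window
    return size // window

s = 28                            # MNIST input is 28x28
s = conv_out(s, 5)                # conv1 (kernel 5):            28 -> 24
s = pool_out(conv_out(s, 5), 2)   # conv2 + max_pool2d(2):       24 -> 20 -> 10
s = pool_out(conv_out(s, 5), 2)   # conv3 + max_pool2d(2):       10 -> 6  -> 3
print(s, s * s * 64)              # 3 576 -- matches fc1's input size 3*3*64
```

Tracking the spatial size like this before writing the first fully connected layer avoids shape-mismatch errors at runtime.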
# Aim of this notebook * To construct the singular curve of universal type to finalize the solution of the optimal control problem # Preamble ``` from sympy import * init_printing(use_latex='mathjax') # Plotting %matplotlib inline ## Make inline plots raster graphics from IPython.display import set_matplotlib_formats ## Import modules for plotting and data analysis import matplotlib.pyplot as plt from matplotlib import gridspec,rc,colors import matplotlib.ticker as plticker ## Parameters for seaborn plots import seaborn as sns sns.set(style='white',font_scale=1.25, rc={"xtick.major.size": 6, "ytick.major.size": 6, 'text.usetex': False, 'font.family': 'serif', 'font.serif': ['Times']}) import pandas as pd pd.set_option('mode.chained_assignment',None) import numpy as np from scipy.optimize import fsolve, root from scipy.integrate import ode backend = 'dopri5' # Timer import time from copy import deepcopy from itertools import cycle palette_size = 10; clrs = sns.color_palette("Reds",palette_size) iclrs = cycle(clrs) # iterated colors # Suppress warnings import warnings warnings.filterwarnings("ignore") ``` # Parameter values * Birth rate and cost of downregulation are defined below in order to fit some experimental data ``` d = .13 # death rate α = .3 # low equilibrium point at expression of the main pathway (high equilibrium is at one) θ = .45 # threshold value for the expression of the main pathway κ = 40 # robustness parameter ``` * Symbolic variables - the list includes μ & μbar, because they will be varied later ``` σ, φ0, φ, x, μ, μbar = symbols('sigma, phi0, phi, x, mu, mubar') ``` * Main functions ``` A = 1-σ*(1-θ) Eminus = (α*A-θ)**2/2 ΔE = A*(1-α)*((1+α)*A/2-θ) ΔEf = lambdify(σ,ΔE) ``` * Birth rate and cost of downregulation ``` b = (0.1*(exp(κ*(ΔEf(1)))+1)-0.14*(exp(κ*ΔEf(0))+1))/(exp(κ*ΔEf(1))-exp(κ*ΔEf(0))) # birth rate χ = 1-(0.14*(exp(κ*ΔEf(0))+1)-b*exp(κ*ΔEf(0)))/b b, χ c_relative = 0.2 c = c_relative*(b-d)/b+(1-c_relative)*χ/(exp(κ*ΔEf(0))+1) # cost of resistance c ``` * Hamiltonian *H* and a part of it ρ that includes the control variable σ ``` h = b*(χ/(exp(κ*ΔE)+1)*(1-x)+c*x) H = -φ0 + φ*(b*(χ/(exp(κ*ΔE)+1)-c)*x*(1-x)+μ*(1-x)/(exp(κ*ΔE)+1)-μbar*exp(-κ*Eminus)*x) + h ρ = (φ*(b*χ*x+μ)+b*χ)/(exp(κ*ΔE)+1)*(1-x)-φ*μbar*exp(-κ*Eminus)*x H, ρ ``` * Same but for no treatment (σ = 0) ``` h0 = h.subs(σ,0) H0 = H.subs(σ,0) ρ0 = ρ.subs(σ,0) H0, ρ0 ``` * Machinery: definition of the Poisson brackets ``` PoissonBrackets = lambda H1, H2: diff(H1,x)*diff(H2,φ)-diff(H1,φ)*diff(H2,x) ``` * Necessary functions and defining the right hand side of dynamical equations ``` ρf = lambdify((x,φ,σ,μ,μbar),ρ) ρ0f = lambdify((x,φ,μ,μbar),ρ0) dxdτ = lambdify((x,φ,σ,μ,μbar),-diff(H,φ)) dφdτ = lambdify((x,φ,σ,μ,μbar),diff(H,x)) dVdτ = lambdify((x,σ),h) dρdσ = lambdify((σ,x,φ,μ,μbar),diff(ρ,σ)) dδρdτ = lambdify((x,φ,σ,μ,μbar),-PoissonBrackets(ρ0-ρ,H)) def ode_rhs(t,state,μ,μbar): x, φ, V, δρ = state if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0): σstar = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0] else: σstar = 1.; if ρf(x,φ,σstar,μ,μbar) < ρ0f(x,φ,μ,μbar): sgm = 0 else: sgm = σstar return [dxdτ(x,φ,sgm,μ,μbar),dφdτ(x,φ,sgm,μ,μbar),dVdτ(x,sgm),dδρdτ(x,φ,σstar,μ,μbar)] def 
get_primary_field(name, experiment,μ,μbar): solutions = {} solver = ode(ode_rhs).set_integrator(backend) τ0 = experiment['τ0'] tms = np.linspace(τ0,experiment['T_end'],int(1e3)+1) for x0 in experiment['x0']: δρ0 = ρ0.subs(x,x0).subs(φ,0)-ρ.subs(x,x0).subs(φ,0).subs(σ,1.) solver.set_initial_value([x0,0,0,δρ0],0.).set_f_params(μ,μbar) sol = []; k = 0; while (solver.t < experiment['T_end']) and (solver.y[0]<=1.) and (solver.y[0]>=0.): solver.integrate(tms[k]) sol.append([solver.t]+list(solver.y)) k += 1 solutions[x0] = {'solution': sol} for x0, entry in solutions.items(): entry['τ'] = [entry['solution'][j][0] for j in range(len(entry['solution']))] entry['x'] = [entry['solution'][j][1] for j in range(len(entry['solution']))] entry['φ'] = [entry['solution'][j][2] for j in range(len(entry['solution']))] entry['V'] = [entry['solution'][j][3] for j in range(len(entry['solution']))] entry['δρ'] = [entry['solution'][j][4] for j in range(len(entry['solution']))] return solutions def get_δρ_value(tme,x0,μ,μbar): solver = ode(ode_rhs).set_integrator(backend) δρ0 = ρ0.subs(x,x0).subs(φ,0)-ρ.subs(x,x0).subs(φ,0).subs(σ,1.) solver.set_initial_value([x0,0,0,δρ0],0.).set_f_params(μ,μbar) while (solver.t < tme) and (solver.y[0]<=1.) and (solver.y[0]>=0.): solver.integrate(tme) sol = [solver.t]+list(solver.y) return solver.y[3] def get_δρ_ending(params,μ,μbar): tme, x0 = params solver = ode(ode_rhs).set_integrator(backend) δρ0 = ρ0.subs(x,x0).subs(φ,0)-ρ.subs(x,x0).subs(φ,0).subs(σ,1.) solver.set_initial_value([x0,0,0,δρ0],0.).set_f_params(μ,μbar) δτ = 1.0e-8; tms = [tme,tme+δτ] _k = 0; sol = [] while (_k<len(tms)):# and (solver.y[0]<=1.) and (solver.y[0]>=0.): solver.integrate(tms[_k]) sol.append(solver.y) _k += 1 #print(sol) return(sol[0][3],(sol[1][3]-sol[0][3])/δτ) def get_state(tme,x0,μ,μbar): solver = ode(ode_rhs).set_integrator(backend) δρ0 = ρ0.subs(x,x0).subs(φ,0)-ρ.subs(x,x0).subs(φ,0).subs(σ,1.) 
solver.set_initial_value([x0,0,0,δρ0],0.).set_f_params(μ,μbar) δτ = 1.0e-8; tms = [tme,tme+δτ] _k = 0; sol = [] while (solver.t < tms[-1]) and (solver.y[0]<=1.) and (solver.y[0]>=0.): solver.integrate(tms[_k]) sol.append(solver.y) _k += 1 return(list(sol[0])+[(sol[1][3]-sol[0][3])/δτ]) ``` # Machinery for the universal line * To find the universal singular curve we need to define two parameters ``` γ0 = PoissonBrackets(PoissonBrackets(H,H0),H) γ1 = PoissonBrackets(PoissonBrackets(H0,H),H0) ``` * The dynamics ``` dxdτSingExpr = -(γ0*diff(H0,φ)+γ1*diff(H,φ))/(γ0+γ1) dφdτSingExpr = (γ0*diff(H0,x)+γ1*diff(H,x))/(γ0+γ1) dVdτSingExpr = (γ0*h0+γ1*h)/(γ0+γ1) σSingExpr = γ1*σ/(γ0+γ1) ``` * Machinery for Python: lambdify the functions above ``` dxdτSing = lambdify((x,φ,σ,μ,μbar),dxdτSingExpr) dφdτSing = lambdify((x,φ,σ,μ,μbar),dφdτSingExpr) dVdτSing = lambdify((x,φ,σ,μ,μbar),dVdτSingExpr) σSing = lambdify((x,φ,σ,μ,μbar),σSingExpr) def ode_rhs_Sing(t,state,μ,μbar): x, φ, V = state if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0): σstar = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0] else: σstar = 1.; #print([σstar,σSing(x,φ,σstar,μ,μbar)]) return [dxdτSing(x,φ,σstar,μ,μbar),dφdτSing(x,φ,σstar,μ,μbar),dVdτSing(x,φ,σstar,μ,μbar)] # def ode_rhs_Sing(t,state,μ,μbar): # x, φ, V = state # if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0): # σstar = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0] # else: # σstar = 1.; # σTrav = fsolve(lambda σ: dxdτ(x,φ,σ,μ,μbar)-dxdτSing(x,φ,σstar,μ,μbar),.6)[0] # print([σstar,σTrav]) # return [dxdτSing(x,φ,σstar,μ,μbar),dφdτSing(x,φ,σstar,μ,μbar),dVdτ(x,σTrav)] def get_universal_curve(end_point,tmax,Nsteps,μ,μbar): tms = np.linspace(end_point[0],tmax,Nsteps); solver = ode(ode_rhs_Sing).set_integrator(backend) solver.set_initial_value(end_point[1:4],end_point[0]).set_f_params(μ,μbar) _k = 0; sol = [] while (solver.t < tms[-1]): solver.integrate(tms[_k]) sol.append([solver.t]+list(solver.y)) _k += 1 return sol def get_σ_universal(tme,end_point,μ,μbar): δτ = 
1.0e-8; tms = [tme,tme+δτ] solver = ode(ode_rhs_Sing).set_integrator(backend) solver.set_initial_value(end_point[1:4],end_point[0]).set_f_params(μ,μbar) _k = 0; sol = [] while (solver.t < tme+δτ): solver.integrate(tms[_k]) sol.append([solver.t]+list(solver.y)) _k += 1 x, φ = sol[0][:2] sgm = fsolve(lambda σ: dxdτ(x,φ,σ,μ,μbar)-(sol[1][0]-sol[0][0])/δτ,θ/2)[0] return sgm def get_state_universal(tme,end_point,μ,μbar): solver = ode(ode_rhs_Sing).set_integrator(backend) solver.set_initial_value(end_point[1:4],end_point[0]).set_f_params(μ,μbar) solver.integrate(tme) return [solver.t]+list(solver.y) def ode_rhs_with_σstar(t,state,μ,μbar): x, φ, V = state if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0): σ = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0] else: σ = 1.; return [dxdτ(x,φ,σ,μ,μbar),dφdτ(x,φ,σ,μ,μbar),dVdτ(x,σ)] def ode_rhs_with_given_σ(t,state,σ,μ,μbar): x, φ, V = state return [dxdτ(x,φ,σ,μ,μbar),dφdτ(x,φ,σ,μ,μbar),dVdτ(x,σ)] def get_trajectory_with_σstar(starting_point,tmax,Nsteps,μ,μbar): tms = np.linspace(starting_point[0],tmax,Nsteps) solver = ode(ode_rhs_with_σstar).set_integrator(backend) solver.set_initial_value(starting_point[1:],starting_point[0]).set_f_params(μ,μbar) sol = []; _k = 0; while solver.t < max(tms) and (solver.y[0]<=1.) and (solver.y[0]>=0.): solver.integrate(tms[_k]) sol.append([solver.t]+list(solver.y)) _k += 1 return sol def get_trajectory_with_given_σ(starting_point,tmax,Nsteps,σ,μ,μbar): tms = np.linspace(starting_point[0],tmax,100) solver = ode(ode_rhs_with_given_σ).set_integrator(backend) solver.set_initial_value(starting_point[1:],starting_point[0]).set_f_params(σ,μ,μbar) sol = []; _k = 0; while solver.t < max(tms) and (solver.y[0]<=1.) 
and (solver.y[0]>=0.): solver.integrate(tms[_k]) sol.append([solver.t]+list(solver.y)) _k += 1 return sol def get_state_with_σstar(tme,starting_point,μ,μbar): solver = ode(ode_rhs_with_σstar).set_integrator(backend) solver.set_initial_value(starting_point[1:4],starting_point[0]).set_f_params(μ,μbar) solver.integrate(tme) return [solver.t]+list(solver.y) def get_finalizing_point_from_universal_curve(tme,tmx,end_point,μ,μbar): unv_point = get_state_universal(tme,end_point,μ,μbar) return get_state_with_σstar(tmx,unv_point,μ,μbar)[1] ``` # Field of optimal trajectories as the solution of the Bellman equation * μ & μbar are varied by *T* and *T*bar ($\mu=1/T$ and $\bar\mu=1/\bar{T}$) ``` tmx = 180. end_switching_curve = {'t': 24., 'x': .9/.8} # for Τ, Τbar in zip([28]*5,[14,21,28,35,60]): for Τ, Τbar in zip([28],[60]): μ = 1./Τ; μbar = 1./Τbar print("Parameters: μ = %.5f, μbar = %.5f"%(μ,μbar)) end_switching_curve['t'], end_switching_curve['x'] = root(get_δρ_ending,(end_switching_curve['t'],end_switching_curve['x']),args=(μ,μbar)).x end_point = [end_switching_curve['t']]+get_state(end_switching_curve['t'],end_switching_curve['x'],μ,μbar) print("Ending point for the switching line: τ = %.1f days, x = %.1f%%" % (end_point[0], end_point[1]*100)) print("Checking the solution - should give zero values: ") print(get_δρ_ending([end_switching_curve['t'],end_switching_curve['x']],μ,μbar)) print("* Constructing the primary field") experiments = { 'sol1': { 'T_end': tmx, 'τ0': 0., 'x0': list(np.linspace(0,end_switching_curve['x']-(1e-3),10))+list(np.linspace(end_switching_curve['x']+(1e-6),1.,10)) } } primary_field = [] for name, values in experiments.items(): primary_field.append(get_primary_field(name,values,μ,μbar)) print("* Constructing the switching curve") switching_curve = [] x0s = np.linspace(end_switching_curve['x'],1,21); _y = end_switching_curve['t'] for x0 in x0s: tme = fsolve(get_δρ_value,_y,args=(x0,μ,μbar))[0] if (tme>0): switching_curve = 
switching_curve+[[tme,get_state(tme,x0,μ,μbar)[0]]] _y = tme print("* Constructing the universal curve") universal_curve = get_universal_curve(end_point,tmx,25,μ,μbar) print("* Finding the last characteristic") #time0 = time.time() # tuniv = fsolve(get_finalizing_point_from_universal_curve,tmx-40.,args=(tmx,end_point,μ,μbar,))[0] tuniv = root(get_finalizing_point_from_universal_curve,tmx-40,args=(tmx,end_point,μ,μbar)).x print(tuniv) #print("The proccess to find the last characteristic took %0.1f minutes" % ((time.time()-time0)/60.)) univ_point = get_state_universal(tuniv,end_point,μ,μbar) print("The last point on the universal line:") print(univ_point) last_trajectory = get_trajectory_with_σstar(univ_point,tmx,50,μ,μbar) print("Final state:") final_state = get_state_with_σstar(tmx,univ_point,μ,μbar) print(final_state) print("Fold-change in tumor size: %.2f"%(exp((b-d)*tmx-final_state[-1]))) # Plotting plt.rcParams['figure.figsize'] = (6.75, 4) _k = 0 for solutions in primary_field: for x0, entry in solutions.items(): plt.plot(entry['τ'], entry['x'], 'k-', linewidth=.9, color=clrs[_k%palette_size]) _k += 1 plt.plot([x[0] for x in switching_curve],[x[1] for x in switching_curve],linewidth=2,color="red") plt.plot([end_point[0]],[end_point[1]],marker='o',color="red") plt.plot([x[0] for x in universal_curve],[x[1] for x in universal_curve],linewidth=2,color="red") plt.plot([x[0] for x in last_trajectory],[x[1] for x in last_trajectory],linewidth=.9,color="black") plt.xlim([0,tmx]); plt.ylim([0,1]); plt.xlabel("time, days"); plt.ylabel("fraction of resistant cells") plt.show() print() import csv from numpy.linalg import norm File = open("../figures/draft/sensitivity_mu-high_cost.csv", 'w') File.write("T,Tbar,mu,mubar,sw_start_x,sw_end_t,sw_end_x,univ_point_t,univ_point_x,outcome,err_sw_t,err_sw_x\n") writer = csv.writer(File,lineterminator='\n') end_switching_curve0 = {'t': 40.36, 'x': .92} end_switching_curve_prev_t = end_switching_curve0['t'] tuniv = tmx-30. 
Ts = np.arange(40,3,-1) #Τbars; Τbars = np.arange(40,3,-1) #np.arange(120,1,-1) #need to change here if more for Τ in Ts: μ = 1./Τ end_switching_curve = deepcopy(end_switching_curve0) for Τbar in Τbars: μbar = 1./Τbar print("* Parameters: T = %.1f, Tbar = %.1f (μ = %.5f, μbar = %.5f)"%(Τ,Τbar,μ,μbar)) success = False; err = 1. while (not success)|(norm(err)>1e-6): end_switching_curve = {'t': 2*end_switching_curve['t']-end_switching_curve_prev_t-.001, 'x': end_switching_curve['x']-0.002} sol = root(get_δρ_ending,(end_switching_curve['t'],end_switching_curve['x']),args=(μ,μbar)) end_switching_curve_prev_t = end_switching_curve['t'] end_switching_curve_prev_x = end_switching_curve['x'] end_switching_curve['t'], end_switching_curve['x'] = sol.x success = sol.success err = get_δρ_ending([end_switching_curve['t'],end_switching_curve['x']],μ,μbar) if (not success): print("! Trying again...", sol.message) elif (norm(err)>1e-6): print("! Trying again... Convergence is not sufficient") else: end_point = [end_switching_curve['t']]+get_state(end_switching_curve['t'],end_switching_curve['x'],μ,μbar) print("Ending point: t = %.2f, x = %.2f%%"%(end_switching_curve['t'],100*end_switching_curve['x'])," Checking the solution:",err) universal_curve = get_universal_curve(end_point,tmx,25,μ,μbar) tuniv = root(get_finalizing_point_from_universal_curve,tuniv,args=(tmx,end_point,μ,μbar)).x err_tuniv = get_finalizing_point_from_universal_curve(tuniv,tmx,end_point,μ,μbar) univ_point = get_state_universal(tuniv,end_point,μ,μbar) print("tuniv = %.2f"%tuniv,"xuniv = %.2f%%"%(100*univ_point[1])," Checking the solution: ",err_tuniv) final_state = get_state_with_σstar(tmx,univ_point,μ,μbar) outcome = exp((b-d)*tmx-final_state[-1]) print("Fold-change in tumor size: %.2f"%(outcome)) output = [Τ,Τbar,μ,μbar,end_switching_curve['x'],end_point[0],end_point[1]]+list(univ_point[0:2])+[outcome]+list(err)+[err_tuniv] writer.writerow(output) if (Τbar==Τ): end_switching_curve0 = 
deepcopy(end_switching_curve) File.close() ```
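As a closing sanity check, the `PoissonBrackets` rule defined earlier, $\{H_1,H_2\} = \partial_x H_1\,\partial_\varphi H_2 - \partial_\varphi H_1\,\partial_x H_2$, can be verified numerically without sympy. For $H_1 = x\varphi$ and $H_2 = \varphi^2$ the bracket is $2\varphi^2$, and a central-difference sketch (names invented here, not part of the notebook's machinery) reproduces it:

```python
def poisson_numeric(H1, H2, x, phi, eps=1e-6):
    """Central-difference Poisson bracket {H1, H2} at the point (x, phi)."""
    dH1_dx   = (H1(x + eps, phi) - H1(x - eps, phi)) / (2 * eps)
    dH1_dphi = (H1(x, phi + eps) - H1(x, phi - eps)) / (2 * eps)
    dH2_dx   = (H2(x + eps, phi) - H2(x - eps, phi)) / (2 * eps)
    dH2_dphi = (H2(x, phi + eps) - H2(x, phi - eps)) / (2 * eps)
    return dH1_dx * dH2_dphi - dH1_dphi * dH2_dx

H1 = lambda x, phi: x * phi        # d/dx = phi, d/dphi = x
H2 = lambda x, phi: phi ** 2       # d/dx = 0,   d/dphi = 2*phi
x0, phi0 = 0.7, 1.3
print(poisson_numeric(H1, H2, x0, phi0))  # ~ 2 * phi0**2 = 3.38
```

Such a check is cheap insurance when symbolic derivatives are lambdified and fed into an ODE solver, as above.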
# Construction In this section, we construct two classes to implement a basic feed-forward neural network. For simplicity, both are limited to one hidden layer, though the number of neurons in the input, hidden, and output layers is flexible. The two differ in how they combine results across observations. The first loops through observations and adds the individual gradients while the second calculates the entire gradient across observations in one fell swoop. Let's start by importing `numpy`, some visualization packages, and two datasets: the {doc}`Boston </content/appendix/data>` housing and {doc}`breast cancer </content/appendix/data>` datasets from `scikit-learn`. We will use the former for regression and the latter for classification. We also split each dataset into a train and test set. This is done with the hidden code cell below. ``` ## Import numpy and visualization packages import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn import datasets ## Import Boston and standardize np.random.seed(123) boston = datasets.load_boston() X_boston = boston['data'] X_boston = (X_boston - X_boston.mean(0))/(X_boston.std(0)) y_boston = boston['target'] ## Train-test split np.random.seed(123) test_frac = 0.25 test_size = int(len(y_boston)*test_frac) test_idxs = np.random.choice(np.arange(len(y_boston)), test_size, replace = False) X_boston_train = np.delete(X_boston, test_idxs, 0) y_boston_train = np.delete(y_boston, test_idxs, 0) X_boston_test = X_boston[test_idxs] y_boston_test = y_boston[test_idxs] ## Import cancer and standardize np.random.seed(123) cancer = datasets.load_breast_cancer() X_cancer = cancer['data'] X_cancer = (X_cancer - X_cancer.mean(0))/(X_cancer.std(0)) y_cancer = 1*(cancer['target'] == 1) ## Train-test split np.random.seed(123) test_frac = 0.25 test_size = int(len(y_cancer)*test_frac) test_idxs = np.random.choice(np.arange(len(y_cancer)), test_size, replace = False) X_cancer_train = np.delete(X_cancer, test_idxs, 0) 
y_cancer_train = np.delete(y_cancer, test_idxs, 0) X_cancer_test = X_cancer[test_idxs] y_cancer_test = y_cancer[test_idxs] ``` Before constructing classes for our network, let's build our activation functions. Below we implement the ReLU function, sigmoid function, and the linear function (which simply returns its input). Let's also combine these functions into a dictionary so we can identify them with a string argument. ``` ## Activation Functions def ReLU(h): return np.maximum(h, 0) def sigmoid(h): return 1/(1 + np.exp(-h)) def linear(h): return h activation_function_dict = {'ReLU':ReLU, 'sigmoid':sigmoid, 'linear':linear} ``` ## 1. The Loop Approach Next, we construct a class for fitting feed-forward networks by looping through observations. This class conducts gradient descent by calculating the gradients based on one observation at a time, looping through all observations, and summing the gradients before adjusting the weights. Once instantiated, we fit a network with the `fit()` method. This method requires training data, the number of nodes for the hidden layer, an activation function for the first and second layers' outputs, a loss function, and some parameters for gradient descent. After storing those values, the method randomly instantiates the network's weights: `W1`, `c1`, `W2`, and `c2`. It then passes the data through this network to instantiate the output values: `h1`, `z1`, `h2`, and `yhat` (equivalent to `z2`). We then begin conducting gradient descent. Within each iteration of the gradient descent process, we also iterate through the observations. For each observation, we calculate the derivative of the loss for that observation with respect to the network's weights. We then sum these individual derivatives and adjust the weights accordingly, as is typical in gradient descent. The derivatives we calculate are covered in the {doc}`concept section </content/c7/concept>`. Once the network is fit, we can form predictions with the `predict()` method. 
This simply consists of running test observations through the network and returning their outputs. ``` class FeedForwardNeuralNetwork: def fit(self, X, y, n_hidden, f1 = 'ReLU', f2 = 'linear', loss = 'RSS', lr = 1e-5, n_iter = 1e3, seed = None): ## Store Information self.X = X self.y = y.reshape(len(y), -1) self.N = len(X) self.D_X = self.X.shape[1] self.D_y = self.y.shape[1] self.D_h = n_hidden self.f1, self.f2 = f1, f2 self.loss = loss self.lr = lr self.n_iter = int(n_iter) self.seed = seed ## Instantiate Weights np.random.seed(self.seed) self.W1 = np.random.randn(self.D_h, self.D_X)/5 self.c1 = np.random.randn(self.D_h, 1)/5 self.W2 = np.random.randn(self.D_y, self.D_h)/5 self.c2 = np.random.randn(self.D_y, 1)/5 ## Instantiate Outputs self.h1 = np.dot(self.W1, self.X.T) + self.c1 self.z1 = activation_function_dict[f1](self.h1) self.h2 = np.dot(self.W2, self.z1) + self.c2 self.yhat = activation_function_dict[f2](self.h2) ## Fit Weights for iteration in range(self.n_iter): dL_dW2 = 0 dL_dc2 = 0 dL_dW1 = 0 dL_dc1 = 0 for n in range(self.N): # dL_dyhat if loss == 'RSS': dL_dyhat = -2*(self.y[n] - self.yhat[:,n]).T # (1, D_y) elif loss == 'log': dL_dyhat = (-(self.y[n]/self.yhat[:,n]) + (1-self.y[n])/(1-self.yhat[:,n])).T # (1, D_y) ## LAYER 2 ## # dyhat_dh2 if f2 == 'linear': dyhat_dh2 = np.eye(self.D_y) # (D_y, D_y) elif f2 == 'sigmoid': dyhat_dh2 = np.diag(sigmoid(self.h2[:,n])*(1-sigmoid(self.h2[:,n]))) # (D_y, D_y) # dh2_dc2 dh2_dc2 = np.eye(self.D_y) # (D_y, D_y) # dh2_dW2 dh2_dW2 = np.zeros((self.D_y, self.D_y, self.D_h)) # (D_y, (D_y, D_h)) for i in range(self.D_y): dh2_dW2[i] = self.z1[:,n] # dh2_dz1 dh2_dz1 = self.W2 # (D_y, D_h) ## LAYER 1 ## # dz1_dh1 if f1 == 'ReLU': dz1_dh1 = 1*np.diag(self.h1[:,n] > 0) # (D_h, D_h) elif f1 == 'linear': dz1_dh1 = np.eye(self.D_h) # (D_h, D_h) # dh1_dc1 dh1_dc1 = np.eye(self.D_h) # (D_h, D_h) # dh1_dW1 dh1_dW1 = np.zeros((self.D_h, self.D_h, self.D_X)) # (D_h, (D_h, D_X)) for i in range(self.D_h): dh1_dW1[i] = self.X[n] 
## DERIVATIVES W.R.T. LOSS ## dL_dh2 = dL_dyhat @ dyhat_dh2 dL_dW2 += dL_dh2 @ dh2_dW2 dL_dc2 += dL_dh2 @ dh2_dc2 dL_dh1 = dL_dh2 @ dh2_dz1 @ dz1_dh1 dL_dW1 += dL_dh1 @ dh1_dW1 dL_dc1 += dL_dh1 @ dh1_dc1 ## Update Weights self.W1 -= self.lr * dL_dW1 self.c1 -= self.lr * dL_dc1.reshape(-1, 1) self.W2 -= self.lr * dL_dW2 self.c2 -= self.lr * dL_dc2.reshape(-1, 1) ## Update Outputs self.h1 = np.dot(self.W1, self.X.T) + self.c1 self.z1 = activation_function_dict[f1](self.h1) self.h2 = np.dot(self.W2, self.z1) + self.c2 self.yhat = activation_function_dict[f2](self.h2) def predict(self, X_test): self.h1 = np.dot(self.W1, X_test.T) + self.c1 self.z1 = activation_function_dict[self.f1](self.h1) self.h2 = np.dot(self.W2, self.z1) + self.c2 self.yhat = activation_function_dict[self.f2](self.h2) return self.yhat ``` Let's try building a network with this class using the `boston` housing data. This network contains 8 neurons in its hidden layer and uses the ReLU and linear activation functions after the first and second layers, respectively. ``` ffnn = FeedForwardNeuralNetwork() ffnn.fit(X_boston_train, y_boston_train, n_hidden = 8) y_boston_test_hat = ffnn.predict(X_boston_test) fig, ax = plt.subplots() sns.scatterplot(y_boston_test, y_boston_test_hat[0]) ax.set(xlabel = r'$y$', ylabel = r'$\hat{y}$', title = r'$y$ vs. $\hat{y}$') sns.despine() ``` We can also build a network for binary classification. The model below attempts to predict whether an individual's cancer is malignant or benign. We use the log loss, the sigmoid activation function after the second layer, and the ReLU function after the first. ``` ffnn = FeedForwardNeuralNetwork() ffnn.fit(X_cancer_train, y_cancer_train, n_hidden = 8, loss = 'log', f2 = 'sigmoid', seed = 123, lr = 1e-4) y_cancer_test_hat = ffnn.predict(X_cancer_test) np.mean(y_cancer_test_hat.round() == y_cancer_test) ``` ## 2. 
The Matrix Approach Below is a second class for fitting neural networks that runs *much* faster by simultaneously calculating the gradients across observations. The math behind these calculations is outlined in the {doc}`concept section </content/c7/concept>`. This class's fitting algorithm is identical to that of the one above with one big exception: we don't have to iterate over observations. Most of the following gradient calculations are straightforward. A few require a tensor dot product, which is easily done using numpy. Consider the following gradient: $$ \frac{\partial \mathcal{L}}{\partial \mathbf{W}^{(L)}_{i, j}} = \sum_{n = 1}^N (\nabla \mathbf{H}^{(L)})_{i, n}\cdot \mathbf{Z}^{(L-1)}_{j, n}. $$ In words, $\partial\mathcal{L}/\partial \mathbf{W}^{(L)}$ is a matrix whose $(i, j)^\text{th}$ entry equals the sum across the $i^\text{th}$ row of $\nabla \mathbf{H}^{(L)}$ multiplied element-wise with the $j^\text{th}$ row of $\mathbf{Z}^{(L-1)}$. This calculation can be accomplished with `np.tensordot(A, B, (1,1))`, where `A` is $\nabla \mathbf{H}^{(L)}$ and `B` is $\mathbf{Z}^{(L-1)}$. `np.tensordot()` sums the element-wise product of the entries in `A` and the entries in `B` along a specified index. Here we specify the index with `(1,1)`, saying we want to sum across the columns for each. Similarly, we will use the following gradient: $$ \frac{\partial \mathcal{L}}{\partial \mathbf{Z}^{(L-1)}_{i, n}} = \sum_{d = 1}^{D_y} (\nabla \mathbf{H}^{(L)})_{d, n}\cdot \mathbf{W}^{(L)}_{d, i}. $$ Letting `C` represent $\mathbf{W}^{(L)}$, we can calculate this gradient in numpy with `np.tensordot(C, A, (0,0))`. 
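The two tensor dot products above reduce to ordinary matrix products, which makes them easy to check. A small sketch with random matrices (shapes chosen arbitrarily to mirror $(D_y, N)$, $(D_h, N)$, and $(D_y, D_h)$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))   # plays the role of grad H^(L),  shape (D_y, N)
B = rng.standard_normal((3, 5))   # plays the role of Z^(L-1),     shape (D_h, N)
C = rng.standard_normal((2, 3))   # plays the role of W^(L),       shape (D_y, D_h)

# Contract over the shared N axis (columns of both): same as A @ B.T
assert np.allclose(np.tensordot(A, B, (1, 1)), A @ B.T)

# Contract over the shared D_y axis (rows of both): same as C.T @ A
assert np.allclose(np.tensordot(C, A, (0, 0)), C.T @ A)
print("tensordot identities verified")
```

Writing the gradients with `np.tensordot` keeps the index bookkeeping explicit, but the equivalent `@` forms are handy for quick verification.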
``` class FeedForwardNeuralNetwork: def fit(self, X, Y, n_hidden, f1 = 'ReLU', f2 = 'linear', loss = 'RSS', lr = 1e-5, n_iter = 5e3, seed = None): ## Store Information self.X = X self.Y = Y.reshape(len(Y), -1) self.N = len(X) self.D_X = self.X.shape[1] self.D_Y = self.Y.shape[1] self.Xt = self.X.T self.Yt = self.Y.T self.D_h = n_hidden self.f1, self.f2 = f1, f2 self.loss = loss self.lr = lr self.n_iter = int(n_iter) self.seed = seed ## Instantiate Weights np.random.seed(self.seed) self.W1 = np.random.randn(self.D_h, self.D_X)/5 self.c1 = np.random.randn(self.D_h, 1)/5 self.W2 = np.random.randn(self.D_Y, self.D_h)/5 self.c2 = np.random.randn(self.D_Y, 1)/5 ## Instantiate Outputs self.H1 = (self.W1 @ self.Xt) + self.c1 self.Z1 = activation_function_dict[self.f1](self.H1) self.H2 = (self.W2 @ self.Z1) + self.c2 self.Yhatt = activation_function_dict[self.f2](self.H2) ## Fit Weights for iteration in range(self.n_iter): # Yhat # if self.loss == 'RSS': self.dL_dYhatt = -(self.Yt - self.Yhatt) # (D_Y x N) elif self.loss == 'log': self.dL_dYhatt = (-(self.Yt/self.Yhatt) + (1-self.Yt)/(1-self.Yhatt)) # (D_y x N) # H2 # if self.f2 == 'linear': self.dYhatt_dH2 = np.ones((self.D_Y, self.N)) elif self.f2 == 'sigmoid': self.dYhatt_dH2 = sigmoid(self.H2) * (1- sigmoid(self.H2)) self.dL_dH2 = self.dL_dYhatt * self.dYhatt_dH2 # (D_Y x N) # c2 # self.dL_dc2 = np.sum(self.dL_dH2, 1) # (D_y) # W2 # self.dL_dW2 = np.tensordot(self.dL_dH2, self.Z1, (1,1)) # (D_Y x D_h) # Z1 # self.dL_dZ1 = np.tensordot(self.W2, self.dL_dH2, (0, 0)) # (D_h x N) # H1 # if self.f1 == 'ReLU': self.dL_dH1 = self.dL_dZ1 * np.maximum(self.H1, 0) # (D_h x N) elif self.f1 == 'linear': self.dL_dH1 = self.dL_dZ1 # (D_h x N) # c1 # self.dL_dc1 = np.sum(self.dL_dH1, 1) # (D_h) # W1 # self.dL_dW1 = np.tensordot(self.dL_dH1, self.Xt, (1,1)) # (D_h, D_X) ## Update Weights self.W1 -= self.lr * self.dL_dW1 self.c1 -= self.lr * self.dL_dc1.reshape(-1, 1) self.W2 -= self.lr * self.dL_dW2 self.c2 -= self.lr * 
self.dL_dc2.reshape(-1, 1) ## Update Outputs self.H1 = (self.W1 @ self.Xt) + self.c1 self.Z1 = activation_function_dict[self.f1](self.H1) self.H2 = (self.W2 @ self.Z1) + self.c2 self.Yhatt = activation_function_dict[self.f2](self.H2) def predict(self, X_test): X_testt = X_test.T self.h1 = (self.W1 @ X_testt) + self.c1 self.z1 = activation_function_dict[self.f1](self.h1) self.h2 = (self.W2 @ self.z1) + self.c2 self.Yhatt = activation_function_dict[self.f2](self.h2) return self.Yhatt ``` We fit networks of this class in the same way as before. Examples of regression with the `boston` housing data and classification with the `breast_cancer` data are shown below. ``` ffnn = FeedForwardNeuralNetwork() ffnn.fit(X_boston_train, y_boston_train, n_hidden = 8) y_boston_test_hat = ffnn.predict(X_boston_test) fig, ax = plt.subplots() sns.scatterplot(y_boston_test, y_boston_test_hat[0]) ax.set(xlabel = r'$y$', ylabel = r'$\hat{y}$', title = r'$y$ vs. $\hat{y}$') sns.despine() ffnn = FeedForwardNeuralNetwork() ffnn.fit(X_cancer_train, y_cancer_train, n_hidden = 8, loss = 'log', f2 = 'sigmoid', seed = 123, lr = 1e-4) y_cancer_test_hat = ffnn.predict(X_cancer_test) np.mean(y_cancer_test_hat.round() == y_cancer_test) ```
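A useful habit when writing backpropagation by hand, as we did above, is to verify an analytic gradient against finite differences. A standalone sketch (not a call into the class) for the $\partial\mathcal{L}/\partial\mathbf{W}^{(L)}$ formula, using a squared-error loss with a 0.5 factor so the gradient is exactly `np.tensordot(dL_dH2, Z1, (1,1))` with `dL_dH2 = -(Y - W2 @ Z1)`:

```python
import numpy as np

rng = np.random.default_rng(1)
D_y, D_h, N = 2, 3, 5
W2 = rng.standard_normal((D_y, D_h))
Z1 = rng.standard_normal((D_h, N))
Y  = rng.standard_normal((D_y, N))

def loss(W):
    # 0.5 * sum of squared residuals over all outputs and observations
    return 0.5 * np.sum((Y - W @ Z1) ** 2)

# Analytic gradient: tensordot of dL/dH2 with Z1 over the observation axis
dL_dH2 = -(Y - W2 @ Z1)
analytic = np.tensordot(dL_dH2, Z1, (1, 1))

# Numeric gradient by central differences, one weight at a time
eps = 1e-6
numeric = np.zeros_like(W2)
for i in range(D_y):
    for j in range(D_h):
        Wp, Wm = W2.copy(), W2.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        numeric[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-4)
print("gradient check passed")
```

The same pattern extends to `W1`, `c1`, and `c2`; a silent indexing mistake in any of the tensordot gradients shows up immediately as a mismatch here.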
# Shor's Algorithm for Factorization of Integers Given a large number $N$, say with at least 100 digits, how can we find a factor of $N$? There are several famous classical algorithms, and [Wikipedia](https://en.wikipedia.org/wiki/Integer_factorization) contains an exhaustive list of them. The best known algorithm for huge integers is the _general number field sieve_, which has a runtime of $\exp\left(O\left((\ln N)^{1/3}(\ln\ln N)^{2/3}\right)\right)$. Factorization is certainly hard in practice, but it is not known whether it can be solved in $\text{poly}(n)$ time (where $n = \lceil\log_2 N\rceil$). It is widely believed that factorization is not in $P$, and this is the basis of cryptographic protocols such as RSA that are in use today. Shor's algorithm provides a fast quantum solution to the factoring problem (in time polynomial in the number of input bits), and we shall see in this tutorial how exactly it finds factors of composite numbers. ## Part I: Reduction of Factorization to Order-finding Let us start by picking an $x$ uniformly at random from $\{2,\ldots, N-1\}$. The [Euclidean Algorithm](http://www-math.ucdenver.edu/~wcherowi/courses/m5410/exeucalg.html) determines $\text{gcd}(x,N)$ efficiently. If $\text{gcd}(x,N)\neq 1$, we were lucky and have already found a factor of $N$! Otherwise, there's more work left... Let $r\ge 1$ be the smallest integer (known as the *order*) such that $x^r\equiv 1 \mod N$. If $r$ is even, we know that $$ (x^{r/2}-1)(x^{r/2}+1)\equiv 0 \mod N,$$ so $\text{gcd}(x^{r/2}-1, N)$ or $\text{gcd}(x^{r/2}+1, N)$ will give a nontrivial factor $d$ of $N$, unless $x^{r/2}\equiv -1 \mod N$, in which case we retry with a new $x$. We can then run the same algorithm recursively on $N/d$. Thus, if we have an efficient way of calculating $r = A(x,N)$, the order of $x$ modulo $N$, we can solve the factorization problem as follows: > factorize($N$): > + pick $x$ uniformly at random from $\{2,\ldots,N-1\}$. > + if $d = \text{gcd}(x,N)\neq 1$, return $d$ as a factor and run _factorize($N/d$)_.
> + else:
> - let $r = A(x,N)$.
> - if $r$ is even, $\text{gcd}(x^{r/2}-1, N)$ or $\text{gcd}(x^{r/2}+1, N)$ will give a factor $d$. Return $d$ and run _factorize($N/d$)_.
> - else pick another $x$ uniformly at random and repeat.

## Part II: Finding the order

In order to compute the value of $r = A(x,N)$, we shall first require a unitary operator $U_x$ such that

$$U_x\lvert j\rangle_t \lvert k\rangle_n = \lvert j\rangle_t \lvert k\oplus (x^j\mbox{ mod } N)\rangle_n.$$

Then, we consider the following circuit with Register 1 of $t$ qubits and Register 2 of $n$ qubits:

<img src="./img/104_shor_ckt.png" alt="shor-circuit" width="600"/>

Here $n=\lceil \log_2N\rceil$ and $t=\lceil 2\log_2 N\rceil$ in general. The choice of $t$ can be simplified to $t=n=\lceil \log_2N\rceil$ if $r=A(x,N)$ is a power of $2$. We shall consider this special case first.

### Case 1: Order is a power of 2

After the initialization we obtain the state

$$\varphi_1 = \frac 1{\sqrt{2^t}}\sum_{j=0}^{2^t-1}\lvert j\rangle \lvert 0\rangle,$$

and therefore,

$$\varphi_2 = U_x\varphi_1 = \frac 1{\sqrt{2^t}}\sum_{j=0}^{2^t-1}\lvert j\rangle \lvert x^j\mbox{ mod }N\rangle.$$

$\varphi_2$ can be thought of as an encoding of $x^j\mbox{ mod }N$ calculated for each integer $j<2^t$, and we are interested in finding the smallest positive $j$ for which $x^j\mbox{ mod }N=1$. For simplicity of calculations, we first measure the second register. Note that as every $j$ can be written as

$$ j = ar+b, \mbox{ where }0\le a < 2^t/r \mbox{ and } 0\le b <r,$$

we can then write $\varphi_2$ as the following double sum:

$$\varphi_2 = \frac 1{\sqrt{2^t}}\sum_{b=0}^{r-1}\sum_{a=0}^{2^t/r -1}\lvert ar+b\rangle\lvert x^{ar+b}\mbox{ mod }N\rangle.$$

Note that $x^{ar+b}\mbox{ mod }N = x^b\mbox{ mod }N$. Also, recall that $2^t/r$ is an integer since $r$ is a power of $2$ in this case.
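Since the derivation hinges on the identity $x^{ar+b}\equiv x^b \pmod N$, it is worth spot-checking it classically. A quick sketch, using the values $N=15$, $x=7$, $r=4$, $t=4$ from the worked example later in this notebook:

```python
# Spot-check of the identity x^(ar+b) = x^b (mod N) used in the reindexing.
# Values N = 15, x = 7, r = 4, t = 4 match the worked example below.
N, x, r, t = 15, 7, 4, 4

assert pow(x, r, N) == 1  # r really is the order of x modulo N

for a in range(2**t // r):
    for b in range(r):
        # the residue depends only on b, since x^r = 1 mod N
        assert pow(x, a*r + b, N) == pow(x, b, N)

print("x^(ar+b) mod N == x^b mod N for every j <", 2**t)
```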
Thus, we can finally write

$$\varphi_2 = \frac 1{\sqrt{2^t}}\sum_{b=0}^{r-1}\sum_{a=0}^{2^t/r -1}\lvert ar+b\rangle\lvert x^{b}\mbox{ mod }N\rangle.$$

Now we measure the second register. Each of the residues $x^0, \ldots, x^{r-1}$ is equally likely to be observed. Say the outcome of the measurement is $x^{b_0}$; then $\varphi_2$ collapses to the state

$$\varphi_3 = \frac{\sqrt r}{\sqrt{2^t}}\sum_{a=0}^{2^t/r - 1} \lvert ar+b_0\rangle \lvert x^{b_0}\mbox{ mod }N \rangle.$$

Now the only uncertainty is in the first register: if measured, each state $\lvert ar+b_0\rangle$, $0\le a<2^t/r$, would be observed with probability $r/2^t$. Finally, we apply the inverse QFT on the first register. Recall that we already covered the action of $\mbox{QFT}$ and $\mbox{QFT}^\dagger$, and this lets us compute the final quantum state of the circuit as follows:

$$ \begin{aligned}\varphi_4 &= \frac{\sqrt{r}}{\sqrt{2^t}} \sum_{a=0}^{2^t/r-1}\left[\frac1{\sqrt{2^t}}\sum_{j=0}^{2^t-1}\exp\left(\frac{-2\pi i j(ar+b_0)}{2^t}\right)\lvert j\rangle \right]\lvert x^{b_0}\mbox{ mod }N \rangle\\ & = \frac1{\sqrt r}\left[\sum_{j=0}^{2^t-1}\left(\frac r{2^t}\sum_{a=0}^{2^t/r-1}\exp\left(\frac{-2\pi ija}{2^t/r}\right) \right)\exp\left(\frac{-2\pi ijb_0}{2^t}\right)\lvert j\rangle\right]\lvert x^{b_0}\mbox{ mod }N \rangle\end{aligned}.$$

Using the Fourier identity $\frac 1N\sum_{j=0}^{N-1} \exp(2\pi ijk/N) = 1$ if $k$ is a multiple of $N$ and $0$ otherwise, we see that the expression in the inner parentheses is $0$ most of the time: only when $j$ is a multiple of $2^t/r$ do we obtain a nonzero expression. Thus,

$$\varphi_4 = \frac 1{\sqrt r}\sum_{k=0}^{r-1}\exp\left(\frac{-2\pi ikb_0}r\right) \lvert k2^t/r\rangle \lvert x^{b_0}\mbox{ mod }N\rangle.$$

The possible outcomes of measuring the first register of $\varphi_4$ are therefore $k2^t/r$, for $0\le k \le r-1$. We now measure $\varphi_4$ and proceed as follows with the outcome $B = k_02^t/r$.
+ If the outcome $B$ is $0$, then we obtain no information about $r$, and we run our circuit again.
+ If $B = k_02^t/r$ for some $0 < k_0\le r-1$, compute $k_0/r = B/2^t$. We then know that the denominator of $B/2^t$ (in lowest terms) _divides_ $r$.
  - Let $r_1$ be that denominator. If $x^{r_1}\mbox{ mod }N = 1$, $r_1$ is the order, and we can stop.
  - Otherwise, let $r_2 = r/r_1$. Note that $r_2$ is the order of $x^{r_1}$, so we run the algorithm again to find the order of $x^{r_1}$.

We apply the algorithm recursively until we find the entire order $r$. This is the full description of Shor's algorithm. Although technical, it's an efficient algorithm with clear basic steps. For now, we postpone the discussion of Case 2 to a later section.

### Implementation of Shor's Algorithm for Case 1

Let us now implement Shor's Algorithm to factorize $N=15$. We choose $15$ since the orders of the numbers $\{1,2,4,7,8,11,13,14\}$ that are less than $15$ and coprime to $15$ are all powers of $2$ (namely $1$, $2$, or $4$), leading to an ideal circuit for us. Recall that in this case we need only $t=4$ qubits in the first register and $n=4$ qubits in the second register! We shall now focus on finding the order of $7$ modulo $15$.

### Implementing $U_x$

One of the main challenges in implementing Shor's algorithm is to realize the circuit $U_x$ using physical gates. We take the implementation of $x^b\mbox{ mod }N$ for $N=15$ from [Markov-Saeedi](https://arxiv.org/pdf/1202.6614.pdf).

```
# Install blueqat!
# !pip install blueqat

# Import libraries
from blueqat import Circuit
import numpy as np

# Recall QFT dagger from our previous tutorial
def apply_qft_dagger(circuit: Circuit, qubits):
    num_qubits = len(qubits)
    # Reverse the qubit order first (this undoes the final swaps of QFT)
    for i in range(int(num_qubits/2)):
        circuit.swap[qubits[i], qubits[num_qubits-i-1]]
    for i in range(num_qubits):
        for j in range(i):
            circuit.cphase(-np.pi/(2 ** (i-j)))[qubits[j], qubits[i]]
        circuit.h[qubits[i]]

# Implementation of U_x as a black box.
# More details can be found in the previously mentioned paper.
def apply_U_7_mod15(circuit: Circuit, qubits):
    assert len(qubits) == 8, 'Must have 8 qubits as input.'
    circuit.x[qubits[7]]
    circuit.ccx[qubits[0],qubits[6],qubits[7]]
    circuit.ccx[qubits[0],qubits[7],qubits[6]]
    circuit.ccx[qubits[0],qubits[6],qubits[7]]
    circuit.ccx[qubits[0],qubits[5],qubits[6]]
    circuit.ccx[qubits[0],qubits[6],qubits[5]]
    circuit.ccx[qubits[0],qubits[5],qubits[6]]
    circuit.ccx[qubits[0],qubits[4],qubits[5]]
    circuit.ccx[qubits[0],qubits[5],qubits[4]]
    circuit.ccx[qubits[0],qubits[4],qubits[5]]
    circuit.cx[qubits[0],qubits[4]]
    circuit.cx[qubits[0],qubits[5]]
    circuit.cx[qubits[0],qubits[6]]
    circuit.cx[qubits[0],qubits[7]]
    circuit.ccx[qubits[1],qubits[6],qubits[7]]
    circuit.ccx[qubits[1],qubits[7],qubits[6]]
    circuit.ccx[qubits[1],qubits[6],qubits[7]]
    circuit.ccx[qubits[1],qubits[5],qubits[6]]
    circuit.ccx[qubits[1],qubits[6],qubits[5]]
    circuit.ccx[qubits[1],qubits[5],qubits[6]]
    circuit.ccx[qubits[1],qubits[4],qubits[5]]
    circuit.ccx[qubits[1],qubits[5],qubits[4]]
    circuit.ccx[qubits[1],qubits[4],qubits[5]]
    circuit.cx[qubits[1],qubits[4]]
    circuit.cx[qubits[1],qubits[5]]
    circuit.cx[qubits[1],qubits[6]]
    circuit.cx[qubits[1],qubits[7]]
    circuit.ccx[qubits[1],qubits[6],qubits[7]]
    circuit.ccx[qubits[1],qubits[7],qubits[6]]
    circuit.ccx[qubits[1],qubits[6],qubits[7]]
    circuit.ccx[qubits[1],qubits[5],qubits[6]]
    circuit.ccx[qubits[1],qubits[6],qubits[5]]
    circuit.ccx[qubits[1],qubits[5],qubits[6]]
    circuit.ccx[qubits[1],qubits[4],qubits[5]]
    circuit.ccx[qubits[1],qubits[5],qubits[4]]
    circuit.ccx[qubits[1],qubits[4],qubits[5]]
    circuit.cx[qubits[1],qubits[4]]
    circuit.cx[qubits[1],qubits[5]]
    circuit.cx[qubits[1],qubits[6]]
    circuit.cx[qubits[1],qubits[7]]
    circuit.ccx[qubits[2],qubits[6],qubits[7]]
    circuit.ccx[qubits[2],qubits[7],qubits[6]]
    circuit.ccx[qubits[2],qubits[6],qubits[7]]
    circuit.ccx[qubits[2],qubits[5],qubits[6]]
    circuit.ccx[qubits[2],qubits[6],qubits[5]]
    circuit.ccx[qubits[2],qubits[5],qubits[6]]
circuit.ccx[qubits[2],qubits[4],qubits[5]] circuit.ccx[qubits[2],qubits[5],qubits[4]] circuit.ccx[qubits[2],qubits[4],qubits[5]] circuit.cx[qubits[2],qubits[4]] circuit.cx[qubits[2],qubits[5]] circuit.cx[qubits[2],qubits[6]] circuit.cx[qubits[2],qubits[7]] circuit.ccx[qubits[2],qubits[6],qubits[7]] circuit.ccx[qubits[2],qubits[7],qubits[6]] circuit.ccx[qubits[2],qubits[6],qubits[7]] circuit.ccx[qubits[2],qubits[5],qubits[6]] circuit.ccx[qubits[2],qubits[6],qubits[5]] circuit.ccx[qubits[2],qubits[5],qubits[6]] circuit.ccx[qubits[2],qubits[4],qubits[5]] circuit.ccx[qubits[2],qubits[5],qubits[4]] circuit.ccx[qubits[2],qubits[4],qubits[5]] circuit.cx[qubits[2],qubits[4]] circuit.cx[qubits[2],qubits[5]] circuit.cx[qubits[2],qubits[6]] circuit.cx[qubits[2],qubits[7]] circuit.ccx[qubits[2],qubits[6],qubits[7]] circuit.ccx[qubits[2],qubits[7],qubits[6]] circuit.ccx[qubits[2],qubits[6],qubits[7]] circuit.ccx[qubits[2],qubits[5],qubits[6]] circuit.ccx[qubits[2],qubits[6],qubits[5]] circuit.ccx[qubits[2],qubits[5],qubits[6]] circuit.ccx[qubits[2],qubits[4],qubits[5]] circuit.ccx[qubits[2],qubits[5],qubits[4]] circuit.ccx[qubits[2],qubits[4],qubits[5]] circuit.cx[qubits[2],qubits[4]] circuit.cx[qubits[2],qubits[5]] circuit.cx[qubits[2],qubits[6]] circuit.cx[qubits[2],qubits[7]] circuit.ccx[qubits[2],qubits[6],qubits[7]] circuit.ccx[qubits[2],qubits[7],qubits[6]] circuit.ccx[qubits[2],qubits[6],qubits[7]] circuit.ccx[qubits[2],qubits[5],qubits[6]] circuit.ccx[qubits[2],qubits[6],qubits[5]] circuit.ccx[qubits[2],qubits[5],qubits[6]] circuit.ccx[qubits[2],qubits[4],qubits[5]] circuit.ccx[qubits[2],qubits[5],qubits[4]] circuit.ccx[qubits[2],qubits[4],qubits[5]] circuit.cx[qubits[2],qubits[4]] circuit.cx[qubits[2],qubits[5]] circuit.cx[qubits[2],qubits[6]] circuit.cx[qubits[2],qubits[7]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[7],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[5],qubits[6]] 
circuit.ccx[qubits[3],qubits[6],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[4]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.cx[qubits[3],qubits[4]] circuit.cx[qubits[3],qubits[5]] circuit.cx[qubits[3],qubits[6]] circuit.cx[qubits[3],qubits[7]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[7],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[4]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.cx[qubits[3],qubits[4]] circuit.cx[qubits[3],qubits[5]] circuit.cx[qubits[3],qubits[6]] circuit.cx[qubits[3],qubits[7]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[7],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[4]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.cx[qubits[3],qubits[4]] circuit.cx[qubits[3],qubits[5]] circuit.cx[qubits[3],qubits[6]] circuit.cx[qubits[3],qubits[7]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[7],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[4]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.cx[qubits[3],qubits[4]] circuit.cx[qubits[3],qubits[5]] circuit.cx[qubits[3],qubits[6]] circuit.cx[qubits[3],qubits[7]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[7],qubits[6]] 
circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[4]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.cx[qubits[3],qubits[4]] circuit.cx[qubits[3],qubits[5]] circuit.cx[qubits[3],qubits[6]] circuit.cx[qubits[3],qubits[7]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[7],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[4]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.cx[qubits[3],qubits[4]] circuit.cx[qubits[3],qubits[5]] circuit.cx[qubits[3],qubits[6]] circuit.cx[qubits[3],qubits[7]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[7],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[4]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.cx[qubits[3],qubits[4]] circuit.cx[qubits[3],qubits[5]] circuit.cx[qubits[3],qubits[6]] circuit.cx[qubits[3],qubits[7]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[7],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[7]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[6],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[6]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.ccx[qubits[3],qubits[5],qubits[4]] circuit.ccx[qubits[3],qubits[4],qubits[5]] circuit.cx[qubits[3],qubits[4]] circuit.cx[qubits[3],qubits[5]] circuit.cx[qubits[3],qubits[6]] circuit.cx[qubits[3],qubits[7]] # Now we build the Shor 
Circuit!
circuit = Circuit(8)
register1 = list(range(4))
register2 = list(range(4,8))
qubits = register1 + register2

for i in register1:
    circuit.h[i]

apply_U_7_mod15(circuit, qubits)
apply_qft_dagger(circuit, register1)

# Run and measure the outcome from the first register.
# In the theory we measure the second register first to simplify the calculations,
# but that is unnecessary for the final outcome.
circuit.m[0:4].run(shots=10000)
```

The states that have been amplified are $\lvert 0000\rangle, \lvert 0100\rangle, \lvert 1000\rangle, \lvert 1100\rangle$. These represent $0,4,8,12$ in decimal. Hence, $k_0/r = B/16$ can take the values $0, \frac 14, \frac 12, \frac 34$, leading to the guesses $1,2$ or $4$ for the order. It is clear that $7^1, 7^2$ are not $1 \mbox{ mod }15$, and therefore we conclude that $4$ is the order of $7$ modulo $15$!

### Case 2: What if the order isn't a power of 2?

If not, recall that we chose $t$ such that $N^2\le 2^t < 2N^2$ (equivalent to saying $t=\lceil2 \log_2 N\rceil$). We now work out an example of Shor's Algorithm for $N=21$, and demonstrate the generalization. Let us select $x=2$, which is coprime to $N$. Then, we observe that

$$\begin{aligned}\varphi_1 &= \frac 1{\sqrt{512}}\sum_{j=0}^{511}\lvert j\rangle \lvert 0\rangle, \\ \varphi_2 & = \frac 1{\sqrt{512}}\sum_{j=0}^{511}\lvert j\rangle \lvert 2^j\mbox{ mod }N\rangle \\ & = \frac 1{\sqrt{512}}\left[\psi_0\lvert 1\rangle + \psi_1\lvert 2\rangle + \psi_2\lvert 4\rangle + \psi_3\lvert 8\rangle + \psi_{4}\lvert 16\rangle + \psi_{5}\lvert 11\rangle\right],\end{aligned}$$

where $\psi_0 = \lvert 0\rangle + \lvert 6\rangle + \cdots + \lvert 510\rangle$ is the superposition of all states $\equiv 0\mbox{ mod }6$, $\psi_1$ of those $\equiv 1\mbox{ mod }6$, and so on.
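The grouping into $\psi_0,\ldots,\psi_5$ can be verified classically. A quick sketch, using $N=21$ and $x=2$ from this example:

```python
# Classical sanity check: the residues 2^j mod 21 repeat with period 6,
# which is exactly why the states split into psi_0, ..., psi_5.
N, x = 21, 2

residues = [pow(x, j, N) for j in range(12)]
print(residues)                  # [1, 2, 4, 8, 16, 11, 1, 2, 4, 8, 16, 11]

# The order r of 2 modulo 21 is the period of this sequence:
r = next(j for j in range(1, N) if pow(x, j, N) == 1)
print("order of 2 mod 21:", r)   # order of 2 mod 21: 6
```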
Now suppose we measure $4$ in the second register; then

$$\varphi_3 = \frac1{\sqrt{86}} \sum_{a=0}^{85} \lvert 6a+2\rangle,$$

implying

$$\varphi_4 = \frac 1{\sqrt{512}}\sum_{j=0}^{511}\left[\left(\frac{1}{\sqrt{86}}\sum_{a=0}^{85}\exp\left(\frac{-2\pi i \cdot 6 j a}{512}\right)\right)\exp\left(\frac{-2\pi i\cdot 2j}{512}\right)\lvert j\rangle\right]\lvert 4\rangle.$$

Then, the probability of measuring state $j$ is

$$P(j) = \frac{1}{512\cdot 86}\left\lvert\sum_{a=0}^{85}\exp\left(\frac{-2\pi i \cdot 6 j a}{512}\right)\right\rvert^2.$$

Let's plot $P(j)$ versus $j$.

```
import matplotlib.pyplot as plt

N = 512
P = []
for j in range(N):
    s = 0
    for a in range(86):
        theta = -2*np.pi*6*j*a/float(512)
        s += complex(np.cos(theta), np.sin(theta))
    P.append((s.real ** 2 + s.imag ** 2)/float(86*512))

# Plot P(j) against j
plt.plot(range(N), P)
plt.xlim(0, N)
plt.xlabel('j')
plt.ylabel('P(j)')
plt.show()
```

We see peaks (including one at $j=0$) achieved at the following points:

```
peaks = [i for i in range(N) if P[i] > 0.05]
print(peaks)
```

If we measure $0$, we are out of luck, and have to re-run the algorithm. Suppose instead we measure $B=85$. Then $\frac B{512}=\frac{85}{512}$, and we have to figure out $r$ from here. Note that $\frac{85}{512}$ is a rational approximation of $\frac{k_0}{r}$, so we can use the continued fractions method to recover the denominator $r$! We do this as follows:

```
import pandas as pd
import fractions

rows = []
for i in peaks:
    # limit_denominator uses continued fractions; denominators are bounded by N = 21, since r < N
    f = fractions.Fraction(i/512).limit_denominator(21)
    rows.append([i, f.denominator])
print(pd.DataFrame(rows, columns=["Peak", "Guess for r"]))
```

If we guess $r=6$, we're done, as that is the order of $2$ modulo $21$! However, if we stumble upon $2$ or $3$, we can easily check that $2^2$ and $2^3$ are not $1\mbox{ mod }21$, and continue running the algorithm recursively to find the order of $2^2$ or $2^3$, respectively.

## Conclusion

This gives us a complete picture of how Shor's algorithm works.
One final remark: the more qubits $t$ we reserve in the first register, the better the accuracy of the algorithm becomes, since the peaks grow sharper.

## Further Reading and References

1. [Prof. Bernhard Ömer's Webpage](http://tph.tuwien.ac.at/~oemer/doc/quprog/node18.html)
2. [Markov-Saeedi: "Constant-Optimized Quantum Circuits for Modular Multiplication and Exponentiation"](https://arxiv.org/pdf/1202.6614.pdf)
3. [Quirk Circuit for Shor's Algorithm](tinyurl.com/8awfhrkd)
4. [Wikipedia page on Shor's Algorithm](https://en.wikipedia.org/wiki/Shor%27s_algorithm)
5. [IBM Composer Guide on Shor's Algorithm (in qiskit)](https://quantum-computing.ibm.com/composer/docs/iqx/guide/shors-algorithm)
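To tie Parts I and II together, the classical outer loop from Part I can be sketched in Python. Here `order_bruteforce` is a hypothetical stand-in for the quantum order-finding subroutine $A(x,N)$ — it takes exponential time classically, and replacing it is precisely what Shor's circuit achieves. The sketch assumes $N$ is composite and not a prime power:

```python
from math import gcd
import random

def order_bruteforce(x, N):
    # Hypothetical stand-in for the quantum subroutine A(x, N):
    # the smallest r >= 1 with x^r = 1 mod N (exponential time classically).
    r, y = 1, x % N
    while y != 1:
        y = (y * x) % N
        r += 1
    return r

def factorize(N):
    # Classical outer loop from Part I (assumes N is composite, not a prime power).
    while True:
        x = random.randrange(2, N)
        d = gcd(x, N)
        if d != 1:
            return d                      # lucky: x already shares a factor with N
        r = order_bruteforce(x, N)
        if r % 2 == 0:
            for y in (pow(x, r // 2, N) - 1, pow(x, r // 2, N) + 1):
                d = gcd(y, N)
                if 1 < d < N:
                    return d
        # odd order or trivial gcd: pick another x and repeat

print(factorize(15), "divides 15")        # prints 3 or 5
print(factorize(21), "divides 21")        # prints 3 or 7
```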
<a href="https://colab.research.google.com/github/cyberboysumanjay/RcloneLab/blob/master/RcloneLab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> #### 📚 For more information please visit our [GitHub](https://github.com/cyberboysumanjay/RcloneLab/). # ![](https://cyberboysumanjay.github.io/RcloneLab/img/title_rclonelab.png =x45) ``` #@markdown <h3>📝 Note: Run this before use RcloneLab.</h3> Setup_Time_Zone = False #@param {type:"boolean"} import os; from google.colab import files; from IPython.display import HTML, clear_output def upload_conf(): try: display(HTML("<h2 style=\"font-family:Trebuchet MS;color:#446785;\">Please upload the config file of rclone (rclone.conf) from your computer.</h2><br>")) for fn in files.upload().keys(): upload_conf = "{name}".format(name=fn) os.environ["rclone_conf"] = upload_conf if os.path.isfile("/content/" + upload_conf) == True: !mv -f $rclone_conf /root/.rclone.conf !chmod 666 /root/.rclone.conf clear_output() if Setup_Time_Zone == True: !sudo dpkg-reconfigure tzdata clear_output() return True else: return False except: clear_output() return False if os.path.isfile("/usr/bin/rclone") == True: if upload_conf() == True: display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#00b24c;\">Config has been changed.</h2><br></center>")) else: display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#ce2121;\">File upload has been cancelled during upload file.</h2><br></center>")) else: if upload_conf() == True: !rm -rf /content/sample_data/ !curl -s https://rclone.org/install.sh | sudo bash clear_output() display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#446785;\">Installation has been successfully completed.</h2><br></center>")) else: display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#ce2121;\">File upload has been cancelled during upload file.</h2><br></center>")) # ============================= FORM 
============================= # Mode = "Config" #@param ["Config", "Copy", "Move", "Sync", "Checker", "Deduplicate", "Remove Empty Directories", "Empty Trash", "qBittorrent", "rTorrent"] Compare = "Size & Mod-Time" #@param ["Size & Mod-Time", "Size & Checksum", "Only Mod-Time", "Only Size", "Only Checksum"] Source = "" #@param {type:"string"} Destination = "" #@param {type:"string"} Transfers = 10 #@param {type:"slider", min:1, max:20, step:1} Checkers = 20 #@param {type:"slider", min:1, max:40, step:1} #@markdown --- #@markdown <center><h3><font color="#3399ff"><b>⚙️ Global Configuration ⚙️</b></font></h3></center> Simple_Ouput = True #@param {type:"boolean"} Skip_files_that_are_newer_on_the_destination = False #@param {type:"boolean"} Skip_all_files_that_exist = False #@param {type:"boolean"} Do_not_cross_filesystem_boundaries = False Do_not_update_modtime_if_files_are_identical = False #@param {type:"boolean"} Large_amount_of_files_optimization = False Google_Drive_optimization = False #@param {type:"boolean"} Dry_Run = False #@param {type:"boolean"} Output_Log_File = "OFF" #@param ["OFF", "NOTICE", "INFO", "ERROR", "DEBUG"] Extra_Arguments = "" #@param {type:"string"} #@markdown <center><h3><font color="#3399ff"><b>↪️ Sync Configuration ↩️</b></font></h3></center> Sync_Mode = "Delete during transfer" #@param ["Delete during transfer", "Delete before transfering", "Delete after transfering"] Track_Renames = False #@param {type:"boolean"} #@markdown <center><h3><font color="#3399ff"><b>💞 Deduplicate Configuration 💞</b></font></h3></center> Deduplicate_Mode = "Interactive" #@param ["Interactive", "Skip", "First", "Newest", "Oldest", "Largest", "Rename"] Deduplicate_Use_Trash = True #@param {type:"boolean"} # ================================================================ # ### Variable Declaration # Optimized for Google Colaboratory os.environ["bufferC"] = "--buffer-size 96M" if Compare == "Size & Checksum": os.environ["compareC"] = "-c" elif Compare == "Only 
Mod-Time": os.environ["compareC"] = "--ignore-size" elif Compare == "Only Size": os.environ["compareC"] = "--size-only" elif Compare == "Only Checksum": os.environ["compareC"] = "-c --ignore-size" else: os.environ["compareC"] = "" os.environ["sourceC"] = Source os.environ["destinationC"] = Destination os.environ["transfersC"] = "--transfers "+str(Transfers) os.environ["checkersC"] = "--checkers "+str(Checkers) if Skip_files_that_are_newer_on_the_destination == True: os.environ["skipnewC"] = "-u" else: os.environ["skipnewC"] = "" if Skip_all_files_that_exist == True: os.environ["skipexistC"] = "--ignore-existing" else: os.environ["skipexistC"] = "" if Do_not_cross_filesystem_boundaries == True: os.environ["nocrossfilesystemC"] = "--one-file-system" else: os.environ["nocrossfilesystemC"] = "" if Do_not_update_modtime_if_files_are_identical == True: os.environ["noupdatemodtimeC"] = "--no-update-modtime" else: os.environ["noupdatemodtimeC"] = "" if Large_amount_of_files_optimization == True: os.environ["filesoptimizeC"] = "--fast-list" else: os.environ["filesoptimizeC"] = "" if Google_Drive_optimization == True: os.environ["driveoptimizeC"] = "--drive-chunk-size 32M --drive-acknowledge-abuse --drive-keep-revision-forever" else: os.environ["driveoptimizeC"] = "" if Dry_Run == True: os.environ["dryrunC"] = "-n" else: os.environ["dryrunC"] = "" if Output_Log_File != "OFF": os.environ["statsC"] = "--log-file=/root/.rclone_log/rclone_log.txt" else: if Simple_Ouput == True: os.environ["statsC"] = "-v --stats-one-line --stats=5s" else: os.environ["statsC"] = "-v --stats=5s" if Output_Log_File == "INFO": os.environ["loglevelC"] = "--log-level INFO" elif Output_Log_File == "ERROR": os.environ["loglevelC"] = "--log-level ERROR" elif Output_Log_File == "DEBUG": os.environ["loglevelC"] = "--log-level DEBUG" else: os.environ["loglevelC"] = "" os.environ["extraC"] = Extra_Arguments if Sync_Mode == "Delete during transfer": os.environ["syncmodeC"] = "--delete-during" elif Sync_Mode 
== "Delete before transfering": os.environ["syncmodeC"] = "--delete-before" elif Sync_Mode == "Delete after transfering": os.environ["syncmodeC"] = "--delete-after" if Track_Renames == True: os.environ["trackrenamesC"] = "--track-renames" else: os.environ["trackrenamesC"] = "" if Deduplicate_Mode == "Interactive": os.environ["deduplicateC"] = "interactive" elif Deduplicate_Mode == "Skip": os.environ["deduplicateC"] = "skip" elif Deduplicate_Mode == "First": os.environ["deduplicateC"] = "first" elif Deduplicate_Mode == "Newest": os.environ["deduplicateC"] = "newest" elif Deduplicate_Mode == "Oldest": os.environ["deduplicateC"] = "oldest" elif Deduplicate_Mode == "Largest": os.environ["deduplicateC"] = "largest" elif Deduplicate_Mode == "Rename": os.environ["deduplicateC"] = "rename" if Deduplicate_Use_Trash == True: os.environ["deduplicatetrashC"] = "" else: os.environ["deduplicatetrashC"] = "--drive-use-trash=false" ### rclone Execution if Output_Log_File != "OFF" and Mode != "Config": !mkdir -p -m 666 /root/.rclone_log/ display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#ce2121;\">Logging enables, rclone will not output log through the terminal, please wait until finished.</h2><br></center>")) if Mode == "Config": !rclone --config=/root/.rclone.conf config elif Mode == "Copy": !rclone --config=/root/.rclone.conf copy "$sourceC" "$destinationC" $transfersC $checkersC $statsC $loglevelC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC elif Mode == "Move": !rclone --config=/root/.rclone.conf move "$sourceC" "$destinationC" $transfersC $checkersC $statsC $loglevelC --delete-empty-src-dirs $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC elif Mode == "Sync": !rclone --config=/root/.rclone.conf sync "$sourceC" "$destinationC" $transfersC $checkersC $statsC $loglevelC $syncmodeC $trackrenamesC 
$compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC elif Mode == "Checker": !rclone --config=/root/.rclone.conf check "$sourceC" "$destinationC" $checkersC $statsC $loglevelC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC elif Mode == "Deduplicate": !rclone --config=/root/.rclone.conf dedupe "$sourceC" $checkersC $statsC $loglevelC --dedupe-mode $deduplicateC $deduplicatetrashC $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC elif Mode == "Remove Empty Directories": !rclone --config=/root/.rclone.conf rmdirs "$sourceC" $statsC $loglevelC $dryrunC $extraC elif Mode == "Empty Trash": !rclone --config=/root/.rclone.conf cleanup "$sourceC" $statsC $loglevelC $dryrunC $extraC elif Mode == "qBittorrent": !chmod -R 666 /content/qBittorrent/ !rclone --config=/root/.rclone.conf move "/content/qBittorrent/" "$destinationC" $transfersC $checkersC $statsC $loglevelC --delete-empty-src-dirs --exclude /favicon.ico $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC elif Mode == "rTorrent": !chmod -R 666 /content/rTorrent/ !rclone --config=/root/.rclone.conf move "/content/rTorrent/" "$destinationC" $transfersC $checkersC $statsC $loglevelC --delete-empty-src-dirs $compareC $skipnewC $skipexistC $nocrossfilesystemC $noupdatemodtimeC $bufferC $filesoptimizeC $driveoptimizeC $dryrunC $extraC ### Log Output if Output_Log_File != "OFF" and Mode != "Config": ### Rename log file and output settings. 
!mv /root/.rclone_log/rclone_log.txt /root/.rclone_log/rclone_log_$(date +%Y-%m-%d_%H.%M.%S).txt with open("/root/.rclone_log/" + Mode + "_settings.txt", "w") as f: f.write("Mode: " + Mode + \ "\nCompare: " + Compare + \ "\nSource: \"" + Source + \ "\"\nDestination: \"" + Destination + \ "\"\nTransfers: " + str(Transfers) + \ "\nCheckers: " + str(Checkers) + \ "\nSkip files that are newer on the destination: " + str(Skip_files_that_are_newer_on_the_destination) + \ "\nSkip all files that exist: " + str(Skip_all_files_that_exist) + \ "\nDo not cross filesystem boundaries: " + str(Do_not_cross_filesystem_boundaries) + \ "\nDo not update modtime if files are identical: " + str(Do_not_update_modtime_if_files_are_identical) + \ "\nDry-Run: " + str(Dry_Run) + \ "\nOutput Log Level: " + Output_Log_File + \ "\nExtra Arguments: \"" + Extra_Arguments + \ "\"\nSync Moden: " + Sync_Mode + \ "\nTrack Renames: " + str(Track_Renames) + \ "\nDeduplicate Mode: " + Deduplicate_Mode + \ "\nDeduplicate Use Trash: " + str(Deduplicate_Use_Trash)) ### Compressing log file. !rm -f /root/rclone_log.zip !zip -r -q -j -9 /root/rclone_log.zip /root/.rclone_log/ !rm -rf /root/.rclone_log/ !mkdir -p -m 666 /root/.rclone_log/ ### Send Log if os.path.isfile("/root/rclone_log.zip") == True: try: files.download("/root/rclone_log.zip") !rm -f /root/rclone_log.zip display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#446785;\">Sending log to your browser...</h2><br></center>")) except: !mv /root/rclone_log.zip /content/rclone_log_$(date +%Y-%m-%d_%H.%M.%S).zip display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#446785;\">You can use file explorer to download the log file.</h2><br><img src=\"https://cyberboysumanjay.github.io/RcloneLab/res/rclonelab/01.png\"><br></center>")) else: clear_output() display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#ce2121;\">There is no log file.</h2><br></center>")) ### Operation has been successfully completed. 
if Mode != "Config": display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#00b24c;\">✅ Operation has been successfully completed.</h2><br></center>")) ``` # ![qBittorrent](https://cyberboysumanjay.github.io/RcloneLab/img/title_qbittorrent.png =x45) ``` # ============================= FORM ============================= # #@markdown <h3>📝 Note: Run this cell and relax.</h3> Install_File_Server = False #@param {type:"boolean"} # ================================================================ # import os, string, random, psutil, IPython, uuid import ipywidgets as widgets from IPython.display import HTML, clear_output from google.colab import output SuccessRun = widgets.Button( description='✔ Successfully', disabled=True, button_style='success' ) UnsuccessfullyRun = widgets.Button( description='✘ Unsuccessfully', disabled=True, button_style='danger' ) class MakeButton(object): def __init__(self, title, callback): self._title = title self._callback = callback def _repr_html_(self): callback_id = 'button-' + str(uuid.uuid4()) output.register_callback(callback_id, self._callback) template = """<button class="p-Widget jupyter-widgets jupyter-button widget-button mod-info" id="{callback_id}">{title}</button> <script> document.querySelector("#{callback_id}").onclick = (e) => {{ google.colab.kernel.invokeFunction('{callback_id}', [], {{}}) e.preventDefault(); }}; </script>""" html = template.format(title=self._title, callback_id=callback_id) return html def RandomGenerator(size=4, chars=string.digits): return "".join(random.choice(chars) for x in range(size)) def CheckProcess(processName): for proc in psutil.process_iter(): try: if processName.lower() in proc.name().lower(): return True except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess): pass return False; def AutoSSH(name,port): get_ipython().system_raw("autossh -l " + name + " -M 0 -fNT -o 'StrictHostKeyChecking=no' -o 'ServerAliveInterval 300' -o 'ServerAliveCountMax 30' -R 80:localhost:" 
+ port + " ssh.localhost.run &") get_ipython().system_raw("autossh -M 0 -fNT -o 'StrictHostKeyChecking=no' -o 'ServerAliveInterval 300' -o 'ServerAliveCountMax 30' -R " + name + ":80:localhost:" + port + " serveo.net &") def File_Server(): if os.path.isfile("/content/qBittorrent/favicon.ico") == False: !wget -q https://cyberboysumanjay.github.io/RcloneLab/res/qbittorrent/favicon.ico -O /content/qBittorrent/favicon.ico !chmod 666 /content/qBittorrent/favicon.ico if os.path.isdir("/tools/node/lib/node_modules/http-server/") == True: get_ipython().system_raw("http-server /content/qBittorrent/ -p 7007 -i false -s -c-1 --no-dotfiles &") else: !npm install -g http-server get_ipython().system_raw("http-server /content/qBittorrent/ -p 7007 -i false -s -c-1 --no-dotfiles &") AutoSSH("fs" + Random_Number, "7007") def Start_Server(): if CheckProcess("qbittorrent-nox") == False: !qbittorrent-nox -d --webui-port=6006 if Install_File_Server == True: File_Server() try: try: Random_Number except NameError: !rm -rf /content/sample_data/ !apt update -qq -y !npm i -g npm !yes "" | add-apt-repository ppa:qbittorrent-team/qbittorrent-stable !apt install qbittorrent-nox -qq -y if os.path.isfile("/usr/bin/autossh") == False: !apt install autossh -qq -y !mkdir -p -m 666 /{content/qBittorrent,root/{.qBittorrent_temp,.config/qBittorrent}} !wget -q https://cyberboysumanjay.github.io/RcloneLab/res/qbittorrent/qBittorrent.conf -O /root/.config/qBittorrent/qBittorrent.conf Random_Number = RandomGenerator() AutoSSH("qb" + Random_Number, "6006") Start_Server() clear_output() display(SuccessRun) display(MakeButton("Recheck", Start_Server)) if Install_File_Server == True: display(HTML("<h2 style=\"font-family:Trebuchet MS;color:#446785;\">File Server</h2><h4 style=\"font-family:Trebuchet MS;color:#446785;\"><a style=\"font-family:Trebuchet MS;color:#356ebf;\" href=\"https://fs" + Random_Number + ".localhost.run\" target=\"_blank\">Website 1</a><br>" \ "<a style=\"font-family:Trebuchet 
MS;color:#356ebf;\" href=\"https://fs" + Random_Number + ".serveo.net\" target=\"_blank\">Website 2</a></h3>")) display(HTML("<h2 style=\"font-family:Trebuchet MS;color:#446785;\">qBittorrent</h2><h4 style=\"font-family:Trebuchet MS;color:#446785;\"><a style=\"font-family:Trebuchet MS;color:#356ebf;\" href=\"http://qb" + Random_Number + ".localhost.run\" target=\"_blank\">Website 1</a><br>" \ "<a style=\"font-family:Trebuchet MS;color:#356ebf;\" href=\"http://qb" + Random_Number + ".serveo.net\" target=\"_blank\">Website 2</a><br>" \ "Username: rclonelab<br>Password: rclonelab</h4><br>")) except: clear_output() display(UnsuccessfullyRun) ``` # ![rTorrent](https://cyberboysumanjay.github.io/RcloneLab/img/title_rtorrent_flood.png =x45) ``` # ============================= FORM ============================= # #@markdown <h3>📝 Note: Run this cell and relax.</h3> # ================================================================ # import os, string, random, psutil, IPython, uuid import ipywidgets as widgets from IPython.display import HTML, clear_output from google.colab import output SuccessRun = widgets.Button( description='✔ Successfully', disabled=True, button_style='success' ) UnsuccessfullyRun = widgets.Button( description='✘ Unsuccessfully', disabled=True, button_style='danger' ) class MakeButton(object): def __init__(self, title, callback): self._title = title self._callback = callback def _repr_html_(self): callback_id = 'button-' + str(uuid.uuid4()) output.register_callback(callback_id, self._callback) template = """<button class="p-Widget jupyter-widgets jupyter-button widget-button mod-info" id="{callback_id}">{title}</button> <script> document.querySelector("#{callback_id}").onclick = (e) => {{ google.colab.kernel.invokeFunction('{callback_id}', [], {{}}) e.preventDefault(); }}; </script>""" html = template.format(title=self._title, callback_id=callback_id) return html def random_generator(size=4, chars=string.digits): return "".join(random.choice(chars) for 
x in range(size)) def CheckProcess(processName): for proc in psutil.process_iter(): try: if processName.lower() in proc.name().lower(): return True except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess): pass return False; def Start_Server(): if CheckProcess("screen") == False or CheckProcess("rtorrent main") == False: !/usr/bin/screen -d -m -fa -S rtorrent /usr/bin/rtorrent get_ipython().system_raw("NODE_ENV=production npm start --prefix /root/.Flood/ &") try: if os.path.isfile("/usr/bin/rtorrent") == False: !rm -rf /content/sample_data/ !apt update -qq -y !npm i -g npm !apt install rtorrent screen -qq -y if os.path.isfile("/usr/bin/autossh") == False: !apt install autossh -qq -y !mkdir -p -m 666 /{content/rTorrent/,root/.rTorrent_session} !mkdir -p -m 777 /root/.Flood/ !wget -q https://cyberboysumanjay.github.io/RcloneLab/res/rtorrent/rtorrent.rc -O /root/.rtorrent.rc !chmod 666 /root/.rtorrent.rc !git clone https://github.com/cyberboysumanjay/FloodLab.git /root/.Flood/ rT_Link = "rt" + random_generator() get_ipython().system_raw("autossh -M 0 -fNT -o 'StrictHostKeyChecking=no' -o 'ServerAliveInterval 300' -o 'ServerAliveCountMax 30' -R " + rT_Link + ":80:localhost:3000 serveo.net &") get_ipython().system_raw("autossh -l " + rT_Link + " -M 0 -fNT -o 'StrictHostKeyChecking=no' -o 'ServerAliveInterval 300' -o 'ServerAliveCountMax 30' -R 80:localhost:3000 ssh.localhost.run &") Start_Server() clear_output() display(SuccessRun) display(MakeButton("Recheck", Start_Server)) display(HTML("<h2 style=\"font-family:Trebuchet MS;color:#446785;\">rTorrent<sup><sup><font size=\"1\">+Flood</font></sup></sup></h2><h4 style=\"font-family:Trebuchet MS;color:#446785;\"><a style=\"font-family:Trebuchet MS;color:#356ebf;\" href=\"https://" + rT_Link + ".localhost.run\" target=\"_blank\">Website 1</a><br>" \ "<a style=\"font-family:Trebuchet MS;color:#356ebf;\" href=\"https://" + rT_Link + ".serveo.net\" target=\"_blank\">Website 2</a><br>IP: 127.0.0.1<br>Port: 
5000</h4><br>")) except: clear_output() display(UnsuccessfullyRun) ``` # ![Utility](https://cyberboysumanjay.github.io/RcloneLab/img/title_utility.png =x45) ``` # ============================= FORM ============================= # #@markdown <h3>Ubuntu Virtual Machine Updater</h3> # ================================================================ # from IPython.display import HTML, clear_output !apt update -qq -y !apt upgrade -qq -y !npm i -g npm clear_output() display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#446785;\">An update has been successfully completed.</h2><br></center>")) # ============================= FORM ============================= # #@markdown <h3>Check VM's Status</h3> Loop_Check = False #@param {type:"boolean"} Loop_Interval = 5 #@param {type:"slider", min:1, max:15, step:1} # ================================================================ # import time from IPython.display import clear_output Loop = True try: while Loop == True: clear_output() !top -b -n1 if Loop_Check == False: Loop = False else: time.sleep(Loop_Interval) except: clear_output() # ============================= FORM ============================= # #@markdown <h3>Get VM's Specification</h3> Output_Format = "TEXT" #@param ["TEXT", "HTML", "XML", "JSON"] Short_Output = False #@param {type:"boolean"} # ================================================================ # import os from google.colab import files from IPython.display import HTML, clear_output try: Output_Format_Ext except NameError: !apt install lshw -qq -y clear_output() if Short_Output == True: os.environ["outputformatC"] = "txt" os.environ["outputformat2C"] = "-short" Output_Format_Ext = "txt" elif Output_Format == "TEXT": os.environ["outputformatC"] = "txt" os.environ["outputformat2C"] = "" Output_Format_Ext = "txt" else: os.environ["outputformatC"] = Output_Format.lower() os.environ["outputformat2C"] = "-"+Output_Format.lower() Output_Format_Ext = Output_Format.lower() !lshw $outputformat2C > 
Specification.$outputformatC files.download("/content/Specification." + Output_Format_Ext) !rm -f /content/Specification.$outputformatC display(HTML("<center><h2 style=\"font-family:Trebuchet MS;color:#446785;\">Sending log to your browser...</h2><br></center>")) ```
# fastai and the New DataBlock API > A quick glance at the new top-level api - toc: true - badges: true - comments: true - image: images/chart-preview.png - category: DataBlock --- This blog is also a Jupyter notebook available to run from the top down. There will be code snippets that you can then run in any environment. In this section I will be posting what version of `fastai2` and `fastcore` I am currently running at the time of writing this: * `fastai2`: 0.0.13 * `fastcore`: 0.1.15 --- ## What is the `DataBlock` API? The `DataBlock` API is certainly nothing new to `fastai`. It was here in a lesser form in the previous version, and the start of an *idea*. This idea was: "How do we let the users of the `fastai` library build `DataLoaders` in a way that is simple enough that someone with minimal coding knowledge could get the hang of it, but be advanced enough to allow for exploration." The old version was a struggle to do this from a high-level API standpoint, as you were very limited in what you could do: variables must be passed in a particular order, the error checking wasn't very explanatory (to those unaccustomed to debugging issues), and while the general idea seemed to flow, sometimes it didn't quite work well enough. For our first example, we'll look at the Pets dataset and compare it from `fastai` version 1 to `fastai` version 2 The `DataBlock` itself is built on "building blocks", think of them as legos. (For more information see [fastai: A Layered API for Deep Learning](https://arxiv.org/abs/2002.04688)) They can go in any order but together they'll always build something. 
Our lego bricks go by these general names: * `blocks` * `get_items` * `get_x`/`get_y` * `getters` * `splitter` * `item_tfms` * `batch_tfms` We'll be exploring each one more closely throughout this series, so we won't hit on all of them today. ## Importing from the library The library itself is still split up into modules, similar to the first version where we have Vision, Text, and Tabular. To import from these libraries, we'll be calling their `.all` files. Our example problem for today will involve Computer Vision so we will call from the `.vision` library ``` from fastai2.vision.all import * ``` ## Pets Pets is a dataset in which you try to identify one of 37 different breeds of cats and dogs. To get the dataset, we're going to use functions very familiar to those that used fastai version 1. We'll use `untar_data` to grab the dataset we want. In our case, the Pets dataset lives in `URLs.PETS` ``` URLs.PETS path = untar_data(URLs.PETS) ``` ### Looking at the dataset When starting to look at adapting the API for a particular problem, we need to know just *how* the data is stored. We have an image problem here so we can use the `get_image_files` function to go grab all the file locations of our images and we can look at the data! ``` fnames = get_image_files(path/'images') ``` To investigate how the files are named and where they are located, let's look at the first one: ``` fnames[0] ``` Now as `get_image_files` grabs the filename of our `x` for us, we don't need to include our `get_x` here (which defaults to `None`) as we just want to use this filepath! Now onto our file paths and how they relate to our labels. If we look at our returned path, this particular image has the class of **pug**. Where do I see that? **Here**: Path('/root/.fastai/data/oxford-iiit-pet/images/**pug**_119.jpg') All the images follow this same format, and we can use a [Regular Expression](https://www.rexegg.com/) to get it out.
In our case, it would look something like so: ``` pat = r'([^/]+)_\d+.*$' ``` How do we know it worked? Let's apply it to the first file path real quick with `re.search`, where we pass in the pattern followed by an item to try and find a match in the first group (set of matches) with a Regular Expression ``` re.search(pat, str(fnames[0])).group(1) ``` We have our label! So what parts do we have so far? We know how to grab our items (`get_items` and `get_x`) and our labels (`get_y`), so what's left? Well, we'll want some way to split our data and our data augmentation. Let's focus on the former. ### Splitting and Augmentation Any time we train a model, the data must be split between a training and validation dataset. The general idea is that the training dataset is what the model adjusts and fits its weights to, while the validation set is for us to understand how the model is performing. `fastai2` has a family of split functions to look at that will slowly get covered throughout these blogs. For today we'll randomly split our data so 80% goes into our training set and 20% goes into the validation. We can utilize `RandomSplitter` to do so by passing in a percentage to split by, and optionally a seed as well to get the same validation split on multiple runs ``` splitter = RandomSplitter(valid_pct=0.2, seed=42) ``` How is this splitter applied? The splitter itself is a function that we can then apply over some set of data or numbers (an array). It works off of indexes. What does that look like? Let's see: ``` splitter(fnames) ``` That doesn't look like filenames! Correct, instead it's the **location** in our list of filenames and what group it belongs to. What this special looking list (or `L`) also tells us is how many *items* are in each list. In this example, the first (which is our training data) has 5,912 samples and the second (which is our validation) contains 1,478 samples. Now let's move on to the augmentation.
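As an aside, the index split that `RandomSplitter` returns can be mimicked in a few lines of plain Python. This is a sketch of the idea only, not fastai's implementation; the 7,390-item list of fake filenames below is a stand-in for the Pets file list (5,912 + 1,478 items):

```python
import random

def random_splitter(items, valid_pct=0.2, seed=42):
    """Shuffle the indices, then carve off the first valid_pct as validation."""
    idxs = list(range(len(items)))
    random.Random(seed).shuffle(idxs)
    cut = int(valid_pct * len(items))
    return idxs[cut:], idxs[:cut]  # (train indices, valid indices)

fnames = [f"img_{i}.jpg" for i in range(7390)]  # stand-in for the Pets file list
train_idx, valid_idx = random_splitter(fnames)
print(len(train_idx), len(valid_idx))  # 5912 1478
```

Because the split is made of indices rather than copies of the items, the same pair of lists can be used to index the filenames, the labels, or anything else of matching length.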
As noted earlier, there are two kinds: `item_tfms` and `batch_tfms`. Each does what it sounds like: an item transform is applied on an individual item basis, and a batch transform is applied over each batch of data. The role of the item transform is to prepare everything for a batch level (and to apply any specific item transformations you need), and the batch transform is to further apply any augmentations on the batch level efficiently (normalization of your data also happens on a batch level). One of the **biggest** differences between the two though is *where* each is done. Item transforms are done on the **CPU** while batch transforms are performed on the **GPU**. Now that we know this, let's build a basic transformation pipeline that looks something like so: 1. Resize our images to a fixed size (224x224 pixels) 2. After they are batched together, choose a quick basic augmentation function 3. Normalize all of our image data Let's build it! ``` item_tfms = [Resize(224, method='crop')] batch_tfms = [*aug_transforms(size=256), Normalize.from_stats(*imagenet_stats)] ``` Woah, woah, woah, what in the world is this `aug_transforms` thing you just showed me, I hear you ask? It runs a series of augmentations similar to the `get_transforms()` from version 1. The entire list is quite exhaustive and we'll discuss it in a later blog, but for now know we can pass in an image size to resize our images to (we'll make our images a bit larger, doing 256x256). Alright, we know how we want to get our data, how to label it, split it, and augment it, what's left? That `block` bit I mentioned before. ### The `Block` `Block`s are used to help nest transforms inside of pre-defined problem domains. Lazy-man's explanation?
If it's an image problem I can tell the library to use `Pillow` without explicitly saying it, or if we have a Bounding Box problem I can tell the DataBlock to expect two coordinates for boxes and to apply the transforms for points, again without explicitly saying these transforms. What will we use today? Well let's think about our problem: we are using an image for our `x`, and our labels (or `y`'s) are some category. Are there blocks for this? Yes! And they're labeled `ImageBlock` and `CategoryBlock`! Remember how I said it just "made more sense?" This is a direct example. Let's define them: ``` blocks = (ImageBlock, CategoryBlock) ``` ## Now let's build this `DataBlock` thing already! Alright, we have all the pieces now, let's see how they fit together. We'll wrap them all up in a nice little package of a `DataBlock`. Think of the `DataBlock` as a list of instructions to do when we're building batches and our `DataLoaders`. It doesn't need any items explicitly to be done, and instead is a blueprint of how to operate. We define it like so: ``` block = DataBlock(blocks=blocks, get_items=get_image_files, get_y=RegexLabeller(pat), splitter=splitter, item_tfms=item_tfms, batch_tfms=batch_tfms) ``` Once we have our `DataBlock`, we can build some `DataLoaders` off of it. To do so we simply pass in a source for our data that our `DataBlock` would be expecting, specifically our `get_x` and `get_y`, so we'll follow the same idea we did above to get our filenames and pass in a path to the folder we want to use along with a batch size: ``` dls = block.dataloaders(path, bs=64) ``` While it's a bit long, you can understand why we had to define everything the way that we did.
If you're used to how fastai v1 looked with the `ImageDataBunch.from_x`, well this is still here too: ``` dls = ImageDataLoaders.from_name_re(path, fnames, pat, item_tfms=item_tfms, batch_tfms=batch_tfms, bs=64) ``` I'm personally a much larger fan of the first example, and if you're planning on using the library quite a bit you should get used to it more as well! This blog series will be focusing on that nomenclature specifically. To make sure everything looks okay and we like our augmentation, we can show a batch of images from our `DataLoader`. It's as simple as: ``` dls.show_batch() ``` ## Fitting a Model Now from here everything looks and behaves exactly how it did in `fastai` version 1: 1. Define a `Learner` 2. Find a learning rate 3. Fit We'll quickly see that `fastai2` has a quick function for transfer learning problems like we are doing, but first let's build the `Learner`. This will use `cnn_learner`, as we are doing transfer learning, and we'll tell the function to use a `resnet34` architecture with accuracy metrics ``` learn = cnn_learner(dls, resnet34, metrics=accuracy) ``` Now normally we would do `learn.lr_find()` and find a learning rate, but with the new library, we now have a `fine_tune()` function we can use instead, specifically designed for transfer learning scenarios. It runs a specified number of epochs (the number of times we fully go through the dataset) on a frozen model (where all but the last layer's weights are not trainable) and then the last few will be on an unfrozen model (where all weights are trainable again). When just passing in one set of epochs, like below, it will run frozen for one and unfrozen for the rest. Let's try it! ``` learn.fine_tune(3) ``` As we can see we did pretty good just with this default! Generally when the accuracy is this high, we want to turn instead to `error_rate` for our metric, as this would show ~6.5% and is a better comparison when it gets very fine tuned. But that's it for this first introduction!
We looked at how the Pets dataset can be loaded into the new high-level `DataBlock` API, and what it's built with. In the next blog we will be exploring more variations with the `DataBlock` as we get more and more creative. Thanks for reading!
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Load images with tf.data <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/load_data/images.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/load_data/images.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. This tutorial provides a simple example of how to load an image dataset using `tf.data`. The dataset used in this example is distributed as directories of images, with one class of image per directory. ## Setup ``` import tensorflow.compat.v1 as tf tf.__version__ AUTOTUNE = tf.data.experimental.AUTOTUNE ``` ## Download and inspect the dataset ### Retrieve the images Before you start any training, you'll need a set of images to teach the network about the new classes you want to recognize.
An archive of creative-commons licensed flower photos has been prepared for you to use initially. ``` import pathlib data_root_orig = tf.keras.utils.get_file('flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) data_root = pathlib.Path(data_root_orig) print(data_root) ``` After downloading 218MB, you should now have a copy of the flower photos available: ``` for item in data_root.iterdir(): print(item) import random all_image_paths = list(data_root.glob('*/*')) all_image_paths = [str(path) for path in all_image_paths] random.shuffle(all_image_paths) image_count = len(all_image_paths) image_count all_image_paths[:10] ``` ### Inspect the images Now let's have a quick look at a couple of the images, so you know what you're dealing with: ``` import os attributions = (data_root/"LICENSE.txt").open(encoding='utf-8').readlines()[4:] attributions = [line.split(' CC-BY') for line in attributions] attributions = dict(attributions) import IPython.display as display def caption_image(image_path): image_rel = pathlib.Path(image_path).relative_to(data_root) return "Image (CC BY 2.0) " + ' - '.join(attributions[str(image_rel)].split(' - ')[:-1]) for n in range(3): image_path = random.choice(all_image_paths) display.display(display.Image(image_path)) print(caption_image(image_path)) print() ``` ### Determine the label for each image List the available labels: ``` label_names = sorted(item.name for item in data_root.glob('*/') if item.is_dir()) label_names ``` Assign an index to each label: ``` label_to_index = dict((name, index) for index,name in enumerate(label_names)) label_to_index ``` Create a list of every file, and its label index: ``` all_image_labels = [label_to_index[pathlib.Path(path).parent.name] for path in all_image_paths] print("First 10 labels indices: ", all_image_labels[:10]) ``` ### Load and format the images TensorFlow includes all the tools you need to load and process images: ``` img_path =
all_image_paths[0] img_path ``` Here is the raw data: ``` img_raw = tf.io.read_file(img_path) print(repr(img_raw)[:100]+"...") ``` Decode it into an image tensor: ``` img_tensor = tf.image.decode_image(img_raw) print(img_tensor.shape) print(img_tensor.dtype) ``` Resize it for your model: ``` img_final = tf.image.resize(img_tensor, [192, 192]) img_final = img_final/255.0 print(img_final.shape) print(img_final.numpy().min()) print(img_final.numpy().max()) ``` Wrap these up in simple functions for later. ``` def preprocess_image(image): image = tf.image.decode_jpeg(image, channels=3) image = tf.image.resize(image, [192, 192]) image /= 255.0 # normalize to [0,1] range return image def load_and_preprocess_image(path): image = tf.read_file(path) return preprocess_image(image) import matplotlib.pyplot as plt img_path = all_image_paths[0] label = all_image_labels[0] plt.imshow(load_and_preprocess_image(img_path)) plt.grid(False) plt.xlabel(caption_image(img_path).encode('utf-8')) plt.title(label_names[label].title()) print() ``` ## Build a `tf.data.Dataset` ### A dataset of images The easiest way to build a `tf.data.Dataset` is using the `from_tensor_slices` method. Slicing the array of strings results in a dataset of strings: ``` path_ds = tf.data.Dataset.from_tensor_slices(all_image_paths) ``` The `output_shapes` and `output_types` fields describe the content of each item in the dataset. In this case it is a set of scalar binary-strings ``` print('shape: ', repr(path_ds.output_shapes)) print('type: ', path_ds.output_types) print() print(path_ds) ``` Now create a new dataset that loads and formats images on the fly by mapping `preprocess_image` over the dataset of paths.
``` image_ds = path_ds.map(load_and_preprocess_image, num_parallel_calls=AUTOTUNE) import matplotlib.pyplot as plt plt.figure(figsize=(8,8)) for n,image in enumerate(image_ds.take(4)): plt.subplot(2,2,n+1) plt.imshow(image) plt.grid(False) plt.xticks([]) plt.yticks([]) plt.xlabel(caption_image(all_image_paths[n])) plt.show() ``` ### A dataset of `(image, label)` pairs Using the same `from_tensor_slices` method you can build a dataset of labels ``` label_ds = tf.data.Dataset.from_tensor_slices(tf.cast(all_image_labels, tf.int64)) for label in label_ds.take(10): print(label_names[label.numpy()]) ``` Since the datasets are in the same order you can just zip them together to get a dataset of `(image, label)` pairs. ``` image_label_ds = tf.data.Dataset.zip((image_ds, label_ds)) ``` The new dataset's `shapes` and `types` are tuples of shapes and types as well, describing each field: ``` print(image_label_ds) ``` Note: When you have arrays like `all_image_labels` and `all_image_paths`, an alternative to using `tf.data.dataset.Dataset.zip` is slicing the pair of arrays. ``` ds = tf.data.Dataset.from_tensor_slices((all_image_paths, all_image_labels)) # The tuples are unpacked into the positional arguments of the mapped function def load_and_preprocess_from_path_label(path, label): return load_and_preprocess_image(path), label image_label_ds = ds.map(load_and_preprocess_from_path_label) image_label_ds ``` ### Basic methods for training To train a model with this dataset you will want the data: * To be well shuffled. * To be batched. * To repeat forever. * To have batches available as soon as possible. These features can be easily added using the `tf.data` api. ``` BATCH_SIZE = 32 # Setting a shuffle buffer size as large as the dataset ensures that the data is # completely shuffled. ds = image_label_ds.shuffle(buffer_size=image_count) ds = ds.repeat() ds = ds.batch(BATCH_SIZE) # `prefetch` lets the dataset fetch batches, in the background while the model is training. 
ds = ds.prefetch(buffer_size=AUTOTUNE) ds ``` There are a few things to note here: 1. The order is important. * A `.shuffle` *after* a `.repeat` would shuffle items across epoch boundaries (some items will be seen twice before others are seen at all). * A `.shuffle` *after* a `.batch` would shuffle the order of the batches, but not shuffle the items across batches. 1. Use a `buffer_size` the same size as the dataset for a full shuffle. Up to the dataset size, large values provide better randomization, but use more memory. 1. The shuffle buffer is filled before any elements are pulled from it. So a large `buffer_size` may cause a delay when your `Dataset` is starting. 1. The shuffled dataset doesn't report the end of a dataset until the shuffle-buffer is completely empty. The `Dataset` is restarted by `.repeat`, causing another wait for the shuffle-buffer to be filled. This last point, as well as the order of `.shuffle` and `.repeat`, can be addressed by using the `tf.data.Dataset.apply` method with the fused `tf.data.experimental.shuffle_and_repeat` function: ``` ds = image_label_ds.apply( tf.data.experimental.shuffle_and_repeat(buffer_size=image_count)) ds = ds.batch(BATCH_SIZE) ds = ds.prefetch(buffer_size=AUTOTUNE) ds ``` * For more on ordering the operations, see [Repeat and Shuffle](https://www.tensorflow.org/r1/guide/performance/datasets#repeat_and_shuffle) in the Input Pipeline Performance guide. ### Pipe the dataset to a model Fetch a copy of MobileNet v2 from `tf.keras.applications`. This will be used for a simple transfer learning example. Set the MobileNet weights to be non-trainable: ``` mobile_net = tf.keras.applications.MobileNetV2(input_shape=(192, 192, 3), include_top=False) mobile_net.trainable=False ``` This model expects its input to be normalized to the `[-1,1]` range: ``` help(tf.keras.applications.mobilenet_v2.preprocess_input) ``` <pre> ...
This function applies the "Inception" preprocessing which converts the RGB values from [0, 255] to [-1, 1] ... </pre> So before passing data to the MobileNet model, you need to convert the input from a range of `[0,1]` to `[-1,1]`. ``` def change_range(image,label): return 2*image-1, label keras_ds = ds.map(change_range) ``` The MobileNet returns a `6x6` spatial grid of features for each image. Pass it a batch of images to see: ``` # The dataset may take a few seconds to start, as it fills its shuffle buffer. image_batch, label_batch = next(iter(keras_ds)) feature_map_batch = mobile_net(image_batch) print(feature_map_batch.shape) ``` Because of this output shape, build a model wrapped around MobileNet using `tf.keras.layers.GlobalAveragePooling2D` to average over the space dimensions before the output `tf.keras.layers.Dense` layer: ``` model = tf.keras.Sequential([ mobile_net, tf.keras.layers.GlobalAveragePooling2D(), tf.keras.layers.Dense(len(label_names), activation = 'softmax')]) ``` Now it produces outputs of the expected shape: ``` logit_batch = model(image_batch).numpy() print("min logit:", logit_batch.min()) print("max logit:", logit_batch.max()) print() print("Shape:", logit_batch.shape) ``` Compile the model to describe the training procedure: ``` model.compile(optimizer=tf.train.AdamOptimizer(), loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=["accuracy"]) ``` There are 2 trainable variables: the Dense `weights` and `bias`: ``` len(model.trainable_variables) model.summary() ``` Train the model. Normally you would specify the real number of steps per epoch, but for demonstration purposes only run 3 steps. ``` steps_per_epoch=tf.ceil(len(all_image_paths)/BATCH_SIZE).numpy() steps_per_epoch model.fit(ds, epochs=1, steps_per_epoch=3) ``` ## Performance Note: This section just shows a couple of easy tricks that may help performance. For an in depth guide see [Input Pipeline Performance](https://www.tensorflow.org/r1/guide/performance/datasets). 
The simple pipeline used above reads each file individually, on each epoch. This is fine for local training on CPU but may not be sufficient for GPU training, and is totally inappropriate for any sort of distributed training. To investigate, first build a simple function to check the performance of our datasets: ``` import time def timeit(ds, batches=2*steps_per_epoch+1): overall_start = time.time() # Fetch a single batch to prime the pipeline (fill the shuffle buffer), # before starting the timer it = iter(ds.take(batches+1)) next(it) start = time.time() for i,(images,labels) in enumerate(it): if i%10 == 0: print('.',end='') print() end = time.time() duration = end-start print("{} batches: {} s".format(batches, duration)) print("{:0.5f} Images/s".format(BATCH_SIZE*batches/duration)) print("Total time: {}s".format(end-overall_start)) ``` The performance of the current dataset is: ``` ds = image_label_ds.apply( tf.data.experimental.shuffle_and_repeat(buffer_size=image_count)) ds = ds.batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE) ds timeit(ds) ``` ### Cache Use `tf.data.Dataset.cache` to easily cache calculations across epochs. This is especially performant if the data fits in memory. 
Here the images are cached, after being pre-processed (decoded and resized): ``` ds = image_label_ds.cache() ds = ds.apply( tf.data.experimental.shuffle_and_repeat(buffer_size=image_count)) ds = ds.batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE) ds timeit(ds) ``` One disadvantage to using an in-memory cache is that the cache must be rebuilt on each run, giving the same startup delay each time the dataset is started: ``` timeit(ds) ``` If the data doesn't fit in memory, use a cache file: ``` ds = image_label_ds.cache(filename='./cache.tf-data') ds = ds.apply( tf.data.experimental.shuffle_and_repeat(buffer_size=image_count)) ds = ds.batch(BATCH_SIZE).prefetch(1) ds timeit(ds) ``` The cache file also has the advantage that it can be used to quickly restart the dataset without rebuilding the cache. Note how much faster it is the second time: ``` timeit(ds) ``` ### TFRecord File #### Raw image data TFRecord files are a simple format for storing a sequence of binary blobs. By packing multiple examples into the same file, TensorFlow is able to read multiple examples at once, which is especially important for performance when using a remote storage service such as GCS. First, build a TFRecord file from the raw image data: ``` image_ds = tf.data.Dataset.from_tensor_slices(all_image_paths).map(tf.read_file) tfrec = tf.data.experimental.TFRecordWriter('images.tfrec') tfrec.write(image_ds) ``` Next build a dataset that reads from the TFRecord file and decodes/reformats the images using the `preprocess_image` function you defined earlier. ``` image_ds = tf.data.TFRecordDataset('images.tfrec').map(preprocess_image) ``` Zip that with the labels dataset you defined earlier, to get the expected `(image,label)` pairs.
``` ds = tf.data.Dataset.zip((image_ds, label_ds)) ds = ds.apply( tf.data.experimental.shuffle_and_repeat(buffer_size=image_count)) ds=ds.batch(BATCH_SIZE).prefetch(AUTOTUNE) ds timeit(ds) ``` This is slower than the `cache` version because you have not cached the preprocessing. #### Serialized Tensors To save some preprocessing to the TFRecord file, first make a dataset of the processed images, as before: ``` paths_ds = tf.data.Dataset.from_tensor_slices(all_image_paths) image_ds = paths_ds.map(load_and_preprocess_image) image_ds ``` Now instead of a dataset of `.jpeg` strings, this is a dataset of tensors. To serialize this to a TFRecord file you first convert the dataset of tensors to a dataset of strings. ``` ds = image_ds.map(tf.serialize_tensor) ds tfrec = tf.data.experimental.TFRecordWriter('images.tfrec') tfrec.write(ds) ``` With the preprocessing cached, data can be loaded from the TFRecord file quite efficiently. Just remember to de-serialize the tensor before trying to use it. ``` ds = tf.data.TFRecordDataset('images.tfrec') def parse(x): result = tf.parse_tensor(x, out_type=tf.float32) result = tf.reshape(result, [192, 192, 3]) return result ds = ds.map(parse, num_parallel_calls=AUTOTUNE) ds ``` Now, add the labels and apply the same standard operations as before: ``` ds = tf.data.Dataset.zip((ds, label_ds)) ds = ds.apply( tf.data.experimental.shuffle_and_repeat(buffer_size=image_count)) ds=ds.batch(BATCH_SIZE).prefetch(AUTOTUNE) ds timeit(ds) ```
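The round trip above (tensor → byte string → tensor, plus a reshape to pin down the shape) is the core idea behind serialized tensors. It can be illustrated without TensorFlow using only the standard library; this is a toy sketch with raw float32 bytes, not the actual `tf.serialize_tensor` wire format:

```python
import array

# "Serialize": flatten a 2x3 float32 "tensor" into a raw byte string.
tensor = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
blob = array.array("f", [v for row in tensor for v in row]).tobytes()

# "Parse": decode the bytes back into float32 values...
decoded = array.array("f")
decoded.frombytes(blob)

# ...and re-establish the known shape, much like the tf.reshape applied
# after tf.parse_tensor in the pipeline above.
restored = [list(decoded[i * 3:(i + 1) * 3]) for i in range(2)]
print(restored)  # [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
```

The sketch also shows why a shape step is needed at all: a flat byte string carries no layout on its own, so the reader must supply it when decoding.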
github_jupyter
<a href="https://colab.research.google.com/github/cohmathonc/biosci670/blob/master/IntroductionComputationalMethods/exercises/07_ODEs.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import numpy as np import matplotlib.pylab as plt ``` ## Solving ODEs You see the following plot (if not displayed, view [here](https://github.com/cohmathonc/biosci670/blob/master/IntroductionComputationalMethods/exercises/figs/ODE_oscillating_solution.png)) in a report: <img src="figs/ODE_oscillating_solution.png"> The authors state that the plot shows the simulated time evolution of a specific cell growth model, and they report an oscillatory growth pattern. They also mention that the explicit Euler method was used to solve the ODE that describes the growth model. 1. You suspect that the reported oscillatory pattern might be a numerical artifact. Why? (Review *1.1.1 Euler Method (explicit)* in the [ODE notebook](https://github.com/cohmathonc/biosci670/blob/master/IntroductionComputationalMethods/05_IntroCompMethods_SolvingODEs.ipynb)) 2. How could the authors test numerically whether the pattern that they discovered is indeed a feature of their ODE model? 3. After contacting the authors, you learned that their ODE is based on the *logistic growth model* $$\frac{dy}{dt}=r\,y\, (1-\frac{y}{K}) \, ,\tag{1}$$ with parameter choices $r=0.1$, $K=1$and initial value $y_0=0.1$. Approximate $y(t)$ numerically in the range from $t_0=0$ and $t_\text{max}=100$ using the explicit Euler method. Solve the model for 3 different step sizes $\Delta t = \{10, 5, 1\}$. What are the implications for question (1)? 4. (**optional**) As in question (3) but now use the Runge-Kutta RK4 method with fixed time step (see [notebook](https://github.com/cohmathonc/biosci670/blob/master/IntroductionComputationalMethods/05_IntroCompMethods_SolvingODEs.ipynb)) for solving the ODE. 5. 
(**optional**) Use the RK45 method with adaptive time stepping provided by `scipy.integrate` through the `solve_ivp` interface. The solution object contains time steps and estimated function values resulting from adaptive time stepping. 6. (**optional**) Read the description of the `rtol` option of `scipy.integrate.solve_ivp`. How do different choices of `rtol` affect the size/number of time steps computed by the adaptive algorithm? ###### About This notebook is part of the *biosci670* course on *Mathematical Modeling and Methods for Biomedical Science*. See https://github.com/cohmathonc/biosci670 for more information and material.
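A minimal numerical sketch for question (3), using the explicit Euler update $y_{n+1} = y_n + \Delta t \, r\, y_n (1 - y_n/K)$ with the parameter values given above (function and variable names are illustrative):

```python
import numpy as np

def euler_logistic(r=0.1, K=1.0, y0=0.1, t_max=100.0, dt=1.0):
    """Explicit Euler for dy/dt = r*y*(1 - y/K)."""
    t = np.arange(0.0, t_max + dt, dt)
    y = np.empty_like(t)
    y[0] = y0
    for n in range(len(t) - 1):
        # one explicit Euler step
        y[n + 1] = y[n] + dt * r * y[n] * (1.0 - y[n] / K)
    return t, y

# the three step sizes from question (3)
for dt in (10.0, 5.0, 1.0):
    t, y = euler_logistic(dt=dt)
    print(f"dt={dt:>4}: y(t_max) = {y[-1]:.6f}")
```

For this choice of $r$, $r\Delta t \le 1$ for all three step sizes, so the numerical solution approaches $K$ without oscillating; oscillations around $K$ are expected only once $r\Delta t$ exceeds 1.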
``` # import packages import tensorflow as tf from tensorflow.keras import layers import tensorflow_datasets as tfds import matplotlib.pylab as plt import os import zipfile from tensorflow.keras.preprocessing.image import ImageDataGenerator local_zip = '../Dataset/horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('../Dataset') local_zip = '../Dataset/validation-horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('../Dataset/validation-horse-or-human') zip_ref.close() # Directory with our training horse pictures train_horse_dir = os.path.join('../Dataset/horse-or-human/horses') # Directory with our training human pictures train_human_dir = os.path.join('../Dataset/horse-or-human/humans') # Directory with our validation horse pictures validation_horse_dir = os.path.join('../Dataset/validation-horse-or-human/horses') # Directory with our validation human pictures validation_human_dir = os.path.join('../Dataset/validation-horse-or-human/humans') train_horse_names = os.listdir(train_horse_dir) print(train_horse_names[:10]) train_human_names = os.listdir(train_human_dir) print(train_human_names[:10]) validation_horse_names = os.listdir(validation_horse_dir) print(validation_horse_names[:10]) validation_human_names = os.listdir(validation_human_dir) print(validation_human_names[:10]) print('total training horse images:', len(os.listdir(train_horse_dir))) print('total training human images:', len(os.listdir(train_human_dir))) print('total validation horse images:', len(os.listdir(validation_horse_dir))) print('total validation human images:', len(os.listdir(validation_human_dir))) # All images will be rescaled by 1./255 train_datagen = ImageDataGenerator(rescale=1/255) validation_datagen = ImageDataGenerator(rescale=1/255) # Flow training images in batches of 128 using train_datagen generator train_generator = train_datagen.flow_from_directory( '../Dataset/horse-or-human/', # This is the source directory for training 
images target_size=(224, 224), # All images will be resized to 224x224 batch_size=128, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') # Flow validation images in batches of 32 using validation_datagen generator validation_generator = validation_datagen.flow_from_directory( '../Dataset/validation-horse-or-human/', # This is the source directory for validation images target_size=(224, 224), # All images will be resized to 224x224 batch_size=32, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') IMG_SHAPE = (224, 224, 3) # Create the base model from the pre-trained model MobileNet V2 base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') base_model.compile(optimizer=tf.keras.optimizers.Adam(), loss='binary_crossentropy', metrics=['accuracy']) average_pool = tf.keras.Sequential() average_pool.add(layers.AveragePooling2D()) average_pool.add(layers.Flatten()) average_pool.add(layers.Dense(1, activation='sigmoid')) standard_model = tf.keras.Sequential([base_model, average_pool]) standard_model.compile(optimizer=tf.keras.optimizers.Adam(), loss='binary_crossentropy', metrics=['accuracy']) # import PIL history = standard_model.fit( train_generator, steps_per_epoch=5, epochs=5, verbose=1, validation_data = validation_generator, validation_steps = 3) a = history.history['accuracy'] v_a = history.history['val_accuracy'] l = history.history['loss'] v_l = history.history['val_loss'] plt.figure(figsize=(8, 8)) plt.subplot(2, 1, 1) plt.plot(a, label='Accuracy of Training Data ') plt.plot(v_a, label='Accuracy of Validation Data') plt.legend(loc='lower left') plt.ylabel('Accuracy') plt.ylim([min(plt.ylim()),1.2]) plt.title('Accuracy of Transfer Model') plt.subplot(2, 1, 2) plt.plot(l, label='Loss for Training Data') plt.plot(v_l, label='Loss for Validation Data') plt.legend(loc='upper right') plt.ylabel('Cross-Entropy') plt.ylim([-0.5,1.2]) plt.title('Loss for Transfer 
Model') plt.xlabel('Epochs') plt.show() ```
# Python for Data Analysis ## What SQL is. How to write queries. Working with ClickHouse. Author: *Yan Pile, HSE University* The SQL language has become firmly embedded in analysts' everyday work (and in job requirements) thanks to its simplicity, convenience, and ubiquity. SQL is often used to build data extracts and data marts (with reports then built on top of those marts) and to administer databases. Since an analyst's day-to-day work inevitably involves data extracts and marts, the ability to write SQL queries can be an extremely useful skill. We will work with the columnar database ClickHouse. You can read the details about what ClickHouse is and which SQL dialect it uses in the documentation: https://clickhouse.tech/docs/ru/ A discussion of what databases are and in what form data is stored deserves separate treatment — the topic is very large. We will picture a database as a set of tables with named columns stored on several servers (like Excel files, only bigger; this is not entirely accurate, but we will not need the extra complexity). #### Structure of SQL queries The general structure of a query looks like this: * SELECT ('columns, or * to select all columns; required') * FROM ('table; required') * WHERE ('condition/filter, e.g. city = 'Moscow'; optional') * GROUP BY ('column to group the data by; optional') * HAVING ('condition/filter applied to the grouped data; optional') * ORDER BY ('column to sort the output by; optional') * LIMIT ('number of result rows to output; optional') To start, let's try solving some problems in the ClickHouse interface. It is called Tabix. 
The ClickHouse web interface is available at: **Tabix.beslan.pro** (we will open it and take a look right in class) Our ClickHouse database consists of four tables: * **events** (events in the app) * **checks** (purchase receipts in the app) * **devices** (identifiers of the devices the apps are installed on) * **installs** (app installations) Let's take one of the tables and try out the query elements on it. Let it be the events table. To make the query results easier to visualize, we will load the data directly into Python (I have already written a function for this; we will look at how it works later). ``` !pip install pandahouse import json # for parsing fields import requests # for sending requests to the database import pandas as pd # for storing query results in tabular form # credentials; if you want queries to run under your own login, put your login and password here USER = 'student' PASS = 'dpo_python_2020' HOST = 'http://clickhouse.beslan.pro:8080/' def get_clickhouse_data(query, host=HOST, USER = USER, PASS = PASS, connection_timeout = 1500, dictify=True, **kwargs): NUMBER_OF_TRIES = 5 # number of attempts DELAY = 10 # delay between attempts import time params = kwargs # in case we need to pass extra parameters to the function if dictify: query += "\n FORMAT JSONEachRow" # dictify = True returns each row as a JSON for i in range(NUMBER_OF_TRIES): # headers = {'Accept-Encoding': 'gzip'} r = requests.post(host, params = params, auth=(USER, PASS), timeout = connection_timeout, data=query ) # sent the request to the server if r.status_code == 200 and not dictify: return r.iter_lines() # a generator :) elif r.status_code == 200 and dictify: return (json.loads(x) for x in r.iter_lines()) # a generator :) else: print('ATTENTION: try #%d failed' % i) if i != (NUMBER_OF_TRIES - 1): print(r.text) time.sleep(DELAY * (i + 1)) else: raise(ValueError, r.text) def get_data(query): return pd.DataFrame(list(get_clickhouse_data(query, 
dictify=True))) ``` We will pass SQL queries to the function as text. ``` query = """ select * from events limit 10 """ gg = get_data(query) gg ``` Great! We retrieved 10 records from the events table. Now let's try the where clause, filtering records by date ranges. ``` query = """ select EventDate, count() as cnt from default.events where (EventDate >= '2019-04-01' and EventDate <= '2019-04-10') or (EventDate >= '2019-04-20' and EventDate <= '2019-04-30') group by EventDate having cnt < 1000000 order by EventDate """ get_data(query) ``` Now it remains to try the group by, having, order by clauses and some aggregate function. I suggest counting the number of events (the sum of the events field) on the iOS platform for June 2019, sorting the output by date and showing only the days with more than 6000000 events ``` query = """ select * from (select EventDate, sum(events) as events_cnt from events where AppPlatform ='iOS' and EventDate between'2019-06-01' and '2019-06-30' group by EventDate having sum(events)>6000000 order by EventDate) """ get_data(query) ``` There is also the notion of a "subquery": in one query you refer to the results of another query. For example, we can count the number of days on which events_cnt exceeded 6000000 ``` query = """ select count() as days_cnt from (select EventDate, sum(events) as events_cnt from events where AppPlatform ='iOS' and EventDate between'2019-06-01' and '2019-06-30' group by EventDate having sum(events)>6000000 order by EventDate) """ get_data(query) ``` The results of a subquery can also be passed to the where clause. Let's try to fetch the DeviceIDs that performed at least 1300 events on 2019-05-15 ``` query = """ select DeviceID from events where EventDate = '2019-05-15' group by DeviceID having sum(events)>=1300 """ get_data(query) ``` And now let's fetch the number of events these DeviceIDs performed in June 2019, broken down by day. 
``` query = """ select EventDate, sum(events) as events_cnt from events where EventDate between'2019-06-01' and '2019-06-30' and DeviceID in (select DeviceID from events where EventDate = '2019-05-15' group by DeviceID having sum(events)>=1300) group by EventDate order by EventDate """ get_data(query) ``` #### Объединение таблиц - JOIN Как мы узнали ранее, в реляционных базах данных таблицы имеют избыточные данные (ключи), для объединения таблиц друг с другом. И именно для объединения таблиц используется функция JOIN. JOIN используется в блоке FROM, после первого источника. После JOIN указывается условие для объединения. Базово, синтаксис выглядит так SELECT field FROM table_one AS l JOIN table_two AS r ON l.key = r.key В данном примере мы указали первую таблицу как левую ( l ), вторую как правую ( r ), и указали, что они объединяются по ключу key. Если с обеих сторон нашлось более одной строки с одинаковыми значениями, то по всем этим строкам строится декартово произведение (если явно не указано обратное). Джойны бывают разных видов. В случае Clickhouse это: * **INNER** (по умолчанию) — строки попадают в результат, только если значение ключевых колонок присутствует в обеих таблицах. * **FULL**, **LEFT** и **RIGHT** — при отсутствии значения в обеих или в одной из таблиц включает строку в результат, но оставляет пустыми (NULL) колонки, соответствующие противоположной таблице. * **CROSS** — декартово произведение двух таблиц целиком без указания ключевых колонок, секция с ON/USING явно не пишется; <a> <img src="https://i.pinimg.com/originals/c7/07/f9/c707f9cdc08b1cdd773c006da976c8e6.jpg" width="800" height="160" ></a> JOIN'ы могут иметь различную строгость. Перед JOIN'ом модет стоять модицицирующее выражение, например: **ANY INNER JOIN** **ALL** — если правая таблица содержит несколько подходящих строк, то ClickHouse выполняет их декартово произведение. Это стандартное поведение JOIN в SQL. 
**ANY** — if the right table has several matching rows, only the first one found is joined. If the right table has only one matching row, the results of ANY and ALL coincide. To see how JOIN works in practice, let's look at which UserIDs installed the app. To do this, take the installs table, select all of its fields, and join it to the devices table on DeviceID. To keep the result viewable, we will output only 10 records. ``` query = ''' select a.Source as Source, a.DeviceID as DeviceID, a.InstallCost as InstallCost, a.InstallationDate as InstallationDate, b.UserID as UserID from installs as a inner join devices as b on a.DeviceID = b.DeviceID where a.InstallationDate between '2019-01-01' and '2019-06-30' limit 10''' get_data(query) ``` We will go over the remaining nuances of SQL in ClickHouse directly on examples with real tasks. It is also worth mentioning separately that ClickHouse does not yet support window functions.
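The difference between INNER and LEFT join semantics described above can be illustrated with a tiny self-contained example. This sketch uses Python's built-in sqlite3 (standard SQL, not ClickHouse) and made-up table contents mirroring the installs/devices schema:

```python
import sqlite3

# In-memory toy tables: DeviceID 3 has an install but no matching device row
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE installs (DeviceID INT, Source TEXT);
CREATE TABLE devices  (DeviceID INT, UserID INT);
INSERT INTO installs VALUES (1, 'ads'), (2, 'organic'), (3, 'ads');
INSERT INTO devices  VALUES (1, 100), (2, 200);
""")

# INNER JOIN: only rows whose key exists in both tables
inner = con.execute("""
    SELECT i.DeviceID, d.UserID
    FROM installs AS i JOIN devices AS d ON i.DeviceID = d.DeviceID
""").fetchall()

# LEFT JOIN: every install survives; missing UserID becomes NULL (None)
left = con.execute("""
    SELECT i.DeviceID, d.UserID
    FROM installs AS i LEFT JOIN devices AS d ON i.DeviceID = d.DeviceID
""").fetchall()

print(inner)  # matching rows only
print(left)   # all installs, with None for the unmatched DeviceID
```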
In this project, I implemented three models — a subtractor, an adder-subtractor, and an adder-subtractor-multiplier — based on the `Addition.ipynb` provided in the homework3 sample code. I wrote three Jupyter notebooks for these models, named [Subtractor.ipynb](https://nbviewer.jupyter.org/github/rapirent/DSAI-HW3/blob/master/Subtractor.ipynb), [addition-subtractor.ipynb](https://nbviewer.jupyter.org/github/rapirent/DSAI-HW3/blob/master/addition-subtractor.ipynb) and [multiply.ipynb](https://nbviewer.jupyter.org/github/rapirent/DSAI-HW3/blob/master/multiply.ipynb). (Feel free to check them out.) To compare the performance of the same model under different configurations, I wrote three Python scripts — `subtractor.py`, `addition-subtractor.py` and `multiply.py` — that create figures, records, and model savings. (You can use `model_load-subtractor.py`, `model_load-addition-subtractor.py` or `model_load-multiply.py` to reload the models.) # Idea - Based on `one-hot encoding`, we can encode each symbol as a vector. - If we constrain the arithmetic operations to a limited number of terms and build a NN that receives these vectors and learns in a supervised fashion, the arithmetic computing problem can be reduced to a `seq2seq` problem. - This idea was demonstrated for the `addition operation` in the homework3 sample code. - I then added the symbols `-` (minus) and `*` (multiply) and adjusted the NN's accepted vector length to extend it to more complex computing operations. - These operations are constrained to only `two terms`. # initial - In these three Jupyter notebooks, I ran the models under the same conditions. 
- DATA_SIZE = 60000 - TRAIN_SIZE = 45000 - DIGITS = 3 - RNN = layers.LSTM - HIDDEN_SIZE = 128 - BATCH_SIZE = 128 - LAYERS = 1 - EPOCH = 100 ## subtractor - For the subtractor, the final accuracy on the test dataset (size 15000) is `0.9802` ![](./fig/subtractor-jupyter-accuracy.png) - Even as the growth of training and validation accuracy slowed down, test accuracy was still increasing - This suggests the model was trained well and was not just `over-fitting` the train or validation dataset. - Test accuracy oscillates at the beginning, but after more epochs it reaches higher accuracy. ![](./fig/subtractor-jupyter-loss.png) - The training and validation loss is shown above. ## addition-subtractor - For the adder-subtractor, the final accuracy on the test dataset (size 15000) is `0.8534666666666667` ![](./fig/addition-subtractor-jupyter-accuracy.png) - The final accuracy on the test dataset is NOT BAD - As seen in the figure above, even as the growth of training and validation accuracy slowed down, test accuracy was still increasing. - Apparently the model could reach even higher accuracy if trained for a few more epochs, because test accuracy was still oscillating. ![](./fig/addition-subtractor-jupyter-loss.png) - Judging by the validation loss, the trained model may not have found a (local or global) optimum; this could be due to the small dataset size and could be improved with more data. ## multiply - For the adder-subtractor-multiplier, the final accuracy on the test dataset (size 15000) is `0.5378666666666667`. ![](./fig/multiply-jupyter-accuracy.png) - Validation accuracy is still low, which means the model was NOT trained well, possibly because the dataset is very small. - The adder-subtractor-multiplier includes three types of operations, and multiplication results contain many more digits than addition or subtraction. 
![](./fig/multiply-jupyter-loss.png) - The small-dataset hypothesis is also supported by the model loss figure: as you can see, validation loss eventually starts increasing. # compare - The experiments were done on a platform with this configuration. - i9-7920X - 31.1 GiB memory - GTX 1080 Ti/PCIe/SSE2 * 4 - Ubuntu 18.04 LTS - You can find the figures in the `fig/` directory, the record logs in the `data/` directory, and the corpus used in each experiment in the `corpus/` directory. - The prefix indicates the model type: `s` for subtractor, `as` for `addition-subtractor` and `m` for `multiply` (adder-subtractor-multiplier) - ***in the figures used in this section, the legend label `test` actually denotes `validation`*** ## subtractor ### with different epoch size in 3 digits ```sh $ python3 ./subtractor.py "--epoch=1" "--output_name=epoch1" "--data_size=50000" "--train_size=40000" "--digits=3" $ python3 ./subtractor.py "--epoch=2" "--output_name=epoch2" "--data_size=50000" "--train_size=40000" "--digits=3" $ python3 ./subtractor.py "--epoch=3" "--output_name=epoch3" "--data_size=50000" "--train_size=40000" "--digits=3" ``` - 100 epochs:![](./fig/s-accuracy-epoch1.png) - test accuracy `0.9748` - 200 epochs:![](./fig/s-accuracy-epoch2.png) - test accuracy `0.9789` - 300 epochs:![](./fig/s-accuracy-epoch3.png) - test accuracy `0.9786` - From the figures above, a reasonable choice for the epoch number seems to be around `150~200`. - But the epoch number did not significantly affect the accuracy. 
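The one-hot encoding described in the Idea section can be sketched as follows. This is a simplified, illustrative version of the character-table encoder used in the Keras addition example (class and method names here are my own, not the project's):

```python
import numpy as np

class CharTable:
    """One-hot encode/decode fixed-length strings over a fixed alphabet."""
    def __init__(self, chars):
        self.chars = sorted(set(chars))
        self.c2i = {c: i for i, c in enumerate(self.chars)}

    def encode(self, s, maxlen):
        # One row per character position, one column per alphabet symbol
        x = np.zeros((maxlen, len(self.chars)))
        for i, c in enumerate(s):
            x[i, self.c2i[c]] = 1.0
        return x

    def decode(self, x):
        return "".join(self.chars[i] for i in x.argmax(axis=-1))

# digits plus the operator symbols and a padding space: 14 symbols total
ctable = CharTable("0123456789+-* ")
v = ctable.encode("12-7 ", maxlen=7)   # query padded to a fixed length
print(v.shape)  # (7, 14)
print(ctable.decode(v))
```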
### with different data size in 3 digits ```sh python3 ./subtractor.py "--epoch=2" "--output_name=train18000" "--data_size=38000" "--train_size=18000" "--digits=3" python3 ./subtractor.py "--epoch=2" "--output_name=train27000" "--data_size=57000" "--train_size=27000" "--digits=3" python3 ./subtractor.py "--epoch=2" "--output_name=train36000" "--data_size=66000" "--train_size=36000" "--digits=3" python3 ./subtractor.py "--epoch=2" "--output_name=train45000" "--data_size=75000" "--train_size=45000" "--digits=3" python3 ./subtractor.py "--epoch=2" "--output_name=train45000large" "--data_size=95000" "--train_size=45000" "--digits=3" ``` - 18000 train data:![](./fig/s-accuracy-train18000.png) - test accuracy `0.91135` - 27000 train data:![](./fig/s-accuracy-train27000.png) - test accuracy `0.9600666666666666` - 36000 train data:![](./fig/s-accuracy-train36000.png) - test accuracy `0.9765333333333334` - 45000 train data:![](./fig/s-accuracy-train45000.png) - test accuracy `0.9863333333333333` - 45000 train data and a larger test set (50000):![](./fig/s-accuracy-train45000large.png) - test accuracy `0.9865` - As you can see, the 3-digit subtractor can be trained well with more than 27000 training samples. - And it also performs well on larger test data (0.9863333333333333 vs. 
0.9865) ### with different data size in 4 digits ```sh python3 ./subtractor.py "--epoch=2" "--output_name=digit4-train45000" "--data_size=75000" "--train_size=45000" "--digits=4" python3 ./subtractor.py "--epoch=2" "--output_name=digit4-train36000" "--data_size=66000" "--train_size=36000" "--digits=4" python3 ./subtractor.py "--epoch=2" "--output_name=digit4-train60000" "--data_size=90000" "--train_size=60000" "--digits=4" ``` - 36000 train data:![](./fig/s-accuracy-digit4-train36000.png) - test accuracy `0.8051` - 45000 train data:![](./fig/s-accuracy-digit4-train45000.png) - test accuracy `0.8378666666666666` - 60000 train data:![](./fig/s-accuracy-digit4-train60000.png) - test accuracy `0.9149` - This shows that training with more than 45000 samples gives a better result. ## addition-subtractor ### with different train data size in 3 digits ```sh python3 ./addition-subtractor.py "--epoch=2" "--output_name=train18000" "--data_size=38000" "--train_size=18000" "--digits=3" python3 ./addition-subtractor.py "--epoch=2" "--output_name=train27000" "--data_size=57000" "--train_size=27000" "--digits=3" python3 ./addition-subtractor.py "--epoch=2" "--output_name=train36000" "--data_size=66000" "--train_size=36000" "--digits=3" python3 ./addition-subtractor.py "--epoch=2" "--output_name=train45000" "--data_size=75000" "--train_size=45000" "--digits=3" python3 ./addition-subtractor.py "--epoch=2" "--output_name=train60000" "--data_size=90000" "--train_size=60000" "--digits=3" python3 ./addition-subtractor.py "--epoch=2" "--output_name=train90000" "--data_size=120000" "--train_size=90000" "--digits=3" ``` - 18000 train data:![](./fig/as-accuracy-train18000.png) - test accuracy `0.449` - 27000 train data:![](./fig/as-accuracy-train27000.png) - test accuracy `0.7001666666666667` - 36000 train data:![](./fig/as-accuracy-train36000.png) - test accuracy `0.8155333333333333` - 45000 train data:![](./fig/as-accuracy-train45000.png) - test accuracy `0.8899333333333334` - 60000 
train data:![](./fig/as-accuracy-train60000.png) - test accuracy `0.9299666666666667` - 90000 train data:![](./fig/as-accuracy-train90000.png) - test accuracy `0.9308666666666666` - As you can see, the 3-digit adder-subtractor can be trained well with more than 45000 training samples. - After expanding the training set to 60000, test accuracy growth slowed down. - But high accuracy is reached in far fewer epochs. ### with different digit number in same train data size (75000) ```sh python3 ./addition-subtractor.py "--epoch=2" "--output_name=train45000" "--data_size=75000" "--train_size=45000" "--digits=3" python3 ./addition-subtractor.py "--epoch=2" "--output_name=digit4" "--data_size=75000" "--train_size=45000" "--digits=4" python3 ./addition-subtractor.py "--epoch=2" "--output_name=digit5" "--data_size=75000" "--train_size=45000" "--digits=5" ``` - 3 digits:![](./fig/as-accuracy-train45000.png) - test accuracy `0.8899333333333334` - 4 digits:![](./fig/as-accuracy-digit4.png) - test accuracy `0.5333333333333333` - 5 digits:![](./fig/as-accuracy-digit5.png) - test accuracy `0.191` - These experiments show that as the digit number increases, the dataset size should increase too; otherwise accuracy drops severely. ## multiply (adder-subtractor-multiplier) - Due to lack of time, I ran only one set of experiments with `multiply.py` and did not obtain good performance. - But I believe it can be improved by expanding the dataset size. 
### different dataset sizes in 3 digits ```sh python3 ./multiply.py "--epoch=2" "--output_name=train18000" "--data_size=38000" "--train_size=18000" "--digits=3" python3 ./multiply.py "--epoch=2" "--output_name=train27000" "--data_size=57000" "--train_size=27000" "--digits=3" python3 ./multiply.py "--epoch=2" "--output_name=train36000" "--data_size=66000" "--train_size=36000" "--digits=3" python3 ./multiply.py "--epoch=2" "--output_name=train45000" "--data_size=75000" "--train_size=45000" "--digits=3" python3 ./multiply.py "--epoch=2" "--output_name=train60000" "--data_size=90000" "--train_size=60000" "--digits=3" python3 ./multiply.py "--epoch=2" "--output_name=train120000" "--data_size=120000" "--train_size=90000" "--digits=3" ``` - 18000 train data:![](./fig/m-accuracy-train18000.png)![](./fig/m-loss-train18000.png) - test accuracy `0.217` - 27000 train data:![](./fig/m-accuracy-train27000.png)![](./fig/m-loss-train27000.png) - test accuracy `0.3271` - 36000 train data:![](./fig/m-accuracy-train36000.png)![](./fig/m-loss-train36000.png) - test accuracy `0.4504666666666667` - 45000 train data:![](./fig/m-accuracy-train45000.png)![](./fig/m-loss-train45000.png) - test accuracy `0.4750333333333333` - 60000 train data:![](./fig/m-accuracy-train60000.png)![](./fig/m-loss-train60000.png) - test accuracy `0.5541666666666667` - 120000 train data:![](./fig/m-accuracy-train120000.png)![](./fig/m-loss-train120000.png) - test accuracy `0.6574333333333333` - According to the model loss diagram, I assume the training data was not sufficient for this model, because validation loss was not steadily decreasing. - Since accuracy kept increasing as the dataset size increased, I believe better performance can be achieved by adding more training data. - This is because the number of digits in a multiplication result is large (with 3-digit operands, the result can have 6 digits, compared with at most 4 digits for addition or subtraction). 
- Even considering that `multiply.py` implements a multi-operation model, the performance with the `120000` training set is acceptable (>=0.6). # My opinion on "Can we apply the same training approach to multiplication?" - My answer is `yes`. - Referring to the `multiply.py` experiments above, accuracy can be improved by adding more training data. - Performance with the `120000` training set is still acceptable (>=0.6). - This is despite the number of digits in a multiplication result being large (with 3-digit operands, the result can have 6 digits, compared with at most 4 digits for addition or subtraction).
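The digit-growth claim above can be checked directly: the product of two n-digit numbers has up to 2n digits, while a sum of two n-digit numbers has at most n+1.

```python
# Maximum output lengths for two 3-digit operands
print(len(str(999 * 999)))  # product: 6 digits (998001)
print(len(str(999 + 999)))  # sum: 4 digits (1998)
print(len(str(999 - 100)))  # difference: 3 digits here (plus a sign when negative)
```

This is why the multiplier's output sequence must be roughly twice as long as the adder's, which in turn enlarges the output space the model has to learn.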
# Image segmentation with a U-Net-like architecture **Author:** [fchollet](https://twitter.com/fchollet)<br> **Date created:** 2019/03/20<br> **Last modified:** 2020/04/20<br> **Description:** Image segmentation model trained from scratch on the Oxford Pets dataset. ## Download the data ``` !curl -O http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz !curl -O http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz !tar -xf images.tar.gz !tar -xf annotations.tar.gz ``` ## Prepare paths of input images and target segmentation masks ``` import os input_dir = "images/" target_dir = "annotations/trimaps/" img_size = (160, 160) num_classes = 4 batch_size = 32 input_img_paths = sorted( [ os.path.join(input_dir, fname) for fname in os.listdir(input_dir) if fname.endswith(".jpg") ] ) target_img_paths = sorted( [ os.path.join(target_dir, fname) for fname in os.listdir(target_dir) if fname.endswith(".png") and not fname.startswith(".") ] ) print("Number of samples:", len(input_img_paths)) for input_path, target_path in zip(input_img_paths[:10], target_img_paths[:10]): print(input_path, "|", target_path) ``` ## What does one input image and corresponding segmentation mask look like? 
``` from IPython.display import Image, display from tensorflow.keras.preprocessing.image import load_img import PIL from PIL import ImageOps # Display input image #7 display(Image(filename=input_img_paths[9])) # Display auto-contrast version of corresponding target (per-pixel categories) img = PIL.ImageOps.autocontrast(load_img(target_img_paths[9])) display(img) ``` ## Prepare `Sequence` class to load & vectorize batches of data ``` from tensorflow import keras import numpy as np from tensorflow.keras.preprocessing.image import load_img class OxfordPets(keras.utils.Sequence): """Helper to iterate over the data (as Numpy arrays).""" def __init__(self, batch_size, img_size, input_img_paths, target_img_paths): self.batch_size = batch_size self.img_size = img_size self.input_img_paths = input_img_paths self.target_img_paths = target_img_paths def __len__(self): return len(self.target_img_paths) // self.batch_size def __getitem__(self, idx): """Returns tuple (input, target) corresponding to batch #idx.""" i = idx * self.batch_size batch_input_img_paths = self.input_img_paths[i : i + self.batch_size] batch_target_img_paths = self.target_img_paths[i : i + self.batch_size] x = np.zeros((self.batch_size,) + self.img_size + (3,), dtype="float32") for j, path in enumerate(batch_input_img_paths): img = load_img(path, target_size=self.img_size) x[j] = img y = np.zeros((self.batch_size,) + self.img_size + (1,), dtype="uint8") for j, path in enumerate(batch_target_img_paths): img = load_img(path, target_size=self.img_size, color_mode="grayscale") y[j] = np.expand_dims(img, 2) return x, y ``` ## Prepare U-Net Xception-style model ``` from tensorflow.keras import layers def get_model(img_size, num_classes): inputs = keras.Input(shape=img_size + (3,)) ### [First half of the network: downsampling inputs] ### # Entry block x = layers.Conv2D(32, 3, strides=2, padding="same")(inputs) x = layers.BatchNormalization()(x) x = layers.Activation("relu")(x) previous_block_activation = x # Set aside 
residual # Blocks 1, 2, 3 are identical apart from the feature depth. for filters in [64, 128, 256]: x = layers.Activation("relu")(x) x = layers.SeparableConv2D(filters, 3, padding="same")(x) x = layers.BatchNormalization()(x) x = layers.Activation("relu")(x) x = layers.SeparableConv2D(filters, 3, padding="same")(x) x = layers.BatchNormalization()(x) x = layers.MaxPooling2D(3, strides=2, padding="same")(x) # Project residual residual = layers.Conv2D(filters, 1, strides=2, padding="same")( previous_block_activation ) x = layers.add([x, residual]) # Add back residual previous_block_activation = x # Set aside next residual ### [Second half of the network: upsampling inputs] ### for filters in [256, 128, 64, 32]: x = layers.Activation("relu")(x) x = layers.Conv2DTranspose(filters, 3, padding="same")(x) x = layers.BatchNormalization()(x) x = layers.Activation("relu")(x) x = layers.Conv2DTranspose(filters, 3, padding="same")(x) x = layers.BatchNormalization()(x) x = layers.UpSampling2D(2)(x) # Project residual residual = layers.UpSampling2D(2)(previous_block_activation) residual = layers.Conv2D(filters, 1, padding="same")(residual) x = layers.add([x, residual]) # Add back residual previous_block_activation = x # Set aside next residual # Add a per-pixel classification layer outputs = layers.Conv2D(num_classes, 3, activation="softmax", padding="same")(x) # Define the model model = keras.Model(inputs, outputs) return model # Free up RAM in case the model definition cells were run multiple times keras.backend.clear_session() # Build model model = get_model(img_size, num_classes) model.summary() ``` ## Set aside a validation split ``` import random # Split our img paths into a training and a validation set val_samples = 1000 random.Random(1337).shuffle(input_img_paths) random.Random(1337).shuffle(target_img_paths) train_input_img_paths = input_img_paths[:-val_samples] train_target_img_paths = target_img_paths[:-val_samples] val_input_img_paths = 
input_img_paths[-val_samples:] val_target_img_paths = target_img_paths[-val_samples:] # Instantiate data Sequences for each split train_gen = OxfordPets( batch_size, img_size, train_input_img_paths, train_target_img_paths ) val_gen = OxfordPets(batch_size, img_size, val_input_img_paths, val_target_img_paths) ``` ## Train the model ``` # Configure the model for training. # We use the "sparse" version of categorical_crossentropy # because our target data is integers. model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy") callbacks = [ keras.callbacks.ModelCheckpoint("oxford_segmentation.h5", save_best_only=True) ] # Train the model, doing validation at the end of each epoch. epochs = 15 model.fit(train_gen, epochs=epochs, validation_data=val_gen, callbacks=callbacks) ``` ## Visualize predictions ``` # Generate predictions for all images in the validation set val_gen = OxfordPets(batch_size, img_size, val_input_img_paths, val_target_img_paths) val_preds = model.predict(val_gen) def display_mask(i): """Quick utility to display a model's prediction.""" mask = np.argmax(val_preds[i], axis=-1) mask = np.expand_dims(mask, axis=-1) img = PIL.ImageOps.autocontrast(keras.preprocessing.image.array_to_img(mask)) display(img) # Display results for validation image #10 i = 10 # Display input image display(Image(filename=val_input_img_paths[i])) # Display ground-truth target mask img = PIL.ImageOps.autocontrast(load_img(val_target_img_paths[i])) display(img) # Display mask predicted by our model display_mask(i) # Note that the model only sees inputs at 150x150. ```
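The per-pixel decoding inside `display_mask` is just an argmax over the class axis followed by re-adding a channel dimension. A minimal NumPy sketch with a made-up 2x2, 3-class prediction (the array values here are illustrative, not model output):

```python
import numpy as np

# Fake softmax output for a 2x2 image with 3 classes: shape (H, W, num_classes)
pred = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
    [[0.2, 0.3, 0.5], [0.1, 0.1, 0.8]],
])

# Pick the most likely class per pixel, then restore a channel axis,
# mirroring what display_mask does before converting to an image
mask = np.expand_dims(np.argmax(pred, axis=-1), axis=-1)
print(mask[..., 0])
# [[0 1]
#  [2 2]]
```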
github_jupyter
``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt import pandas as pd from pandas import Series, DataFrame import numpy.random as rnd import scipy.stats as st import os plt.style.use(os.path.join(os.getcwd(), 'mystyle.mplstyle') ) nvalues = 10 norm_variates = rnd.randn(nvalues) norm_variates for i, v in enumerate(sorted(norm_variates), start=1): print('{0:2d} {1:+.4f}'.format(i, v)) plt.figure() ax = plt.gca() ax.axis('off') plt.plot([1,2],[1,1], lw=2, color='green') plt.plot([2,2],[1,2], lw=2, color='green', ls='--') plt.plot([2,3.5],[2,2], lw=2, color='green') plt.plot(1,1, marker='o', mfc='green') plt.plot(2,1, marker='o', mfc='white', mec='green', mew=2) plt.plot(2,2, marker='o', mfc='green') plt.plot(3.5,2, marker='o', mfc='white', mec='green', mew=2) plt.text(2.0, 0.9, '$v$', fontsize=20, horizontalalignment='center', verticalalignment='top') xx = 3.8 delta = 0.1 plt.plot([xx,xx],[1,2], lw=1.2, color='black') plt.plot([xx-delta, xx+delta], [1,1], lw=1.2, color='black') plt.plot([xx-delta, xx+delta], [2,2], lw=1.2, color='black') plt.text(xx+delta, 1.5, '$1/N$', fontsize=20, horizontalalignment='left', verticalalignment='center') plt.axis([0.5,5,0,3]); def plot_cdf(data, plot_range=None, scale_to=None, **kwargs): num_bins= len(data) sorted_data = np.array(sorted(data), dtype=np.float64) data_range = sorted_data[-1] - sorted_data[0] counts, bin_edges = np.histogram(sorted_data, bins=num_bins) xvalues = bin_edges[1:] yvalues = np.cumsum(counts) if plot_range is None: xmin = xvalues[0] xmax = xvalues[-1] else: xmin, xmax = plot_range # pad the arrays xvalues = np.concatenate([[xmin, xvalues[0]], xvalues, [xmax]]) yvalues = np.concatenate([[0.0, 0.0], yvalues, [yvalues.max()]]) if scale_to: yvalues = yvalues / len(data) * scale_to plt.axis([xmin, xmax, 0, yvalues.max()]) return plt.step(xvalues, yvalues, **kwargs) nvalues = 20 #rnd.seed(123) # to get identical results every time norm_variates = rnd.randn(nvalues) axes = 
plot_cdf(norm_variates, plot_range=[-6,3], scale_to=1., lw=2.5, color='Brown') for v in [0.25, 0.5, 0.75, 1.0]: plt.axhline(v, lw=1, ls='--', color='black') wing_lengths = np.fromfile('data/housefly-wing-lengths.txt', sep='\n', dtype=np.int64) wing_lengths plot_cdf(wing_lengths, plot_range=[30, 60], scale_to=100, lw=2) plt.grid(lw=1, ls='dashed') plt.xlabel('Housefly wing length (x.1mm)', fontsize=18) plt.ylabel('Percent', fontsize=18); import scipy.stats as st N = 4857 mean = 63.8 serror = 0.06 sdev = serror * np.sqrt(N) rvnorm = st.norm(loc=mean, scale=sdev) xmin = mean-3*sdev xmax = mean+3*sdev xx = np.linspace(xmin,xmax,200) plt.figure(figsize=(8,3)) plt.subplot(1,2,1) plt.plot(xx, rvnorm.cdf(xx)) plt.title('CDF') plt.xlabel('Height (in)') plt.ylabel('Proportion of women') plt.axis([xmin, xmax, 0.0, 1.0]) plt.subplot(1,2,2) plt.plot(xx, rvnorm.pdf(xx)) plt.title('PDF') plt.xlabel('Height (in)') plt.axis([xmin, xmax, 0.0, 0.1]); rvnorm.cdf(68) rvnorm.cdf(63) 100*(rvnorm.cdf(68)-rvnorm.cdf(63)) st.rv_continuous.fit? 
st.rv_continuous.fit categories = [ ('Petite', 59, 63), ('Average', 63, 68), ('Tall', 68, 71), ] for cat, vmin, vmax in categories: percent = 100*(rvnorm.cdf(vmax)-rvnorm.cdf(vmin)) print('{0:>8s}: {1:.2f}'.format(cat, percent)) too_short = 100*rvnorm.cdf(59) too_tall = 100*(1 - rvnorm.cdf(71)) unclassified = too_short + too_tall print(too_short, too_tall, unclassified) a = rvnorm.ppf(0.25) b = rvnorm.ppf(0.75) print(a, b) mean, variance, skew, kurtosis = rvnorm.stats(moments='mvsk') print(mean, variance, skew, kurtosis) eta = 1.0 beta = 1.5 rvweib = st.weibull_min(beta, scale=eta) xmin = 0 xmax = 3 xx = np.linspace(xmin,xmax,200) plt.figure(figsize=(8,3)) plt.subplot(1,2,1) plt.plot(xx, rvweib.cdf(xx)) plt.title('CDF') plt.xlabel('Failure time') plt.ylabel('Proportion failed') plt.subplot(1,2,2) plt.plot(xx, rvweib.pdf(xx)) plt.title('PDF') plt.xlabel('Failure time'); weib_variates = rvweib.rvs(size=500) print(weib_variates[:10]) weib_df = DataFrame(weib_variates,columns=['weibull_variate']) weib_df.hist(bins=30); xmin = 0 xmax = 3.5 xx = np.linspace(xmin,xmax,200) plt.plot(xx, rvweib.cdf(xx), color='orange', lw=5) plot_cdf(weib_variates, plot_range=[xmin, xmax], scale_to=1, lw=2, color='green') plt.axis([xmin, xmax, 0, 1]) plt.title('Weibull distribution simulation', fontsize=14) plt.xlabel('Failure Time', fontsize=12) plt.grid(lw=1, ls='dashed'); wing_lengths = np.fromfile('data/housefly-wing-lengths.txt', sep='\n', dtype=np.int64) mean, std = st.norm.fit(wing_lengths) print(mean, std) st.probplot(wing_lengths, dist='norm', plot=plt) plt.grid(lw=1, ls='dashed'); mean=0.0 sdev=1.0 rvnorm = st.norm(loc=mean, scale=sdev) cdf = rvnorm.cdf pdf = rvnorm.pdf a = -.6 b = 1.2 xmin = mean-3*sdev xmax = mean+3*sdev xx = np.linspace(xmin,xmax,200) plt.figure(figsize=(9,3.5)) yy = cdf(xx) plt.subplot(1,2,1) plt.title('Cumulative distribution function', fontsize=15) #plt.tick_params(axis='x', which='both', bottom='off', top='off', labelbottom='off') #plt.tick_params(axis='y',
which='both', left='off', right='off', labelleft='off') plt.plot([a,a], [0.0, cdf(a)], lw=1.2, ls='--', color='black') plt.plot([b,b], [0.0, cdf(b)], lw=1.2, ls='--', color='black') plt.plot([xmin,a], [cdf(a), cdf(a)], lw=1.2, ls='--', color='black') plt.plot([xmin,b], [cdf(b), cdf(b)], lw=1.2, ls='--', color='black') plt.plot(xx, yy, color='Brown', lw=2.5) plt.plot([xmin, xmin], [cdf(a)+0.01, cdf(b)-0.015], lw=10, color='black') plt.text(a, -0.06, '$a$', fontsize=14, horizontalalignment='center') plt.text(b, -0.06, '$b$', fontsize=14, horizontalalignment='center') plt.text(xmin-0.06, cdf(a), '$F(a)$', fontsize=14, horizontalalignment='right', verticalalignment='center') plt.text(xmin-0.06, cdf(b), '$F(b)$', fontsize=14, horizontalalignment='right', verticalalignment='center') plt.text(b, -0.06, '$b$', fontsize=14, horizontalalignment='center') plt.text(xmin+0.15, 0.5*(cdf(a)+cdf(b)), '$P=F(b)-F(a)$', fontsize=14, verticalalignment='center') yy = pdf(xx) plt.subplot(1,2,2) plt.title('Probability density function', fontsize=15) #plt.tick_params(axis='x', which='both', bottom='off', top='off', labelbottom='off') #plt.tick_params(axis='y', which='both', left='off', right='off', labelleft='off') plt.plot([a, a], [0.0, pdf(a)], lw=1.2, color='black') plt.plot([b, b], [0.0, pdf(b)], lw=1.2, color='black') plt.fill_between(xx, yy, where=(a<=xx) & (xx<=b), color='LemonChiffon') plt.text(a, -0.025, '$a$', fontsize=14, horizontalalignment='center') plt.text(b, -0.025, '$b$', fontsize=14, horizontalalignment='center') plt.text(0.5*(a+b)-.1, 0.2, '$P=$Area', fontsize=14, horizontalalignment='center', verticalalignment='top') plt.plot(xx, yy, color='Brown', lw=2.5); N = 20 p = 0.5 rv_binom = st.binom(N, p) rv_binom.pmf(12) rv_binom.cdf(7) xx = np.arange(N+1) cdf = rv_binom.cdf(xx) pmf = rv_binom.pmf(xx) xvalues = np.arange(N+1) plt.figure(figsize=(9,3.5)) plt.subplot(1,2,1) plt.step(xvalues, cdf, lw=2, color='brown') plt.grid(lw=1, ls='dashed') plt.title('Binomial cdf, $N=20$, 
$p=0.5$', fontsize=16) plt.subplot(1,2,2) left = xx - 0.5 plt.bar(left, pmf, 1.0, color='CornflowerBlue') plt.title('Binomial pmf, $N=20$, $p=0.5$', fontsize=16) plt.axis([0, 20, 0, .18]); mean = rv_binom.mean() std = rv_binom.std() print(mean, std) mean = N*p std = np.sqrt(N*p*(1-p)) print(mean, std) rvnorm = st.norm(loc=mean, scale=std) pdf = rvnorm.pdf xx = np.linspace(0, 20, 200) yy = pdf(xx) plt.plot(xx,yy, linewidth=3. , color='Chocolate') xx = np.arange(N+1) pmf = rv_binom.pmf(xx) left = xx - 0.5 plt.bar(left, pmf, 1.0, color='CornflowerBlue') rvnorm = st.norm(loc=mean, scale=std) plt.axis([0,20,0,.18]); import scipy.stats as st binorm_variates = st.multivariate_normal.rvs(mean=[0,0], size=300) df = DataFrame(binorm_variates, columns=['Z1', 'Z2']) df.head(10) df.plot(kind='scatter', x='Z1', y='Z2') plt.title('Bivariate Normal Distribution') plt.axis([-4,4,-4,4]); ```
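The agreement between the binomial pmf and the overlaid normal curve shown above can also be checked numerically. A short sketch (using a continuity correction, which the text does not mention but which sharpens the approximation):

```python
import numpy as np
import scipy.stats as st

N, p = 20, 0.5
rv_binom = st.binom(N, p)
# Matching normal distribution: mean N*p, std sqrt(N*p*(1-p))
rv_norm = st.norm(loc=N * p, scale=np.sqrt(N * p * (1 - p)))

# With a continuity correction, P(X <= k) is approximated by Phi(k + 0.5)
k = 12
exact = rv_binom.cdf(k)
approx = rv_norm.cdf(k + 0.5)
print(round(exact, 4), round(approx, 4))
assert abs(exact - approx) < 0.01
```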
# Learning Curves and Bias-Variance Tradeoff In practice, much of the task of machine learning involves selecting algorithms, parameters, and sets of data to optimize the results of the method. All of these things can affect the quality of the results, but it’s not always clear which is best. For example, if your results have an error that’s larger than you hoped, you might imagine that increasing the training set size will always lead to better results. But this is not the case! Below, we’ll explore the reasons for this. Much of the material in this section was adapted from Andrew Ng’s excellent set of machine learning video lectures. See http://www.ml-class.org. In this section we’ll work with an extremely simple learning model: polynomial regression. This simply fits a polynomial of degree d to the data: if d = 1, then it is simple linear regression. First we'll ensure that we're in pylab mode, with figures being displayed inline: ``` %pylab inline ``` Polynomial regression can be done with the functions ``polyfit`` and ``polyval``, available in ``numpy``. For example: ``` import numpy as np np.random.seed(42) x = np.random.random(20) y = np.sin(2 * x) p = np.polyfit(x, y, 1) # fit a 1st-degree polynomial (i.e. a line) to the data print(p) # slope and intercept x_new = np.random.random(3) y_new = np.polyval(p, x_new) # evaluate the polynomial at x_new print(abs(np.sin(2 * x_new) - y_new)) ``` Using a 1st-degree polynomial fit (that is, fitting a straight line to x and y), we predicted the value of y for a new input. This prediction has an absolute error of about 0.2 for the few test points which we tried. We can visualize the fit with the following function: ``` import pylab as pl def plot_fit(x, y, p): xfit = np.linspace(0, 1, 1000) yfit = np.polyval(p, xfit) pl.scatter(x, y, c='k') pl.plot(xfit, yfit) pl.xlabel('x') pl.ylabel('y') plot_fit(x, y, p) ``` When the error of predicted results is larger than desired, there are a few courses of action that can be taken: 1. 
Increase the number of training points N. This might give us a training set with more coverage, and lead to greater accuracy. 2. Increase the degree d of the polynomial. This might allow us to more closely fit the training data, and lead to a better result. 3. Add more features. If we were to, for example, perform a linear regression using $x$, $\sqrt{x}$, $x^{-1}$, or other functions, we might hit on a functional form which can better be mapped to the value of y. The best course to take will vary from situation to situation, and from problem to problem. In this situation, numbers 2 and 3 may be useful, but number 1 will certainly not help: our model does not intrinsically fit the data very well. In machine learning terms, we say that it has high bias and that the data is *under-fit*. The ability to quickly figure out how to tune and improve your model is what separates good machine learning practitioners from the bad ones. In this section we’ll discuss some tools that can help determine which course is most likely to lead to good results. ## Bias, Variance, Overfitting, and Underfitting We’ll work with a simple example. Imagine that you would like to build an algorithm which will predict the price of a house given its size. Naively, we’d expect that the cost of a house grows as the size increases, but there are many other factors which can contribute. Imagine we approach this problem with the polynomial regression discussed above. We can tune the degree $d$ to try to get the best fit. First let's define some utility functions: ``` def test_func(x, err=0.5): return np.random.normal(10 - 1. 
/ (x + 0.1), err) def compute_error(x, y, p): yfit = np.polyval(p, x) return np.sqrt(np.mean((y - yfit) ** 2)) ``` Run the following code to produce an example plot: ``` N = 8 np.random.seed(42) x = 10 ** np.linspace(-2, 0, N) y = test_func(x) xfit = np.linspace(-0.2, 1.2, 1000) titles = ['d = 1 (under-fit)', 'd = 2', 'd = 6 (over-fit)'] degrees = [1, 2, 6] pl.figure(figsize = (9, 3.5)) pl.subplots_adjust(left = 0.06, right=0.98, bottom=0.15, top=0.85, wspace=0.05) for i, d in enumerate(degrees): pl.subplot(131 + i, xticks=[], yticks=[]) pl.scatter(x, y, marker='x', c='k', s=50) p = np.polyfit(x, y, d) yfit = np.polyval(p, xfit) pl.plot(xfit, yfit, '-b') pl.xlim(-0.2, 1.2) pl.ylim(0, 12) pl.xlabel('house size') if i == 0: pl.ylabel('price') pl.title(titles[i]) ``` In the above figure, we see fits for three different values of $d$. For $d = 1$, the data is under-fit. This means that the model is too simplistic: no straight line will ever be a good fit to this data. In this case, we say that the model suffers from high bias. The model itself is biased, and this will be reflected in the fact that the data is poorly fit. At the other extreme, for $d = 6$ the data is over-fit. This means that the model has too many free parameters (6 in this case) which can be adjusted to perfectly fit the training data. If we add a new point to this plot, though, chances are it will be very far from the curve representing the degree-6 fit. In this case, we say that the model suffers from high variance. The reason for this label is that if any of the input points are varied slightly, it could result in an extremely different model. In the middle, for $d = 2$, we have found a good mid-point. It fits the data fairly well, and does not suffer from the bias and variance problems seen in the figures on either side. 
What we would like is a way to quantitatively identify bias and variance, and optimize the metaparameters (in this case, the polynomial degree d) in order to determine the best algorithm. This can be done through a process called cross-validation. ## Cross-validation and Testing Let's start by defining a new dataset which we can use to explore cross-validation. We will use a simple x vs. y regression estimator for ease of visualization, but the concepts also readily apply to more complicated datasets and models. ``` Ntrain = 100 Ncrossval = 100 Ntest = 50 error = 1.0 # randomly sample the data np.random.seed(0) x = np.random.random(Ntrain + Ncrossval + Ntest) y = test_func(x, error) # select training set # data is already random, so we can just choose a slice. xtrain = x[:Ntrain] ytrain = y[:Ntrain] # select cross-validation set xcrossval = x[Ntrain:Ntrain + Ncrossval] ycrossval = y[Ntrain:Ntrain + Ncrossval] # select test set xtest = x[Ntrain + Ncrossval:] ytest = y[Ntrain + Ncrossval:] pl.scatter(xtrain, ytrain, color='red') pl.scatter(xcrossval, ycrossval, color='blue') ``` In order to quantify the effects of bias and variance and construct the best possible estimator, we will split our training data into three parts: a *training set*, a *cross-validation set*, and a *test set*. As a general rule, the training set should be about 60% of the samples, and the cross-validation and test sets should be about 20% each. The general idea is as follows. The model parameters (in our case, the coefficients of the polynomials) are learned using the training set as above. The error is evaluated on the cross-validation set, and the meta-parameters (in our case, the degree of the polynomial) are adjusted so that this cross-validation error is minimized. Finally, the labels are predicted for the test set. These labels are used to evaluate how well the algorithm can be expected to perform on unlabeled data. Why do we need both a cross-validation set and a test set? 
Many machine learning practitioners use the same set of data as both a cross-validation set and a test set. This is not the best approach, for the same reasons we outlined above. Just as the parameters can be over-fit to the training data, the meta-parameters can be over-fit to the cross-validation data. For this reason, the minimal cross-validation error tends to under-estimate the error expected on a new set of data. The cross-validation error of our polynomial estimator can be visualized by plotting the error as a function of the polynomial degree d. We can do this as follows. This will spit out warnings about "poorly conditioned" polynomials: that is OK for now. ``` degrees = np.arange(1, 21) train_err = np.zeros(len(degrees)) crossval_err = np.zeros(len(degrees)) test_err = np.zeros(len(degrees)) for i, d in enumerate(degrees): p = np.polyfit(xtrain, ytrain, d) train_err[i] = compute_error(xtrain, ytrain, p) crossval_err[i] = compute_error(xcrossval, ycrossval, p) pl.figure() pl.title('Error for 100 Training Points') pl.plot(degrees, crossval_err, lw=2, label = 'cross-validation error') pl.plot(degrees, train_err, lw=2, label = 'training error') pl.plot([0, 20], [error, error], '--k', label='intrinsic error') pl.legend() pl.xlabel('degree of fit') pl.ylabel('rms error') ``` This figure compactly shows the reason that cross-validation is important. On the left side of the plot, we have a very low-degree polynomial, which under-fits the data. This leads to a very high error for both the training set and the cross-validation set. On the far right side of the plot, we have a very high-degree polynomial, which over-fits the data. This can be seen in the fact that the training error is very low, while the cross-validation error is very high. Plotted for comparison is the intrinsic error (this is the scatter artificially added to the data). For this toy dataset, error = 1.0 is the best we can hope to attain. 
Choosing $d=6$ in this case gets us very close to the optimal error. The astute reader will realize that something is amiss here: in the above plot, $d = 6$ gives the best results. But in the previous plot, we found that $d = 6$ vastly over-fits the data. What’s going on here? The difference is the **number of training points** used. In the previous example, there were only eight training points. In this example, we have 100. As a general rule of thumb, the more training points used, the more complicated a model can be used. But how can you determine for a given model whether more training points will be helpful? A useful diagnostic for this is the learning curve. ## Learning Curves A learning curve is a plot of the training and cross-validation error as a function of the number of training points. Note that when we train on a small subset of the training data, the training error is computed using this subset, not the full training set. These plots can give a quantitative view into how beneficial it will be to add training samples. ``` # suppress warnings from Polyfit import warnings warnings.filterwarnings('ignore', message='Polyfit*') def plot_learning_curve(d): sizes = np.linspace(2, Ntrain, 50).astype(int) train_err = np.zeros(sizes.shape) crossval_err = np.zeros(sizes.shape) for i, size in enumerate(sizes): p = np.polyfit(xtrain[:size], ytrain[:size], d) crossval_err[i] = compute_error(xcrossval, ycrossval, p) train_err[i] = compute_error(xtrain[:size], ytrain[:size], p) fig = pl.figure() pl.plot(sizes, crossval_err, lw=2, label='cross-val error') pl.plot(sizes, train_err, lw=2, label='training error') pl.plot([0, Ntrain], [error, error], '--k', label='intrinsic error') pl.xlabel('training set size') pl.ylabel('rms error') pl.legend(loc = 0) pl.ylim(0, 4) pl.xlim(0, 99) pl.title('d = %i' % d) plot_learning_curve(d=1) ``` Here we show the learning curve for $d = 1$. From the above discussion, we know that $d = 1$ is a high-bias estimator which under-fits the data. 
This is indicated by the fact that both the training and cross-validation errors are very high. If this is the case, adding more training data will not help matters: both lines have converged to a relatively high error. ``` plot_learning_curve(d=20) ``` Here we show the learning curve for $d = 20$. From the above discussion, we know that $d = 20$ is a high-variance estimator which over-fits the data. This is indicated by the fact that the training error is much less than the cross-validation error. As we add more samples to this training set, the training error will continue to climb, while the cross-validation error will continue to decrease, until they meet in the middle. In this case, our intrinsic error was set to 1.0, and we can infer that adding more data will allow the estimator to very closely match the best possible cross-validation error. ``` plot_learning_curve(d=6) ``` For our $d=6$ case, we see that we have more training data than we need. This is not a problem (especially if the algorithm scales well with large $N$), but if our data were expensive to obtain or if the training scales unfavorably with $N$, we could have used a diagram like this to determine this and stop once we had recorded 40-50 training samples. ## Summary We’ve seen above that an under-performing algorithm can be due to two possible situations: high bias (under-fitting) and high variance (over-fitting). In order to evaluate our algorithm, we set aside a portion of our training data for cross-validation. Using the technique of learning curves, we can train on progressively larger subsets of the data, evaluating the training error and cross-validation error to determine whether our algorithm has high variance or high bias. But what do we do with this information? #### High Bias If our algorithm shows high bias, the following actions might help: - **Add more features**. 
In our example of predicting home prices, it may be helpful to make use of information such as the neighborhood the house is in, the year the house was built, the size of the lot, etc. Adding these features to the training and test sets can improve a high-bias estimator - **Use a more sophisticated model**. Adding complexity to the model can help improve on bias. For a polynomial fit, this can be accomplished by increasing the degree d. Each learning technique has its own methods of adding complexity. - **Use fewer samples**. Though this will not improve the classification, a high-bias algorithm can attain nearly the same error with a smaller training sample. For algorithms which are computationally expensive, reducing the training sample size can lead to very large improvements in speed. - **Decrease regularization**. Regularization is a technique used to impose simplicity in some machine learning models, by adding a penalty term that depends on the characteristics of the parameters. If a model has high bias, decreasing the effect of regularization can lead to better results. #### High Variance If our algorithm shows high variance, the following actions might help: - **Use fewer features**. Using a feature selection technique may be useful, and decrease the over-fitting of the estimator. - **Use more training samples**. Adding training samples can reduce the effect of over-fitting, and lead to improvements in a high variance estimator. - **Increase Regularization**. Regularization is designed to prevent over-fitting. In a high-variance model, increasing regularization can lead to better results. These choices become very important in real-world situations. For example, due to limited telescope time, astronomers must seek a balance between observing a large number of objects, and observing a large number of features for each object. Determining which is more important for a particular learning task can inform the observing strategy that the astronomer employs. 
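The regularization bullets can be made concrete with a small sketch. This is not taken from the text above; `ridge_polyfit` is a made-up helper that adds an L2 penalty to a polynomial least-squares fit, showing how increasing the penalty tames a high-variance fit by shrinking the coefficients:

```python
import numpy as np

def ridge_polyfit(x, y, degree, alpha):
    """Polynomial least squares with an L2 penalty of strength alpha."""
    X = np.vander(x, degree + 1)
    # Closed-form ridge solution: (X^T X + alpha I)^{-1} X^T y
    A = X.T @ X + alpha * np.eye(degree + 1)
    return np.linalg.solve(A, X.T @ y)

rng = np.random.RandomState(0)
x = rng.random_sample(30)
y = np.sin(2 * np.pi * x) + 0.3 * rng.randn(30)

# With no penalty, the degree-9 fit has wildly large coefficients
# (high variance); a modest penalty shrinks them dramatically.
coef_free = ridge_polyfit(x, y, 9, alpha=0.0)
coef_ridge = ridge_polyfit(x, y, 9, alpha=1.0)
print(np.abs(coef_free).max(), np.abs(coef_ridge).max())
```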
In a later exercise, we will explore the use of learning curves for the photometric redshift problem.
<a href="https://colab.research.google.com/github/yukinaga/minnano_kaggle/blob/main/section_2/01_pandas_basic.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Pandas Basics

Pandas is a Python library for data analysis that makes it easy to load and edit data, display statistics, and more. Its core code is written in Cython or C, so it runs fast.

## ●Importing Pandas

To use Pandas, you need to import the Pandas module. We import NumPy as well.

```
import pandas as pd
import numpy as np

# For practice
```

Pandas has two data structures: Series (one-dimensional) and DataFrame (two-dimensional).

## ●Creating a Series

A Series is a labeled one-dimensional array that can store data of various types, such as integers, floats, and strings. The following example creates a Series from a list. The labels are specified with `index`.

```
a = pd.Series([60, 80, 70, 50, 30], index=["Japanese", "English", "Math", "Science", "History"])
print(type(a))
print(a)

# For practice
```

Above, the data and labels are passed as lists, but NumPy arrays work just as well.

```
a = pd.Series(np.array([60, 80, 70, 50, 30]), index=np.array(["Japanese", "English", "Math", "Science", "History"]))
print(type(a))
print(a)

# For practice
```

A Series can also be created from a dictionary.

```
a = pd.Series({"Japanese":60, "English":80, "Math":70, "Science":50, "History":30})
print(type(a))
print(a)

# For practice
```

## ●Working with a Series

You can manipulate the data in a Series using indices and labels. The following example accesses data both ways.

```
a = pd.Series([60, 80, 70, 50, 30], index=["Japanese", "English", "Math", "Science", "History"])
print(a[2])  # by integer index
print(a["Math"])  # by label

# For practice
```

Another Series can be appended to add data (note: the `append` method originally used here was removed in pandas 2.0, so `pd.concat` is used instead).

```
a = pd.Series([60, 80, 70, 50, 30], index=["Japanese", "English", "Math", "Science", "History"])
b = pd.Series([20], index=["Art"])
a = pd.concat([a, b])  # a.append(b) in pandas < 2.0
print(a)

# For practice
```

Series also support modifying and deleting data, combining Series together, and more. For details, see the official documentation: https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html

## ●Creating a DataFrame

A DataFrame is a labeled two-dimensional array that can store data of various types, such as integers, floats, and strings. The following example creates a DataFrame from a two-dimensional list.

```
a = pd.DataFrame([[80, 60, 70, True],
                  [90, 80, 70, True],
                  [70, 60, 75, True],
                  [40, 60, 50, False],
                  [20, 30, 40, False],
                  [50, 20, 10, False]])
a  # in a notebook, a value is displayed without print

# For practice
```
A DataFrame can also be created from Series, dictionaries, or NumPy arrays. Rows and columns can be given labels.

```
a.index = ["Taro", "Hanako", "Jiro", "Sachiko", "Saburo", "Yoko"]
a.columns = ["Japanese", "English", "Math", "Result"]
a

# For practice
```

## ●Examining the Data

`shape` returns the number of rows and columns.

```
a.shape  # (rows, columns)

# For practice
```

Use `head()` to display the first five rows and `tail()` to display the last five. This is especially handy for getting an overview when a dataset has many rows.

```
a.head()  # first 5 rows

# For practice
```

```
a.tail()  # last 5 rows

# For practice
```

Basic statistics can be displayed all at once with `describe()`.

```
a.describe()  # basic statistics

# For practice
```

These values can also be obtained individually with methods such as `mean()` and `max()`.

## ●Working with a DataFrame

You can manipulate DataFrame data using indices and labels. The code below selects a range with the `loc` indexer and extracts the data as a Series.

```
tr = a.loc["Taro", :]  # extract one row
print(type(tr))
tr

# For practice
```

You can confirm that the extracted row is a Series. A column can be extracted from a DataFrame in the same way.

```
ma = a.loc[:, "English"]  # extract one column
print(type(ma))
ma

# For practice
```

This is a Series as well. With `iloc`, ranges can be specified by integer index.

```
r = a.iloc[1:4, :2]  # rows 1-3, columns 0-1
print(type(r))
r

# For practice
```

A row can be added with `loc`.

```
a.loc["Shiro"] = pd.Series([70, 80, 70, True], index=["Japanese", "English", "Math", "Result"], name="Shiro")  # add a Series as a row
a

# For practice
```

A column can be added by assigning to a new column label.

```
a["Science"] = [80, 70, 60, 50, 60, 40, 80]  # add a column from a list
a

# For practice
```

The `sort_values` method sorts a DataFrame.

```
a.sort_values(by="Math", ascending=False)

# For practice
```

DataFrames have many other features, such as deleting and modifying data and joining DataFrames together. Of course, you can also extract data by narrowing it down with detailed conditions. To learn more, see the official documentation: https://pandas.pydata.org/pandas-docs/stable/index.html
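As a small illustration of extracting rows by condition, here is a self-contained sketch (the column names mirror the examples in this notebook, but the data is made up):

```python
import pandas as pd

df = pd.DataFrame(
    {"Japanese": [80, 90, 40], "Math": [70, 70, 50], "Result": [True, True, False]},
    index=["Taro", "Hanako", "Sachiko"],
)

# Boolean indexing: rows where Math >= 60 AND Result is True
passed = df[(df["Math"] >= 60) & df["Result"]]
print(passed.index.tolist())  # ['Taro', 'Hanako']
```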
# First and Second order random walks

First- and second-order random walks are a node-sampling mechanism that can be employed in a large number of algorithms. In this notebook we will briefly show how to use Ensmallen to sample a large number of random walks from big graphs.

To install the GraPE library run:

```
pip install grape
```

To install the Ensmallen module exclusively, which may be useful when the TensorFlow dependency causes problems, run:

```
pip install ensmallen
```

## Retrieving a graph to run the sampling on

In this tutorial we will run examples on the [Homo Sapiens graph from STRING](https://string-db.org/cgi/organisms). If you want to load a graph from an edge list, just follow the examples provided in the Loading a Graph in Ensmallen tutorial.

```
from ensmallen.datasets.string import HomoSapiens
```

Retrieving and loading the graph:

```
graph = HomoSapiens()

# We also create a version of the graph without edge weights
unweighted_graph = graph.remove_edge_weights()
```

We compute the graph report:

```
graph
```

and the unweighted graph report:

```
unweighted_graph
```

## Random walks are heavily parallelized

All the algorithms to sample random walks provided by Ensmallen are heavily parallelized. Therefore, running them on machines with a large number of threads leads to better time performance. This notebook is being executed on a Colab instance with only 2 cores; the performance will therefore not be as good as it could be even on your notebook, or on your cellphone (Ensmallen can run on Android phones).

```
from multiprocessing import cpu_count

cpu_count()
```

## Unweighted first-order random walks

Computation of first-order random walks ignoring the edge weights. In the following examples random walks are computed (on unweighted and weighted graphs) by invoking either the *random_walks* or the *complete_walks* method.
*random_walks* automatically chooses between exact and sampled random walks; use this method if you want to let *GraPE* choose the best option. *complete_walks* is the method used to compute exact walks.

```
%%time
unweighted_graph.random_walks(
    # We want random walks with length 32
    walk_length=32,
    # We want to get random walks starting from 1000 random nodes
    quantity=1000,
    # We want 2 iterations from each node
    iterations=2
)
```

```
%%time
unweighted_graph.complete_walks(
    # We want random walks with length 100
    walk_length=100,
    # We want 2 iterations from each node
    iterations=2
)
```

## Weighted first-order random walks

Computation of first-order random walks, biased using the edge weights.

```
%%time
graph.random_walks(
    # We want random walks with length 100
    walk_length=100,
    # We want to get random walks starting from 1000 random nodes
    quantity=1000,
    # We want 2 iterations from each node
    iterations=2
)
```

Similarly, to get random walks from all of the nodes in the graph it is possible to use:

```
%%time
graph.complete_walks(
    # We want random walks with length 100
    walk_length=100,
    # We want 2 iterations from each node
    iterations=2
)
```

## Second-order random walks

In the following we show the computation of second-order random walks, that is, random walks that use [Node2Vec parameters](https://arxiv.org/abs/1607.00653) to bias the random walk towards a BFS or a DFS.
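Before looking at the Ensmallen calls, it may help to see what second-order biasing means in practice. The following is an illustrative pure-Python sketch (a toy graph and made-up helper functions, not Ensmallen's actual implementation): `return_weight` biases the walk back towards the previous node (BFS-like), while `explore_weight` biases it towards nodes two hops away (DFS-like).

```python
import random

# Toy undirected graph as an adjacency dict (made up for illustration)
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}

def node2vec_step(prev, cur, return_weight, explore_weight):
    """Sample the next node with Node2Vec-style second-order biases."""
    weights = []
    for nxt in graph[cur]:
        if nxt == prev:                # returning to the previous node
            weights.append(return_weight)
        elif nxt in graph[prev]:       # neighbour of prev: stays close
            weights.append(1.0)
        else:                          # two hops away: exploration
            weights.append(explore_weight)
    return random.choices(graph[cur], weights=weights)[0]

def second_order_walk(start, walk_length, return_weight=2.0, explore_weight=2.0):
    path = [start, random.choice(graph[start])]
    while len(path) < walk_length:
        path.append(node2vec_step(path[-2], path[-1], return_weight, explore_weight))
    return path

random.seed(42)
walk = second_order_walk(0, 8)
print(walk)
```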
```
%%time
graph.random_walks(
    # We want random walks with length 32
    walk_length=32,
    # We want to get random walks starting from 1000 random nodes
    quantity=1000,
    # We want 2 iterations from each node
    iterations=2,
    return_weight=2.0,
    explore_weight=2.0,
)
```

```
%%time
unweighted_graph.random_walks(
    # We want random walks with length 32
    walk_length=32,
    # We want to get random walks starting from 1000 random nodes
    quantity=1000,
    # We want 2 iterations from each node
    iterations=2,
    return_weight=2.0,
    explore_weight=2.0,
)
```

```
%%time
graph.complete_walks(
    # We want random walks with length 32
    walk_length=32,
    # We want 2 iterations from each node
    iterations=2,
    return_weight=2.0,
    explore_weight=2.0,
)
```

```
%%time
unweighted_graph.complete_walks(
    # We want random walks with length 32
    walk_length=32,
    # We want 2 iterations from each node
    iterations=2,
    return_weight=2.0,
    explore_weight=2.0,
)
```

## Approximated second-order random walks

When working on graphs where some nodes have an extremely high node degree *d* (e.g. *d > 50000*), the computation of the transition weights can be a bottleneck. In those use-cases approximated random walks can make the computation considerably faster by randomly subsampling each node's neighbourhood to a maximum size provided by the user. In the considered graph, the highest node degree is *d $\approx$ 7000*. In the GraPE paper we present experiments comparing the edge-prediction performance of a model trained on graph embeddings obtained with the SkipGram model when using either exact random walks or random walks obtained with significant subsampling of the nodes (maximum node degree clipped at 10). The comparative evaluation shows no decrease in performance.
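The subsampling idea itself is simple. Here is a pure-Python sketch (an illustrative helper, not Ensmallen's implementation) of clipping a huge neighbour list before computing transition weights:

```python
import random

def subsample_neighbours(neighbours, max_neighbours):
    """Clip a (possibly huge) neighbour list to at most max_neighbours
    uniformly sampled candidates before computing transition weights."""
    if len(neighbours) <= max_neighbours:
        return list(neighbours)
    return random.sample(neighbours, max_neighbours)

random.seed(0)
hub = list(range(50000))                      # a hub node with degree 50,000
candidates = subsample_neighbours(hub, 100)   # what max_neighbours=100 would keep
print(len(candidates))  # 100
```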
```
%%time
graph.random_walks(
    # We want random walks with length 32
    walk_length=32,
    # We want to get random walks starting from 1000 random nodes
    quantity=1000,
    # We want 2 iterations from each node
    iterations=2,
    return_weight=2.0,
    explore_weight=2.0,
    # We will subsample the neighbours of the nodes
    # dynamically to 100.
    max_neighbours=100
)

%%time
graph.complete_walks(
    # We want random walks with length 32
    walk_length=32,
    # We want 2 iterations from each node
    iterations=2,
    return_weight=2.0,
    explore_weight=2.0,
    # We will subsample the neighbours of the nodes
    # dynamically to 100.
    max_neighbours=100
)
```

## Enabling the speedups

Ensmallen provides numerous speed-ups based on time-memory trade-offs, which allow faster computation. They can be enabled by simply calling:

```
graph.enable()
```

### Weighted first-order random walks with speedups

The first-order random walks get roughly an order of magnitude speed increase.

```
%%time
graph.random_walks(
    # We want random walks with length 100
    walk_length=100,
    # We want to get random walks starting from 1000 random nodes
    quantity=1000,
    # We want 2 iterations from each node
    iterations=2
)

%%time
graph.complete_walks(
    # We want random walks with length 100
    walk_length=100,
    # We want 2 iterations from each node
    iterations=2
)
```

### Second-order random walks with speedups

```
%%time
graph.random_walks(
    # We want random walks with length 32
    walk_length=32,
    # We want to get random walks starting from 1000 random nodes
    quantity=1000,
    # We want 2 iterations from each node
    iterations=2,
    return_weight=2.0,
    explore_weight=2.0,
)

%%time
graph.complete_walks(
    # We want random walks with length 32
    walk_length=32,
    # We want 2 iterations from each node
    iterations=2,
    return_weight=2.0,
    explore_weight=2.0,
)
```

## Approximated second-order random walks with speedups

```
%%time
graph.random_walks(
    # We want random walks with length 32
    walk_length=32,
    # We want to get random walks starting from 1000 random nodes
    quantity=1000,
    # We want 2 iterations from each node
    iterations=2,
    return_weight=2.0,
    explore_weight=2.0,
    # We will subsample the neighbours of the nodes
    # dynamically to 100.
    max_neighbours=100
)

%%time
graph.complete_walks(
    # We want random walks with length 32
    walk_length=32,
    # We want 2 iterations from each node
    iterations=2,
    return_weight=2.0,
    explore_weight=2.0,
    # We will subsample the neighbours of the nodes
    # dynamically to 100.
    max_neighbours=100
)
```
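To build intuition for the `return_weight` and `explore_weight` parameters used above, here is a toy pure-Python sketch of a second-order walk on a hypothetical adjacency dict. This is not Grape's implementation, and it assumes `return_weight` scales the chance of stepping back to the previous node while `explore_weight` scales steps that move further away, in the spirit of Node2Vec's 1/p and 1/q biases.

```python
import random

# Hypothetical toy undirected graph as an adjacency dict
GRAPH = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}

def second_order_walk(graph, start, walk_length,
                      return_weight=1.0, explore_weight=1.0, seed=None):
    """One Node2Vec-style biased walk over an adjacency dict."""
    rng = random.Random(seed)
    walk = [start]
    prev = None
    while len(walk) < walk_length:
        cur = walk[-1]
        neighbours = graph[cur]
        if prev is None:
            # First step: no previous node, so no second-order bias yet.
            nxt = rng.choice(neighbours)
        else:
            weights = [
                return_weight if n == prev      # step back to previous node
                else 1.0 if n in graph[prev]    # at distance 1 from prev
                else explore_weight             # at distance 2: explore outward
                for n in neighbours
            ]
            nxt = rng.choices(neighbours, weights=weights, k=1)[0]
        prev = cur
        walk.append(nxt)
    return walk

walk = second_order_walk(GRAPH, start=0, walk_length=10,
                         return_weight=2.0, explore_weight=2.0, seed=0)
```

Every consecutive pair in the returned walk is an edge of the graph; raising `return_weight` makes the walk more BFS-like (local), raising `explore_weight` more DFS-like.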
![](https://github.com/hse-econ-data-science/dap_2021_spring/blob/main/sem02_forif/flowchart.png?raw=true)

```
x = 1
if x == 1:
    print('That is true')

x = 1
if x != 1:
    print('That is true')
else:
    print('x = 1')

if x == 1:
    print('That is true!')

a = int(input())
b = int(input())
if a < b:
    print(a)
else:
    if a > b:
        print(b)
    else:
        print('Equal numbers', a)

a = int(input())
b = int(input())
if a < b:
    print(a)
elif a > b:
    print(b)
else:
    print('Equal numbers', a)

a = input()
b = input()
if a < b:
    print(a)
elif a > b:
    print(b)
else:
    print('Equal numbers', a)
```

### Sale

A shop is running a promotion:

* All items cheaper than 1000 rubles get a 15% discount
* All items costing at least 1000 but at most 5000 rubles get a 20% discount
* All items more expensive than 5000 rubles get a 25% discount

**Input**

A non-negative integer: the price of the item in rubles.

**Output**

A non-negative number: the price of the item with the discount applied, in rubles.

```
q = int(input())
if q < 1000:
    print(q - 0.15 * q)
elif q <= 5000:
    print(q - 0.20 * q)
else:
    print(q - 0.25 * q)

x = 'abc'
x[2]

x = 12
z = str(x)
z
```

## Multiplication tricks

There is a neat trick for multiplying a two-digit number by 11: you can get the product by writing the sum of the two digits between the first and the second digit of the original number. For example, 15 * 11 = 1 1+5 5 = 165, or 34 * 11 = 3 3+4 4 = 374. Write a program that multiplies two-digit numbers by 11 using this trick. You may not use the multiplication operator.

__Input format:__ A two-digit number.

__Output format:__ The program should print the answer.
**Test 1**

__Sample input:__ 15

__Output:__ 165

**Test 2**

__Sample input:__ 66

__Output:__ 726

```
54 * 11

'1' + '2'

x = input()
n1 = int(x[0])
n2 = int(x[1])
if n1 + n2 < 10:
    print(str(n1) + str(n1 + n2) + str(n2))
else:
    print(str(n1 + 1) + str((n1 + n2) - 10) + str(n2))

print('Text')

'67'
```

# The while loop

```
i = 1
while i <= 10:
    print(i)
    i += 1
```

break

```
i = 1
while i <= 5:
    mark = int(input("Enter a grade: "))
    if mark < 4:
        print('YES')
        break
    i += 1
else:
    # else is at the same indentation level as while,
    # so it belongs to the loop itself, not to the condition inside it
    print('NO')
```

continue

```
i = 1
retakes = 0
while i <= 5:
    mark = int(input("Enter a grade: "))
    i += 1
    if mark >= 4:
        # if there is no retake, go straight to checking i,
        # without incrementing retakes
        continue
    retakes += 1
print("Total retakes:", retakes)

i = 1
retakes = 0
while i <= 5:
    mark = int(input("Enter a grade: "))
    i += 1
    if mark < 4:
        retakes += 1
print("Total retakes:", retakes)

a += 1
a *= 2
a /= 2
a %= 2
```

## (∩`-´)⊃━☆゚.*・。゚ Problem

Vasya took up running: on the first day he ran X kilometres and got exhausted. Vasya set himself a goal of Y kilometres and wants to find out when he will reach it, if every day he runs a distance 10% longer than on the previous day.
**Input format**

The program receives integers X, Y.

**Output format**

A single integer: the day on which Vasya runs his goal distance.

**Examples**

**Input:** 10 21

**Output:** 9

```
x = int(input())
y = int(input())
n = 1
while x < y:
    n += 1
    x *= 1.1
print(n)

x = int(input())
y = int(input())
i = 1
while x < y:
    x *= 1.1
    i += 1
print(i)
```

# The range function

```
i = 1
myList = []  # list
N = int(input())
while i <= N:
    myList.append(i)  # append an element to the list
    i += 1
print(myList)

a = [1, 2, 3]
a[2]

myRange = range(20)
print(list(myRange))
type(myRange)

myRange = range(1, 21, 1)
print(list(myRange))

myRange = range(1, 21, 2)
print(list(myRange))

myRange = range(20, 0, -1)
print(list(myRange))

myRange = range(20, 0, -2)
print(list(myRange))

students = ['Ivan Ivanov', 'Tatiana Sidorova', 'Maria Smirnova']
for i in range(len(students)):
    print(students[i])

students = ['Ivan Ivanov', 'Tatiana Sidorova', 'Maria Smirnova']
for student in students:
    print(student)
```

# Problem

Loop over the elements of a list and compute their sum.

**Input format**

The program receives numbers separated by spaces.

**Output format**

The sum of the elements of the list.

```
x = input()
z = x.split()
print(z)

x = input()
print(x)

x = list(map(int, input().split()))
print(x)

x = input().split()
total = 0
for i in x:
    total += float(i)
print(total)

int(24.0)
```

# Problem

Print the value of the smallest positive element in the list. It is known that the list contains at least one positive element, and the absolute values of all elements do not exceed 1000.

**Input format**

A list of numbers is entered; all numbers are on one line.

**Output format**

Print the answer to the problem.
**Example 1**

**Input**

5 -4 3 -2 1

**Output**

1

**Example 2**

**Input**

10 5 0 -5 -10

**Output**

5

**Example 3**

**Input**

-1 -2 -3 -4 100

**Output**

100

```
A = input().split()
min_number = 1000
for i in range(len(A)):
    if int(A[i]) > 0:  # strictly positive, so 0 does not count (see Example 2)
        if int(A[i]) <= min_number:
            min_number = int(A[i])
print(min_number)
```
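For reference, the same problem can be solved in one line with the built-in `min` and a generator expression (a sketch added here, not part of the course material above):

```python
# Smallest positive element of a space-separated list of integers.
def smallest_positive(line):
    return min(v for v in map(int, line.split()) if v > 0)

print(smallest_positive("10 5 0 -5 -10"))  # 5
```

The generator filters out non-positive values before `min` ever sees them, so no sentinel value like 1000 is needed.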
```
import sys
sys.path.insert(0, '..')

from branca.element import *
```

## Element

This is the base brick of `branca`. You can create an `Element` by providing a template string:

```
e = Element("This is fancy text")
```

Each element has an attribute `_name` and a unique `_id`. You also have a method `get_name` to get a unique string representation of the element.

```
print(e._name, e._id)
print(e.get_name())
```

You can render an `Element` using the method `render`:

```
e.render()
```

In the template, you can use the keyword `this` to access the object itself, and the keyword `kwargs` to access any keyword argument provided in the `render` method:

```
e = Element("Hello {{kwargs['you']}}, my name is `{{this.get_name()}}`.")
e.render(you='World')
```

Well, this is not really cool for now. What makes elements useful is the fact that you can create trees out of them. To do so, you can either use the method `add_child` or the method `add_to`.

```
child = Element('This is the child.')
parent = Element('This is the parent.').add_child(child)

parent = Element('This is the parent.')
child = Element('This is the child.').add_to(parent)
```

Now in the example above, embedding the one in the other does not change anything.

```
print(parent.render(), child.render())
```

But you can use the tree structure in the template.

```
parent = Element("<parent>{% for child in this._children.values() %}{{child.render()}}{% endfor %}</parent>")
Element('<child1/>').add_to(parent)
Element('<child2/>').add_to(parent)
parent.render()
```

As you can see, the children of an element are referenced in the `_children` attribute in the form of an `OrderedDict`.
You can choose the key of each child by specifying a `name` in the `add_child` (or `add_to`) method:

```
parent = Element("<parent>{% for child in this._children.values() %}{{child.render()}}{% endfor %}</parent>")
Element('<child1/>').add_to(parent, name='child_1')
parent._children
```

That way, it is possible to overwrite a child by specifying the same name:

```
Element('<child1_overwritten/>').add_to(parent, name='child_1')
parent.render()
```

I hope you start to find it useful. In fact, the real interest of `Element` lies in the classes that inherit from it. The most important one is `Figure`, described in the next section.

## Figure

A `Figure` represents an HTML document. It's composed of 3 parts (attributes):

* `header` : corresponds to the `<head>` part of the HTML document,
* `html` : corresponds to the `<body>` part,
* `script` : corresponds to a `<script>` section that will be appended after the `<body>` section.

```
f = Figure()
print(f.render())
```

You can for example create a beautiful cyan "hello-world" webpage by doing:

```
f.header.add_child(Element("<style>body {background-color: #00ffff}</style>"))
f.html.add_child(Element("<h1>Hello world</h1>"))
print(f.render())
```

You can simply save the content of the `Figure` to a file, thanks to the `save` method:

```
f.save('foo.html')
print(open('foo.html').read())
```

If you want to visualize it in the notebook, you can let the `Figure._repr_html_` method do its job by typing:

```
f
```

If this rendering is too large for you, you can force its width and height:

```
f.width = 300
f.height = 200
f
```

Note that you can also define a `Figure`'s size in a matplotlib way:

```
Figure(figsize=(5,5))
```

## MacroElement

It happens that you need to create elements that have multiple effects on a `Figure`. For this, you can use `MacroElement`, whose template contains macros; each macro writes something into the parent `Figure`'s header, body and script.
```
macro = MacroElement()
macro._template = Template(
    '{% macro header(this, kwargs) %}'
    'This is header of {{this.get_name()}}'
    '{% endmacro %}'
    '{% macro html(this, kwargs) %}'
    'This is html of {{this.get_name()}}'
    '{% endmacro %}'
    '{% macro script(this, kwargs) %}'
    'This is script of {{this.get_name()}}'
    '{% endmacro %}'
)

print(Figure().add_child(macro).render())
```

## Link

To embed javascript and css links in the header, you can use these classes:

```
js_link = JavascriptLink('https://example.com/javascript.js')
js_link.render()

css_link = CssLink('https://example.com/style.css')
css_link.render()
```

## Html

An `Html` element enables you to create a custom div to put in the *body* of your page.

```
html = Html('Hello world')
html.render()
```

It's designed to render the text *as you gave it*, so it won't work directly if you want to embed HTML code inside the div.

```
Html('<b>Hello world</b>').render()
```

For this, you have to set `script=True` and it will work:

```
Html('<b>Hello world</b>', script=True).render()
```

## IFrame

If you need to embed a full webpage (with a separate javascript environment), you can use `IFrame`.

```
iframe = IFrame('Hello World')
iframe.render()
```

As you can see, it will embed the full content of the iframe in a *base64* string so that the output looks like:

```
f = Figure(height=180)
f.html.add_child(Element("Before the frame"))
f.html.add_child(IFrame('In the frame', height='100px'))
f.html.add_child(Element("After the frame"))
f
```

## Div

At last, you have the `Div` element that behaves almost like `Html`, with a few differences:

* The style is put in the header, while `Html`'s style is embedded inline.
* `Div` inherits from `MacroElement` so that:
    * It cannot be rendered unless it's embedded in a `Figure`.
    * It is a useful object to inherit from when you create new classes.

```
div = Div()
div.html.add_child(Element('Hello world'))
print(Figure().add_child(div).render())
```
<a href="https://practicalai.me"><img src="https://raw.githubusercontent.com/practicalAI/images/master/images/rounded_logo.png" width="100" align="left" hspace="20px" vspace="20px"></a> <img src="https://raw.githubusercontent.com/practicalAI/images/master/basic_ml/06_Multilayer_Perceptron/nn.png" width="200" vspace="10px" align="right"> <div align="left"> <h1>Multilayer Perceptron (MLP)</h1> In this lesson, we will explore multilayer perceptrons (MLPs) which are a basic type of neural network. We will implement them using Tensorflow with Keras.</div> <table align="center"> <td> <img src="https://raw.githubusercontent.com/practicalAI/images/master/images/rounded_logo.png" width="25"><a target="_blank" href="https://practicalai.me"> View on practicalAI</a> </td> <td> <img src="https://raw.githubusercontent.com/practicalAI/images/master/images/colab_logo.png" width="25"><a target="_blank" href="https://colab.research.google.com/github/practicalAI/practicalAI/blob/master/notebooks/06_Multilayer_Perceptron.ipynb"> Run in Google Colab</a> </td> <td> <img src="https://raw.githubusercontent.com/practicalAI/images/master/images/github_logo.png" width="22"><a target="_blank" href="https://github.com/practicalAI/practicalAI/blob/master/notebooks/basic_ml/06_Multilayer_Perceptron.ipynb"> View code on GitHub</a> </td> </table> # Overview * **Objective:** Predict the probability of class $y$ given the inputs $X$. Non-linearity is introduced to model the complex, non-linear data. * **Advantages:** * Can model non-linear patterns in the data really well. * **Disadvantages:** * Overfits easily. * Computationally intensive as network increases in size. * Not easily interpretable. * **Miscellaneous:** Future neural network architectures that we'll see use the MLP as a modular unit for feed forward operations (affine transformation (XW) followed by a non-linear operation). Our goal is to learn a model 𝑦̂ that models 𝑦 given 𝑋 . 
You'll notice that neural networks are just extensions of the generalized linear methods we've seen so far, but with non-linear activation functions, since our data will be highly non-linear.

<img src="https://raw.githubusercontent.com/practicalAI/images/master/basic_ml/06_Multilayer_Perceptron/nn.png" width="550">

$z_1 = XW_1$

$a_1 = f(z_1)$

$z_2 = a_1W_2$

$\hat{y} = softmax(z_2)$ # classification

* $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)
* $W_1$ = 1st layer weights | $\in \mathbb{R}^{DXH}$ ($H$ is the number of hidden units in layer 1)
* $z_1$ = outputs from first layer | $\in \mathbb{R}^{NXH}$
* $f$ = non-linear activation function
* $a_1$ = activation applied to first layer's outputs | $\in \mathbb{R}^{NXH}$
* $W_2$ = 2nd layer weights | $\in \mathbb{R}^{HXC}$ ($C$ is the number of classes)
* $z_2$ = outputs from second layer | $\in \mathbb{R}^{NXC}$
* $\hat{y}$ = prediction | $\in \mathbb{R}^{NXC}$ ($N$ is the number of samples)

**Note**: We're going to leave out the bias terms $\beta$ to avoid further crowding the backpropagation calculations.

### Training

1. Randomly initialize the model's weights $W$ (we'll cover more effective initialization strategies later in this lesson).
2. Feed inputs $X$ into the model to do the forward pass and receive the probabilities.
   * $z_1 = XW_1$
   * $a_1 = f(z_1)$
   * $z_2 = a_1W_2$
   * $\hat{y} = softmax(z_2)$
3. Compare the predictions $\hat{y}$ (ex. [0.3, 0.3, 0.4]) with the actual target values $y$ (ex. class 2 would look like [0, 0, 1]) with the objective (cost) function to determine loss $J$. A common objective function for classification tasks is cross-entropy loss.
   * $J(\theta) = - \sum_i y_i ln (\hat{y_i}) $
   * Since each input maps to exactly one class, our cross-entropy loss simplifies to:
   * $J(\theta) = - \sum_i ln(\hat{y_i}) = - \sum_i ln (\frac{e^{X_iW_y}}{\sum_j e^{X_iW}}) $
4. Calculate the gradient of loss $J(\theta)$ w.r.t. the model weights.
   * $\frac{\partial{J}}{\partial{W_{2j}}} = \frac{\partial{J}}{\partial{\hat{y}}}\frac{\partial{\hat{y}}}{\partial{W_{2j}}} = - \frac{1}{\hat{y}}\frac{\partial{\hat{y}}}{\partial{W_{2j}}} = - \frac{1}{\frac{e^{W_{2y}a_1}}{\sum_j e^{a_1W}}}\frac{\sum_j e^{a_1W}e^{a_1W_{2y}}0 - e^{a_1W_{2y}}e^{a_1W_{2j}}a_1}{(\sum_j e^{a_1W})^2} = \frac{a_1e^{a_1W_{2j}}}{\sum_j e^{a_1W}} = a_1\hat{y}$
   * $\frac{\partial{J}}{\partial{W_{2y}}} = \frac{\partial{J}}{\partial{\hat{y}}}\frac{\partial{\hat{y}}}{\partial{W_{2y}}} = - \frac{1}{\hat{y}}\frac{\partial{\hat{y}}}{\partial{W_{2y}}} = - \frac{1}{\frac{e^{W_{2y}a_1}}{\sum_j e^{a_1W}}}\frac{\sum_j e^{a_1W}e^{a_1W_{2y}}a_1 - e^{a_1W_{2y}}e^{a_1W_{2y}}a_1}{(\sum_j e^{a_1W})^2} = \frac{1}{\hat{y}}(a_1\hat{y} - a_1\hat{y}^2) = a_1(\hat{y}-1)$
   * $ \frac{\partial{J}}{\partial{W_1}} = \frac{\partial{J}}{\partial{\hat{y}}} \frac{\partial{\hat{y}}}{\partial{a_1}} \frac{\partial{a_1}}{\partial{z_1}} \frac{\partial{z_1}}{\partial{W_1}} = W_2(\partial{scores})(\partial{ReLU})X $
5. Update the weights $W$ using a small learning rate $\alpha$. The updates will penalize the probability for the incorrect classes ($j$) and encourage a higher probability for the correct class ($y$).
   * $W_i = W_i - \alpha\frac{\partial{J}}{\partial{W_i}}$
6. Repeat steps 2–5 until the model performs well.

# Set up

```
# Use TensorFlow 2.x
%tensorflow_version 2.x
import os
import numpy as np
import tensorflow as tf

# Arguments
SEED = 1234
SHUFFLE = True
DATA_FILE = "spiral.csv"
INPUT_DIM = 2
NUM_CLASSES = 3
NUM_SAMPLES_PER_CLASS = 500
TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15
NUM_EPOCHS = 10
BATCH_SIZE = 32
HIDDEN_DIM = 100
LEARNING_RATE = 1e-2

# Set seed for reproducibility
np.random.seed(SEED)
tf.random.set_seed(SEED)
```

# Data

Download non-linear spiral data for a classification task.
```
import matplotlib.pyplot as plt
import pandas as pd
import urllib

# Upload data from GitHub to notebook's local drive
url = "https://raw.githubusercontent.com/practicalAI/practicalAI/master/data/spiral.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(DATA_FILE, 'wb') as fp:
    fp.write(html)

# Load data
df = pd.read_csv(DATA_FILE, header=0)
X = df[['X1', 'X2']].values
y = df['color'].values
df.head(5)

print ("X: ", np.shape(X))
print ("y: ", np.shape(y))

# Visualize data
plt.title("Generated non-linear data")
colors = {'c1': 'red', 'c2': 'yellow', 'c3': 'blue'}
plt.scatter(X[:, 0], X[:, 1], c=[colors[_y] for _y in y], edgecolors='k', s=25)
plt.show()
```

# Split data

```
import collections
import json
from sklearn.model_selection import train_test_split
```

### Components

```
def train_val_test_split(X, y, val_size, test_size, shuffle):
    """Split data into train/val/test datasets."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, stratify=y, shuffle=shuffle)
    X_train, X_val, y_train, y_val = train_test_split(
        X_train, y_train, test_size=val_size, stratify=y_train, shuffle=shuffle)
    return X_train, X_val, X_test, y_train, y_val, y_test
```

### Operations

```
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
    X=X, y=y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
class_counts = dict(collections.Counter(y))
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_val: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"X_train[0]: {X_train[0]}")
print (f"y_train[0]: {y_train[0]}")
print (f"Classes: {class_counts}")
```

# Label encoder

```
import json
from sklearn.preprocessing import LabelEncoder

# Output vectorizer
y_tokenizer = LabelEncoder()

# Fit on train data
y_tokenizer = y_tokenizer.fit(y_train)
classes = list(y_tokenizer.classes_)
print (f"classes: {classes}")

# Convert labels to tokens
print (f"y_train[0]: {y_train[0]}")
y_train = y_tokenizer.transform(y_train)
y_val = y_tokenizer.transform(y_val)
y_test = y_tokenizer.transform(y_test)
print (f"y_train[0]: {y_train[0]}")

# Class weights
counts = collections.Counter(y_train)
class_weights = {_class: 1.0/count for _class, count in counts.items()}
print (f"class counts: {counts},\nclass weights: {class_weights}")
```

# Standardize data

We need to standardize our data (zero mean and unit variance) in order to optimize quickly. We're only going to standardize the inputs X because our outputs y are class values.

```
from sklearn.preprocessing import StandardScaler

# Standardize the data (mean=0, std=1) using training data
X_scaler = StandardScaler().fit(X_train)

# Apply scaler on training and test data (don't standardize outputs for classification)
standardized_X_train = X_scaler.transform(X_train)
standardized_X_val = X_scaler.transform(X_val)
standardized_X_test = X_scaler.transform(X_test)

# Check
print (f"standardized_X_train: mean: {np.mean(standardized_X_train, axis=0)[0]}, std: {np.std(standardized_X_train, axis=0)[0]}")
print (f"standardized_X_val: mean: {np.mean(standardized_X_val, axis=0)[0]}, std: {np.std(standardized_X_val, axis=0)[0]}")
print (f"standardized_X_test: mean: {np.mean(standardized_X_test, axis=0)[0]}, std: {np.std(standardized_X_test, axis=0)[0]}")
```

# Linear model

Before we get to our neural network, we're going to implement a generalized linear model (logistic regression) first to see why linear models won't suffice for our dataset. We will use TensorFlow with Keras to do this.
```
import itertools
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
```

### Components

```
# Linear model
class LogisticClassifier(Model):
    def __init__(self, hidden_dim, num_classes):
        super(LogisticClassifier, self).__init__()
        self.fc1 = Dense(units=hidden_dim, activation='linear')  # linear = no activation function
        self.fc2 = Dense(units=num_classes, activation='softmax')

    def call(self, x_in, training=False):
        """Forward pass."""
        z = self.fc1(x_in)
        y_pred = self.fc2(z)
        return y_pred

    def sample(self, input_shape):
        x_in = Input(shape=input_shape)
        return Model(inputs=x_in, outputs=self.call(x_in)).summary()

def plot_confusion_matrix(y_true, y_pred, classes, cmap=plt.cm.Blues):
    """Plot a confusion matrix using ground truth and predictions."""
    # Confusion matrix
    cm = confusion_matrix(y_true, y_pred)
    cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

    # Figure
    fig = plt.figure()
    ax = fig.add_subplot(111)
    cax = ax.matshow(cm, cmap=plt.cm.Blues)
    fig.colorbar(cax)

    # Axis
    plt.title("Confusion matrix")
    plt.ylabel("True label")
    plt.xlabel("Predicted label")
    ax.set_xticklabels([''] + classes)
    ax.set_yticklabels([''] + classes)
    ax.xaxis.set_label_position('bottom')
    ax.xaxis.tick_bottom()

    # Values
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, f"{cm[i, j]:d} ({cm_norm[i, j]*100:.1f}%)",
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    # Display
    plt.show()

def plot_multiclass_decision_boundary(model, X, y, savefig_fp=None):
    """Plot the multiclass decision boundary for a model that accepts 2D inputs.

    Arguments:
        model {function} -- trained model with function model.predict(x_in).
        X {numpy.ndarray} -- 2D inputs with shape (N, 2).
        y {numpy.ndarray} -- 1D outputs with shape (N,).
    """
    # Axis boundaries
    x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1
    y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, 101),
                         np.linspace(y_min, y_max, 101))

    # Create predictions
    x_in = np.c_[xx.ravel(), yy.ravel()]
    y_pred = model.predict(x_in)
    y_pred = np.argmax(y_pred, axis=1).reshape(xx.shape)

    # Plot decision boundary
    plt.contourf(xx, yy, y_pred, cmap=plt.cm.Spectral, alpha=0.8)
    plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.RdYlBu)
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())

    # Plot
    if savefig_fp:
        plt.savefig(savefig_fp, format='png')
```

### Operations

```
# Initialize the model
model = LogisticClassifier(hidden_dim=HIDDEN_DIM, num_classes=NUM_CLASSES)
model.sample(input_shape=(INPUT_DIM,))

# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
              loss=SparseCategoricalCrossentropy(),
              metrics=['accuracy'])

# Training
model.fit(x=standardized_X_train,
          y=y_train,
          validation_data=(standardized_X_val, y_val),
          epochs=NUM_EPOCHS,
          batch_size=BATCH_SIZE,
          class_weight=class_weights,
          shuffle=False,
          verbose=1)

# Predictions
pred_train = model.predict(standardized_X_train)
pred_test = model.predict(standardized_X_test)
print (f"sample probability: {pred_test[0]}")
pred_train = np.argmax(pred_train, axis=1)
pred_test = np.argmax(pred_test, axis=1)
print (f"sample class: {pred_test[0]}")

# Accuracy
train_acc = accuracy_score(y_train, pred_train)
test_acc = accuracy_score(y_test, pred_test)
print (f"train acc: {train_acc:.2f}, test acc: {test_acc:.2f}")

# Metrics
plot_confusion_matrix(y_test, pred_test, classes=classes)
print (classification_report(y_test, pred_test))

# Visualize the decision boundary
plt.figure(figsize=(12,5))
plt.subplot(1, 2, 1)
plt.title("Train")
plot_multiclass_decision_boundary(model=model, X=standardized_X_train, y=y_train)
plt.subplot(1, 2, 2)
plt.title("Test")
plot_multiclass_decision_boundary(model=model, X=standardized_X_test, y=y_test)
plt.show()
```

# Activation functions

Using the generalized linear method (logistic regression) yielded poor results because of the non-linearity present in our data. We need to use an activation function that can allow our model to learn and map the non-linearity in our data. There are many different options, so let's explore a few.

```
from tensorflow.keras.activations import relu
from tensorflow.keras.activations import sigmoid
from tensorflow.keras.activations import tanh

# Fig size
plt.figure(figsize=(12,3))

# Data
x = np.arange(-5., 5., 0.1)

# Sigmoid activation (constrain a value between 0 and 1.)
plt.subplot(1, 3, 1)
plt.title("Sigmoid activation")
y = sigmoid(x)
plt.plot(x, y)

# Tanh activation (constrain a value between -1 and 1.)
plt.subplot(1, 3, 2)
y = tanh(x)
plt.title("Tanh activation")
plt.plot(x, y)

# ReLU (clip the negative values to 0)
plt.subplot(1, 3, 3)
y = relu(x)
plt.title("ReLU activation")
plt.plot(x, y)

# Show plots
plt.show()
```

The ReLU activation function ($max(0,z)$) is by far the most widely used activation function for neural networks. But as you can see, each activation function has its own constraints, so there are circumstances where you'll want to use different ones. For example, if we need to constrain our outputs between 0 and 1, then the sigmoid activation is the best choice.

<img width="45" src="http://bestanimations.com/HomeOffice/Lights/Bulbs/animated-light-bulb-gif-29.gif" align="left" vspace="20px" hspace="10px">

In some cases, using a ReLU activation function may not be sufficient. For instance, when the outputs from our neurons are mostly negative, the activation function will produce zeros. This effectively creates a "dying ReLU" and a recovery is unlikely.
To mitigate this effect, we could lower the learning rate or use [alternative ReLU activations](https://medium.com/tinymind/a-practical-guide-to-relu-b83ca804f1f7), ex. leaky ReLU or parametric ReLU (PReLU), which have a small slope for negative neuron outputs.

# From scratch

Now let's create our multilayer perceptron (MLP), which is going to be exactly like the logistic regression model but with an activation function to map the non-linearity in our data. Before we use TensorFlow 2.0 + Keras, we will implement our neural network from scratch using NumPy so we can:

1. Absorb the fundamental concepts by implementing from scratch
2. Appreciate the level of abstraction TensorFlow provides

<div align="left">
<img src="https://raw.githubusercontent.com/practicalAI/images/master/images/lightbulb.gif" width="45px" align="left" hspace="10px">
</div>

It's normal to find the math and code in this section slightly complex. You can still read each of the steps to build intuition for when we implement this using TensorFlow + Keras.

```
print (f"X: {standardized_X_train.shape}")
print (f"y: {y_train.shape}")
```

Our goal is to learn a model $\hat{y}$ that models $y$ given $X$. You'll notice that neural networks are just extensions of the generalized linear methods we've seen so far, but with non-linear activation functions, since our data will be highly non-linear.
$z_1 = XW_1$

$a_1 = f(z_1)$

$z_2 = a_1W_2$

$\hat{y} = softmax(z_2)$ # classification

* $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)
* $W_1$ = 1st layer weights | $\in \mathbb{R}^{DXH}$ ($H$ is the number of hidden units in layer 1)
* $z_1$ = outputs from first layer | $\in \mathbb{R}^{NXH}$
* $f$ = non-linear activation function
* $a_1$ = activation applied to first layer's outputs | $\in \mathbb{R}^{NXH}$
* $W_2$ = 2nd layer weights | $\in \mathbb{R}^{HXC}$ ($C$ is the number of classes)
* $z_2$ = outputs from second layer | $\in \mathbb{R}^{NXC}$
* $\hat{y}$ = prediction | $\in \mathbb{R}^{NXC}$ ($N$ is the number of samples)

1. Randomly initialize the model's weights $W$ (we'll cover more effective initialization strategies later in this lesson).

```
# Initialize first layer's weights
W1 = 0.01 * np.random.randn(INPUT_DIM, HIDDEN_DIM)
b1 = np.zeros((1, HIDDEN_DIM))
print (f"W1: {W1.shape}")
print (f"b1: {b1.shape}")
```

2. Feed inputs $X$ into the model to do the forward pass and receive the probabilities. First we pass the inputs into the first layer.

* $z_1 = XW_1$

```
# z1 = [NX2] · [2X100] + [1X100] = [NX100]
z1 = np.dot(standardized_X_train, W1) + b1
print (f"z1: {z1.shape}")
```

Next we apply the non-linear activation function, ReLU ($max(0,z)$) in this case.

* $a_1 = f(z_1)$

```
# Apply activation function
a1 = np.maximum(0, z1)  # ReLU
print (f"a_1: {a1.shape}")
```

We pass the activations to the second layer to get our logits.

* $z_2 = a_1W_2$

```
# Initialize second layer's weights
W2 = 0.01 * np.random.randn(HIDDEN_DIM, NUM_CLASSES)
b2 = np.zeros((1, NUM_CLASSES))
print (f"W2: {W2.shape}")
print (f"b2: {b2.shape}")

# z2 = logits = [NX100] · [100X3] + [1X3] = [NX3]
logits = np.dot(a1, W2) + b2
print (f"logits: {logits.shape}")
print (f"sample: {logits[0]}")
```

We'll apply the softmax function to normalize the logits and obtain class probabilities.
* $\hat{y} = softmax(z_2)$

```
# Normalization via softmax to obtain class probabilities
exp_logits = np.exp(logits)
y_hat = exp_logits / np.sum(exp_logits, axis=1, keepdims=True)
print (f"y_hat: {y_hat.shape}")
print (f"sample: {y_hat[0]}")
```

3. Compare the predictions $\hat{y}$ (ex. [0.3, 0.3, 0.4]) with the actual target values $y$ (ex. class 2 would look like [0, 0, 1]) with the objective (cost) function to determine loss $J$. A common objective function for classification tasks is cross-entropy loss.

* $J(\theta) = - \sum_i ln(\hat{y_i}) = - \sum_i ln (\frac{e^{X_iW_y}}{\sum_j e^{X_iW}}) $

```
# Loss
correct_class_logprobs = -np.log(y_hat[range(len(y_hat)), y_train])
loss = np.sum(correct_class_logprobs) / len(y_train)
```

4. Calculate the gradient of loss $J(\theta)$ w.r.t. the model weights. The gradient of the loss w.r.t. $W_2$ is the same as the gradients from logistic regression, since $\hat{y} = softmax(z_2)$.

* $\frac{\partial{J}}{\partial{W_{2j}}} = \frac{\partial{J}}{\partial{\hat{y}}}\frac{\partial{\hat{y}}}{\partial{W_{2j}}} = - \frac{1}{\hat{y}}\frac{\partial{\hat{y}}}{\partial{W_{2j}}} = - \frac{1}{\frac{e^{W_{2y}a_1}}{\sum_j e^{a_1W}}}\frac{\sum_j e^{a_1W}e^{a_1W_{2y}}0 - e^{a_1W_{2y}}e^{a_1W_{2j}}a_1}{(\sum_j e^{a_1W})^2} = \frac{a_1e^{a_1W_{2j}}}{\sum_j e^{a_1W}} = a_1\hat{y}$
* $\frac{\partial{J}}{\partial{W_{2y}}} = \frac{\partial{J}}{\partial{\hat{y}}}\frac{\partial{\hat{y}}}{\partial{W_{2y}}} = - \frac{1}{\hat{y}}\frac{\partial{\hat{y}}}{\partial{W_{2y}}} = - \frac{1}{\frac{e^{W_{2y}a_1}}{\sum_j e^{a_1W}}}\frac{\sum_j e^{a_1W}e^{a_1W_{2y}}a_1 - e^{a_1W_{2y}}e^{a_1W_{2y}}a_1}{(\sum_j e^{a_1W})^2} = \frac{1}{\hat{y}}(a_1\hat{y} - a_1\hat{y}^2) = a_1(\hat{y}-1)$

The gradient of the loss w.r.t. $W_1$ is a bit trickier, since we have to backpropagate through two sets of weights.
* $ \frac{\partial{J}}{\partial{W_1}} = \frac{\partial{J}}{\partial{\hat{y}}} \frac{\partial{\hat{y}}}{\partial{a_1}} \frac{\partial{a_1}}{\partial{z_1}} \frac{\partial{z_1}}{\partial{W_1}} = W_2(\partial{scores})(\partial{ReLU})X $

```
# dJ/dW2
dscores = y_hat
dscores[range(len(y_hat)), y_train] -= 1
dscores /= len(y_train)
dW2 = np.dot(a1.T, dscores)
db2 = np.sum(dscores, axis=0, keepdims=True)

# dJ/dW1
dhidden = np.dot(dscores, W2.T)
dhidden[a1 <= 0] = 0 # ReLU backprop
dW1 = np.dot(standardized_X_train.T, dhidden)
db1 = np.sum(dhidden, axis=0, keepdims=True)
```

5. Update the weights $W$ using a small learning rate $\alpha$. The updates will penalize the probability for the incorrect classes ($j$) and encourage a higher probability for the correct class ($y$).

* $W_i = W_i - \alpha\frac{\partial{J}}{\partial{W_i}}$

```
# Update weights
W1 += -LEARNING_RATE * dW1
b1 += -LEARNING_RATE * db1
W2 += -LEARNING_RATE * dW2
b2 += -LEARNING_RATE * db2
```

6. Repeat steps 2 - 5 until the model performs well.
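Before training on real data, it's worth sanity-checking the analytic gradients above against a numerical estimate. The sketch below is self-contained (tiny random data with toy dimensions of our own choosing, not the lesson's dataset) and compares the backprop gradient for one entry of $W_2$ with a centered finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, H, C = 5, 2, 4, 3  # samples, features, hidden units, classes (toy sizes)
X = rng.normal(size=(N, D))
y = rng.integers(0, C, size=N)
W1 = 0.1 * rng.normal(size=(D, H)); b1 = np.zeros((1, H))
W2 = 0.1 * rng.normal(size=(H, C)); b2 = np.zeros((1, C))

def loss_fn(W2):
    """Forward pass + mean cross-entropy, same math as the steps above."""
    a1 = np.maximum(0, X @ W1 + b1)  # ReLU
    logits = a1 @ W2 + b2
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    y_hat = exp / exp.sum(axis=1, keepdims=True)
    return -np.log(y_hat[np.arange(N), y]).sum() / N

# Analytic gradient for W2 (same as dJ/dW2 in the code above)
a1 = np.maximum(0, X @ W1 + b1)
logits = a1 @ W2 + b2
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
y_hat = exp / exp.sum(axis=1, keepdims=True)
dscores = y_hat.copy()
dscores[np.arange(N), y] -= 1
dscores /= N
dW2 = a1.T @ dscores

# Centered finite difference for a single entry of W2
i, j, eps = 1, 2, 1e-5
W2_plus, W2_minus = W2.copy(), W2.copy()
W2_plus[i, j] += eps
W2_minus[i, j] -= eps
numerical = (loss_fn(W2_plus) - loss_fn(W2_minus)) / (2 * eps)
print(abs(numerical - dW2[i, j]))  # should be tiny (well below 1e-6)
```

If the analytic and numerical gradients disagree, there's a bug in the backward pass; this check is cheap insurance whenever you implement backprop by hand.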
```
# Initialize random weights
W1 = 0.01 * np.random.randn(INPUT_DIM, HIDDEN_DIM)
b1 = np.zeros((1, HIDDEN_DIM))
W2 = 0.01 * np.random.randn(HIDDEN_DIM, NUM_CLASSES)
b2 = np.zeros((1, NUM_CLASSES))

# Training loop
for epoch_num in range(1000):

    # First layer forward pass [NX2] · [2X100] = [NX100]
    z1 = np.dot(standardized_X_train, W1) + b1

    # Apply activation function
    a1 = np.maximum(0, z1) # ReLU

    # z2 = logits = [NX100] · [100X3] = [NX3]
    logits = np.dot(a1, W2) + b2

    # Normalization via softmax to obtain class probabilities
    exp_logits = np.exp(logits)
    y_hat = exp_logits / np.sum(exp_logits, axis=1, keepdims=True)

    # Loss
    correct_class_logprobs = -np.log(y_hat[range(len(y_hat)), y_train])
    loss = np.sum(correct_class_logprobs) / len(y_train)

    # Show progress
    if epoch_num%100 == 0:
        # Accuracy
        y_pred = np.argmax(logits, axis=1)
        accuracy = np.mean(np.equal(y_train, y_pred))
        print (f"Epoch: {epoch_num}, loss: {loss:.3f}, accuracy: {accuracy:.3f}")

    # dJ/dW2
    dscores = y_hat
    dscores[range(len(y_hat)), y_train] -= 1
    dscores /= len(y_train)
    dW2 = np.dot(a1.T, dscores)
    db2 = np.sum(dscores, axis=0, keepdims=True)

    # dJ/dW1
    dhidden = np.dot(dscores, W2.T)
    dhidden[a1 <= 0] = 0 # ReLU backprop
    dW1 = np.dot(standardized_X_train.T, dhidden)
    db1 = np.sum(dhidden, axis=0, keepdims=True)

    # Update weights
    W1 += -1e0 * dW1
    b1 += -1e0 * db1
    W2 += -1e0 * dW2
    b2 += -1e0 * db2

class MLPFromScratch():
    def predict(self, x):
        z1 = np.dot(x, W1) + b1
        a1 = np.maximum(0, z1)
        logits = np.dot(a1, W2) + b2
        exp_logits = np.exp(logits)
        y_hat = exp_logits / np.sum(exp_logits, axis=1, keepdims=True)
        return y_hat

# Evaluation
model = MLPFromScratch()
logits_train = model.predict(standardized_X_train)
pred_train = np.argmax(logits_train, axis=1)
logits_test = model.predict(standardized_X_test)
pred_test = np.argmax(logits_test, axis=1)

# Training and test accuracy
train_acc = np.mean(np.equal(y_train, pred_train))
test_acc = np.mean(np.equal(y_test, pred_test))
print (f"train acc: {train_acc:.2f}, test acc: {test_acc:.2f}")

# Visualize the decision boundary
plt.figure(figsize=(12,5))
plt.subplot(1, 2, 1)
plt.title("Train")
plot_multiclass_decision_boundary(model=model, X=standardized_X_train, y=y_train)
plt.subplot(1, 2, 2)
plt.title("Test")
plot_multiclass_decision_boundary(model=model, X=standardized_X_test, y=y_test)
plt.show()
```

Credit for the plotting functions and the intuition behind all this is due to [CS231n](http://cs231n.github.io/neural-networks-case-study/), one of the best courses for machine learning.

Now let's implement the MLP with TensorFlow + Keras.

# TensorFlow + Keras

### Components

```
# MLP
class MLP(Model):
    def __init__(self, hidden_dim, num_classes):
        super(MLP, self).__init__()
        self.fc1 = Dense(units=hidden_dim, activation='relu') # replaced linear with relu
        self.fc2 = Dense(units=num_classes, activation='softmax')

    def call(self, x_in, training=False):
        """Forward pass."""
        z = self.fc1(x_in)
        y_pred = self.fc2(z)
        return y_pred

    def sample(self, input_shape):
        x_in = Input(shape=input_shape)
        return Model(inputs=x_in, outputs=self.call(x_in)).summary()
```

### Operations

```
# Initialize the model
model = MLP(hidden_dim=HIDDEN_DIM, num_classes=NUM_CLASSES)
model.sample(input_shape=(INPUT_DIM,))

# Compile
optimizer = Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss=SparseCategoricalCrossentropy(), metrics=['accuracy'])

# Training
model.fit(x=standardized_X_train, y=y_train, validation_data=(standardized_X_val, y_val), epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, class_weight=class_weights, shuffle=False, verbose=1)

# Predictions
pred_train = model.predict(standardized_X_train)
pred_test = model.predict(standardized_X_test)
print (f"sample probability: {pred_test[0]}")
pred_train = np.argmax(pred_train, axis=1)
pred_test = np.argmax(pred_test, axis=1)
print (f"sample class: {pred_test[0]}")

# Accuracy
train_acc = accuracy_score(y_train, pred_train)
test_acc = accuracy_score(y_test, pred_test)
print (f"train acc: {train_acc:.2f}, test acc: {test_acc:.2f}")

# Metrics
plot_confusion_matrix(y_test, pred_test, classes=classes)
print (classification_report(y_test, pred_test))

# Visualize the decision boundary
plt.figure(figsize=(12,5))
plt.subplot(1, 2, 1)
plt.title("Train")
plot_multiclass_decision_boundary(model=model, X=standardized_X_train, y=y_train)
plt.subplot(1, 2, 2)
plt.title("Test")
plot_multiclass_decision_boundary(model=model, X=standardized_X_test, y=y_test)
plt.show()
```

# Inference

```
# Inputs for inference
X_infer = pd.DataFrame([{'X1': 0.1, 'X2': 0.1}])
X_infer.head()

# Standardize
standardized_X_infer = X_scaler.transform(X_infer)
print (standardized_X_infer)

# Predict
y_infer = model.predict(standardized_X_infer)
_class = np.argmax(y_infer)
print (f"The probability that you have a class {classes[_class]} is {y_infer[0][_class]*100.0:.0f}%")
```

# Initializing weights

So far we have been initializing weights with small random values and this isn't optimal for convergence during training. The objective is to have weights that are able to produce outputs that follow a similar distribution across all neurons. We can do this by enforcing weights to have unit variance prior to the affine and non-linear operations.

<img width="45" src="http://bestanimations.com/HomeOffice/Lights/Bulbs/animated-light-bulb-gif-29.gif" align="left" vspace="20px" hspace="10px">

A popular method is to apply [xavier initialization](http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization), which essentially initializes the weights to allow the signal from the data to reach deep into the network. You may be wondering why we don't do this for every forward pass and that's a great question. We'll look at more advanced strategies that help with optimization like batch/layer normalization, etc. in future lessons. Meanwhile you can check out other initializers [here](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/initializers).
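To see why this matters, we can compare the variance of a layer's outputs under the naive `0.01 * randn` initialization used earlier against a Glorot-normal scale. A quick numpy sketch (the layer sizes here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
fan_in, fan_out, N = 512, 512, 1000  # arbitrary layer sizes
x = rng.normal(size=(N, fan_in))     # unit-variance inputs

# Naive small random init: output variance = fan_in * 0.01^2, far below 1
W_naive = 0.01 * rng.normal(size=(fan_in, fan_out))

# Xavier/Glorot normal: std = sqrt(2 / (fan_in + fan_out)) keeps variance near 1
W_xavier = np.sqrt(2.0 / (fan_in + fan_out)) * rng.normal(size=(fan_in, fan_out))

print((x @ W_naive).var())   # well below 1 — the signal shrinks with each layer
print((x @ W_xavier).var())  # close to 1 — the signal's scale is preserved
```

With the naive initialization the output variance collapses layer after layer (and gradients shrink with it); the Xavier scale keeps activations at roughly unit variance so the signal survives through deep stacks.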
```
from tensorflow.keras.initializers import glorot_normal

# MLP
class MLP(Model):
    def __init__(self, hidden_dim, num_classes):
        super(MLP, self).__init__()
        xavier_initializer = glorot_normal() # Xavier Glorot initialization
        self.fc1 = Dense(units=hidden_dim, kernel_initializer=xavier_initializer, activation='relu')
        self.fc2 = Dense(units=num_classes, activation='softmax')

    def call(self, x_in, training=False):
        """Forward pass."""
        z = self.fc1(x_in)
        y_pred = self.fc2(z)
        return y_pred

    def sample(self, input_shape):
        x_in = Input(shape=input_shape)
        return Model(inputs=x_in, outputs=self.call(x_in)).summary()
```

# Dropout

A great technique to overcome overfitting is to increase the size of your data but this isn't always an option. Fortunately, there are methods like regularization and dropout that can help create a more robust model. Dropout is a technique (used only during training) that allows us to zero the outputs of neurons. We do this for `dropout_p`% of the total neurons in each layer and it changes every batch. Dropout prevents units from co-adapting too much to the data and acts as a sampling strategy since we drop a different set of neurons each time.
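Common dropout implementations (including the Keras layer we use below) use "inverted dropout": surviving activations are scaled up by $1/(1-p)$ at training time so that no rescaling is needed at inference. A minimal numpy sketch of the idea (the `dropout` function is our own illustration, not a library API):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(a, p, training=True):
    """Inverted dropout: zero a fraction p of activations and scale
    the survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return a
    mask = (rng.random(a.shape) >= p) / (1.0 - p)  # entries are 0 or 1/(1-p)
    return a * mask

a = np.ones((4, 1000))
out = dropout(a, p=0.5)
print((out == 0).mean())  # about half the units are dropped
print(out.mean())         # mean stays close to 1 — expectation preserved
```

Because the expectation is preserved during training, the same forward pass works untouched at test time with `training=False`.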
<img src="https://raw.githubusercontent.com/practicalAI/images/master/basic_ml/06_Multilayer_Perceptron/dropout.png" width="350">

* [Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf)

```
from tensorflow.keras.layers import Dropout
from tensorflow.keras.regularizers import l2
```

### Components

```
# MLP
class MLP(Model):
    def __init__(self, hidden_dim, lambda_l2, dropout_p, num_classes):
        super(MLP, self).__init__()
        self.fc1 = Dense(units=hidden_dim,
                         kernel_regularizer=l2(lambda_l2), # adding L2 regularization
                         activation='relu')
        self.dropout = Dropout(rate=dropout_p)
        self.fc2 = Dense(units=num_classes, activation='softmax')

    def call(self, x_in, training=False):
        """Forward pass."""
        z = self.fc1(x_in)
        if training:
            z = self.dropout(z, training=training) # adding dropout
        y_pred = self.fc2(z)
        return y_pred

    def sample(self, input_shape):
        x_in = Input(shape=input_shape)
        return Model(inputs=x_in, outputs=self.call(x_in)).summary()
```

### Operations

```
# Arguments
DROPOUT_P = 0.1 # % of the neurons that are dropped each pass
LAMBDA_L2 = 1e-4 # L2 regularization

# Initialize the model
model = MLP(hidden_dim=HIDDEN_DIM, lambda_l2=LAMBDA_L2, dropout_p=DROPOUT_P, num_classes=NUM_CLASSES)
model.sample(input_shape=(INPUT_DIM,))
```

# Overfitting

Though neural networks are great at capturing non-linear relationships they are highly susceptible to overfitting to the training data and failing to generalize on test data. Just take a look at the example below where we generate completely random data and are able to fit a model with [$2*N*C + D$](https://arxiv.org/abs/1611.03530) hidden units. The training performance is good (~70%) but the overfitting leads to very poor test performance. We'll be covering strategies to tackle overfitting in future lessons.
```
# Arguments
NUM_EPOCHS = 500
NUM_SAMPLES_PER_CLASS = 50
LEARNING_RATE = 1e-1
HIDDEN_DIM = 2 * NUM_SAMPLES_PER_CLASS * NUM_CLASSES + INPUT_DIM # 2*N*C + D

# Generate random data
X = np.random.rand(NUM_SAMPLES_PER_CLASS * NUM_CLASSES, INPUT_DIM)
y = np.array([[i]*NUM_SAMPLES_PER_CLASS for i in range(NUM_CLASSES)]).reshape(-1)
print ("X: ", format(np.shape(X)))
print ("y: ", format(np.shape(y)))

# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
    X, y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
print ("X_train:", X_train.shape)
print ("y_train:", y_train.shape)
print ("X_val:", X_val.shape)
print ("y_val:", y_val.shape)
print ("X_test:", X_test.shape)
print ("y_test:", y_test.shape)

# Standardize the inputs (mean=0, std=1) using training data
X_scaler = StandardScaler().fit(X_train)

# Apply scaler on training and test data (don't standardize outputs for classification)
standardized_X_train = X_scaler.transform(X_train)
standardized_X_val = X_scaler.transform(X_val)
standardized_X_test = X_scaler.transform(X_test)

# Initialize the model
model = MLP(hidden_dim=HIDDEN_DIM, lambda_l2=0.0, dropout_p=0.0, num_classes=NUM_CLASSES)
model.sample(input_shape=(INPUT_DIM,))

# Compile
optimizer = Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss=SparseCategoricalCrossentropy(), metrics=['accuracy'])

# Training
model.fit(x=standardized_X_train, y=y_train, validation_data=(standardized_X_val, y_val), epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, class_weight=class_weights, shuffle=False, verbose=1)

# Predictions
pred_train = model.predict(standardized_X_train)
pred_test = model.predict(standardized_X_test)
print (f"sample probability: {pred_test[0]}")
pred_train = np.argmax(pred_train, axis=1)
pred_test = np.argmax(pred_test, axis=1)
print (f"sample class: {pred_test[0]}")

# Accuracy
train_acc = accuracy_score(y_train, pred_train)
test_acc = accuracy_score(y_test, pred_test)
print (f"train acc: {train_acc:.2f}, test acc: {test_acc:.2f}")

# Classification report
plot_confusion_matrix(y_true=y_test, y_pred=pred_test, classes=classes)
print (classification_report(y_test, pred_test))

# Visualize the decision boundary
plt.figure(figsize=(12,5))
plt.subplot(1, 2, 1)
plt.title("Train")
plot_multiclass_decision_boundary(model=model, X=standardized_X_train, y=y_train)
plt.subplot(1, 2, 2)
plt.title("Test")
plot_multiclass_decision_boundary(model=model, X=standardized_X_test, y=y_test)
plt.show()
```

It's important that we experiment, starting with simple models that underfit (high bias) and improving toward a good fit. Starting with simple models (linear/logistic regression) lets us catch errors without the added complexity of more sophisticated models (neural networks).

<img src="https://raw.githubusercontent.com/practicalAI/images/master/basic_ml/06_Multilayer_Perceptron/fit.png" width="700">

---

<div align="center">

Subscribe to our <a href="https://practicalai.me/#newsletter">newsletter</a> and follow us on social media to get the latest updates!

<a class="ai-header-badge" target="_blank" href="https://github.com/practicalAI/practicalAI">
<img src="https://img.shields.io/github/stars/practicalAI/practicalAI.svg?style=social&label=Star"></a>&nbsp;
<a class="ai-header-badge" target="_blank" href="https://www.linkedin.com/company/madewithml">
<img src="https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social"></a>&nbsp;
<a class="ai-header-badge" target="_blank" href="https://twitter.com/madewithml">
<img src="https://img.shields.io/twitter/follow/madewithml.svg?label=Follow&style=social">
</a>
</div>

</div>