# Smart Queue Monitoring System - Transportation Scenario ## Overview Now that you have your Python script and job submission script, you're ready to request an **IEI Tank-870** edge node and run inference on the different hardware types (CPU, GPU, VPU, FPGA). After the inference is completed, the output video and stats files need to be retrieved and stored in the workspace, where they can then be viewed within the Jupyter Notebook. ## Objectives * Submit inference jobs to Intel's DevCloud using the `qsub` command. * Retrieve and review the results. * After testing, go back to the proposal doc and update your originally proposed hardware device. ## Step 0: Set Up #### IMPORTANT: Set up paths so we can run Dev Cloud utilities You *must* run this every time you enter a Workspace session. (Tip: select the cell and use **Shift+Enter** to run it.) ``` %env PATH=/opt/conda/bin:/opt/spark-2.4.3-bin-hadoop2.7/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/intel_devcloud_support import os import sys sys.path.insert(0, os.path.abspath('/opt/intel_devcloud_support')) sys.path.insert(0, os.path.abspath('/opt/intel')) ``` ### Step 0.1 (Optional): Original Video If you are curious to see the input video, run the following cell to view the original video stream we'll be using for inference. ``` import videoHtml videoHtml.videoHTML('Transportation', ['original_videos/Transportation.mp4']) ``` ## Step 1: Inference on a Video In the next few cells, you'll submit your job using the `qsub` command and retrieve the results for each job. Each of the cells below should submit a job to a different edge compute node. The output of the cell is the `JobID` of your job, which you can use to track the progress of a job with `liveQStat`. You will need to submit a job for each of the following hardware types: * **CPU** * **GPU** * **VPU** * **FPGA** **Note:** You will have to submit each job one at a time and retrieve its results. 
After submission, jobs will go into a queue and run as soon as the requested compute resources become available. (Tip: **Shift+Enter** will run the cell and automatically move you to the next cell.) If your job successfully runs and completes, once you retrieve your results, it should output a video and a stats text file in the `results/transportation/<DEVICE>` directory. For example, your **CPU** job should output its files in this directory: > **results/transportation/cpu** **Note**: To get the queue labels for the different hardware devices, you can go to [this link](https://devcloud.intel.com/edge/get_started/devcloud/). The following arguments should be passed to the job submission script after the `-F` flag: * Model path - `/data/models/intel/person-detection-retail-0013/<MODEL PRECISION>/`. You will need to adjust this path based on the model precision being used on the hardware. * Device - `CPU`, `GPU`, `MYRIAD`, `HETERO:FPGA,CPU` * Transportation video path - `/data/resources/transportation.mp4` * Transportation queue_param file path - `/data/queue_param/transportation.npy` * Output path - `/output/results/transportation/<DEVICE>`. This should be adjusted based on the device used in the job. * Max num of people - The maximum number of people in a queue before the system redirects them to another queue. ## Step 1.1: Submit to an Edge Compute Node with an Intel® CPU In the cell below, write a script to submit a job to an <a href="https://software.intel.com/en-us/iot/hardware/iei-tank-dev-kit-core">IEI Tank* 870-Q170</a> edge node with an <a href="https://ark.intel.com/products/88186/Intel-Core-i5-6500TE-Processor-6M-Cache-up-to-3-30-GHz-">Intel® Core™ i5-6500TE processor</a>. The inference workload should run on the CPU. ``` #Submit job to the queue cpu_job_id = !qsub queue_job.sh -d . 
-l nodes=1:tank-870:i5-6500te -F "/data/models/intel/person-detection-retail-0013/FP32/person-detection-retail-0013 CPU /data/resources/transportation.mp4 /data/queue_param/transportation.npy /output/results/transportation/cpu 3" -N store_core print(cpu_job_id[0]) ``` #### Check Job Status To check on the job that was submitted, use `liveQStat` to check the status of the job. Column `S` shows the state of your running jobs. For example: - If `JOB ID` is in Q state, it is in the queue waiting for available resources. - If `JOB ID` is in R state, it is running. ``` import liveQStat liveQStat.liveQStat() ``` #### Get Results Run the next cell to retrieve your job's results. ``` import get_results get_results.getResults(cpu_job_id[0], filename='output.tgz', blocking=True) ``` #### Unpack your output files and view stdout.log ``` !tar zxf output.tgz !cat stdout.log ``` #### View stderr.log This can be used for debugging. ``` !cat stderr.log ``` #### View Output Video Run the cell below to view the output video. If inference ran successfully, you should see a video with bounding boxes drawn around each person detected. ``` import videoHtml videoHtml.videoHTML('Transportation CPU', ['results/transportation/cpu/output_video.mp4']) ``` ## Step 1.2: Submit to an Edge Compute Node with CPU and IGPU In the cell below, write a script to submit a job to an <a href="https://software.intel.com/en-us/iot/hardware/iei-tank-dev-kit-core">IEI Tank* 870-Q170</a> edge node with an <a href="https://ark.intel.com/products/88186/Intel-Core-i5-6500TE-Processor-6M-Cache-up-to-3-30-GHz-">Intel® Core™ i5-6500TE</a>. The inference workload should run on the **Intel® HD Graphics 530** integrated GPU. ``` #Submit job to the queue gpu_job_id = !qsub queue_job.sh -d . 
-l nodes=1:tank-870:i5-6500te:intel-hd-530 -F "/data/models/intel/person-detection-retail-0013/FP32/person-detection-retail-0013 HETERO:GPU,CPU /data/resources/transportation.mp4 /data/queue_param/transportation.npy /output/results/transportation/gpu 3" -N store_core print(gpu_job_id[0]) ``` #### Check Job Status To check on the job that was submitted, use `liveQStat` to check the status of the job. Column `S` shows the state of your running jobs. For example: - If `JOB ID` is in Q state, it is in the queue waiting for available resources. - If `JOB ID` is in R state, it is running. ``` import liveQStat liveQStat.liveQStat() ``` #### Get Results Run the next cell to retrieve your job's results. ``` import get_results get_results.getResults(gpu_job_id[0], filename='output.tgz', blocking=True) ``` #### Unpack your output files and view stdout.log ``` !tar zxf output.tgz !cat stdout.log ``` #### View stderr.log This can be used for debugging. ``` !cat stderr.log ``` #### View Output Video Run the cell below to view the output video. If inference ran successfully, you should see a video with bounding boxes drawn around each person detected. ``` import videoHtml videoHtml.videoHTML('Transportation GPU', ['results/transportation/gpu/output_video.mp4']) ``` ## Step 1.3: Submit to an Edge Compute Node with an Intel® Neural Compute Stick 2 In the cell below, write a script to submit a job to an <a href="https://software.intel.com/en-us/iot/hardware/iei-tank-dev-kit-core">IEI Tank 870-Q170</a> edge node with an <a href="https://ark.intel.com/products/88186/Intel-Core-i5-6500TE-Processor-6M-Cache-up-to-3-30-GHz-">Intel Core i5-6500TE CPU</a>. The inference workload should run on an <a href="https://software.intel.com/en-us/neural-compute-stick">Intel Neural Compute Stick 2</a> installed in this node. ``` #Submit job to the queue vpu_job_id = !qsub queue_job.sh -d . 
-l nodes=1:tank-870:i5-6500te:intel-ncs2 -F "/data/models/intel/person-detection-retail-0013/FP16/person-detection-retail-0013 HETERO:MYRIAD,CPU /data/resources/transportation.mp4 /data/queue_param/transportation.npy /output/results/transportation/vpu 3" -N store_core print(vpu_job_id[0]) ``` #### Check Job Status To check on the job that was submitted, use `liveQStat` to check the status of the job. Column `S` shows the state of your running jobs. For example: - If `JOB ID` is in Q state, it is in the queue waiting for available resources. - If `JOB ID` is in R state, it is running. ``` import liveQStat liveQStat.liveQStat() ``` #### Get Results Run the next cell to retrieve your job's results. ``` import get_results get_results.getResults(vpu_job_id[0], filename='output.tgz', blocking=True) ``` #### Unpack your output files and view stdout.log ``` !tar zxf output.tgz !cat stdout.log ``` #### View stderr.log This can be used for debugging. ``` !cat stderr.log ``` #### View Output Video Run the cell below to view the output video. If inference ran successfully, you should see a video with bounding boxes drawn around each person detected. ``` import videoHtml videoHtml.videoHTML('Transportation VPU', ['results/transportation/vpu/output_video.mp4']) ``` ## Step 1.4: Submit to an Edge Compute Node with IEI Mustang-F100-A10 In the cell below, write a script to submit a job to an <a href="https://software.intel.com/en-us/iot/hardware/iei-tank-dev-kit-core">IEI Tank 870-Q170</a> edge node with an <a href="https://ark.intel.com/products/88186/Intel-Core-i5-6500TE-Processor-6M-Cache-up-to-3-30-GHz-">Intel Core™ i5-6500TE CPU</a>. The inference workload will run on the <a href="https://www.ieiworld.com/mustang-f100/en/">IEI Mustang-F100-A10</a> FPGA card installed in this node. ``` #Submit job to the queue fpga_job_id = !qsub queue_job.sh -d . 
-l nodes=1:tank-870:i5-6500te:iei-mustang-f100-a10 -F "/data/models/intel/person-detection-retail-0013/FP16/person-detection-retail-0013 HETERO:FPGA,CPU /data/resources/transportation.mp4 /data/queue_param/transportation.npy /output/results/transportation/fpga 15" -N store_core print(fpga_job_id[0]) ``` #### Check Job Status To check on the job that was submitted, use `liveQStat` to check the status of the job. Column `S` shows the state of your running jobs. For example: - If `JOB ID` is in Q state, it is in the queue waiting for available resources. - If `JOB ID` is in R state, it is running. ``` import liveQStat liveQStat.liveQStat() ``` #### Get Results Run the next cell to retrieve your job's results. ``` import get_results get_results.getResults(fpga_job_id[0], filename='output.tgz', blocking=True) ``` #### Unpack your output files and view stdout.log ``` !tar zxf output.tgz !cat stdout.log ``` #### View stderr.log This can be used for debugging. ``` !cat stderr.log ``` #### View Output Video Run the cell below to view the output video. If inference ran successfully, you should see a video with bounding boxes drawn around each person detected. ``` import videoHtml videoHtml.videoHTML('Transportation FPGA', ['results/transportation/fpga/output_video.mp4']) ``` ***Wait!*** Please wait for all the inference jobs and video rendering to complete before proceeding to the next step. ## Step 2: Assess Performance Run the cells below to compare the performance across all 4 devices. 
The following timings for the model are being compared across all 4 devices: - Model Loading Time - Average Inference Time - FPS ``` import matplotlib.pyplot as plt device_list=['cpu', 'gpu', 'fpga', 'vpu'] inference_time=[] fps=[] model_load_time=[] for device in device_list: with open('results/transportation/'+device+'/stats.txt', 'r') as f: inference_time.append(float(f.readline().split("\n")[0])) fps.append(float(f.readline().split("\n")[0])) model_load_time.append(float(f.readline().split("\n")[0])) plt.bar(device_list, inference_time) plt.xlabel("Device Used") plt.ylabel("Total Inference Time in Seconds") plt.show() plt.bar(device_list, fps) plt.xlabel("Device Used") plt.ylabel("Frames per Second") plt.show() plt.bar(device_list, model_load_time) plt.xlabel("Device Used") plt.ylabel("Model Loading Time in Seconds") plt.show() ``` ## Step 3: Update Proposal Document Now that you've completed your hardware testing, you should go back to the proposal document and validate or update your originally proposed hardware. Once you've updated your proposal, you can move on to the next scenario.
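If one of the jobs failed, or its `output.tgz` was never retrieved, the matching `stats.txt` will be missing and the loop above raises `FileNotFoundError`. A small defensive variant of the parsing step (a sketch; it assumes `stats.txt` holds exactly the three values read above, in the same order) skips missing devices instead:

```python
import os

def load_stats(stats_path):
    """Parse a three-line stats.txt: inference time, FPS, model load time.

    Returns None instead of raising if the file is missing, e.g. because
    a job failed or its output archive was never retrieved.
    """
    if not os.path.exists(stats_path):
        return None
    with open(stats_path) as f:
        inference_time = float(f.readline())
        fps = float(f.readline())
        model_load_time = float(f.readline())
    return inference_time, fps, model_load_time
```

Devices whose stats come back as `None` can then be filtered out of `device_list` before plotting, rather than crashing the whole comparison cell.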
This notebook serves to make some simple plots of the 1) losses and 2) entities and relations following training with the PyKEEN pipeline. ``` import os import sys import time import numpy as np import pykeen from matplotlib import pyplot as plt from pykeen.pipeline import pipeline from pykeen.triples import TriplesFactory %config InlineBackend.figure_format = 'svg' print(sys.version) print(pykeen.get_version(with_git_hash=True)) print(time.asctime()) ``` ## Toy Example Following the discussions in https://github.com/pykeen/pykeen/issues/97, a very small set of triples is trained and visualized. ``` triples = ''' Brussels locatedIn Belgium Belgium partOf EU EU hasCapital Brussels '''.strip() triples = np.array([triple.split() for triple in triples.split('\n')]) tf = TriplesFactory(triples=triples) ``` Training with default arguments ``` results = pipeline( training_triples_factory=tf, testing_triples_factory=tf, model='TransE', model_kwargs=dict(embedding_dim=2), training_kwargs=dict(use_tqdm_batch=False), evaluation_kwargs=dict(use_tqdm=False), random_seed=1, device='cpu', ) results.plot(er_kwargs=dict(plot_relations=True)) plt.savefig(os.path.expanduser('~/Desktop/toy_1.png'), dpi=300) ``` Training with slower learning and more epochs ``` results = pipeline( training_triples_factory=tf, testing_triples_factory=tf, model='TransE', model_kwargs=dict(embedding_dim=2), optimizer_kwargs=dict(lr=1.0e-1), training_kwargs=dict(num_epochs=128, use_tqdm_batch=False), evaluation_kwargs=dict(use_tqdm=False), random_seed=1, device='cpu', ) results.plot(er_kwargs=dict(plot_relations=True)) plt.savefig(os.path.expanduser('~/Desktop/toy_2.png'), dpi=300) ``` Training with the more appropriate softplus loss ``` toy_results = pipeline( training_triples_factory=tf, testing_triples_factory=tf, model='TransE', loss='softplus', model_kwargs=dict(embedding_dim=2), optimizer_kwargs=dict(lr=1.0e-1), training_kwargs=dict(num_epochs=128, use_tqdm_batch=False), 
evaluation_kwargs=dict(use_tqdm=False), random_seed=1, device='cpu', ) toy_results.plot(er_kwargs=dict(plot_relations=True)) plt.savefig(os.path.expanduser('~/Desktop/toy_3.png'), dpi=300) ``` ## Benchmark Dataset Example ``` nations_results = pipeline( dataset='Nations', model='TransE', model_kwargs=dict(embedding_dim=8), optimizer_kwargs=dict(lr=1.0e-1), training_kwargs=dict(num_epochs=80, use_tqdm_batch=False), evaluation_kwargs=dict(use_tqdm=False), random_seed=1, device='cpu', ) nations_results.plot(er_kwargs=dict(plot_relations=True)) # Filter the ER plot down to a specific set of entities and relations nations_results.plot_er( relations={'treaties'}, apply_limits=False, plot_relations=True, ); ```
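Under the hood, TransE models a relation as a translation in embedding space: a triple (h, r, t) is plausible when h + r lands close to t, which is why two-dimensional embeddings of the toy cycle above are easy to visualize. A minimal NumPy sketch of the interaction function (independent of PyKEEN's own implementation; the embedding vectors here are made-up illustrations, not trained values):

```python
import numpy as np

def transe_score(h, r, t, p=2):
    # TransE interaction: the score is the negative L_p distance between
    # h + r and t, so a higher score means a more plausible triple.
    return -np.linalg.norm(h + r - t, ord=p)

# Hypothetical 2-D embeddings for the toy graph.
h = np.array([0.0, 0.0])        # Brussels
r = np.array([1.0, 0.0])        # locatedIn
t_good = np.array([1.0, 0.0])   # Belgium
t_bad = np.array([-1.0, 2.0])   # an unrelated entity
assert transe_score(h, r, t_good) > transe_score(h, r, t_bad)
```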
## Exercise 3 In the videos you looked at how you would improve Fashion MNIST using Convolutions. For your exercise see if you can improve MNIST to 99.8% accuracy or more using only a single convolutional layer and a single MaxPooling 2D. You should stop training once the accuracy goes above this amount. It should happen in less than 20 epochs, so it's ok to hard code the number of epochs for training, but your training must end once it hits the above metric. If it doesn't, then you'll need to redesign your layers. I've started the code for you -- you need to finish it! When 99.8% accuracy has been hit, you should print out the string "Reached 99.8% accuracy so cancelling training!" ``` import tensorflow as tf from os import path, getcwd, chdir # DO NOT CHANGE THE LINE BELOW. If you are developing in a local # environment, then grab mnist.npz from the Coursera Jupyter Notebook # and place it inside a local folder and edit the path to that location path = f"{getcwd()}/../tmp2/mnist.npz" config = tf.ConfigProto() config.gpu_options.allow_growth = True sess = tf.Session(config=config) # GRADED FUNCTION: train_mnist_conv def train_mnist_conv(): # Please write your code only where you are indicated. # please do not remove model fitting inline comments. 
# YOUR CODE STARTS HERE class mycallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): if (logs.get('acc')>=0.998): print("Reached 99.8% accuracy so cancelling training!") self.model.stop_training = True # YOUR CODE ENDS HERE mnist = tf.keras.datasets.mnist (training_images, training_labels), (test_images, test_labels) = mnist.load_data(path=path) # YOUR CODE STARTS HERE print(training_images.shape) training_images = training_images.reshape(60000, 28, 28, 1) training_images = training_images/255.0 print(test_images.shape) test_images = test_images.reshape(test_images.shape[0], 28, 28, 1) test_images = test_images/255.0 callbacks = mycallback() # YOUR CODE ENDS HERE model = tf.keras.models.Sequential([ # YOUR CODE STARTS HERE tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28,28,1)), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation = tf.nn.relu), tf.keras.layers.Dense(10, activation = tf.nn.softmax) # YOUR CODE ENDS HERE ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # model fitting history = model.fit( # YOUR CODE STARTS HERE training_images, training_labels, epochs = 15, callbacks=[callbacks] # YOUR CODE ENDS HERE ) # model fitting return history.epoch, history.history['acc'][-1] _, _ = train_mnist_conv() ```
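The reshape calls above hard-code the training-split size (60000). As a small aside, a sketch using NumPy only: letting `reshape` infer the batch dimension gives one preprocessing helper that works for both splits, adding the channels axis `Conv2D` expects and scaling pixels to [0, 1]:

```python
import numpy as np

def preprocess(images):
    # -1 lets reshape infer the number of examples, so the same helper
    # handles the 60000-image training split and the test split alike.
    # astype + division scales uint8 pixels into [0, 1] as float32.
    return images.reshape(-1, 28, 28, 1).astype("float32") / 255.0

batch = np.random.randint(0, 256, size=(5, 28, 28), dtype=np.uint8)
x = preprocess(batch)
assert x.shape == (5, 28, 28, 1)
assert x.min() >= 0.0 and x.max() <= 1.0
```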
# Notebook for testing out methods with limiters ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline from matplotlib import animation from IPython.display import HTML from numpy.linalg import inv, det from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm import numpy.linalg as LA # define grids nx = 200 nt = 1000 nlayers = 2 # moved boundaries away as there was some nasty feedback occurring here xmin = -5. xmax = 15. alpha = 0.9 beta = 0. gamma = 1 / alpha**2 class U: def __init__(self, nlayers, nx, nt, xmin, xmax, rho, alpha, beta, gamma, periodic=True): self.nlayers = nlayers self.nx = nx self.nt = nt self.U = np.zeros((2, nlayers, nx, nt+1)) self.x = np.linspace(xmin, xmax, num=nx-4, endpoint=False) self.rho = rho self.dx = self.x[1] - self.x[0] self.dt = 0.1 * self.dx # metric stuff self.alpha = alpha self.beta = beta self.gamma = gamma # gamma down self.gamma_up = 1/gamma self.periodic = periodic def D(self,indices): return self.U[(0,) + tuple(indices)] def Sx(self,indices): return self.U[(1,) + tuple(indices)] def initial_data(self, D0=None, Sx0=None): """ Set the initial data """ if D0 is not None: self.U[0,:,:,0] = D0 if Sx0 is not None: self.U[1,:,:,0] = Sx0 # enforce bcs self.bcs(0) def Uj(self, layer, t): return self.U[:,layer,:,t] def bcs(self, t): if self.periodic: self.U[:,:,:2,t] = self.U[:,:,-4:-2,t] self.U[:,:,-2:,t] = self.U[:,:,2:4,t] else: # outflow self.U[:,:,:2,t] = self.U[:,:,np.newaxis,2,t] self.U[:,:,-2:,t] = self.U[:,:,np.newaxis,-3,t] rho = np.ones(nlayers) U = U(nlayers, nx, nt, xmin, xmax, rho, alpha, beta, gamma, periodic=True) # Start off with an initial water hill D0 = np.zeros_like(U.U[0,:,:,0]) D0[0,2:-2] = 1 + 0.2 * np.exp(-(U.x-2)**2*2) D0[1,2:-2] = 0.8 + 0.1 * np.exp(-(U.x-7)**2*2) U.initial_data(D0=D0) plt.plot(U.x,U.U[0,0,2:-2,0],U.x,U.U[0,1,2:-2,0], lw=2) plt.show() def evolve(n): for j in range(nlayers): # to simplify indexing to 2 indices rather than 4 qn = U.U[:,j,:,n] # find slopes 
S_upwind = (qn[:,2:] - qn[:,1:-1]) / U.dx S_downwind = (qn[:,1:-1] - qn[:,:-2]) / U.dx S_av = 0.5 * (S_upwind + S_downwind) # ratio r = np.ones_like(S_av) * 1.e6 # mask to stop divide by zero r[np.abs(S_downwind) > 1.e-10] = S_upwind[np.abs(S_downwind) > 1.e-10] / S_downwind[np.abs(S_downwind) > 1.e-10] # MC #phi = np.maximum(0.0, np.minimum(2.*r/(1.+r), 2./(1.+r))) # superbee phi = np.maximum(0.0, np.maximum(np.minimum(1., 2.*r), np.minimum(2., r))) S = phi * S_av qp = qn[:,1:-1] + S * 0.5 * U.dx # defined from 1+1/2 to -2+1/2 qm = qn[:,1:-1] - S * 0.5 * U.dx # defined from 1-1/2 to -2-1/2 fp = np.zeros_like(qp) fm = np.zeros_like(qm) W = np.sqrt(qp[1,:]**2 * U.gamma_up / qp[0,:]**2 + 1) u = qp[1,:] / (qp[0,:] * W) # u_down qx = u * U.gamma_up - U.beta/U.alpha fp[0,:] = qp[0,:] * qx fp[1,:] = qp[1,:] * qx + 0.5 * qp[0,:]**2 / W**2 W = np.sqrt(qm[1,:]**2 * U.gamma_up / qm[0,:]**2 + 1) u = qm[1,:] / (qm[0,:] * W) # u_down qx = u * U.gamma_up - U.beta/U.alpha fm[0,:] = qm[0,:] * qx fm[1,:] = qm[1,:] * qx + 0.5 * qm[0,:]**2 / W**2 # Lax-Friedrichs flux # at each boundary, we have a left and right state # NOTE: This method requires two ghost cells on either side! qL = qp[:,:-1] # defined from 1+1/2 to -2-1/2, projected from left side of interface qR = qm[:,1:] # 1+1/2 to -2-1/2, projected from right of interface fL = fp[:,:-1] fR = fm[:,1:] alp = 1. 
# alpha f_minus_half = 0.5 * (fL[:,:-1] + fR[:,:-1] + alp * (qL[:,:-1] - qR[:,:-1])) # 1+1/2 to -3-1/2 f_plus_half = 0.5 * (fL[:,1:] + fR[:,1:] + alp * (qL[:,1:] - qR[:,1:])) # 2+1/2 to -2-1/2 U.U[:,j,2:-2,n+1] = qn[:, 2:-2] - U.dt/U.dx * U.alpha * (f_plus_half - f_minus_half) # 2 to -3 # do boundaries U.bcs(n+1) for i in range(nt): evolve(i) plt.plot(U.x,U.U[0, 0, 2:-2, 0],U.x,U.U[0, 1, 2:-2, 0], lw=2) plt.show() fig = plt.figure() ax = plt.axes(xlim=(0,10), ylim=(0.7,1.4)) line = ax.plot([],[], lw=2)[0] line2 = ax.plot([],[], lw=2)[0] def animate(i): line.set_data(U.x, U.U[0, 0, 2:-2,i*10]) line2.set_data(U.x, U.U[0, 1, 2:-2,i*10]) anim = animation.FuncAnimation(fig, animate, frames=100, interval=60)#, init_func=init) HTML(anim.to_html5_video()) ``` ## GR 3d ``` # define grids nx = 100 ny = 100 nt = 200 nlayers = 2 xmin = -5. xmax = 15. ymin = -5. ymax = 15. alpha = 1.0 beta = [0., 0.] gamma = 1 / alpha**2 * np.eye(2) class U: def __init__(self, nlayers, nx, ny, nt, xmin, xmax, ymin, ymax, rho, alpha, beta, gamma, periodic=True): self.nlayers = nlayers self.nx = nx self.ny = ny self.nt = nt self.U = np.zeros((3, nlayers, nx, ny, nt+1)) self.x = np.linspace(xmin, xmax, num=nx-4, endpoint=False) self.y = np.linspace(ymin, ymax, num=ny-4, endpoint=False) self.rho = rho self.dx = self.x[1] - self.x[0] self.dy = self.y[1] - self.y[0] self.dt = 0.1 * min(self.dx, self.dy) # metric stuff self.alpha = alpha self.beta = beta self.gamma = gamma # gamma down self.gamma_up = inv(gamma) self.periodic = periodic def D(self,indices): return self.U[(0,) + tuple(indices)] def Sx(self,indices): return self.U[(1,) + tuple(indices)] def Sy(self,indices): return self.U[(2,) + tuple(indices)] def initial_data(self, D0=None, Sx0=None, Sy0=None, Q=None): """ Set the initial data """ if D0 is not None: self.U[0,:,:,:,0] = D0 if Sx0 is not None: self.U[1,:,:,:,0] = Sx0 if Sy0 is not None: self.U[2,:,:,:,0] = Sy0 if Q is not None: self.Q = Q # enforce bcs self.bcs(0) def Uj(self, layer, 
t): return self.U[:,layer,:,:,t] def bcs(self, t): if self.periodic: self.U[:,:,:2,:,t] = self.U[:,:,-4:-2,:,t] self.U[:,:,:,:2,t] = self.U[:,:,:,-4:-2,t] self.U[:,:,-2:,:,t] = self.U[:,:,2:4,:,t] self.U[:,:,:,-2:,t] = self.U[:,:,:,2:4,t] else: #outflow self.U[:,:,:2,:,t] = self.U[:,:,np.newaxis,2,:,t] self.U[:,:,:,:2,t] = self.U[:,:,:,np.newaxis,2,t] self.U[:,:,-2:,:,t] = self.U[:,:,np.newaxis,-3,:,t] self.U[:,:,:,-2:,t] = self.U[:,:,:,np.newaxis,-3,t] rho = np.ones(nlayers) # going to try setting the top fluid to be heavier #rho[0] = 1.5 U = U(nlayers, nx, ny, nt, xmin, xmax, ymin, ymax, rho, alpha, beta, gamma, periodic=False) # Start off with an initial water hill D0 = np.zeros_like(U.U[0,:,:,:,0]) D0[0,2:-2,2:-2] = 1 + 0.4 * np.exp(-((U.x[:,np.newaxis]-2)**2 + (U.y[np.newaxis,:]-2)**2)*2) D0[1,2:-2,2:-2] = 0.8 + 0.2 * np.exp(-((U.x[:,np.newaxis]-7)**2 + (U.y[np.newaxis,:]-7)**2)*2) # heating rate - flux of material from lower to upper layer. Set Q of top layer as 0. Q = np.zeros((nlayers, nx, ny)) Q[:,2:-2,2:-2] = 0.2 * np.exp(-((U.x[np.newaxis,:,np.newaxis]-5)**2 + (U.y[np.newaxis,np.newaxis,:]-5)**2)*2) Q[0,:,:] = -Q[0,:,:] U.initial_data(D0=D0, Q=Q) X, Y = np.meshgrid(U.x,U.y) fig = plt.figure(figsize=(12,10)) ax = fig.gca(projection='3d') ax.set_xlim(-5,15) ax.set_ylim(-5,15) ax.set_zlim(0.7,1.4) ax.plot_surface(X,Y,U.U[0,1,2:-2,2:-2,0], rstride=1, cstride=2, lw=0, cmap=cm.viridis, antialiased=True) ax.plot_wireframe(X,Y,U.U[0,0,2:-2,2:-2,0], rstride=2, cstride=2, lw=0.1, cmap=cm.viridis, antialiased=True) #ax.view_init(80) #plt.plot(x,h[0,1:-1,0],x,h[1,1:-1,0], lw=2) plt.show() # evolution using second-order Lax-Wendroff # note: have assumed metric is constant so can move outside of derivatives def evolve(U, n): for j in range(nlayers): # to simplify indexing to 3 indices rather than 5 qn = U.U[:,j,:,:,n] # x-direction # find slopes S_upwind = (qn[:,2:,2:-2] - qn[:,1:-1,2:-2]) / U.dx S_downwind = (qn[:,1:-1,2:-2] - qn[:,:-2,2:-2]) / U.dx S_av = 0.5 * 
(S_upwind + S_downwind) # ratio r = np.ones_like(S_av) * 1.e6 # mask to stop divide by zero r[np.abs(S_downwind) > 1.e-10] = S_upwind[np.abs(S_downwind) > 1.e-10] / S_downwind[np.abs(S_downwind) > 1.e-10] # MC #phi = np.maximum(0.0, np.minimum(2.*r/(1.+r), 2./(1.+r))) # superbee phi = np.maximum(0.0, np.maximum(np.minimum(1., 2.*r), np.minimum(2., r))) S = phi * S_av qp = qn[:,1:-1,2:-2] + S * 0.5 * U.dx qm = qn[:,1:-1,2:-2] - S * 0.5 * U.dx fp = np.zeros_like(qp) fm = np.zeros_like(qm) W = np.sqrt((qp[1,:,:]**2 * U.gamma_up[0,0] + 2 * qp[1,:,:] * qp[2,:,:] * U.gamma_up[0,1] + qp[2,:,:]**2 * U.gamma_up[1,1]) / qp[0,:,:]**2 + 1) u = qp[1,:,:] / (qp[0,:,:] * W) # u_down v = qp[2,:,:] / (qp[0,:,:] * W) # u_down qx = u * U.gamma_up[0,0] + v * U.gamma_up[0,1] - U.beta[0]/U.alpha #qy = v * U.gamma_up[1,1] + u * U.gamma_up[0,1] - U.beta[1]/U.alpha fp[0,:,:] = qp[0,:,:] * qx fp[1,:,:] = qp[1,:,:] * qx + 0.5 * qp[0,:,:]**2 / W**2 fp[2,:,:] = qp[2,:,:] * qx W = np.sqrt((qm[1,:,:]**2 * U.gamma_up[0,0] + 2 * qm[1,:,:] * qm[2,:,:] * U.gamma_up[0,1] + qm[2,:,:]**2 * U.gamma_up[1,1]) / qm[0,:,:]**2 + 1) u = qm[1,:,:] / (qm[0,:,:] * W) # u_down v = qm[2,:,:] / (qm[0,:,:] * W) # u_down qx = u * U.gamma_up[0,0] + v * U.gamma_up[0,1] - U.beta[0]/U.alpha #qy = v * U.gamma_up[1,1] + u * U.gamma_up[0,1] - U.beta[1]/U.alpha fm[0,:,:] = qm[0,:,:] * qx fm[1,:,:] = qm[1,:,:] * qx + 0.5 * qm[0,:,:]**2 / W**2 fm[2,:,:] = qm[2,:,:] * qx # Lax-Friedrichs flux # at each boundary, we have a left and right state # NOTE: This method requires two ghost cells on either side! #qL = qp[:,:-1,:] #qR = qm[:,1:,:] #fL = fp[:,:-1,:] #fR = fm[:,1:,:] alp = 1. 
# alpha #fx_minus_half = 0.5 * (fL[:,:-1,:] + fR[:,:-1,:] + alp * (qL[:,:-1,:] - qR[:,:-1,:])) #fx_plus_half = 0.5 * (fL[:,1:,:] + fR[:,1:,:] + alp * (qL[:,1:,:] - qR[:,1:,:])) fx_minus_half = 0.5 * (fp[:,:-2,:] + fm[:,1:-1,:] + alp * (qp[:,:-2,:] - qm[:,1:-1,:])) fx_plus_half = 0.5 * (fp[:,1:-1,:] + fm[:,2:,:] + alp * (qp[:,1:-1,:] - qm[:,2:,:])) #U.U[:,j,2:-2,:,n+1] = qn[:, 2:-2,:] - U.dt/U.dx * U.alpha * (f_plus_half - f_minus_half) # y-direction # find slopes S_upwind = (qn[:,2:-2,2:] - qn[:,2:-2,1:-1]) / U.dy S_downwind = (qn[:,2:-2,1:-1] - qn[:,2:-2,:-2]) / U.dy S_av = 0.5 * (S_upwind + S_downwind) # ratio r = np.ones_like(S_av) * 1.e6 # mask to stop divide by zero r[np.abs(S_downwind) > 1.e-10] = S_upwind[np.abs(S_downwind) > 1.e-10] / S_downwind[np.abs(S_downwind) > 1.e-10] # MC #phi = np.maximum(0.0, np.minimum(2.*r/(1.+r), 2./(1.+r))) # superbee phi = np.maximum(0.0, np.maximum(np.minimum(1., 2.*r), np.minimum(2., r))) S = phi * S_av qp = qn[:,2:-2,1:-1] + S * 0.5 * U.dy qm = qn[:,2:-2,1:-1] - S * 0.5 * U.dy fp = np.zeros_like(qp) fm = np.zeros_like(qm) W = np.sqrt((qp[1,:,:]**2 * U.gamma_up[0,0] + 2 * qp[1,:,:] * qp[2,:,:] * U.gamma_up[0,1] + qp[2,:,:]**2 * U.gamma_up[1,1]) / qp[0,:,:]**2 + 1) u = qp[1,:,:] / (qp[0,:,:] * W) # u_down v = qp[2,:,:] / (qp[0,:,:] * W) # u_down #qx = u * U.gamma_up[0,0] + v * U.gamma_up[0,1] - U.beta[0]/U.alpha qy = v * U.gamma_up[1,1] + u * U.gamma_up[0,1] - U.beta[1]/U.alpha fp[0,:,:] = qp[0,:,:] * qy fp[1,:,:] = qp[1,:,:] * qy fp[2,:,:] = qp[2,:,:] * qy + 0.5 * qp[0,:,:]**2 / W**2 W = np.sqrt((qm[1,:,:]**2 * U.gamma_up[0,0] + 2 * qm[1,:,:] * qm[2,:,:] * U.gamma_up[0,1] + qm[2,:,:]**2 * U.gamma_up[1,1]) / qm[0,:,:]**2 + 1) u = qm[1,:,:] / (qm[0,:,:] * W) # u_down v = qm[2,:,:] / (qm[0,:,:] * W) # u_down #qx = u * U.gamma_up[0,0] + v * U.gamma_up[0,1] - U.beta[0]/U.alpha qy = v * U.gamma_up[1,1] + u * U.gamma_up[0,1] - U.beta[1]/U.alpha fm[0,:,:] = qm[0,:,:] * qy fm[1,:,:] = qm[1,:,:] * qy fm[2,:,:] = qm[2,:,:] * qy + 0.5 * qm[0,:,:]**2 / W**2 # Lax-Friedrichs 
flux # at each boundary, we have a left and right state # NOTE: This method requires two ghost cells on either side! #qL = qp[:,:,:-1] #qR = qm[:,:,1:] #fL = fp[:,:,:-1] #fR = fm[:,:,1:] alp = 1. # alpha #fy_minus_half = 0.5 * (fL[:,:,:-1] + fR[:,:,:-1] + alp * (qL[:,:,:-1] - qR[:,:,:-1])) #fy_plus_half = 0.5 * (fL[:,:,1:] + fR[:,:,1:] + alp * (qL[:,:,1:] - qR[:,:,1:])) fy_minus_half = 0.5 * (fp[:,:,:-2] + fm[:,:,1:-1] + alp * (qp[:,:,:-2] - qm[:,:,1:-1])) fy_plus_half = 0.5 * (fp[:,:,1:-1] + fm[:,:,2:] + alp * (qp[:,:,1:-1] - qm[:,:,2:])) U.U[:,j,2:-2,2:-2,n+1] = qn[:,2:-2,2:-2] - \ U.dt/U.dx * U.alpha * (fx_plus_half[:,:,:] - fx_minus_half[:,:,:]) -\ U.dt/U.dy * U.alpha * (fy_plus_half[:,:,:] - fy_minus_half[:,:,:]) # do boundaries U.bcs(n+1) """U_half = U.U[:,:,:,:,n+1] W = np.sqrt((U_half[1,:,:,:]**2 * U.gamma_up[0,0] + 2 * U_half[1,:,:,:] * U_half[2,:,:,:] * U.gamma_up[0,1] + U_half[2,:,:,:]**2 * U.gamma_up[1,1]) / U_half[0,:,:,:]**2 + 1) ph = U_half[0,:,:,:] / W Sx = U_half[1,:,:,:] Sy = U_half[2,:,:,:] for j in range(nlayers): # calculate source terms # a more sophisticated scheme for the source terms is needed, but that's a real headache so shall ignore for now # and just use operator splitting sum_phs = np.zeros((U.nx,U.ny)) sum_qs = 0 if j < (nlayers - 1): # i.e. it has another layer beneath it sum_qs += ((U.Q[j+1,1:-1,1:-1] - U.Q[j,1:-1,1:-1])) deltaQx = (U.Q[j,:,:] - U.Q[j+1,:,:]) * (Sx[j,:,:] - Sx[j+1,:,:]) / ph[j,:,:] deltaQy = (U.Q[j,:,:] - U.Q[j+1,:,:]) * (Sy[j,:,:] - Sy[j+1,:,:]) / ph[j,:,:] if j > 0: # i.e. 
has another layer above it sum_qs += -U.rho[j-1]/U.rho[j] * (U.Q[j,1:-1,1:-1] - U.Q[j-1,1:-1,1:-1]) deltaQx = U.rho[j-1]/U.rho[j] * (U.Q[j,:,:] - U.Q[j-1,:,:]) * (Sx[j,:,:] - Sx[j-1,:,:]) / ph[j,:,:] deltaQy = U.rho[j-1]/U.rho[j] * (U.Q[j,:,:] - U.Q[j-1,:,:]) * (Sy[j,:,:] - Sy[j-1,:,:]) / ph[j,:,:] for i in range(j): sum_phs += U.rho[i] / U.rho[j] * ph[i,:,:] for i in range(j+1,nlayers): sum_phs += ph[i,:,:] dx_sum_phs = 0.5/U.dx * (sum_phs[2:,1:-1] - sum_phs[:-2,1:-1]) dy_sum_phs = 0.5/U.dy * (sum_phs[1:-1,2:] - sum_phs[1:-1,:-2]) # h U.U[0,j,2:-2,2:-2,n+1] += U.alpha * U.dt * (sum_qs) # hu U.U[1,j,2:-2,2:-2,n+1] += U.alpha * U.dt * (-deltaQx[1:-1,1:-1] - dx_sum_phs) * ph[j,1:-1,1:-1] # hv U.U[2,j,2:-2,2:-2,n+1] += U.alpha * U.dt * (-deltaQy[1:-1,1:-1] - dy_sum_phs) * ph[j,1:-1,1:-1] # do boundaries U.bcs(n+1)""" for i in range(nt): evolve(U, i) X, Y = np.meshgrid(U.x,U.y) fig = plt.figure(figsize=(12,10)) ax = fig.gca(projection='3d') ax.set_xlim(0,10) ax.set_ylim(0,10) ax.set_zlim(0.7,1.4) n = 1 ax.plot_surface(X[23:-23,23:-23],Y[23:-23,23:-23],U.U[0,1,25:-25,25:-25,n], rstride=1, cstride=2, lw=0, cmap=cm.viridis, antialiased=True) ax.plot_wireframe(X[23:-23,23:-23],Y[23:-23,23:-23],U.U[0,0,25:-25,25:-25,n], rstride=2, cstride=2, lw=0.1, cmap=cm.viridis, antialiased=True) #ax.view_init(80) #plt.plot(x,h[0,1:-1,0],x,h[1,1:-1,0], lw=2) plt.show() fig = plt.figure(figsize=(12,10)) ax = fig.gca(projection='3d') #ax = plt.axes(xlim=(0,10), zlim=(0.7,1.4)) #surface_1 = ax.plot_surface([],[],[], rstride=1, cstride=2, lw=0, cmap=cm.viridis, antialiased=True)[0] #surface_2 = ax.plot_wireframe([],[],[], rstride=2, cstride=2, lw=0.1, cmap=cm.viridis, antialiased=True)[0] #line = ax.plot([],[], lw=2)[0] #line2 = ax.plot([],[], lw=2)[0] surface_1 = ax.plot_surface(X[20:-20,20:-20],Y[20:-20,20:-20],U.U[0,1,22:-22,22:-22,0], rstride=1, cstride=2, lw=0, cmap=cm.viridis, antialiased=True) surface_2 = ax.plot_wireframe(X[20:-20,20:-20],Y[20:-20,20:-20],U.U[0,0,22:-22,22:-22,0], 
rstride=2, cstride=2, lw=0.1, cmap=cm.viridis, antialiased=True) def init(): surface_1 = ax.plot_surface(X[20:-20,20:-20],Y[20:-20,20:-20],U.U[0,1,22:-22,22:-22,0], rstride=1, cstride=2, lw=0, cmap=cm.viridis, antialiased=True) surface_2 = ax.plot_wireframe(X[20:-20,20:-20],Y[20:-20,20:-20],U.U[0,0,22:-22,22:-22,0], rstride=2, cstride=2, lw=0.1, cmap=cm.viridis, antialiased=True) def animate(i): ax.clear() ax.set_xlim(-0.5,10.5) ax.set_ylim(-0.5,10.5) ax.set_zlim(0.7,1.4) ax.plot_surface(X[20:-20,20:-20],Y[20:-20,20:-20],U.U[0,1,22:-22,22:-22,i], rstride=1, cstride=2, lw=0, cmap=cm.viridis, antialiased=True) ax.plot_wireframe(X[20:-20,20:-20],Y[20:-20,20:-20],U.U[0,0,22:-22,22:-22,i], rstride=2, cstride=2, lw=0.1, cmap=cm.viridis, antialiased=True) #ax.view_init(80) anim = animation.FuncAnimation(fig, animate, frames=200, interval=80)#, init_func=init) HTML(anim.to_html5_video()) ```
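The superbee limiter that appears in each `evolve` routine is easy to get wrong in NumPy: `np.maximum` and `np.minimum` are binary ufuncs, so a third positional argument is interpreted as the `out` array rather than another operand, and a three-way maximum has to be built from nested two-argument calls. A standalone, vectorised sketch of the limiter:

```python
import numpy as np

def superbee(r):
    # superbee flux limiter: phi(r) = max(0, min(1, 2r), min(2, r)).
    # The three-way max is written as nested two-argument calls because
    # np.maximum(a, b, c) would treat c as the `out` array.
    return np.maximum(0.0, np.maximum(np.minimum(1.0, 2.0 * r),
                                      np.minimum(2.0, r)))

r = np.array([-1.0, 0.25, 0.5, 1.0, 2.0, 4.0])
assert np.allclose(superbee(r), [0.0, 0.5, 1.0, 1.0, 2.0, 2.0])
```

The limiter vanishes for negative slope ratios (extrema) and saturates at 2, which is what keeps the reconstructed interface states `qp`/`qm` free of spurious oscillations.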
``` %matplotlib inline from __future__ import division import math import matplotlib import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns # This is lr_debug.py try: xrange except NameError: xrange = range def add_intercept(X_): m, n = X_.shape X = np.zeros((m, n + 1)) X[:, 0] = 1 X[:, 1:] = X_ return X def load_data(filename): D = np.loadtxt(filename) Y = D[:, 0] X = D[:, 1:] return add_intercept(X), Y def calc_grad(X, Y, theta): m, n = X.shape grad = np.zeros(theta.shape) margins = Y * X.dot(theta) probs = 1. / (1 + np.exp(margins)) grad = -(1./m) * (X.T.dot(probs * Y)) return grad def logistic_regression(X, Y): m, n = X.shape theta = np.zeros(n) learning_rate = 10 i = 0 while True: i += 1 prev_theta = theta grad = calc_grad(X, Y, theta) theta = theta - learning_rate * (grad) norm = np.linalg.norm(prev_theta - theta) if i % 10000 == 0: print('Finished {0} iterations; Diff theta: {1}; theta: {2}; Grad: {3}'.format( i, norm, theta, grad)) if norm < 1e-15: print('Converged in %d iterations' % i) break return def main(): print('==== Training model on data set A ====') Xa, Ya = load_data('data/data_a.txt') logistic_regression(Xa, Ya) print('\n==== Training model on data set B ====') Xb, Yb = load_data('data/data_b.txt') logistic_regression(Xb, Yb) return theta = [-20.81394174, 21.45250215, 19.85155266] theta_x = df_a.shape[0]*theta[0] + df_a.x1*theta[1] + df_a.x2*theta[2] #np.min(1. / (1 + np.exp(theta_x))) np.sum(theta_x*df_a.y)/theta_x.shape[0] theta = [-52.74109217, 52.92982273, 52.69691453] theta_x = df_b.shape[0]*theta[0] + df_b.x1*theta[1] + df_b.x2*theta[2] #np.min(1. 
/ (1 + np.exp(theta_x)))
np.sum(theta_x*df_b.y)/theta_x.shape[0]
main()
df_a = pd.read_csv('data/data_a.txt', sep=' ', header=None, names=['y','x1', 'x2'])
df_b = pd.read_csv('data/data_b.txt', sep=' ', header=None, names=['y','x1', 'x2'])
df_a.describe()
df_b.describe()
```

<a id='1a'></a>
### Problem 1.a)
The most notable difference between the logistic regressions for datasets A & B is that while A converges within ~31k iterations, B never converges (at least not within 110M iterations).

<a id='1b'></a>
### Problem 1.b)
$$ margin := y \theta^{\top}x $$
$$ \hat{p} = \frac{1}{1+e^{-margin}} $$

### Data:
### - Dataset A is not linearly separable, so no matter how theta is scaled, some margins stay negative. The loss therefore has a finite minimizer; near it the gradients become very small, and gradient descent converges fairly quickly.
### - Dataset B is linearly separable, so there exists a theta for which every margin is positive. Scaling that theta up pushes every $\hat{p}$ toward one and the loss toward (but never to) zero, so gradient descent keeps increasing the magnitude of theta forever and never converges.

```
plt.title('Data Set A')
plt.scatter(df_a.loc[df_a.y==1 ,'x1'], df_a.loc[df_a.y==1 ,'x2'], marker = 'x')
plt.scatter(df_a.loc[df_a.y==-1 ,'x1'], df_a.loc[df_a.y==-1 ,'x2'], marker = 'o')

plt.title('Data Set B')
plt.scatter(df_b.loc[df_b.y==1 ,'x1'], df_b.loc[df_b.y==1 ,'x2'], marker = 'x')
plt.scatter(df_b.loc[df_b.y==-1 ,'x1'], df_b.loc[df_b.y==-1 ,'x2'], marker = 'o')

sns.pairplot(df_a,vars=["x1", "x2"], hue="y", height=4, aspect=1.5, diag_kind="kde");
sns.pairplot(df_b,vars=["x1", "x2"], hue="y", height=4, aspect=1.5, diag_kind="kde");
```

<a id='1c'></a>
### Problem 1.c
### i) Different Constant Learning Rate: increasing the learning rate, say to 10,000X, could speed things up on dataset B, but would make convergence on other datasets (e.g. dataset A) impossible, since much larger updates to theta could always overshoot the optimal theta.
### ii) Decreasing Learning Rate over Time: the fundamental problem is that on dataset B the gradient pushes ||theta|| up forever, so a decaying learning rate only slows the divergence; it does not make the iterates converge.
### iii) Adding L2-Regularization for Theta to Loss Function: this should help, since penalizing larger values of ||theta|| keeps ||theta|| from growing without bound and gives the loss a finite minimizer.
### iv) Linear Scaling of Input Features: the features are already between 0 and 1, so this is unlikely to help.
### v) Adding Gaussian Noise to training data or labels: this could destroy the clean separability of the classes, but it is not guaranteed to, since the effect depends on the particular noise samples drawn.
<a id='1d'></a>
### Problem 1.d) If we use the geometric (rather than functional) margin, as SVMs do, the objective is invariant to rescaling theta, so the size of ||theta|| is effectively constrained and this issue is avoided.
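To make the regularization argument in (iii) concrete, here is a toy 1-D numpy sketch (synthetic data, not the problem-set datasets) showing that the unregularized logistic loss on separable data has no finite minimizer, while an L2 penalty creates one:

```python
import numpy as np

# 1-D linearly separable toy data: y * theta * x > 0 for any theta > 0
x = np.array([1.0, 2.0])
y = np.array([1.0, 1.0])

def loss(theta, lam):
    # logistic loss plus optional L2 penalty on theta
    return np.mean(np.log1p(np.exp(-y * theta * x))) + lam * theta**2

thetas = np.linspace(0, 50, 5001)
unreg = [loss(t, 0.0) for t in thetas]
reg = [loss(t, 0.01) for t in thetas]
```

On the unregularized curve the loss keeps shrinking as theta grows, so gradient descent would chase theta toward infinity; the L2 term puts the minimum at a finite theta.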
``` %matplotlib inline ``` # Save and Load the Model In this section we will look at how to persist model state with saving, loading and running model predictions. ``` import torch import torch.onnx as onnx import torchvision.models as models ``` ## Saving and Loading Model Weights PyTorch models store the learned parameters in an internal state dictionary, called `state_dict`. These can be persisted via the `torch.save` method: ``` model = models.vgg16(pretrained=True) torch.save(model.state_dict(), 'data/model_weights.pth') ``` To load model weights, you need to create an instance of the same model first, and then load the parameters using `load_state_dict()` method. ``` model = models.vgg16() # we do not specify pretrained=True, i.e. do not load default weights model.load_state_dict(torch.load('data/model_weights.pth')) model.eval() ``` **Note** be sure to call `model.eval()` method before inferencing to set the dropout and batch normalization layers to evaluation mode. Failing to do this will yield inconsistent inference results. ## Saving and Loading Models with Shapes When loading model weights, we needed to instantiate the model class first, because the class defines the structure of a network. We might want to save the structure of this class together with the model, in which case we can pass `model` (and not `model.state_dict()`) to the saving function: ``` torch.save(model, 'data/vgg_model.pth') ``` We can then load the model like this: ``` model = torch.load('data/vgg_model.pth') ``` **Note** This approach uses Python [pickle](https://docs.python.org/3/library/pickle.html) module when serializing the model, thus it relies on the actual class definition to be available when loading the model. ## Exporting Model to ONNX PyTorch also has native ONNX export support. Given the dynamic nature of the PyTorch execution graph, however, the export process must traverse the execution graph to produce a persisted ONNX model. 
For this reason, a test variable of the appropriate size should be passed in to the export routine (in our case, we will create a dummy zero tensor of the correct size): ``` input_image = torch.zeros((1,3,224,224)) onnx.export(model, input_image, 'data/model.onnx') ``` There are a lot of things you can do with ONNX model, including running inference on different platforms and in different programming languages. For more details, we recommend visiting [ONNX tutorial](https://github.com/onnx/tutorials). Congratulations! You have completed the PyTorch beginner tutorial! We hope this tutorial has helped you get started with deep learning on PyTorch. Good luck!
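As a quick, self-contained check of the `state_dict` round trip described above, here is a sketch using a tiny stand-in network instead of VGG16 so nothing has to be downloaded (the layer sizes are arbitrary, not from the tutorial):

```python
import torch
import torch.nn as nn

# tiny stand-in model; sizes are arbitrary
model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 2))
torch.save(model.state_dict(), 'tiny_weights.pth')

# re-create the same architecture, then load the saved parameters
clone = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 2))
clone.load_state_dict(torch.load('tiny_weights.pth'))
clone.eval()

x = torch.randn(1, 4)
same = torch.equal(model(x), clone(x))  # identical weights -> identical output
```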
# Simple MLP demo for TIMIT using Keras This notebook describes how to reproduce the results for the simple MLP architecture described in this paper: [ftp://ftp.idsia.ch/pub/juergen/nn_2005.pdf](ftp://ftp.idsia.ch/pub/juergen/nn_2005.pdf) And in Chapter 5 of this thesis: http://www.cs.toronto.edu/~graves/phd.pdf To begin with, if you have a multi-gpu system (like I do), you may want to choose which GPU you want to run this on (indexing from 0): ``` import os os.environ['CUDA_VISIBLE_DEVICES']='0' ``` Here we import the stuff we use below: ``` import numpy as np from keras.models import Sequential from keras.layers.core import Dense, Activation from keras.optimizers import Adam, SGD from IPython.display import clear_output from tqdm import * import sys sys.path.append('../python') from data import Corpus, History ``` ## Loading the data Here we load the corpus stored in HDF5 files. It contains both normalized and unnormalized data and we're interested in the former: ``` train=Corpus('../data/TIMIT_train.hdf5',load_normalized=True) dev=Corpus('../data/TIMIT_dev.hdf5',load_normalized=True) test=Corpus('../data/TIMIT_test.hdf5',load_normalized=True) ``` The data can be loaded all at once into separate Numpy arrays: ``` tr_in,tr_out_dec=train.get() dev_in,dev_out_dec=dev.get() tst_in,tst_out_dec=test.get() ``` The loaded data is a list of utterances, where each utterance is a matrix (for inputs) or a vector (for outputs) of different sizes. That is why the whole corpus is not a matrix (which would require that each utterance is the same length): ``` print tr_in.shape print tr_in[0].shape print tr_out_dec.shape print tr_out_dec[0].shape ``` The papers/thesis above use 26 features instead of the standrd 39, ie they only use first-order regression coefficients (deltas). 
We usually prepare a corpus for the full 39 features, so to be comparable, lets extract the 26 from that: ``` for u in range(tr_in.shape[0]): tr_in[u]=tr_in[u][:,:26] for u in range(dev_in.shape[0]): dev_in[u]=dev_in[u][:,:26] for u in range(tst_in.shape[0]): tst_in[u]=tst_in[u][:,:26] ``` ## Parameters Here we'll define some standard sizes and parameters: ``` input_dim=tr_in[0].shape[1] output_dim=61 hidden_num=250 epoch_num=1500 ``` ### 1-hot vectors For most loss functions, the output for each utterance needs to be a matrix of size (output_dim,sample_num). That means we need to convert the output from a list of decisions to a list of 1-hot vectors. This is a requirement of Keras: ``` def dec2onehot(dec): ret=[] for u in dec: assert np.all(u<output_dim) num=u.shape[0] r=np.zeros((num,output_dim)) r[range(0,num),u]=1 ret.append(r) return np.array(ret) tr_out=dec2onehot(tr_out_dec) dev_out=dec2onehot(dev_out_dec) tst_out=dec2onehot(tst_out_dec) ``` ## Model definition Here we define the model exactly as in the paper: one hidden layer with 250 units, sigmoid activation in the hidden and softmax in the output, cross-entropy loss. The only thing that differs is the optimizer. You can use SGD, but the values in the paper seem to be far too small. Adam works just as well and maybe even a bit faster. Feel free to experiment: ``` model = Sequential() model.add(Dense(input_dim=input_dim,output_dim=hidden_num)) model.add(Activation('sigmoid')) model.add(Dense(output_dim=output_dim)) model.add(Activation('softmax')) optimizer= SGD(lr=3e-3,momentum=0.9,nesterov=False) loss='categorical_crossentropy' metrics=['accuracy'] model.compile(loss=loss, optimizer=optimizer, metrics=metrics) ``` ## Training Here we have a training loop. We don't use the "fit" method to accomodate the specific conditions in the paper: we register the loss/accuracy of dev and test at each time step, we do weight update after each utterance. 
``` from random import shuffle tr_hist=History('Train') dev_hist=History('Dev') tst_hist=History('Test') tr_it=range(tr_in.shape[0]) for e in range(epoch_num): print 'Epoch #{}/{}'.format(e+1,epoch_num) sys.stdout.flush() shuffle(tr_it) for u in tqdm(tr_it): l,a=model.train_on_batch(tr_in[u],tr_out[u]) tr_hist.r.addLA(l,a,tr_out[u].shape[0]) clear_output() tr_hist.log() for u in range(dev_in.shape[0]): l,a=model.test_on_batch(dev_in[u],dev_out[u]) dev_hist.r.addLA(l,a,dev_out[u].shape[0]) dev_hist.log() for u in range(tst_in.shape[0]): l,a=model.test_on_batch(tst_in[u],tst_out[u]) tst_hist.r.addLA(l,a,tst_out[u].shape[0]) tst_hist.log() print 'Done!' ``` ## Results Here we can plot the loss and PER (phoneme error rate) while training: ``` import matplotlib.pyplot as P %matplotlib inline fig,ax=P.subplots(2,sharex=True,figsize=(12,10)) ax[0].set_title('Loss') ax[0].plot(tr_hist.loss,label='Train') ax[0].plot(dev_hist.loss,label='Dev') ax[0].plot(tst_hist.loss,label='Test') ax[0].legend() ax[0].set_ylim((1.4,2.0)) ax[1].set_title('PER %') ax[1].plot(100*(1-np.array(tr_hist.acc)),label='Train') ax[1].plot(100*(1-np.array(dev_hist.acc)),label='Dev') ax[1].plot(100*(1-np.array(tst_hist.acc)),label='Test') ax[1].legend() ax[1].set_ylim((45,55)) ``` The final results are presented below. Please note that Keras usually calculates accuracy, while the papers generally prefer error rates. We generally shouldn't give the result of the minimum PER for the test set, but we can use the dev set, find it's minimum and provide the value of the test at that time. 
You can see that the correctly reported PER (the test PER at the dev-set minimum) is not too far from the minimum test PER anyway:

```
print 'Min train PER: {:%}'.format(1-np.max(tr_hist.acc))
print 'Min test PER: {:%}'.format(1-np.max(tst_hist.acc))
print 'Min dev PER epoch: #{}'.format(np.argmax(dev_hist.acc)+1)
print 'Test PER on min dev: {:%}'.format(1-tst_hist.acc[np.argmax(dev_hist.acc)])
```

The paper gives a value of 48.6% error rate for this architecture and claims it took 835 epochs to reach that value using SGD. Here we can see that Adam got there a bit faster:

```
per = 0.486999999999
print 'Epoch where PER reached {:%}: #{}'.format(per, np.where((1-np.array(tst_hist.acc))<=per)[0][0])
```
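The `dec2onehot` conversion defined earlier can be checked in isolation; here is a Python 3-compatible sketch with `output_dim` passed explicitly (the notebook uses a module-level constant instead):

```python
import numpy as np

def dec2onehot(dec, output_dim):
    # Same conversion as above: each utterance's decision vector becomes
    # a (num_frames, output_dim) matrix with exactly one 1 per row.
    out = []
    for u in dec:
        r = np.zeros((u.shape[0], output_dim))
        r[np.arange(u.shape[0]), u] = 1
        out.append(r)
    return out

frames = dec2onehot([np.array([0, 2, 1])], output_dim=3)[0]
```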
# Import Packages ``` import torch import torch.nn as nn import torch.nn.functional as F import torchsummary as summary from sklearn.preprocessing import StandardScaler import gc import os import sys from pathlib import Path from tqdm import tqdm import numpy as np import pandas as pd import matplotlib.pyplot as plt plt.style.use('ggplot') pd.set_option('display.max_columns', None) rootdir = r'C:\\Users\\bened\\Documents\\UNIVERSITY\\SchoenStats\\PyTorch Working Directory\\Undergrad Stats Project\\Crime Models\\Crime Data Devon and Cornwall' ``` # Load DATA ``` # collect dataframes from files dat_2018_list = [] dat_2019_list = [] dat_2020_list = [] dat_2021_list = [] for root, dirs, files in os.walk(rootdir): for file in tqdm(files): if file.endswith('.csv'): df = pd.read_csv(os.path.join(root, file)) if file.startswith('2018'): dat_2018_list.append(df) elif file.startswith('2019'): dat_2019_list.append(df) elif file.startswith('2020'): dat_2020_list.append(df) else: dat_2021_list.append(df) def clean_dat_fn(dat_list): df = pd.concat(dat_list) df['Month'] = [int(i.split('-')[1]) for i in df['Month']] df['Crime type'] = df['Crime type'].astype('category') df_out = df[['Month', 'Longitude', 'Latitude', 'Crime type']] return(df_out) mllc18 = clean_dat_fn(dat_2018_list) mllc19 = clean_dat_fn(dat_2019_list) mllc20 = clean_dat_fn(dat_2020_list) mllc21 = clean_dat_fn(dat_2021_list) data = mllc18[['Longitude', 'Latitude']].dropna() # random sample from dataset X = data.sample(10000).reset_index(drop=True) X = X[(X.Longitude.notna()) & (X.Longitude < -2.8) & (X.Longitude > -5.8)] X = X[(X.Latitude.notna()) & (X.Latitude < 51.5) & (X.Latitude > 49.5)] X_data = torch.round(torch.tensor(X.values)*111) X = data.sample(2000).reset_index(drop=True) X = X[(X.Longitude.notna()) & (X.Longitude < -2.8) & (X.Longitude > -5.8)] X = X[(X.Latitude.notna()) & (X.Latitude < 51.5) & (X.Latitude > 49.5)] X_test = torch.round(torch.tensor(X.values)*111) #n_samples = 2000 #noisy_moons = 
datasets.make_moons(n_samples=n_samples, noise=.05) X = np.array(X_data) y = np.array(X_test) # normalize X = StandardScaler().fit_transform(X) y = StandardScaler().fit_transform(y) X_data = X X_test = y X_data = torch.tensor(X_data) X_test = torch.tensor(X_test) # plot generated dataset k = mllc18 k = k[(k.Longitude.notna()) & (k.Longitude < -2.8) & (k.Longitude > -5.8)] k = k[(k.Latitude.notna()) & (k.Latitude < 51.5) & (k.Latitude > 49.5)] long, lat = k['Longitude'], k['Latitude'] plt.figure(figsize=(12, 8)) plt.scatter(round(long*111), round(lat*111), s=10, alpha=0.05) # create features data and target data X_train = X_data.view(-1,2) X_test = X_test.view(-1,2) # plot generated dataset plt.scatter(X_train[:,0], X_train[:,1], alpha=0.1, s=20, label='development', color='grey') plt.scatter(X_test[:,0], X_test[:,1], alpha=0.2, s=20, label='validation', color='red') plt.legend(loc='upper left') plt.title("Training Distribution") # create features data and target data X_train = X_train.view(-1,2) X_test = X_test.view(-1,2) ``` # Define MODEL and Loss Function ``` n_input = 1 n_output = 1 n_gaus = 100 n_hidden = 100 # define MODEL class MDN(nn.Module): def __init__(self, n_hidden, n_gaussians): super(MDN, self).__init__() self.z_h = nn.Sequential(nn.Linear(n_input, n_hidden), nn.Tanh(), nn.Linear(n_hidden, n_hidden), nn.Tanh(), nn.Linear(n_hidden, n_hidden), nn.Tanh()) self.z_pi = nn.Linear(n_hidden, n_gaussians) self.z_mu = nn.Linear(n_hidden, n_gaussians) self.z_sigma = nn.Linear(n_hidden, n_gaussians) #self.z_pi = nn.Linear(n_hidden, n_gaussians) #self.z_mu = nn.Linear(n_hidden, n_gaussians) #self.z_sigma = nn.Linear(n_hidden, n_gaussians) def forward(self, x): z_h = self.z_h(x) pi = F.softmax(self.z_pi(z_h), -1) mu = self.z_mu(z_h) sigma = torch.exp(self.z_sigma(z_h)) return pi, mu, sigma model = MDN(n_hidden=n_hidden, n_gaussians=n_gaus) optimizer = torch.optim.Adam(model.parameters()) # defining LOSS function # modification to the error loss function (because 
it is an MDN) def mdn_loss_fn(y, mu, sigma, pi): m = torch.distributions.Normal(loc=mu, scale=sigma) loss = torch.exp(m.log_prob(y)) loss = torch.sum(loss * pi, dim=1) loss = -torch.log(loss) return torch.mean(loss) ``` # Training Step ``` # TRAINING the model num_epoch = 500 batch_size = 64 model=model.double() loss_list = [] for epoch in tqdm(range(num_epoch)): permu = torch.randperm(X_train.shape[0]) for i in range(0, X_train.shape[0], batch_size): indices = permu[i:i+batch_size] batch_x, batch_y = X_train[indices,0], X_train[indices,1] pi, mu, sigma = model.forward(batch_x.unsqueeze(1)) optimizer.zero_grad() l = mdn_loss_fn(batch_y.unsqueeze(1), mu, sigma, pi) l.backward() optimizer.step() if epoch % 1 == 0: loss_list.append(l.detach()) # Epochs vs (log)Loss #plt.plot(range(num_epoch), np.array(loss_list), label='log.loss', linewidth=0.5) #plt.ylim((-2.5, 0.5)) #plt.legend() # Epochs vs Loss plt.plot(range(num_epoch), np.exp(np.array(loss_list)), label='loss', linewidth=0.5) plt.ylim((0, 5)) plt.legend() ``` # Sampling Data from Trained Model ``` pi, mu, sigma = model(X_test[:,0].unsqueeze(1)) sigma # GENERATE samples from model # first, extract model parameters # pi - mixing coefficient num_samples = 1998 pi, mu, sigma = model(X_test[:,0].unsqueeze(1)) # simulate mixture of gaussians k = torch.multinomial(pi, 1).view(-1) t_pred = torch.normal(mu, sigma)[np.arange(num_samples), k].data t_pred = t_pred.view(-1, 1) # function for rearranging data dependent on how many gaussians chosen def mdn_col_sort(test, preds, n_gaus, pi=pi, mu=mu, sigma=sigma) : data = torch.cat((test, preds, pi, mu, sigma), dim=1).detach().numpy() extra_cols = [] for i in range(n_gaus): extra_cols.append('pi_' + str(i + 1)) for j in range(n_gaus): extra_cols.append('mu_' + str(j + 1)) for k in range(n_gaus): extra_cols.append('sigma_' + str(k + 1)) col_names = ["X", "T", "T_pred"] + extra_cols data = pd.DataFrame(data, columns=col_names) # final sort for smoother plotting #data = 
data.sort_values(by='X', axis = 0) data['max_pi'] = "" return data # create "max_pi" column which are labels for each row # these indicate which pi value is the greatest def add_max_pi(data, no_pi=2): pi_cols = [] for i in range(no_pi): pi_cols.append('pi_' + str(i + 1)) pi_dat = data[pi_cols] pi_dat.columns = range(1, no_pi+1) data.max_pi = pi_dat.idxmax(axis=1) return data full_res = mdn_col_sort(X_test, t_pred, n_gaus) full_res = full_res.sort_values(by='X') full_res = add_max_pi(full_res, n_gaus) #full_res.head(5) ``` # RESULTS ``` # special plot function def plot_signif(no, data, col1='blue', col2='lightblue', a=0.4, lw=1.5, lw_m=2.0, plt_var=True, mc=True): plt.scatter(data['X'], data['T'], alpha=0.1, color='grey', s=10) n_dat = data[data.max_pi == no] mu_str = 'mu_'+ str(no) sigma_str = 'sigma_'+ str(no) if mc: plt.plot(n_dat['X'], n_dat[mu_str], color=col1, linewidth=lw_m, label=mu_str) plt.plot(data['X'], data[mu_str], '--',color=col1, alpha=a, linewidth=lw) if plt_var: plt.fill_between(n_dat['X'], n_dat[mu_str]+n_dat[sigma_str], n_dat[mu_str]-n_dat[sigma_str], alpha=a, color=col2) plt.figure(figsize=[10, 7]) #plt.scatter(full_res['X'], full_res['T'], alpha=0.2, color='grey', s=10) for i in range(1, 100, 1): plot_signif(i, full_res, mc=False, plt_var=False, a=0.2) # plot_signif(10, full_res, plt_var=False, mc=False) # plot_signif(20, full_res, col1='red', col2='pink', plt_var=False, mc=False) # plot_signif(30, full_res, col1='green', col2='lightgreen', plt_var=False, mc=False) # plot_signif(40, full_res, col1='orange', col2='gold', plt_var=False, mc=False) # plot_signif(50, full_res, col1='magenta', col2='pink', plt_var=False, mc=False) # plot_signif(60, full_res, plt_var=False, mc=False) # plot_signif(70, full_res, col1='red', col2='pink', plt_var=False, mc=False) # plot_signif(80, full_res, col1='green', col2='lightgreen', plt_var=False, mc=False) # plot_signif(90, full_res, col1='orange', col2='gold', plt_var=False, mc=False) # plot_signif(100, 
full_res, col1='magenta', col2='pink', plt_var=False, mc=False) plt.legend(loc='upper left', prop={'size': 10}) plt.ylim([-2, 4]) plt.xlim([-2.5, 2]) #fig = plt.gcf() #fig.savefig("MDN_wavedata_f2.png") #plt.draw() #plt.show() plt.figure(figsize=[10, 7]) plt.scatter(X_test[:,0], X_test[:,1], alpha = 0.2, label='target', s=10) plt.scatter(X_test[:,0], t_pred.detach(), alpha = 0.2, label = 'predicted', s=10) plt.legend(loc='upper left') plt.xlim([-2.5, 2]) plt.ylim([-2.5, 4]) plt.legend(loc='upper left', prop={'size':14}) #fig = plt.gcf() #fig.savefig("MDN_pred_investigation.png") #plt.draw() #plt.show() #plt.figure(figsize=(10, 7)) ``` # Root Mean Square (RMS) ``` diff = (t_pred - X_test[:,1].view(-1, 1)) L2_diff = np.square(diff) avg_diff = sum(np.array(L2_diff)) / num_samples RMS = np.sqrt(avg_diff) print("RMS: {}".format(np.around(RMS, decimals=6))) plt.scatter(X_test[:, 0].view(-1, 1), L2_diff, s=10, alpha=0.2) plt.xlim((-2.5, 2)) plt.ylim((0,15 )) plt.scatter(X_test[:, 0].view(-1, 1), diff, s=10, alpha=0.2) plt.title("displacement plot") plt.xlim((-2.5,2)) ``` # Model Summary ``` # summary of model parameters for name, param in model.named_parameters(): print("{}\n{}\n".format(name, param)) ```
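One caveat on `mdn_loss_fn` above: it exponentiates component log-densities before summing, which can underflow to zero for points far from every mixture component. A log-space variant using `torch.logsumexp` (a sketch of mine, not part of the notebook) computes the same quantity more safely:

```python
import torch

def mdn_loss_stable(y, mu, sigma, pi):
    # Same mixture negative log-likelihood as mdn_loss_fn, but kept in
    # log space so small component densities don't underflow.
    comp = torch.distributions.Normal(loc=mu, scale=sigma)
    log_mix = torch.log(pi) + comp.log_prob(y)   # [batch, n_gaussians]
    return -torch.logsumexp(log_mix, dim=1).mean()

def mdn_loss_naive(y, mu, sigma, pi):
    # The formulation used in the notebook, for comparison.
    m = torch.distributions.Normal(loc=mu, scale=sigma)
    loss = torch.sum(torch.exp(m.log_prob(y)) * pi, dim=1)
    return torch.mean(-torch.log(loss))

# two points, two equally weighted unit-variance components
y = torch.tensor([[0.0], [1.0]])
mu = torch.tensor([[0.0, 1.0], [0.0, 1.0]])
sigma = torch.ones(2, 2)
pi = torch.full((2, 2), 0.5)
```

On well-scaled inputs like these the two agree; the difference only shows up in the tails, where the naive version produces `inf`.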
# Prediction Models ### This part is to train the prediction model. We use Logistic Regression, KNN, Random Forest, XGBoost and SVM to train models ### First, build feature vectors. Translate Heroes ID to Vector ``` # Run some setup code for this notebook. from __future__ import print_function import pandas as pd import numpy as np from progressbar import ProgressBar from sklearn.linear_model import LogisticRegression from sklearn.feature_selection import RFE from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier import matplotlib.pyplot as plt from sklearn.metrics import classification_report, confusion_matrix from sklearn.ensemble import RandomForestRegressor from sklearn.datasets import make_regression import xgboost as xgb from sklearn import svm df=pd.read_csv("out.csv") df.head() #Translate heroes ID to a feature Vector Vector_Data=[] pbar = ProgressBar() for i in pbar(range(0,len(df))): aa=np.zeros(250) for j in range(23,28): aa[df.iloc[i,j]]=1; for j in range(28,33): aa[df.iloc[i,j]+125]=1; Vector_Data.append(aa) Vector_Data=pd.DataFrame(Vector_Data) Vector_Data.head(5) ``` ### Build training dataset and test dataset ``` X=Vector_Data y=df["radiant_win"] X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=42) X_train.head(5) ``` ### Logistic Regressio: Accuracy is 64%, Very fast ``` Logi_model=LogisticRegression(random_state=0, solver="newton-cg", multi_class='multinomial').fit(X_train, y_train) y_pred=Logi_model.predict(X_test) #The accuracy of logistic regression np.mean(y_pred == y_test) print(confusion_matrix(y_test, y_pred)) print(classification_report(y_test, y_pred)) x=np.arange(2000,80000,2000) accuracies=[] for i in x: Logi_model = Logi_model.fit(X_train[0:i],y_train[0:i]) y_pred=Logi_model.predict(X_test) accuracies.append(np.mean(y_pred == y_test)) plt.figure(figsize=(10, 5)) plt.plot(x, accuracies, color='red', linestyle='dashed', marker='o', 
markerfacecolor='blue', markersize=10) plt.title('Logistic regression learning curve') plt.xlabel('Number of training samples') plt.ylabel('Accuracy') plt.show() ``` ### K Nearest Neighbors: Accuracy 62%, Very slow KNN is very slow. Basically, the best K is between 80 to 100 and the accuracy is 62% ``` accuracy = [] k=[] # Calculating error for K values between 1 and 100 for i in range(1, 100,5): print(i) knn = KNeighborsClassifier(n_neighbors=i,n_jobs=12) knn.fit(X_train, y_train) y_pred = knn.predict(X_test[0:1000]) accuracy.append(np.mean(y_pred == y_test[0:1000])) k.append(i) plt.figure(figsize=(12, 6)) plt.plot(k,accuracy, color='red', linestyle='dashed', marker='o', markerfacecolor='blue', markersize=10) plt.title('Accuracy Rate K Value') plt.xlabel('K Value') plt.ylabel('Mean Accuracy') plt.show() ``` ### Random Forest: Accuracy 61%, Fast ``` rfModel = RandomForestRegressor(n_estimators=100,n_jobs=12) rfModel.fit(X_train,y_train) y_pred=rfModel.predict(X_test) yT=[] for i in range(0,len(y_pred)): if(y_pred[i]>=0.5): yT.append(True) else: yT.append(False) np.mean(yT == y_test) X_train.shape X_test.shape accuracy = [] n_tree=[] for i in range(10, 210,10): print(i) n_tree.append(i) rfModel = RandomForestRegressor(n_estimators=i,n_jobs=12) rfModel.fit(X_train,y_train) y_pred=rfModel.predict(X_test) yT=[] for i in range(0,len(y_pred)): if(y_pred[i]>=0.5): yT.append(True) else: yT.append(False) accuracy.append(np.mean(yT == y_test)) plt.figure(figsize=(12, 6)) plt.plot(n_tree,accuracy, color='red', linestyle='dashed', marker='o', markerfacecolor='blue', markersize=10) plt.title('Accuracy Rate n_tree Value for Random Forest') plt.xlabel('n_tree Value') plt.ylabel('Mean Accuracy') plt.show() ``` ### XGBoost Model: Accuracy 66%, Fast ``` xgb_cl = xgb.XGBClassifier(learning_rate=0.1,n_job=12) xgb_cl.fit(X_train,y_train) y_pred = xgb_cl.predict(X_test) np.mean(y_pred==y_test) accuracy = [] learning_rate=[] for i in range(5, 80,5): l=i/100 print(i) 
learning_rate.append(l) xgb_cl = xgb.XGBClassifier(learning_rate=l,n_job=12) xgb_cl.fit(X_train,y_train) y_pred = xgb_cl.predict(X_test) accuracy.append(np.mean(y_pred==y_test)) plt.figure(figsize=(12, 6)) plt.plot(learning_rate,accuracy, color='red', linestyle='dashed', marker='o', markerfacecolor='blue', markersize=10) plt.title('Accuracy Rate learning_rate Value for XGBoost Forest') plt.xlabel('learning_rate Value') plt.ylabel('Mean Accuracy') plt.show() accuracy = [] n_estimators=[] for i in range(50, 1000,50): print(i) n_estimators.append(i) xgb_cl = xgb.XGBClassifier(n_estimators=i,n_job=12) xgb_cl.fit(X_train,y_train) y_pred = xgb_cl.predict(X_test) accuracy.append(np.mean(y_pred==y_test)) plt.figure(figsize=(12, 6)) plt.plot(n_estimators,accuracy, color='red', linestyle='dashed', marker='o', markerfacecolor='blue', markersize=10) plt.title('Accuracy Rate n_estimators Value for XGBoost Forest') plt.xlabel('n_estimators Value') plt.ylabel('Mean Accuracy') plt.show() ``` ### Support Vector Machine: Accuracy 64%, Slow ``` clf = svm.SVC(gamma='scale') clf.fit(X_train,y_train) y_pred=clf.predict(X_test) np.mean(y_pred == y_test) ```
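The hero-ID-to-vector loop at the top of this notebook (columns 23–27 for one team, 28–32 for the other, with a 125-slot offset) can be sketched as a standalone function; the name and argument layout here are mine:

```python
import numpy as np

def heroes_to_vector(radiant_ids, dire_ids, n_heroes=125):
    # First n_heroes slots encode one team's picks, the next n_heroes
    # slots the other team's, mirroring the offset used above.
    v = np.zeros(2 * n_heroes)
    for h in radiant_ids:
        v[h] = 1
    for h in dire_ids:
        v[h + n_heroes] = 1
    return v

v = heroes_to_vector([1, 2, 3, 4, 5], [10, 20, 30, 40, 50])
```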
``` import pandas as pd import numpy as np import os import scipy.stats import matplotlib.pyplot as plt from functools import reduce %matplotlib inline ``` # Load the data ``` data_root = os.path.join('..', 'data', 'time_series') file_name_template = '{}_{}_sliced_{}_tl_bot{}.csv' mapping = { 1: 'thenation', 2: 'thenation', 3: 'thenation', 4: 'washingtonpost', 5: 'washingtonpost', 6: 'washingtonpost', 7: 'USATODAY', 8: 'USATODAY', 9: 'USATODAY', 10: 'WSJ', 11: 'WSJ', 12: 'WSJ', 13: 'BreitbartNews', 14: 'BreitbartNews', 15: 'BreitbartNews' } user_type_list = ['home', 'friend_usr'] methods = ['hashtag', 'url'] drifter_df_dict = {} for key, seed in mapping.items(): method_dict = {} for method in methods: user_type_dict = {} for user_type in user_type_list: temp_df = pd.read_csv(os.path.join(data_root, file_name_template.format(method, seed, user_type, key))) user_type_dict[user_type] = temp_df method_dict[method] = user_type_dict drifter_df_dict[key] = { 'seed': seed, 'dfs': method_dict } ``` # T-test for different groups ``` reverse_mapping = { 'thenation': [1, 2, 3], 'washingtonpost': [4, 5, 6], 'USATODAY': [7, 8, 9], 'WSJ': [10, 11, 12], 'BreitbartNews': [13, 14, 15] } name_mapping = { 'thenation': 'Left', 'washingtonpost': 'C. Left', 'USATODAY': 'Center', 'WSJ': 'C. Right', 'BreitbartNews': 'Right' } ``` Calculate the difference between political alignment scores of the home timelines and their friends’ user timelines. 
``` alignment_diffs = {} for method in methods: temp_alignment_diffs = [] for seed, drifter_ids in reverse_mapping.items(): temp_dfs = [] for drifter_id in drifter_ids: temp_df = drifter_df_dict[drifter_id]['dfs'][method]['home'].merge( drifter_df_dict[drifter_id]['dfs'][method]['friend_usr'], on='date' ) temp_dfs.append(temp_df) combined_df = pd.concat(temp_dfs) temp_diff = combined_df['{}_mean_x'.format(method)] - combined_df['{}_mean_y'.format(method)] temp_diff = temp_diff.to_frame(name=seed).reset_index()[[seed]] temp_alignment_diffs.append(temp_diff) alignment_diffs[method] = temp_alignment_diffs def concat_dfs(dfs): return reduce( lambda left, right: pd.merge(left, right, left_index=True, right_index=True, how='outer'), dfs ).rename(columns=name_mapping) alignment_diffs_url = concat_dfs(alignment_diffs['url']) # Dump the raw data for sharing #alignment_diffs_url.to_csv("table_s3_url.csv", index=None) alignment_diffs_hashtag = concat_dfs(alignment_diffs['hashtag']) # Dump the raw data for sharing # alignment_diffs_hashtag.to_csv("table_s3_hashtag.csv", index=None) def do_t_test(samples): t_stat, pvalue = scipy.stats.ttest_1samp(samples, 0) cohen_d = abs(samples.mean() - 0) / np.std(samples, ddof=1) return t_stat, pvalue, cohen_d url_t_test_results = [] for value in name_mapping.values(): t_stat, pvalue, cohen_d = do_t_test(alignment_diffs_url[value].dropna()) if cohen_d < 0.5: effect_size = 'small' elif cohen_d < 0.8: effect_size = 'medium' else: effect_size = 'large' url_t_test_results.append([ value, "link", t_stat, pvalue, pvalue < 0.05, pvalue < 0.01, cohen_d, effect_size, alignment_diffs_url[value].count() ]) url_t_test_results_df = pd.DataFrame(url_t_test_results, columns=[ 'group', 'method', 't_stat', 'pvalue', 'significant_05', 'significant_01', 'cohen_d', 'effect_size', 'n' ]) url_t_test_results_df hashtag_t_test_results = [] for value in name_mapping.values(): t_stat, pvalue, cohen_d = do_t_test(alignment_diffs_hashtag[value].dropna()) if cohen_d < 
0.5: effect_size = 'small' elif cohen_d < 0.8: effect_size = 'medium' else: effect_size = 'large' hashtag_t_test_results.append([ value, "hashtag", t_stat, pvalue, pvalue < 0.05, pvalue < 0.01, cohen_d, effect_size, alignment_diffs_hashtag[value].count() ]) hashtag_t_test_results_df = pd.DataFrame(hashtag_t_test_results, columns=[ 'group', 'method', 't_stat', 'pvalue', 'significant_05', 'significant_01', 'cohen_d', 'effect_size', 'n' ]) hashtag_t_test_results_df ```
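The `do_t_test` helper can be sanity-checked on synthetic samples with a known nonzero mean (seeded for reproducibility; the parameters are illustrative, not from the study data):

```python
import numpy as np
import scipy.stats

def do_t_test(samples):
    # One-sample t-test against 0 plus Cohen's d, as defined above.
    t_stat, pvalue = scipy.stats.ttest_1samp(samples, 0)
    cohen_d = abs(samples.mean() - 0) / np.std(samples, ddof=1)
    return t_stat, pvalue, cohen_d

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.8, scale=1.0, size=200)  # true mean 0.8
t_stat, pvalue, cohen_d = do_t_test(samples)
```

With a true mean of 0.8 and unit variance, the effect should register as significant and sizeable.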
# Simple dynamic seq2seq with TensorFlow

This tutorial covers building seq2seq using dynamic unrolling with TensorFlow.

I wasn't able to find any existing implementation of dynamic seq2seq with TF (as of 01.01.2017), so I decided to learn how to write my own, and document what I learn in the process.

I deliberately try to be as explicit as possible. As it currently stands, TF code is the best source of documentation on itself, and I have a feeling that many conventions and design decisions are not documented anywhere except in the brains of Google Brain engineers.

I hope this will be useful to people whose brains are wired like mine.

**UPDATE**: as of r1.0 @ 16.02.2017, there is a new official implementation in `tf.contrib.seq2seq`. See [tutorial #3](3-seq2seq-native-new.ipynb). An official tutorial is reportedly coming soon. Personally, I still find wiring a dynamic encoder-decoder by hand insightful in many ways.

Here we implement plain seq2seq: a forward-only encoder + decoder, without attention. I'll try to follow closely the original architecture described in [Sutskever, Vinyals and Le (2014)](https://arxiv.org/abs/1409.3215). If you notice any deviations, please let me know.

Architecture diagram from their paper:
![seq2seq architecture](pictures/1-seq2seq.png)
Rectangles are the encoder's and decoder's recurrent layers. The encoder receives the sequence `[A, B, C]` as input. We don't care about the encoder's outputs, only about the hidden state it accumulates while reading the sequence. After the input sequence ends, the encoder passes its final state to the decoder, which receives `[<EOS>, W, X, Y, Z]` and is trained to output `[W, X, Y, Z, <EOS>]`. The `<EOS>` token is a special word in the vocabulary that signals to the decoder the beginning of translation.

## Implementation details

TensorFlow has its own [implementation of seq2seq](https://www.tensorflow.org/tutorials/seq2seq/).
Recently it was moved from the core examples to the [`tensorflow/models` repo](https://github.com/tensorflow/models/tree/master/tutorials/rnn/translate), and it uses a deprecated seq2seq implementation. The deprecation happened because it uses **static unrolling**.

**Static unrolling** constructs a computation graph with a fixed number of time steps. Such a graph can only handle sequences of specific lengths. One solution for handling sequences of varying lengths is to create multiple graphs with different time lengths and separate the dataset into these buckets.

**Dynamic unrolling** instead uses control flow ops to process the sequence step by step. In TF this is supposed to be more space-efficient and just as fast. This is now the recommended way to implement RNNs.

## Vocabulary

Seq2seq maps one sequence onto another sequence. Both sequences consist of integers from a fixed range. In language tasks, integers usually correspond to words: we first construct a vocabulary by assigning a serial integer to every word in our corpus. The first few integers are reserved for special tokens. We'll call the upper bound on the vocabulary the `vocabulary size`.

Input data consists of sequences of integers.

```
x = [[5, 7, 8], [6, 3], [3], [1]]
```

While manipulating such variable-length lists is convenient for humans, RNNs prefer a different layout:

```
import helpers
xt, xlen = helpers.batch(x)
x
xt
```

Sequences form the columns of a matrix of size `[max_time, batch_size]`. Sequences shorter than the longest one are padded with zeros towards the end. This layout is called `time-major`. It is slightly more efficient than `batch-major`. We will use it for the rest of the tutorial.

```
xlen
```

For some forms of dynamic layout it is useful to have a pointer to the terminals of every sequence in the batch in a separate tensor (see the following tutorials).

# Building a model

## Simple seq2seq

The encoder starts with an empty state and runs through the input sequence.
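The `helpers` module is not shown in this notebook; a minimal sketch of what a time-major padding helper like `helpers.batch` might look like (a hypothetical reimplementation, not the tutorial's actual code):

```python
import numpy as np

def batch(inputs, max_sequence_length=None):
    """Pack variable-length integer sequences into a zero-padded,
    time-major matrix of shape [max_time, batch_size]."""
    sequence_lengths = [len(seq) for seq in inputs]
    batch_size = len(inputs)
    if max_sequence_length is None:
        max_sequence_length = max(sequence_lengths)
    # [max_time, batch_size], padded with PAD = 0
    inputs_time_major = np.zeros((max_sequence_length, batch_size), dtype=np.int32)
    for j, seq in enumerate(inputs):
        for t, element in enumerate(seq):
            inputs_time_major[t, j] = element
    return inputs_time_major, sequence_lengths

x = [[5, 7, 8], [6, 3], [3], [1]]
xt, xlen = batch(x)
print(xt)    # each original list becomes a zero-padded column
print(xlen)  # [3, 2, 1, 1]
```

Note how each input list becomes a column, not a row — that is what time-major means.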
We are not interested in the encoder's outputs, only in its `final_state`.

The decoder uses the encoder's `final_state` as its `initial_state`. Its inputs are a batch-sized matrix with the `<EOS>` token at the 1st time step and `<PAD>` at the following ones. This is a rather crude setup, useful only for tutorial purposes. In practice, we would like to feed previously generated tokens after `<EOS>`.

The decoder's outputs are mapped onto the output space using a `[hidden_units x output_vocab_size]` projection layer. This is necessary because we cannot make `hidden_units` of the decoder arbitrarily large, while our target space grows with the size of the dictionary.

This kind of encoder-decoder is forced to learn a fixed-length representation (specifically, of `hidden_units` size) of the variable-length input sequence, and to restore the output sequence from this representation alone.

```
import numpy as np
import tensorflow as tf
import helpers

tf.reset_default_graph()
sess = tf.InteractiveSession()

tf.__version__
```

### Model inputs and outputs

The first critical thing to decide: vocabulary size. Dynamic RNN models can be adapted to different batch sizes and sequence lengths without retraining (e.g. by serializing model parameters and graph definitions via `tf.train.Saver`), but changing the vocabulary size requires retraining the model.

```
PAD = 0
EOS = 1

vocab_size = 10
input_embedding_size = 20

encoder_hidden_units = 20
decoder_hidden_units = encoder_hidden_units
```

A nice way to understand a complicated function is to study its signature — inputs and outputs. With pure functions, only the input-output relation matters.
- `encoder_inputs`: int32 tensor shaped `[encoder_max_time, batch_size]`
- `decoder_targets`: int32 tensor shaped `[decoder_max_time, batch_size]`

```
encoder_inputs = tf.placeholder(shape=(None, None), dtype=tf.int32, name='encoder_inputs')
decoder_targets = tf.placeholder(shape=(None, None), dtype=tf.int32, name='decoder_targets')
```

We'll add one additional placeholder tensor:

- `decoder_inputs`: int32 tensor shaped `[decoder_max_time, batch_size]`

```
decoder_inputs = tf.placeholder(shape=(None, None), dtype=tf.int32, name='decoder_inputs')
```

We actually don't want to feed `decoder_inputs` manually — they are a function of either `decoder_targets` or previous decoder outputs during rollout. However, there are different ways to construct them. It might be illustrative to specify them explicitly for our first seq2seq implementation.

During training, `decoder_inputs` will consist of the `<EOS>` token concatenated with `decoder_targets` along the time axis. In this way, we always pass the target sequence as history to the decoder, regardless of what it actually predicts. This can introduce a distribution shift from training to prediction. In prediction mode, the model will receive the tokens it previously generated (via argmax over the logits), not the ground truth, which would be unknowable.

Notice that all shapes are specified with `None`s (dynamic). We can use batches of any size with any number of timesteps. This is convenient and efficient, but there are obvious constraints:

- Feed values for all tensors should have the same `batch_size`
- Decoder inputs and outputs (`decoder_inputs` and `decoder_targets`) should have the same `decoder_max_time`

### Embeddings

`encoder_inputs` and `decoder_inputs` are int32 tensors of shape `[max_time, batch_size]`, while the encoder and decoder RNNs expect a dense vector representation of words, `[max_time, batch_size, input_embedding_size]`. We convert one to the other by using *word embeddings*.
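In plain numpy terms, the training-time construction described above can be sketched as follows (illustrative only — the notebook builds these matrices with its `helpers.batch` later):

```python
import numpy as np

EOS = 1

# decoder_targets: [decoder_max_time, batch_size]; one sequence [5, 6, 7, EOS]
decoder_targets = np.array([[5], [6], [7], [EOS]], dtype=np.int32)

# Prepend an <EOS> row and drop the last target step, keeping lengths equal:
# inputs become [EOS, 5, 6, 7] — the target sequence lagged by one step.
eos_row = np.full((1, decoder_targets.shape[1]), EOS, dtype=np.int32)
decoder_inputs = np.concatenate([eos_row, decoder_targets[:-1]], axis=0)
print(decoder_inputs.ravel().tolist())  # [1, 5, 6, 7]
```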
Specifics of working with embeddings are nicely described in the [official tutorial on embeddings](https://www.tensorflow.org/tutorials/word2vec/).

First we initialize the embedding matrix. Initialization is random. We rely on our end-to-end training to learn vector representations for words jointly with the encoder and decoder.

```
embeddings = tf.Variable(tf.random_uniform([vocab_size, input_embedding_size], -1.0, 1.0), dtype=tf.float32)
```

We use `tf.nn.embedding_lookup` to *index the embedding matrix*: given word `4`, we represent it as row `4` of the embedding matrix. This operation is lightweight compared with the alternative approach of one-hot encoding word `4` as `[0,0,0,0,1,0,0,0,0,0]` (vocab size 10) and then multiplying it by the embedding matrix. Additionally, we don't need to compute gradients for any rows except row `4`.

The encoder and decoder will share embeddings. It's all words, right? Well, digits in this case. In a real NLP application the embedding matrix can get very large, with 100k or even 1m rows.

```
encoder_inputs_embedded = tf.nn.embedding_lookup(embeddings, encoder_inputs)
decoder_inputs_embedded = tf.nn.embedding_lookup(embeddings, decoder_inputs)
```

### Encoder

The centerpiece of all things RNN in TensorFlow is the `RNNCell` class and its descendants (like `LSTMCell`). But they are outside the scope of this post — a nice [official tutorial](https://www.tensorflow.org/tutorials/recurrent/) is available.

`@TODO: RNNCell as a factory`

```
encoder_cell = tf.contrib.rnn.LSTMCell(encoder_hidden_units)

encoder_outputs, encoder_final_state = tf.nn.dynamic_rnn(
    encoder_cell, encoder_inputs_embedded,
    dtype=tf.float32, time_major=True,
)

del encoder_outputs
```

We discard `encoder_outputs` because we are not interested in them within the seq2seq framework. What we actually want is `encoder_final_state` — the state of the LSTM's hidden cells at the last moment of the encoder rollout.

`encoder_final_state` is also called the "thought vector". We will use it as the initial state for the decoder.
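The equivalence between an embedding lookup and a one-hot multiplication is easy to verify in plain numpy (a small illustrative check, independent of TF):

```python
import numpy as np

vocab_size, embedding_size = 10, 4
rng = np.random.default_rng(0)
embeddings = rng.uniform(-1.0, 1.0, size=(vocab_size, embedding_size))

word = 4
# Lookup: just index the row.
looked_up = embeddings[word]

# Equivalent (but wasteful) one-hot multiplication.
one_hot = np.zeros(vocab_size)
one_hot[word] = 1.0
multiplied = one_hot @ embeddings

print(np.allclose(looked_up, multiplied))  # True
```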
In seq2seq without attention this is the only point where the encoder passes information to the decoder. We hope that the backpropagation through time (BPTT) algorithm will tune the model to pass enough information through the thought vector for correct sequence decoding.

```
encoder_final_state
```

The TensorFlow LSTM implementation stores state as a tuple of tensors:

- `encoder_final_state.c` is the LSTM cell's internal cell state at the final step
- `encoder_final_state.h` is the LSTM cell's output (hidden-layer activations) at the final step

### Decoder

```
decoder_cell = tf.contrib.rnn.LSTMCell(decoder_hidden_units)

decoder_outputs, decoder_final_state = tf.nn.dynamic_rnn(
    decoder_cell, decoder_inputs_embedded,
    initial_state=encoder_final_state,
    dtype=tf.float32, time_major=True, scope="plain_decoder",
)
```

Since we pass `encoder_final_state` as `initial_state` to the decoder, they should be compatible. This means the same cell type (`LSTMCell` in our case), the same number of `hidden_units` and the same number of layers (a single layer). I suppose this could be relaxed if we additionally passed `encoder_final_state` through a one-layer MLP.

With the encoder, we were not interested in the cell's output. But the decoder's outputs are what we are actually after: we use them to get a distribution over the words of the output sequence.

At this point the `decoder_cell` output is a `hidden_units`-sized vector at every timestep. However, for training and prediction we need logits of size `vocab_size`. The reasonable thing to do is to put a linear layer (a fully-connected layer without an activation function) on top of the LSTM output to get non-normalized logits. This layer is called a projection layer by convention.
```
decoder_logits = tf.contrib.layers.linear(decoder_outputs, vocab_size)

decoder_prediction = tf.argmax(decoder_logits, 2)
```

### Optimizer

```
decoder_logits
```

The RNN outputs a tensor of shape `[max_time, batch_size, hidden_units]`, which the projection layer maps onto `[max_time, batch_size, vocab_size]`. The `vocab_size` part of the shape is static, while `max_time` and `batch_size` are dynamic.

```
stepwise_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
    labels=tf.one_hot(decoder_targets, depth=vocab_size, dtype=tf.float32),
    logits=decoder_logits,
)

loss = tf.reduce_mean(stepwise_cross_entropy)
train_op = tf.train.AdamOptimizer().minimize(loss)

sess.run(tf.global_variables_initializer())
```

### Test forward pass

Did I say that deep learning is a game of shapes? When building a graph, TF will throw errors when static shapes don't match. However, mismatches between dynamic shapes are often only discovered when we try to run something through the graph.

So let's try running something. For that we need to prepare values we will feed into the placeholders.

```
this is key part where everything comes together

@TODO: describe
- how encoder shape is fixed to max
- how decoder shape is arbitrary and determined by inputs, but should probably be longer than encoder's
- how decoder input values are also arbitrary, and how we use GO token, and what are those 0s, and what can be used instead (shifted gold sequence, beam search)
@TODO: add references
```

```
batch_ = [[6], [3, 4], [9, 8, 7]]

batch_, batch_length_ = helpers.batch(batch_)
print('batch_encoded:\n' + str(batch_))

din_, dlen_ = helpers.batch(np.ones(shape=(3, 1), dtype=np.int32),
                            max_sequence_length=4)
print('decoder inputs:\n' + str(din_))

pred_ = sess.run(decoder_prediction,
    feed_dict={
        encoder_inputs: batch_,
        decoder_inputs: din_,
    })
print('decoder predictions:\n' + str(pred_))
```

Successful forward computation — everything is wired correctly.
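As a shape sanity check, the projection + argmax step can be sketched in plain numpy (illustrative only, not the TF graph itself; `W` and `b` stand in for the projection layer's learned weights):

```python
import numpy as np

max_time, batch_size, hidden_units, vocab_size = 4, 2, 20, 10
rng = np.random.default_rng(42)

decoder_outputs = rng.normal(size=(max_time, batch_size, hidden_units))
W = rng.normal(size=(hidden_units, vocab_size))  # projection weights
b = np.zeros(vocab_size)                         # projection bias

# Linear projection applied at every (time, batch) position.
decoder_logits = decoder_outputs @ W + b
decoder_prediction = decoder_logits.argmax(axis=2)

print(decoder_logits.shape)      # (4, 2, 10)
print(decoder_prediction.shape)  # (4, 2)
```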
## Training on the toy task

We will teach our model to memorize and reproduce the input sequence. Sequences will be random, with varying length.

Since random sequences do not contain any structure, the model will not be able to exploit any patterns in the data. It will simply encode each sequence in a thought vector, then decode from it.

```
batch_size = 100

batches = helpers.random_sequences(length_from=3, length_to=8,
                                   vocab_lower=2, vocab_upper=10,
                                   batch_size=batch_size)

print('head of the batch:')
for seq in next(batches)[:10]:
    print(seq)

def next_feed():
    batch = next(batches)
    encoder_inputs_, _ = helpers.batch(batch)
    decoder_targets_, _ = helpers.batch(
        [(sequence) + [EOS] for sequence in batch]
    )
    decoder_inputs_, _ = helpers.batch(
        [[EOS] + (sequence) for sequence in batch]
    )
    return {
        encoder_inputs: encoder_inputs_,
        decoder_inputs: decoder_inputs_,
        decoder_targets: decoder_targets_,
    }
```

Given encoder_inputs `[5, 6, 7]`, decoder_targets would be `[5, 6, 7, 1]`, where 1 is the `EOS` token, and decoder_inputs would be `[1, 5, 6, 7]` — decoder_inputs are lagged by one step, passing the previous token as the input at the current step.

```
loss_track = []

max_batches = 3001
batches_in_epoch = 1000

try:
    for batch in range(max_batches):
        fd = next_feed()
        _, l = sess.run([train_op, loss], fd)
        loss_track.append(l)

        if batch == 0 or batch % batches_in_epoch == 0:
            print('batch {}'.format(batch))
            print('  minibatch loss: {}'.format(sess.run(loss, fd)))
            predict_ = sess.run(decoder_prediction, fd)
            for i, (inp, pred) in enumerate(zip(fd[encoder_inputs].T, predict_.T)):
                print('  sample {}:'.format(i + 1))
                print('    input     > {}'.format(inp))
                print('    predicted > {}'.format(pred))
                if i >= 2:
                    break
            print()
except KeyboardInterrupt:
    print('training interrupted')

%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(loss_track)
print('loss {:.4f} after {} examples (batch_size={})'.format(loss_track[-1], len(loss_track)*batch_size, batch_size))
```

Something is definitely getting learned.
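The one-step lag described above is easy to verify in isolation (a small illustrative check, assuming `EOS = 1` as in the notebook):

```python
EOS = 1

def make_decoder_io(sequence):
    """Build (decoder_inputs, decoder_targets) for one sequence,
    mirroring next_feed(): targets end with EOS, inputs start with EOS."""
    decoder_targets = sequence + [EOS]
    decoder_inputs = [EOS] + sequence
    return decoder_inputs, decoder_targets

inputs, targets = make_decoder_io([5, 6, 7])
print(inputs)   # [1, 5, 6, 7]
print(targets)  # [5, 6, 7, 1]
# At every step t, inputs[t + 1] == targets[t]: the previous target token
# is fed as the current input.
```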
# Limitations of the model

We have no control over the transitions of `tf.nn.dynamic_rnn`; it is unrolled in a single sweep. Some of the things that are not possible without such control:

- We can't feed previously generated tokens without falling back to Python loops. This means *we cannot do efficient inference with a dynamic_rnn decoder*!
- We can't use attention, because attention conditions decoder inputs on its previous state.

The solution is to use `tf.nn.raw_rnn` instead of `tf.nn.dynamic_rnn` for the decoder, as we will do in tutorial #2.

# Fun things to try (aka Exercises)

- In `copy_task`, try increasing `max_sequence_size` and `vocab_upper`. Observe slower learning and general performance degradation.
- For `decoder_inputs`, instead of the shifted target sequence `[<EOS> W X Y Z]`, try feeding `[<EOS> <PAD> <PAD> <PAD>]`, like we did when we tested the forward pass. Does it break things? Or does it slow learning?
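What "falling back to Python loops" means for inference can be sketched abstractly: a greedy decoder that feeds each predicted token back in, one call per step. Here `step` is a stand-in function, not real TF code — in TF1 each iteration would be a separate `sess.run`, which is exactly the inefficiency being discussed:

```python
import numpy as np

EOS = 1
vocab_size = 10
rng = np.random.default_rng(0)

def step(state, token):
    """Stand-in for one decoder step: returns (new_state, logits).
    A real implementation would run the LSTM cell + projection layer."""
    new_state = np.tanh(state + token)    # fake state update
    logits = rng.normal(size=vocab_size)  # fake logits
    return new_state, logits

def greedy_decode(initial_state, max_steps=5):
    state, token, output = initial_state, EOS, []
    for _ in range(max_steps):
        state, logits = step(state, token)
        token = int(np.argmax(logits))    # feed prediction back in
        output.append(token)
        if token == EOS:                  # stop once EOS is emitted
            break
    return output

decoded = greedy_decode(np.zeros(4))
print(len(decoded) <= 5)  # True
```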
# NLP Tutorial With SpaCy

* NLP is a form of AI, or Artificial Intelligence (building systems that can do intelligent things).
* NLP, or Natural Language Processing, is about building systems that can understand everyday language. It is a subset of AI.
* SpaCy is by Explosion.ai (Matthew Honnibal).

![Imgur](https://i.imgur.com/v55ZxW8.png)

## Basic Terms

* Tokenization: Segmenting text into words, punctuation marks, etc.
* Part-of-Speech (POS) Tagging: Assigning word types to tokens, like verb or noun.
* Dependency Parsing: Assigning syntactic dependency labels describing the relations between individual tokens, like subject or object.
* Lemmatization: Assigning the base forms of words. For example, the lemma of "was" is "be", and the lemma of "rats" is "rat".
* Sentence Boundary Detection (SBD): Finding and segmenting individual sentences.
* Named Entity Recognition (NER): Labelling named "real-world" objects, like persons, companies or locations.
* Similarity: Comparing words, text spans and documents, and how similar they are to each other.
* Text Classification: Assigning categories or labels to a whole document, or parts of a document.
* Rule-based Matching: Finding sequences of tokens based on their texts and linguistic annotations, similar to regular expressions.
* Training: Updating and improving a statistical model's predictions.
* Serialization: Saving objects to files or byte strings.
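To appreciate what spaCy's tokenizer does, it helps to see how crude a hand-rolled tokenizer is by comparison — a naive regex sketch (illustrative only; spaCy's tokenizer handles contractions, prefixes, suffixes and exceptions far more carefully):

```python
import re

def naive_tokenize(text):
    """Split text into word and punctuation tokens with a single regex."""
    return re.findall(r"\w+|[^\w\s]", text)

print(naive_tokenize("SpaCy is cool, isn't it?"))
# ['SpaCy', 'is', 'cool', ',', 'isn', "'", 't', 'it', '?']
```

Notice how it mangles "isn't" into three tokens — spaCy would instead split it into the meaningful pieces "is" and "n't".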
## Load the Package

```
import spacy
nlp = spacy.load("en")
```

![Imgur](https://i.imgur.com/q4EfY8Z.jpg)

## Reading A Document or Text

```
docx = nlp("SpaCy is cool tool for nlp")
docx

docx2 = nlp(u"SpaCy is an amazing tool like nltk")
docx2
```

# Sentence Tokens

* Tokenization == splitting or segmenting the text into sentences or tokens
* `.sents`

#### Word Tokens

* Splitting or segmenting the text into words
* `.text`

```
docx2

# Word tokens
for token in docx2:
    print(token.text)

[token.text for token in docx2]
```

#### Similar to splitting on spaces

```
docx2.text.split(" ")
```

## More about words

* `.shape_` ==> the shape of a word, e.g. capitalized, lowercase, etc.
* `.is_alpha` ==> returns a boolean (True or False): is the token alphabetic?
* `.is_stop` ==> returns a boolean (True or False): is the token a stop word?

```
docx2

for word in docx2:
    print(word.text, word.shape_)

ex_doc = nlp("Hello hello HELLO HeLLo")
for word in ex_doc:
    print("Token =>", word.text, " Shape:", word.shape_,
          " Alpha =>", word.is_alpha, " Stop Word =>", word.is_stop)
```

# Part Of Speech Tagging

* NB: an attribute ending in an underscore (e.g. `pos_`) returns the readable string representation of the attribute.
* `.pos` / `.pos_` ==> exposes the Google Universal POS tag set; simple
* `.tag` / `.tag_` ==> exposes the Treebank tag set; detailed, useful for training your own model
* Uses: sentiment analysis, homonym disambiguation, prediction

```
doc = nlp("He drinks a drink")
for word in doc:
    print("Word : ", word.text, ",", " Part of Speech : ", word.pos_)

doc1 = nlp("I fish a fish")
for word in doc1:
    print("Word : ", word.text, ",", " Part of Speech : ", word.pos_, " ", "Tag : ", word.tag_)
```

### If you want to know the meaning of a POS abbreviation

* `spacy.explain('NN')`

```
spacy.explain('NN')

ex1 = nlp(u"All the faith he had had had no effect on the outcome of his life")
for word in ex1:
    print(("Word : ", word.text, "Tag : ", word.tag_, "Part of Speech : ", word.pos_))
```

### Syntactic Dependency

* It tells us the relation between tokens.

```
ex3 = nlp("Sally likes Sam")
for word in ex3:
    print(("Word : ", word.text, "Tag : ", word.tag_, "Part of Speech : ", word.pos_, " Dependency :", word.dep_))

spacy.explain('nsubj')
```

# Visualizing Dependency using displaCy

* from spacy import displacy
* displacy.serve()
* displacy.render(jupyter=True) # for a Jupyter notebook

```
from spacy import displacy
displacy.render(ex3, style='dep')
```

# Thanks for reading this notebook. Keep in touch with us. Like our page [Quantum.ai](https://www.facebook.com/Quantumaibd)
# Supplementary Material

```
# Setup
# Basic imports
import numpy as np
import os

# Plotting
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split

# Ignore warnings
import warnings
warnings.filterwarnings(action='ignore', module='scipy', message='internal')

# Directory for saving figures
PROJECT_ROOT_DIR = '../'
CHAPTER_ID = 'end_to_end_project'
IMAGE_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)

def save_fig(fig_id, tight_layout=True, fig_extension='png', resolution=300):
    path = os.path.join(IMAGE_PATH, fig_id + "." + fig_extension)
    print("Saving figure:", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format=fig_extension, dpi=resolution)

# Load the data
HOUSING_PATH = os.path.join("../datasets", "housing")

def loading_housing_data(housing_path=HOUSING_PATH):
    csv_path = os.path.join(housing_path, "housing.csv")
    return pd.read_csv(csv_path)

housing = loading_housing_data()
housing["income_cat"] = np.ceil(housing["median_income"] / 1.5)
housing["income_cat"].where(housing["income_cat"] < 5, 5.0, inplace=True)

from sklearn.model_selection import StratifiedShuffleSplit

split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["income_cat"]):
    strat_train_set = housing.loc[train_index]
    strat_test_set = housing.loc[test_index]

for set in (strat_train_set, strat_test_set):
    set.drop(["income_cat"], axis=1, inplace=True)

# Copy the training set and drop the target column, which is kept aside for later evaluation.
housing = strat_train_set.drop("median_house_value", axis=1)
housing_labels = strat_train_set["median_house_value"].copy()

from sklearn.base import BaseEstimator, TransformerMixin

# column indices
rooms_ix, bedrooms_ix, population_ix, household_ix = 3, 4, 5, 6

class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
    def __init__(self, add_bedrooms_per_room=True):  # no *args or **kargs
        self.add_bedrooms_per_room = add_bedrooms_per_room
    def fit(self, X, y=None):
        return self  # nothing else to do
    def
transform(self, X, y=None):
        rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
        population_per_household = X[:, population_ix] / X[:, household_ix]
        if self.add_bedrooms_per_room:
            bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
            return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
        else:
            return np.c_[X, rooms_per_household, population_per_household]

# The encoder below expects a 2D array containing one or more categorical
# columns, so inputs need to be reshaped into that form.
# from sklearn.preprocessing import CategoricalEncoder  # defined below instead
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.utils import check_array
from sklearn.preprocessing import LabelEncoder
from scipy import sparse

# To be studied in more detail later
class CategoricalEncoder(BaseEstimator, TransformerMixin):
    """Encode categorical features as a numeric array.

    The input to this transformer should be a matrix of integers or strings,
    denoting the values taken on by categorical (discrete) features.
    The features can be encoded using a one-hot aka one-of-K scheme
    (``encoding='onehot'``, the default) or converted to ordinal integers
    (``encoding='ordinal'``).

    This encoding is needed for feeding categorical data to many
    scikit-learn estimators, notably linear models and SVMs with the
    standard kernels.

    Read more in the :ref:`User Guide <preprocessing_categorical_features>`.

    Parameters
    ----------
    encoding : str, 'onehot', 'onehot-dense' or 'ordinal'
        The type of encoding to use (default is 'onehot'):

        - 'onehot': encode the features using a one-hot aka one-of-K scheme
          (or also called 'dummy' encoding). This creates a binary column for
          each category and returns a sparse matrix.
        - 'onehot-dense': the same as 'onehot' but returns a dense array
          instead of a sparse matrix.
        - 'ordinal': encode the features as ordinal integers. This results in
          a single column of integers (0 to n_categories - 1) per feature.

    categories : 'auto' or a list of lists/arrays of values.
        Categories (unique values) per feature:

        - 'auto' : Determine categories automatically from the training data.
        - list : ``categories[i]`` holds the categories expected in the ith
          column. The passed categories are sorted before encoding the data
          (used categories can be found in the ``categories_`` attribute).

    dtype : number type, default np.float64
        Desired dtype of output.

    handle_unknown : 'error' (default) or 'ignore'
        Whether to raise an error or ignore if an unknown categorical feature
        is present during transform (default is to raise). When this
        parameter is set to 'ignore' and an unknown category is encountered
        during transform, the resulting one-hot encoded columns for this
        feature will be all zeros. Ignoring unknown categories is not
        supported for ``encoding='ordinal'``.

    Attributes
    ----------
    categories_ : list of arrays
        The categories of each feature determined during fitting. When
        categories were specified manually, this holds the sorted categories
        (in order corresponding with output of `transform`).

    Examples
    --------
    Given a dataset with three features and two samples, we let the encoder
    find the maximum value per feature and transform the data to a binary
    one-hot encoding.

    >>> from sklearn.preprocessing import CategoricalEncoder
    >>> enc = CategoricalEncoder(handle_unknown='ignore')
    >>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])
    ... # doctest: +ELLIPSIS
    CategoricalEncoder(categories='auto', dtype=<... 'numpy.float64'>,
              encoding='onehot', handle_unknown='ignore')
    >>> enc.transform([[0, 1, 1], [1, 0, 4]]).toarray()
    array([[ 1.,  0.,  0.,  1.,  0.,  0.,  1.,  0.,  0.],
           [ 0.,  1.,  1.,  0.,  0.,  0.,  0.,  0.,  0.]])

    See also
    --------
    sklearn.preprocessing.OneHotEncoder : performs a one-hot encoding of
      integer ordinal features. The ``OneHotEncoder`` assumes that input
      features take on values in the range ``[0, max(feature)]`` instead of
      using the unique values.
    sklearn.feature_extraction.DictVectorizer : performs a one-hot encoding
      of dictionary items (also handles string-valued features).
    sklearn.feature_extraction.FeatureHasher : performs an approximate
      one-hot encoding of dictionary items or strings.
    """

    def __init__(self, encoding='onehot', categories='auto', dtype=np.float64,
                 handle_unknown='error'):
        self.encoding = encoding
        self.categories = categories
        self.dtype = dtype
        self.handle_unknown = handle_unknown

    def fit(self, X, y=None):
        """Fit the CategoricalEncoder to X.

        Parameters
        ----------
        X : array-like, shape [n_samples, n_features]
            The data to determine the categories of each feature.

        Returns
        -------
        self
        """
        if self.encoding not in ['onehot', 'onehot-dense', 'ordinal']:
            template = ("encoding should be either 'onehot', 'onehot-dense' "
                        "or 'ordinal', got %s")
            raise ValueError(template % self.handle_unknown)
        if self.handle_unknown not in ['error', 'ignore']:
            template = ("handle_unknown should be either 'error' or "
                        "'ignore', got %s")
            raise ValueError(template % self.handle_unknown)
        if self.encoding == 'ordinal' and self.handle_unknown == 'ignore':
            raise ValueError("handle_unknown='ignore' is not supported for"
                             " encoding='ordinal'")

        X = check_array(X, dtype=np.object, accept_sparse='csc', copy=True)
        n_samples, n_features = X.shape

        self._label_encoders_ = [LabelEncoder() for _ in range(n_features)]

        for i in range(n_features):
            le = self._label_encoders_[i]
            Xi = X[:, i]
            if self.categories == 'auto':
                le.fit(Xi)
            else:
                valid_mask = np.in1d(Xi, self.categories[i])
                if not np.all(valid_mask):
                    if self.handle_unknown == 'error':
                        diff = np.unique(Xi[~valid_mask])
                        msg = ("Found unknown categories {0} in column {1}"
                               " during fit".format(diff, i))
                        raise ValueError(msg)
                le.classes_ = np.array(np.sort(self.categories[i]))

        self.categories_ = [le.classes_ for le in self._label_encoders_]
        return self

    def transform(self, X):
        """Transform X using one-hot encoding.
        Parameters
        ----------
        X : array-like, shape [n_samples, n_features]
            The data to encode.

        Returns
        -------
        X_out : sparse matrix or a 2-d array
            Transformed input.
        """
        X = check_array(X, accept_sparse='csc', dtype=np.object, copy=True)
        n_samples, n_features = X.shape
        X_int = np.zeros_like(X, dtype=np.int)
        X_mask = np.ones_like(X, dtype=np.bool)

        for i in range(n_features):
            valid_mask = np.in1d(X[:, i], self.categories_[i])
            if not np.all(valid_mask):
                if self.handle_unknown == 'error':
                    diff = np.unique(X[~valid_mask, i])
                    msg = ("Found unknown categories {0} in column {1}"
                           " during transform".format(diff, i))
                    raise ValueError(msg)
                else:
                    # Set the problematic rows to an acceptable value and
                    # continue. The rows are marked in `X_mask` and will be
                    # removed later.
                    X_mask[:, i] = valid_mask
                    X[:, i][~valid_mask] = self.categories_[i][0]
            X_int[:, i] = self._label_encoders_[i].transform(X[:, i])

        if self.encoding == 'ordinal':
            return X_int.astype(self.dtype, copy=False)

        mask = X_mask.ravel()
        n_values = [cats.shape[0] for cats in self.categories_]
        n_values = np.array([0] + n_values)
        indices = np.cumsum(n_values)

        column_indices = (X_int + indices[:-1]).ravel()[mask]
        row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),
                                n_features)[mask]
        data = np.ones(n_samples * n_features)[mask]
        out = sparse.csc_matrix((data, (row_indices, column_indices)),
                                shape=(n_samples, indices[-1]),
                                dtype=self.dtype).tocsr()
        if self.encoding == 'onehot-dense':
            return out.toarray()
        else:
            return out

# Another transformer: select a subset of columns
from sklearn.base import BaseEstimator, TransformerMixin

# As above, the dataset needs many transformations applied in a specific
# order, so scikit-learn provides Pipeline for this.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer
from sklearn.preprocessing import StandardScaler

# Create a class to select numerical or categorical columns
# since Scikit-Learn doesn't handle DataFrames yet
class DataFrameSelector(BaseEstimator, TransformerMixin):
    def __init__(self, attribute_names):
        self.attribute_names = attribute_names
    def
fit(self, X, y=None):
        return self
    def transform(self, X):
        return X[self.attribute_names].values

# Only numerical columns can go through the numeric pipeline,
# so the categorical column is dropped here.
housing_num = housing.drop("ocean_proximity", axis=1)

num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]

num_pipeline = Pipeline([
    ('selector', DataFrameSelector(num_attribs)),
    ('imputer', Imputer(strategy="median")),
    ('attr_adder', CombinedAttributesAdder()),
    ('std_scaler', StandardScaler())
])

cat_pipeline = Pipeline([
    ('selector', DataFrameSelector(cat_attribs)),
    ("cat_encoder", CategoricalEncoder(encoding='onehot-dense'))
])

from sklearn.pipeline import FeatureUnion

full_pipeline = FeatureUnion(transformer_list=[
    ("num_pipeline", num_pipeline),
    ("cat_pipeline", cat_pipeline),
])

# Run the combined pipeline to transform numeric and categorical columns together
housing_prepare = full_pipeline.fit_transform(housing)
housing_prepare.shape

# The full dataset is large and slow to fit, so shrink it
housing_prepare = housing_prepare[:3000]
housing_labels = housing_labels[:3000]
```

# 1. SVM predictions with different kernels

```
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

param_grid = [
    {"kernel": ["linear"], "C": [10, 30, 100, 300, 1000, 3000, 10000, 30000]},
    {'kernel': ['rbf'], 'C': [1.0, 3.0, 10., 30., 100., 300., 1000.0],
     'gamma': [0.01, 0.03, 0.1, 0.3, 1.0, 3.0]},
]

svm_reg = SVR()
grid_search = GridSearchCV(svm_reg, param_grid, cv=5,
                           scoring="neg_mean_squared_error",
                           verbose=2, n_jobs=-1)
grid_search.fit(housing_prepare, housing_labels)
```

With the reduced dataset, grid search lets us see the run statistics for each configuration.

```
negative_mse = grid_search.best_score_
rmse = np.sqrt(-negative_mse)
rmse  # much worse than the random forest

grid_search.best_params_
```

# 2.
Replace Grid Search with Randomized Search

```
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import expon, reciprocal

param_distribs = {
    'kernel': ["linear", "rbf"],
    "C": reciprocal(20, 20000),
    "gamma": expon(scale=1.0),
}

svm_reg = SVR()
rnd_search = RandomizedSearchCV(svm_reg, param_distributions=param_distribs,
                                n_iter=50, cv=5,
                                scoring='neg_mean_squared_error',
                                verbose=2, n_jobs=4, random_state=42)
rnd_search.fit(housing_prepare, housing_labels)

# Check the RMSE
negative_mse = rnd_search.best_score_
rmse = np.sqrt(-negative_mse)
rmse  # close, but still not as good as the random forest

rnd_search.best_params_

expon_distrib = expon(scale=1.)
samples = expon_distrib.rvs(10000, random_state=42)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.title("Exponential distribution (scale=1.0)")
plt.hist(samples, bins=50)
plt.subplot(122)
plt.title("Log of this distribution")
plt.hist(np.log(samples), bins=50)
plt.show()

reciprocal_distrib = reciprocal(20, 200000)
samples = reciprocal_distrib.rvs(10000, random_state=42)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.title("Reciprocal distribution (scale=1.0)")
plt.hist(samples, bins=50)
plt.subplot(122)
plt.title("Log of this distribution")
plt.hist(np.log(samples), bins=50)
plt.show()
```

# 3. Add a transformer that selects the most important features

```
from sklearn.base import BaseEstimator, TransformerMixin

def indices_of_top_k(arr, k):
    return np.sort(np.argpartition(np.array(arr), -k)[-k:])

class TopFeatureSelector(BaseEstimator, TransformerMixin):
    def __init__(self, feature_importances, k):
        self.feature_importances = feature_importances
        self.k = k
    def fit(self, X, y=None):
        self.feature_indices_ = indices_of_top_k(self.feature_importances, self.k)
        return self
    def transform(self, X):
        return X[:, self.feature_indices_]
```

# 4. Try building a single pipeline that does the full data preparation plus the final prediction

```
prepare_select_and_predict_pipeline = Pipeline([
    ('preparation', full_pipeline),
    # ('feature_selection', TopFeatureSelector(feature_importances, k)),
    ('svm_reg', SVR(**rnd_search.best_params_))
])
```
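The top-k index selection that `TopFeatureSelector` relies on can be checked in isolation — a small standalone verification using `np.argpartition`:

```python
import numpy as np

def indices_of_top_k(arr, k):
    """Return the (sorted) indices of the k largest values in arr."""
    return np.sort(np.argpartition(np.array(arr), -k)[-k:])

feature_importances = np.array([0.1, 0.7, 0.05, 0.15])
# The two largest importances are 0.7 (index 1) and 0.15 (index 3).
print(indices_of_top_k(feature_importances, 2).tolist())  # [1, 3]
```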
# Target Capture Efficiency and Sequence Divergence

For the Pleurocarpous Moss Tree of Life project, we designed a set of target capture probes from 1KP data and the Physcomitrella proteome. Genes were selected from 1KP orthogroups if there was a Physcomitrella copy and at least four pleurocarpous mosses in the orthogroup. For each selected gene, MycroArray probes were designed from one pleurocarpous moss gene and the orthologous Physcomitrella gene.

For our moss backbone phylogeny, we were able to successfully recover about 100 genes from about 130 moss species. Capture efficiency was sufficient in all peristomate mosses, but very few genes were captured from Sphagnum, and none were captured from liverworts.

In the heatmap below, columns represent genes and rows represent samples. The shading in each cell represents the length of the gene recovered by HybPiper, as a percentage of the target sequence length (darker is more complete).

![hi](img/moss_backbone_heatmap.png)

Most of the white horizontal stripes are mosses outside the Bryopsida (*Sphagnum*, *Andreaea*, *Takakia*), nematodontous mosses (*Diphyscium*, *Buxbaumia*, *Tetraphis*), or liverworts.

Here we investigate exactly how divergent sequences can be before we can no longer capture them efficiently. What is the practical limit for targeted sequence capture?

#### Reading in the data

This table contains four columns:

1. Gene Name
2. Sequence Accession
3. Target Type (Physcomitrella or Pleurocarp)
4. Percent Dissimilarity

Dissimilarity was calculated from a pairwise sequence alignment between the captured sequence and the target sequence, not including gap characters. Captured sequences that were < 50% of the length of the target sequence were removed.

The first task is to find, for each captured sequence, the minimum distance to its two target sequences.
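The dissimilarity metric described above can be sketched as counting mismatches over aligned, non-gap positions — a simplified illustration that assumes the pairwise alignment has already been computed (the project's actual alignment pipeline is not shown here):

```python
def percent_dissimilarity(aligned_a, aligned_b, gap="-"):
    """Percent of mismatched positions between two aligned sequences,
    ignoring any column where either sequence has a gap character."""
    compared = mismatches = 0
    for a, b in zip(aligned_a, aligned_b):
        if a == gap or b == gap:
            continue  # gap characters are excluded from the calculation
        compared += 1
        if a != b:
            mismatches += 1
    return 100.0 * mismatches / compared

# Captured vs. target: 1 mismatch out of 5 ungapped columns = 20%
print(percent_dissimilarity("ATG-CA", "ATGGCT"))  # 20.0
```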
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

minimum_distances = []
capture_distance_fn = "/Users/mjohnson/Desktop/Projects/moss_backbone/nuclear/enrichment/distances.txt"
capture_distance = open(capture_distance_fn)
while True:
    line1 = capture_distance.readline().rstrip().split()
    line2 = capture_distance.readline().rstrip().split()
    if not line2:
        break
    for line in (line1, line2):
        if not line[2].startswith("Physco"):
            line[2] = "Pleurocarp-{}".format(line[0])
    try:
        line1[-1] = float(line1[-1])
        line2[-1] = float(line2[-1])
        min_pdist = np.array((line1[-1], line2[-1])).argmin()
    except ValueError:
        continue
    if min_pdist:
        minimum_distances.append(line2)
    else:
        minimum_distances.append(line1)
capture_distance.close()

capture_distance_df = pd.DataFrame.from_records(minimum_distances, columns=["Gene", "Species", "Target", "Distance"])
capture_distance_df["IsOnekp"] = capture_distance_df.Species.str.contains("onekp")
```

We defined a column "IsOnekp" because we used sequences from the OneKP database to fill in "gaps" in our phylogeny, especially where target capture was inefficient. We will treat sequences obtained through targeted sequencing separately from those obtained from OneKP.

First, a histogram of all pairwise distances between sequences and targets.

```
capture_distance_df[(capture_distance_df.Gene == "10176") & (capture_distance_df.Species.str.lower().str.startswith("takakia"))]

groups = capture_distance_df.groupby("IsOnekp")
fig, ax = plt.subplots()
ax.margins(0.05)  # Optional, just adds 5% padding to the autoscaling
for name, group in groups:
    ax.hist(group.Distance, bins=40, alpha=0.5)
ax.legend(["Captured", "OneKp"])
plt.xlabel("Percent Dissimilarity")
plt.ylabel("Number of Sequences")
plt.rcParams['figure.figsize'] = (3, 4)
plt.show()
```

The histogram makes it clear that sequence capture was most effective when the percent dissimilarity is less than 25%.
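To put a number on that claim, the fraction of captured (non-1KP) sequences under the 25% threshold can be computed directly; the toy DataFrame below is a stand-in for `capture_distance_df`, which has the same columns:

```python
import pandas as pd

# Toy stand-in for capture_distance_df (only the relevant columns)
df = pd.DataFrame({
    "Distance": [5.0, 12.0, 28.0, 31.0, 18.0],
    "IsOnekp": [False, False, False, True, False],
})

captured = df[~df.IsOnekp]                      # keep only capture-derived rows
frac_below_25 = (captured.Distance < 25).mean()  # boolean mean = fraction True
print(frac_below_25)  # -> 0.75
```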
The sequences from 1KP are skewed to have higher divergence because these are from species that were hard to obtain through sequence capture. Normally it would be tough to tell whether a sequence could not be recovered because it is too divergent, or because of some other technical reason. However, we can actually test this because of the data from 1KP. For some species, we have data from both sequence capture and OneKP. For these species we can make a direct comparison to see where the "true" divergence is.

In the example below, all sequences attributed to some mosses with high divergence to the targets are summarized based on whether they are from OneKP (True) or capture data (False). It's clear that although the average dissimilarity between any sequence and a target sequence is about 30%, recovering a sequence using targeted capture with > 30% divergence is not as likely.

```
import seaborn as sns

# Set up a data frame containing only the divergent species, from both capture and OneKP data.
# .copy() avoids chained-assignment warnings when overwriting the Species column.
takakia = capture_distance_df[(capture_distance_df.Species.str.upper().str.contains("TAKAKIA"))].copy()
takakia["Species"] = "Takakia"
diphyscium = capture_distance_df[(capture_distance_df.Species.str.lower().str.contains("diphyscium"))].copy()
diphyscium["Species"] = "Diphyscium"
buxbaumia = capture_distance_df[(capture_distance_df.Species.str.lower().str.contains("buxbaumia"))].copy()
buxbaumia["Species"] = "Buxbaumia"
tetraphis = capture_distance_df[(capture_distance_df.Species.str.lower().str.contains("tetraphis"))].copy()
tetraphis["Species"] = "Tetraphis"
andreaea = capture_distance_df[(capture_distance_df.Species.str.lower().str.contains("andrea"))].copy()
andreaea["Species"] = "Andreaea"
sphagnum = capture_distance_df[(capture_distance_df.Species.str.lower().str.startswith("sphagnum"))].copy()
sphagnum["Species"] = "Sphagnum"

# Combine the dataframes
divergent_mosses = pd.concat([takakia, sphagnum, andreaea, diphyscium, buxbaumia, tetraphis])
divergent_mosses.drop(["Gene", "Target"], axis=1, inplace=True)
divergent_mosses

# Plot the data
g = sns.FacetGrid(divergent_mosses, col="Species", size=5, aspect=1, col_wrap=3)
g = g.map(sns.violinplot, "IsOnekp", "Distance", bins=20, color=".8", inner=None)
g = g.map(sns.stripplot, "IsOnekp", "Distance", jitter=True)
```

### Conclusions

Based on the data collected from the mosses, there is a clear pattern for when to expect efficient sequence capture. Good sequence recovery should be expected up to a sequence divergence of about 25%, and a large drop-off in gene recovery rate should be expected with a sequence divergence of more than 30%.

```
g.savefig("/Users/mjohnson/Desktop/Projects/moss_backbone/nuclear/enrichment/Hybseq_vs_OneKP.svg")

sphagnum_1kp = capture_distance_df[(capture_distance_df.Species.str.startswith("sphagnum")) & (capture_distance_df.Target.str.startswith("Physco"))].Gene.value_counts()
sphagnum_hybseq = capture_distance_df[(capture_distance_df.Species.str.startswith("Sphagnum")) & (capture_distance_df.Target.str.startswith("Physco"))].Gene.value_counts()
takakia_1kp = capture_distance_df[(capture_distance_df.Species.str.startswith("takakia")) & (capture_distance_df.Target.str.startswith("Physco"))].Gene.value_counts()
takakia_hybseq = capture_distance_df[(capture_distance_df.Species.str.startswith("Takakia")) & (capture_distance_df.Target.str.startswith("Physco"))].Gene.value_counts()

len(takakia[takakia.IsOnekp == False].Gene.value_counts())

median_sphagnum_onekp = sphagnum[sphagnum.IsOnekp == True].Distance.median()
numgenes_sphagnum_onekp = len(sphagnum[sphagnum.IsOnekp == True].Gene.value_counts())
median_sphagnum_hybseq = sphagnum[sphagnum.IsOnekp == False].Distance.median()
numgenes_sphagnum_hybseq = len(sphagnum[sphagnum.IsOnekp == False].Gene.value_counts())

print('''The number of Sphagnum genes recovered via OneKP was {} with a median PWD of {}.
The number of Sphagnum genes recovered via Hybseq was {} with a median PWD of {}.\n'''.format( numgenes_sphagnum_onekp, median_sphagnum_onekp, numgenes_sphagnum_hybseq, median_sphagnum_hybseq)) median_takakia_onekp = takakia[takakia.IsOnekp == True].Distance.median() numgenes_takakia_onekp = len(takakia[takakia.IsOnekp == True].Gene.value_counts()) median_takakia_hybseq = takakia[takakia.IsOnekp == False].Distance.median() numgenes_takakia_hybseq = len(takakia[takakia.IsOnekp == False].Gene.value_counts()) print('''The number of takakia genes recovered via OneKP was {} with a median PWD of {}. The number of takakia genes recovered via Hybseq was {} with a median PWD of {}.\n'''.format( numgenes_takakia_onekp, median_takakia_onekp, numgenes_takakia_hybseq, median_takakia_hybseq)) ```
## BO with TuRBO-1 and TS/qEI In this tutorial, we show how to implement Trust Region Bayesian Optimization (TuRBO) [1] in a closed loop in BoTorch. This implementation uses one trust region (TuRBO-1) and supports either parallel expected improvement (qEI) or Thompson sampling (TS). We optimize the $20D$ Ackley function on the domain $[-5, 10]^{20}$ and show that TuRBO-1 outperforms qEI as well as Sobol. Since botorch assumes a maximization problem, we will attempt to maximize $-f(x)$ to achieve $\max_x -f(x)=0$. [1]: [Eriksson, David, et al. Scalable global optimization via local Bayesian optimization. Advances in Neural Information Processing Systems. 2019](https://proceedings.neurips.cc/paper/2019/file/6c990b7aca7bc7058f5e98ea909e924b-Paper.pdf) ``` import os import math from dataclasses import dataclass import torch from botorch.acquisition import qExpectedImprovement from botorch.fit import fit_gpytorch_model from botorch.generation import MaxPosteriorSampling from botorch.models import SingleTaskGP from botorch.optim import optimize_acqf from botorch.test_functions import Ackley from botorch.utils.transforms import unnormalize from torch.quasirandom import SobolEngine import gpytorch from gpytorch.constraints import Interval from gpytorch.kernels import MaternKernel, ScaleKernel from gpytorch.likelihoods import GaussianLikelihood from gpytorch.mlls import ExactMarginalLogLikelihood from gpytorch.priors import HorseshoePrior device = torch.device("cuda" if torch.cuda.is_available() else "cpu") dtype = torch.double SMOKE_TEST = os.environ.get("SMOKE_TEST") ``` ## Optimize the 20-dimensional Ackley function The goal is to minimize the popular Ackley function: $f(x_1,\ldots,x_d) = -20\exp\left(-0.2 \sqrt{\frac{1}{d} \sum_{j=1}^d x_j^2} \right) -\exp \left( \frac{1}{d} \sum_{j=1}^d \cos(2 \pi x_j) \right) + 20 + e$ over the domain $[-5, 10]^{20}$. The global optimal value of $0$ is attained at $x_1 = \ldots = x_d = 0$. 
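As a sanity check of the formula (and of the stated optimum), here is a plain NumPy version of Ackley, separate from the `botorch.test_functions.Ackley` class used below:

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    # Standard Ackley function for a 1-D input array x, matching the
    # formula above with a=20, b=0.2, c=2*pi.
    d = len(x)
    term1 = -a * np.exp(-b * np.sqrt(np.sum(x ** 2) / d))
    term2 = -np.exp(np.sum(np.cos(c * x)) / d)
    return term1 + term2 + a + np.e

print(ackley(np.zeros(20)))  # ~0.0 at the global optimum x = 0
```

At the origin, `term1 = -a` and `term2 = -e`, so the `+ a + np.e` offset cancels both and the value is zero, as claimed.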
As mentioned above, since botorch assumes a maximization problem, we instead maximize $-f(x)$.

```
fun = Ackley(dim=20, negate=True).to(dtype=dtype, device=device)
fun.bounds[0, :].fill_(-5)
fun.bounds[1, :].fill_(10)
dim = fun.dim
lb, ub = fun.bounds

batch_size = 4
n_init = 2 * dim
max_cholesky_size = float("inf")  # Always use Cholesky

def eval_objective(x):
    """This is a helper function we use to unnormalize and evaluate a point"""
    return fun(unnormalize(x, fun.bounds))
```

## Maintain the TuRBO state

TuRBO needs to maintain a state, which includes the length of the trust region, success and failure counters, success and failure tolerance, etc. In this tutorial we store the state in a dataclass and update the state of TuRBO after each batch evaluation.

**Note**: These settings assume that the domain has been scaled to $[0, 1]^d$ and that the same batch size is used for each iteration.

```
@dataclass
class TurboState:
    dim: int
    batch_size: int
    length: float = 0.8
    length_min: float = 0.5 ** 7
    length_max: float = 1.6
    failure_counter: int = 0
    failure_tolerance: int = float("nan")  # Note: Post-initialized
    success_counter: int = 0
    success_tolerance: int = 10  # Note: The original paper uses 3
    best_value: float = -float("inf")
    restart_triggered: bool = False

    def __post_init__(self):
        self.failure_tolerance = math.ceil(
            max([4.0 / self.batch_size, float(self.dim) / self.batch_size])
        )

def update_state(state, Y_next):
    if max(Y_next) > state.best_value + 1e-3 * math.fabs(state.best_value):
        state.success_counter += 1
        state.failure_counter = 0
    else:
        state.success_counter = 0
        state.failure_counter += 1

    if state.success_counter == state.success_tolerance:  # Expand trust region
        state.length = min(2.0 * state.length, state.length_max)
        state.success_counter = 0
    elif state.failure_counter == state.failure_tolerance:  # Shrink trust region
        state.length /= 2.0
        state.failure_counter = 0

    state.best_value = max(state.best_value, max(Y_next).item())
    if state.length < state.length_min:
        state.restart_triggered = True
    return state
```

## Take a look at the state

```
state = TurboState(dim=dim, batch_size=batch_size)
print(state)
```

## Generate initial points

This generates an initial set of Sobol points that we use to start off the BO loop.

```
def get_initial_points(dim, n_pts, seed=0):
    sobol = SobolEngine(dimension=dim, scramble=True, seed=seed)
    X_init = sobol.draw(n=n_pts).to(dtype=dtype, device=device)
    return X_init
```

## Generate new batch

Given the current `state` and a probabilistic (GP) `model` built from observations `X` and `Y`, we generate a new batch of points. This method works on the domain $[0, 1]^d$, so make sure not to pass in observations from the true domain. `unnormalize` is called before the true function is evaluated, which maps the points back to the original domain. We support either TS or qEI, which can be specified via the `acqf` argument.

```
def generate_batch(
    state,
    model,  # GP model
    X,  # Evaluated points on the domain [0, 1]^d
    Y,  # Function values
    batch_size,
    n_candidates=None,  # Number of candidates for Thompson sampling
    num_restarts=10,
    raw_samples=512,
    acqf="ts",  # "ei" or "ts"
):
    assert acqf in ("ts", "ei")
    assert X.min() >= 0.0 and X.max() <= 1.0 and torch.all(torch.isfinite(Y))
    if n_candidates is None:
        n_candidates = min(5000, max(2000, 200 * X.shape[-1]))

    # Scale the TR to be proportional to the lengthscales
    x_center = X[Y.argmax(), :].clone()
    weights = model.covar_module.base_kernel.lengthscale.squeeze().detach()
    weights = weights / weights.mean()
    weights = weights / torch.prod(weights.pow(1.0 / len(weights)))
    tr_lb = torch.clamp(x_center - weights * state.length / 2.0, 0.0, 1.0)
    tr_ub = torch.clamp(x_center + weights * state.length / 2.0, 0.0, 1.0)

    if acqf == "ts":
        dim = X.shape[-1]
        sobol = SobolEngine(dim, scramble=True)
        pert = sobol.draw(n_candidates).to(dtype=dtype, device=device)
        pert = tr_lb + (tr_ub - tr_lb) * pert

        # Create a perturbation mask
        prob_perturb = min(20.0 / dim, 1.0)
        mask = (
            torch.rand(n_candidates, dim, dtype=dtype, device=device)
            <= prob_perturb
        )
        ind = torch.where(mask.sum(dim=1) == 0)[0]
        # torch.randint's upper bound is exclusive, so use dim (not dim - 1)
        # to allow any dimension to be perturbed
        mask[ind, torch.randint(0, dim, size=(len(ind),), device=device)] = 1

        # Create candidate points from the perturbations and the mask
        X_cand = x_center.expand(n_candidates, dim).clone()
        X_cand[mask] = pert[mask]

        # Sample on the candidate points
        thompson_sampling = MaxPosteriorSampling(model=model, replacement=False)
        with torch.no_grad():  # We don't need gradients when using TS
            X_next = thompson_sampling(X_cand, num_samples=batch_size)
    elif acqf == "ei":
        # Use the Y passed into this function, not a global variable
        ei = qExpectedImprovement(model, Y.max(), maximize=True)
        X_next, acq_value = optimize_acqf(
            ei,
            bounds=torch.stack([tr_lb, tr_ub]),
            q=batch_size,
            num_restarts=num_restarts,
            raw_samples=raw_samples,
        )

    return X_next
```

## Optimization loop

This simple loop runs one instance of TuRBO-1 with Thompson sampling until convergence. TuRBO-1 is a local optimizer that can be used for a fixed evaluation budget in a multi-start fashion. Once TuRBO converges, `state["restart_triggered"]` will be set to true and the run should be aborted. If you want to run more evaluations with TuRBO, you simply generate a new set of initial points and then keep generating batches until convergence or until the evaluation budget has been exceeded. It's important to note that evaluations from previous instances are discarded when TuRBO restarts.

NOTE: We use a `SingleTaskGP` with a noise constraint to keep the noise from getting too large, as the problem is noise-free.
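The expand/shrink bookkeeping is easy to check in isolation. The sketch below re-implements just the success path of the state update (a simplified stand-in, not the notebook's `update_state`, and it omits the failure/shrink branch):

```python
from dataclasses import dataclass

@dataclass
class MiniState:
    length: float = 0.8            # current trust-region length
    length_max: float = 1.6        # cap on expansion
    success_counter: int = 0
    success_tolerance: int = 10    # same default as TurboState above

def record_success(state):
    # One "success": a batch improved on the incumbent best value.
    state.success_counter += 1
    if state.success_counter == state.success_tolerance:
        state.length = min(2.0 * state.length, state.length_max)  # expand TR
        state.success_counter = 0

state = MiniState()
for _ in range(10):  # ten consecutive successes trigger one expansion
    record_success(state)
print(state.length)  # -> 1.6 (doubled from 0.8, capped at length_max)
```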
``` X_turbo = get_initial_points(dim, n_init) Y_turbo = torch.tensor( [eval_objective(x) for x in X_turbo], dtype=dtype, device=device ).unsqueeze(-1) state = TurboState(dim, batch_size=batch_size) NUM_RESTARTS = 10 if not SMOKE_TEST else 2 RAW_SAMPLES = 512 if not SMOKE_TEST else 4 N_CANDIDATES = min(5000, max(2000, 200 * dim)) if not SMOKE_TEST else 4 while not state.restart_triggered: # Run until TuRBO converges # Fit a GP model train_Y = (Y_turbo - Y_turbo.mean()) / Y_turbo.std() likelihood = GaussianLikelihood(noise_constraint=Interval(1e-8, 1e-3)) covar_module = ScaleKernel( # Use the same lengthscale prior as in the TuRBO paper MaternKernel(nu=2.5, ard_num_dims=dim, lengthscale_constraint=Interval(0.005, 4.0)) ) model = SingleTaskGP(X_turbo, train_Y, covar_module=covar_module, likelihood=likelihood) mll = ExactMarginalLogLikelihood(model.likelihood, model) # Do the fitting and acquisition function optimization inside the Cholesky context with gpytorch.settings.max_cholesky_size(max_cholesky_size): # Fit the model fit_gpytorch_model(mll) # Create a batch X_next = generate_batch( state=state, model=model, X=X_turbo, Y=train_Y, batch_size=batch_size, n_candidates=N_CANDIDATES, num_restarts=NUM_RESTARTS, raw_samples=RAW_SAMPLES, acqf="ts", ) Y_next = torch.tensor( [eval_objective(x) for x in X_next], dtype=dtype, device=device ).unsqueeze(-1) # Update state state = update_state(state=state, Y_next=Y_next) # Append data X_turbo = torch.cat((X_turbo, X_next), dim=0) Y_turbo = torch.cat((Y_turbo, Y_next), dim=0) # Print current status print( f"{len(X_turbo)}) Best value: {state.best_value:.2e}, TR length: {state.length:.2e}" ) ``` ## GP-EI As a baseline, we compare TuRBO to qEI ``` X_ei = get_initial_points(dim, n_init) Y_ei = torch.tensor( [eval_objective(x) for x in X_ei], dtype=dtype, device=device ).unsqueeze(-1) while len(Y_ei) < len(Y_turbo): train_Y = (Y_ei - Y_ei.mean()) / Y_ei.std() likelihood = GaussianLikelihood(noise_constraint=Interval(1e-8, 1e-3)) 
    model = SingleTaskGP(X_ei, train_Y, likelihood=likelihood)
    mll = ExactMarginalLogLikelihood(model.likelihood, model)
    fit_gpytorch_model(mll)

    # Create a batch
    ei = qExpectedImprovement(model, train_Y.max(), maximize=True)
    candidate, acq_value = optimize_acqf(
        ei,
        bounds=torch.stack(
            [
                torch.zeros(dim, dtype=dtype, device=device),
                torch.ones(dim, dtype=dtype, device=device),
            ]
        ),
        q=batch_size,
        num_restarts=NUM_RESTARTS,
        raw_samples=RAW_SAMPLES,
    )
    Y_next = torch.tensor(
        [eval_objective(x) for x in candidate], dtype=dtype, device=device
    ).unsqueeze(-1)

    # Append data
    X_ei = torch.cat((X_ei, candidate), axis=0)
    Y_ei = torch.cat((Y_ei, Y_next), axis=0)

    # Print current status
    print(f"{len(X_ei)}) Best value: {Y_ei.max().item():.2e}")
```

## Sobol

```
X_Sobol = SobolEngine(dim, scramble=True, seed=0).draw(len(X_turbo)).to(dtype=dtype, device=device)
Y_Sobol = torch.tensor([eval_objective(x) for x in X_Sobol], dtype=dtype, device=device).unsqueeze(-1)
```

## Compare the methods

```
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import rc
%matplotlib inline

names = ["TuRBO-1", "EI", "Sobol"]
runs = [Y_turbo, Y_ei, Y_Sobol]
fig, ax = plt.subplots(figsize=(8, 6))
for name, run in zip(names, runs):
    fx = np.maximum.accumulate(run.cpu())
    plt.plot(fx, marker="", lw=3)

plt.plot([0, len(Y_turbo)], [fun.optimal_value, fun.optimal_value], "k--", lw=3)
plt.ylabel("Function value", fontsize=18)
plt.xlabel("Number of evaluations", fontsize=18)
plt.title("20D Ackley", fontsize=24)
plt.xlim([0, len(Y_turbo)])
plt.ylim([-15, 1])
plt.grid(True)
plt.tight_layout()
plt.legend(
    names + ["Global optimal value"],
    loc="lower center",
    bbox_to_anchor=(0, -0.08, 1, 1),
    bbox_transform=plt.gcf().transFigure,
    ncol=4,
    fontsize=16,
)
plt.show()
```
### Import libraries and modify notebook settings

```
# Import libraries
import os
import sys
import h5py
import numpy as np
import pandas as pd
import librosa
#import librosa.display
import matplotlib.pyplot as plt

# Modify notebook settings
%matplotlib inline
from IPython.display import Audio
```

### Create paths to data folders and files

```
# Create a variable for the project root directory
proj_root = os.path.join(os.pardir)

# Save path to the raw metadata file
# "UrbanSound8K.csv"
metadata_file = os.path.join(proj_root, "data", "raw", "UrbanSound8K", "metadata", "UrbanSound8K.csv")

# Save path to the raw audio files
raw_audio_path = os.path.join(proj_root, "data", "raw", "UrbanSound8K", "audio")

# Save the path to the folder that will contain
# the interim data sets for modeling:
# /data/interim
interim_data_dir = os.path.join(proj_root, "data", "interim")

# Save path to the folder for the
# spectrogram arrays that we will generate
spectrogram_arrays_path = os.path.join(interim_data_dir, "spectrogram_arrays")

# add the 'src' directory as one where we can import modules
src_dir = os.path.join(proj_root, "src")
sys.path.append(src_dir)
```

### Inspect the metadata

```
df_metadata = pd.read_csv(metadata_file)
df_metadata.head()

total_obs = len(df_metadata)
total_obs
```

#### Is the proportion of observations for each class roughly similar across the ten folds?

Groupby class and fold

```
df_metadata.groupby(['class','fold'])['fold'].count().unstack()
```

The proportion of observations for each class is roughly similar across the ten folds.

#### Do all of the audio clips have the same length?

We want to only use clips that have the same temporal length. Otherwise, the shape (dimensions) of features that we feed the CNN would not be uniform. In other words, once we create spectrograms from each `.wav` file, we want all of the spectrograms to have the same width.
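As a back-of-the-envelope check: with librosa's defaults (`hop_length=512` and centered frames — an assumption, since the notebook does not override them), a 4-second clip at 22,050 Hz yields 173 spectrogram frames, which is exactly the width the arrays are later sliced to:

```python
# Number of STFT/mel frames librosa produces with centered analysis:
# 1 + floor(n_samples / hop_length)
sr = 22050          # sample rate used in this notebook (global_sr)
hop_length = 512    # librosa default
clip_seconds = 4

n_samples = clip_seconds * sr            # 88,200 samples
n_frames = 1 + n_samples // hop_length
print(n_frames)  # -> 173
```

This is why the code below trims every spectrogram to `[:, :, :173, :]`: a few clips are a hair longer than 4 s and produce 174 frames.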
```
(df_metadata.end - df_metadata.start).value_counts().head(25)
```

The majority of the audio clips are approximately 4 seconds long. Therefore, we will only use audio clips that are approximately 4 seconds long and exclude the other audio clips.

Let us filter for the observations that had audio clips that are approximately 4 seconds long.

```
# Filter for the observations that had audio clips
# that are approximately 4 seconds long.
bool_mask = np.isclose((df_metadata.end - df_metadata.start), 4, rtol=1e-05)
df_metadata_4s = df_metadata[bool_mask]
```

How much data are we left with?

```
len(df_metadata_4s)
len(df_metadata_4s) / total_obs
```

We still have 7333 observations, or about 84% of the original data.

#### Are the folds still even? Does each class have enough data?

After filtering for the observations that had audio clips that are approximately 4 seconds long, does each class still have enough data? Are there roughly the same number of observations for each class across all folds?

```
df_metadata_4s.groupby(['class','fold'])['fold'].count().unstack()
```

It appears that many of the audio clips that were less than 4 seconds long were recordings of gun shots. There are now too few observations of the `gun_shot` class in our filtered data set to create a predictive model for that class. We will remove the few remaining observations of the `gun_shot` class.

In a similar vein, there may be too few observations of the `car_horn` class to create a predictive model for that class. Therefore, we will remove observations of the `car_horn` class.

```
# Create boolean filters
not_gun_shot = (df_metadata_4s['class'] != 'gun_shot')
not_car_horn = (df_metadata_4s['class'] != 'car_horn')

# Filter the df
df_metadata_filtered = df_metadata_4s[not_gun_shot & not_car_horn]

# reset_index
df_metadata_filtered.reset_index(drop=False, inplace=True)

len(df_metadata_filtered)
len(df_metadata_filtered) / total_obs
```

We still have 7114 observations, or about 81% of the original data.
```
df_metadata_filtered.groupby(['class','fold'])['fold'].count().unstack()
```

### Are all slice_file_name obs unique?

```
print(df_metadata.slice_file_name.value_counts().max(), ';',
      df_metadata_filtered.slice_file_name.value_counts().max())
```

Yes, all observations of slice_file_name are unique.

### One-hot encode the target variable: classID

```
# pd.get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False, columns=None, sparse=False, drop_first=False)
df_interim = pd.get_dummies(df_metadata_filtered, prefix='y', prefix_sep='_',
                            columns=['classID'], drop_first=False)
df_interim.head()
```

### Drop unneeded columns and rearrange columns

```
df_interim = df_metadata_filtered[['classID', 'slice_file_name', 'fold']]
df_interim.head()
```

### Sort and reindex the DataFrame

```
df_interim = df_interim.sort_values(['fold', 'slice_file_name'])
df_interim.reset_index(drop=True, inplace=True)
df_interim.head()
```

### Create a key table for matching 'classID' with 'class'

```
df_temp = df_metadata.groupby(['class','classID'])['classID'].count().unstack()
df_temp

classID_list = list(df_temp.columns)
class_list = list(df_temp.index)
df_class_key = pd.DataFrame({'class': class_list, 'classID': classID_list})
df_class_key
```

## Define train and test set data.

Define folds 1 through 8 to be train data. Define folds 9 and 10 to be test data.

```
df_interim['test_data'] = df_interim.fold >= 9
df_interim.head()

np.sum(df_interim.test_data == True)
np.sum(df_interim.test_data == False)
np.sum(df_interim.test_data == False) / len(df_interim)

df_test = df_interim[df_interim.test_data == True]
df_test.head()

df_train = df_interim[df_interim.test_data == False]
df_train.head()
```

## Save `df_interim` and `df_class_key` to new csv files.
``` new_file_name = 'metadata_interim.csv' new_file_path = os.path.join(interim_data_dir, new_file_name) df_interim.to_csv(new_file_path) new_file_name = 'classID_key.csv' new_file_path = os.path.join(interim_data_dir, new_file_name) df_class_key.to_csv(new_file_path) new_file_name = 'metadata_train.csv' new_file_path = os.path.join(interim_data_dir, new_file_name) df_train.to_csv(new_file_path) new_file_name = 'metadata_test.csv' new_file_path = os.path.join(interim_data_dir, new_file_name) df_test.to_csv(new_file_path) ``` ## Process audio files ``` df_train.head() global_sr = 22050 global_n_mels = 96 #pitch_shift_list = [None, -4, -3, -2, -1, 1, 2, 3, 4] pitch_shift_list = [None, -4, -2, 2, 4] time_stretch_list = [None, 0.8] for index, row in df_train.iterrows(): # Save path to the raw audio files fold_name = 'fold' + str(row['fold']) fold_path = os.path.join(raw_audio_path, fold_name) # Full path to the audio_file audio_file = row['slice_file_name'] audio_path = os.path.join(fold_path, audio_file) # Load the .wav audio_file aud_array, sr = librosa.load(audio_path, sr=global_sr) if index >= 1: break count = 0 for ps in pitch_shift_list: for ts in time_stretch_list: aud_array_aug = aud_array # Pitch shift if ps is not None: aud_array_aug = librosa.effects.pitch_shift(aud_array_aug, global_sr, n_steps=ps) # Time stretch if ts is not None: aud_array_aug = librosa.effects.time_stretch(aud_array_aug, rate=ts) # Create spectrogram array spec_array = librosa.logamplitude(\ librosa.feature.melspectrogram(aud_array_aug, sr=global_sr, n_mels=global_n_mels), ref_power=1.0)[np.newaxis,:,:, np.newaxis] # Time stretch if ts is not None: spec_array_full = spec_array # Left slice spec_array = spec_array_full[:,:,:173,:] # print(spec_array.shape) plt.figure() img = plt.imshow(spec_array[0,:,:,0], cmap='gray') title = 'ps: ' + str(ps) + '; ts: ' + str(ts) + '; slice: Left' plt.title(title) plt.axis('off') count += 1 # Right slice spec_array = spec_array_full[:,:,-173:,:] # 
print(spec_array.shape) plt.figure() img = plt.imshow(spec_array[0,:,:,0], cmap='gray') title = 'ps: ' + str(ps) + '; ts: ' + str(ts) + '; slice: Right' plt.title(title) plt.axis('off') count += 1 else: spec_array = spec_array[:,:,:173,:] # print(spec_array.shape) plt.figure() img = plt.imshow(spec_array[0,:,:,0], cmap='gray') title = 'ps: ' + str(ps) + '; ts: ' + str(ts) + '; slice: N/A' plt.title(title) plt.axis('off') count += 1 print(count) print('{:,}'.format(count * len(df_train))) train_len = count * len(df_train) train_len test_len = len(df_test) test_len ``` ### Create an `.hdf5` file to store spectrogram data we will generate from the `.wav` files. Determine the size of the dataset ``` # Test set dset_shape_X_test = (test_len, 96, 173, 1) dset_shape_y_test = (test_len, 1) print('dset_shape_X_test:\t', dset_shape_X_test) print('dset_shape_y_test:\t', dset_shape_y_test) # Train set dset_shape_X_train = (train_len, 96, 173, 1) dset_shape_y_train = (train_len, 1) print('dset_shape_X_train:\t', dset_shape_X_train) print('dset_shape_y_train:\t', dset_shape_y_train) # Full path for test_hdf5_path test_hdf5_path = os.path.join(spectrogram_arrays_path, "spectrogram_arrays_test.hdf5") with h5py.File(test_hdf5_path, 'w') as f: f.create_dataset("spectrogram_arrays_X_test", shape=dset_shape_X_test, dtype='float32', data=np.zeros(dset_shape_X_test, dtype='float32'), chunks=(1, 96, 173, 1), compression="gzip") f.create_dataset("spectrogram_arrays_y_test", shape=dset_shape_y_test, dtype='int8', data=np.zeros(dset_shape_y_test, dtype='int8'), compression="gzip") # Full path for train_hdf5_path train_hdf5_path = os.path.join(spectrogram_arrays_path, "spectrogram_arrays_train.hdf5") with h5py.File(train_hdf5_path, 'w') as f: f.create_dataset("spectrogram_arrays_X_train", shape=dset_shape_X_train, dtype='float32', data=np.zeros(dset_shape_X_train, dtype='float32'), chunks=(1, 96, 173, 1), compression="gzip") f.create_dataset("spectrogram_arrays_y_train", 
shape=dset_shape_y_train, dtype='int8', data=np.zeros(dset_shape_y_train, dtype='int8'), compression="gzip") ``` ### Generate data for spectrogram_arrays_X_test & spectrogram_arrays_y_test ``` count = 0 for index, row in df_test.iterrows(): # Save path to the raw audio files fold_name = 'fold' + str(row['fold']) fold_path = os.path.join(raw_audio_path, fold_name) # Full path to the audio_file audio_file = row['slice_file_name'] audio_path = os.path.join(fold_path, audio_file) # Load the .wav audio_file aud_array, sr = librosa.load(audio_path, sr=global_sr) # Create spectrogram array spec_array = librosa.logamplitude(\ librosa.feature.melspectrogram(aud_array, sr=global_sr, n_mels=global_n_mels), ref_power=1.0)[np.newaxis,:,:, np.newaxis] # Convert spectrogram array from dtype float64 to float32 spec_array = spec_array.astype('float32') # Write to the hdf5 file with h5py.File(test_hdf5_path, "r+") as f: # X_train dset = f['spectrogram_arrays_X_test'] # limit tensor height to 173 (there were a few tensors with 174) dset[count,:,:,:] = spec_array[:,:,:173,:] # y_train dset = f['spectrogram_arrays_y_test'] dset[count,:] = row['classID'] count += 1 ``` ### Generate data for spectrogram_arrays_X_train & spectrogram_arrays_y_train ``` count = 0 for index, row in df_train.iterrows(): # Save path to the raw audio files fold_name = 'fold' + str(row['fold']) fold_path = os.path.join(raw_audio_path, fold_name) # Full path to the audio_file audio_file = row['slice_file_name'] audio_path = os.path.join(fold_path, audio_file) # Load the .wav audio_file aud_array, sr = librosa.load(audio_path, sr=global_sr) for ps in pitch_shift_list: for ts in time_stretch_list: aud_array_aug = aud_array # Pitch shift if ps is not None: aud_array_aug = librosa.effects.pitch_shift(aud_array_aug, global_sr, n_steps=ps) # Time stretch if ts is not None: aud_array_aug = librosa.effects.time_stretch(aud_array_aug, rate=ts) # Create spectrogram array spec_array = librosa.logamplitude(\ 
librosa.feature.melspectrogram(aud_array_aug, sr=global_sr, n_mels=global_n_mels), ref_power=1.0)[np.newaxis,:,:, np.newaxis] # Time stretch if ts is not None: spec_array_full = spec_array # Left slice spec_array = spec_array_full[:,:,:173,:] # Convert spectrogram array from dtype float64 to float32 spec_array = spec_array.astype('float32') # Write to the hdf5 file with h5py.File(train_hdf5_path, "r+") as f: # X_train dset = f['spectrogram_arrays_X_train'] # limit tensor height to 173 (there were a few tensors with 174) dset[count,:,:,:] = spec_array[:,:,:173,:] # y_train dset = f['spectrogram_arrays_y_train'] dset[count,:] = row['classID'] count += 1 # Right slice spec_array = spec_array_full[:,:,-173:,:] # Convert spectrogram array from dtype float64 to float32 spec_array = spec_array.astype('float32') # Write to the hdf5 file with h5py.File(train_hdf5_path, "r+") as f: # X_train dset = f['spectrogram_arrays_X_train'] # limit tensor height to 173 (there were a few tensors with 174) dset[count,:,:,:] = spec_array[:,:,:173,:] # y_train dset = f['spectrogram_arrays_y_train'] dset[count,:] = row['classID'] count += 1 else: spec_array = spec_array[:,:,:173,:] # Convert spectrogram array from dtype float64 to float32 spec_array = spec_array.astype('float32') # Write to the hdf5 file with h5py.File(train_hdf5_path, "r+") as f: # X_train dset = f['spectrogram_arrays_X_train'] # limit tensor height to 173 (there were a few tensors with 174) dset[count,:,:,:] = spec_array[:,:,:173,:] # y_train dset = f['spectrogram_arrays_y_train'] dset[count,:] = row['classID'] count += 1 ``` # Inspect `HDF5` data ``` with h5py.File(train_hdf5_path, "r") as f: dset = f['spectrogram_arrays_X_train'] print(dset.dtype) dset = f['spectrogram_arrays_y_train'] print(dset.dtype) with h5py.File(test_hdf5_path, "r") as f: dset = f['spectrogram_arrays_X_test'] print(dset.dtype) dset = f['spectrogram_arrays_y_test'] print(dset.dtype) with h5py.File(train_hdf5_path, "r") as f: dset = 
f['spectrogram_arrays_X_train'] print(dset.shape) dset = f['spectrogram_arrays_y_train'] print(dset.shape) with h5py.File(test_hdf5_path, "r") as f: dset = f['spectrogram_arrays_X_test'] print(dset.shape) dset = f['spectrogram_arrays_y_test'] print(dset.shape) with h5py.File(train_hdf5_path, "r") as f: dset = f['spectrogram_arrays_X_train'] print(dset.ndim) dset = f['spectrogram_arrays_y_train'] print(dset.ndim) with h5py.File(test_hdf5_path, "r") as f: dset = f['spectrogram_arrays_X_test'] print(dset.ndim) dset = f['spectrogram_arrays_y_test'] print(dset.ndim) with h5py.File(train_hdf5_path, "r") as f: dset = f['spectrogram_arrays_X_train'] print(dset.len()) dset = f['spectrogram_arrays_y_train'] print(dset.len()) with h5py.File(test_hdf5_path, "r") as f: dset = f['spectrogram_arrays_X_test'] print(dset.len()) dset = f['spectrogram_arrays_y_test'] print(dset.len()) with h5py.File(train_hdf5_path, "r") as f: dset = f['spectrogram_arrays_X_train'] print(dset.maxshape) dset = f['spectrogram_arrays_y_train'] print(dset.len()) with h5py.File(test_hdf5_path, "r") as f: dset = f['spectrogram_arrays_X_test'] print(dset.len()) dset = f['spectrogram_arrays_y_test'] print(dset.len()) ```
```
# Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Install dependencies for Google Colab.
# If you want to run this notebook on your own machine, you can skip this cell
!pip install dm-haiku
!pip install einops
!mkdir /content/perceiver
!touch /content/perceiver/__init__.py
!wget -O /content/perceiver/io_processors.py https://raw.githubusercontent.com/deepmind/deepmind-research/master/perceiver/io_processors.py
!wget -O /content/perceiver/perceiver.py https://raw.githubusercontent.com/deepmind/deepmind-research/master/perceiver/perceiver.py
!wget -O /content/perceiver/position_encoding.py https://raw.githubusercontent.com/deepmind/deepmind-research/master/perceiver/position_encoding.py

#@title Imports
import functools
import itertools
import pickle

import cv2
import haiku as hk
import imageio
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np

from perceiver import perceiver, io_processors

#@title Model construction
# One of learned_position_encoding, fourier_position_encoding, or conv_preprocessing
# learned_position_encoding: Uses a learned position encoding over the image
#   and 1x1 convolution over the pixels
# fourier_position_encoding: Uses a 2D fourier position encoding
#   and the raw pixels
# conv_preprocessing: Uses a 2D fourier position encoding
#   and a 2D conv-net as preprocessing
model_type = 'conv_preprocessing'  #@param ['learned_position_encoding', 'fourier_position_encoding', 'conv_preprocessing']

IMAGE_SIZE = (224, 224)

learned_pos_configs = dict(
    input_preprocessor=dict(
        position_encoding_type='trainable',
        trainable_position_encoding_kwargs=dict(
            init_scale=0.02,
            num_channels=256,
        ),
        prep_type='conv1x1',
        project_pos_dim=256,
        num_channels=256,
        spatial_downsample=1,
        concat_or_add_pos='concat',
    ),
    encoder=dict(
        cross_attend_widening_factor=1,
        cross_attention_shape_for_attn='kv',
        dropout_prob=0,
        num_blocks=8,
        num_cross_attend_heads=1,
        num_self_attend_heads=8,
        num_self_attends_per_block=6,
        num_z_channels=1024,
        self_attend_widening_factor=1,
        use_query_residual=True,
        z_index_dim=512,
        z_pos_enc_init_scale=0.02
    ),
    decoder=dict(
        num_z_channels=1024,
        position_encoding_type='trainable',
        trainable_position_encoding_kwargs=dict(
            init_scale=0.02,
            num_channels=1024,
        ),
        use_query_residual=False,
    )
)

fourier_pos_configs = dict(
    input_preprocessor=dict(
        position_encoding_type='fourier',
        fourier_position_encoding_kwargs=dict(
            concat_pos=True,
            max_resolution=(224, 224),
            num_bands=64,
            sine_only=False
        ),
        prep_type='pixels',
        spatial_downsample=1,
    ),
    encoder=dict(
        cross_attend_widening_factor=1,
        cross_attention_shape_for_attn='kv',
        dropout_prob=0,
        num_blocks=8,
        num_cross_attend_heads=1,
        num_self_attend_heads=8,
        num_self_attends_per_block=6,
        num_z_channels=1024,
        self_attend_widening_factor=1,
        use_query_residual=True,
        z_index_dim=512,
        z_pos_enc_init_scale=0.02
    ),
    decoder=dict(
        num_z_channels=1024,
        position_encoding_type='trainable',
        trainable_position_encoding_kwargs=dict(
            init_scale=0.02,
            num_channels=1024,
        ),
        use_query_residual=True,
    )
)

conv_maxpool_configs = dict(
    input_preprocessor=dict(
        position_encoding_type='fourier',
        fourier_position_encoding_kwargs=dict(
            concat_pos=True,
            max_resolution=(56, 56),
            num_bands=64,
            sine_only=False
        ),
        prep_type='conv',
    ),
    encoder=dict(
        cross_attend_widening_factor=1,
        cross_attention_shape_for_attn='kv',
        dropout_prob=0,
        num_blocks=8,
        num_cross_attend_heads=1,
        num_self_attend_heads=8,
        num_self_attends_per_block=6,
        num_z_channels=1024,
        self_attend_widening_factor=1,
        use_query_residual=True,
        z_index_dim=512,
        z_pos_enc_init_scale=0.02
    ),
    decoder=dict(
        num_z_channels=1024,
        position_encoding_type='trainable',
        trainable_position_encoding_kwargs=dict(
            init_scale=0.02,
            num_channels=1024,
        ),
        use_query_residual=True,
    )
)

CONFIGS = {
    'learned_position_encoding': learned_pos_configs,
    'fourier_position_encoding': fourier_pos_configs,
    'conv_preprocessing': conv_maxpool_configs,
}

def imagenet_classifier(config, images):
    input_preprocessor = io_processors.ImagePreprocessor(
        **config['input_preprocessor'])
    encoder = perceiver.PerceiverEncoder(**config['encoder'])
    decoder = perceiver.ClassificationDecoder(
        1000, **config['decoder'])
    model = perceiver.Perceiver(
        encoder=encoder,
        decoder=decoder,
        input_preprocessor=input_preprocessor)
    logits = model(images, is_training=False)
    return logits

imagenet_classifier = hk.transform_with_state(imagenet_classifier)

#@title Load parameters from checkpoint
rng = jax.random.PRNGKey(42)

CHECKPOINT_URLS = {
    'conv_preprocessing': 'https://storage.googleapis.com/perceiver_io/imagenet_conv_preprocessing.pystate',
    'fourier_position_encoding': 'https://storage.googleapis.com/perceiver_io/imagenet_fourier_position_encoding.pystate',
    'learned_position_encoding': 'https://storage.googleapis.com/perceiver_io/imagenet_learned_position_encoding.pystate',
}
url = CHECKPOINT_URLS[model_type]
!wget -O imagenet_checkpoint.pystate $url

rng = jax.random.PRNGKey(42)
with open('imagenet_checkpoint.pystate', 'rb') as f:
    ckpt = pickle.loads(f.read())
params = ckpt['params']
state = ckpt['state']

#@title Imagenet labels
# ImageNet labels were obtained from the ImageNet dataset: https://image-net.org/.
# ImageNet is provided for non-commercial research and educational purposes.
# See https://image-net.org/download for details.
IMAGENET_LABELS = [ "tench, Tinca tinca", "goldfish, Carassius auratus", "great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias", "tiger shark, Galeocerdo cuvieri", "hammerhead, hammerhead shark", "electric ray, crampfish, numbfish, torpedo", "stingray", "cock", "hen", "ostrich, Struthio camelus", "brambling, Fringilla montifringilla", "goldfinch, Carduelis carduelis", "house finch, linnet, Carpodacus mexicanus", "junco, snowbird", "indigo bunting, indigo finch, indigo bird, Passerina cyanea", "robin, American robin, Turdus migratorius", "bulbul", "jay", "magpie", "chickadee", "water ouzel, dipper", "kite", "bald eagle, American eagle, Haliaeetus leucocephalus", "vulture", "great grey owl, great gray owl, Strix nebulosa", "European fire salamander, Salamandra salamandra", "common newt, Triturus vulgaris", "eft", "spotted salamander, Ambystoma maculatum", "axolotl, mud puppy, Ambystoma mexicanum", "bullfrog, Rana catesbeiana", "tree frog, tree-frog", "tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui", "loggerhead, loggerhead turtle, Caretta caretta", "leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea", "mud turtle", "terrapin", "box turtle, box tortoise", "banded gecko", "common iguana, iguana, Iguana iguana", "American chameleon, anole, Anolis carolinensis", "whiptail, whiptail lizard", "agama", "frilled lizard, Chlamydosaurus kingi", "alligator lizard", "Gila monster, Heloderma suspectum", "green lizard, Lacerta viridis", "African chameleon, Chamaeleo chamaeleon", "Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis", "African crocodile, Nile crocodile, Crocodylus niloticus", "American alligator, Alligator mississipiensis", "triceratops", "thunder snake, worm snake, Carphophis amoenus", "ringneck snake, ring-necked snake, ring snake", "hognose snake, puff adder, sand viper", "green snake, grass snake", "king snake, kingsnake", "garter snake, grass snake", "water snake", 
"vine snake", "night snake, Hypsiglena torquata", "boa constrictor, Constrictor constrictor", "rock python, rock snake, Python sebae", "Indian cobra, Naja naja", "green mamba", "sea snake", "horned viper, cerastes, sand viper, horned asp, Cerastes cornutus", "diamondback, diamondback rattlesnake, Crotalus adamanteus", "sidewinder, horned rattlesnake, Crotalus cerastes", "trilobite", "harvestman, daddy longlegs, Phalangium opilio", "scorpion", "black and gold garden spider, Argiope aurantia", "barn spider, Araneus cavaticus", "garden spider, Aranea diademata", "black widow, Latrodectus mactans", "tarantula", "wolf spider, hunting spider", "tick", "centipede", "black grouse", "ptarmigan", "ruffed grouse, partridge, Bonasa umbellus", "prairie chicken, prairie grouse, prairie fowl", "peacock", "quail", "partridge", "African grey, African gray, Psittacus erithacus", "macaw", "sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita", "lorikeet", "coucal", "bee eater", "hornbill", "hummingbird", "jacamar", "toucan", "drake", "red-breasted merganser, Mergus serrator", "goose", "black swan, Cygnus atratus", "tusker", "echidna, spiny anteater, anteater", "platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus", "wallaby, brush kangaroo", "koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus", "wombat", "jellyfish", "sea anemone, anemone", "brain coral", "flatworm, platyhelminth", "nematode, nematode worm, roundworm", "conch", "snail", "slug", "sea slug, nudibranch", "chiton, coat-of-mail shell, sea cradle, polyplacophore", "chambered nautilus, pearly nautilus, nautilus", "Dungeness crab, Cancer magister", "rock crab, Cancer irroratus", "fiddler crab", "king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica", "American lobster, Northern lobster, Maine lobster, Homarus americanus", "spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "crayfish, crawfish, crawdad, 
crawdaddy", "hermit crab", "isopod", "white stork, Ciconia ciconia", "black stork, Ciconia nigra", "spoonbill", "flamingo", "little blue heron, Egretta caerulea", "American egret, great white heron, Egretta albus", "bittern", "crane", "limpkin, Aramus pictus", "European gallinule, Porphyrio porphyrio", "American coot, marsh hen, mud hen, water hen, Fulica americana", "bustard", "ruddy turnstone, Arenaria interpres", "red-backed sandpiper, dunlin, Erolia alpina", "redshank, Tringa totanus", "dowitcher", "oystercatcher, oyster catcher", "pelican", "king penguin, Aptenodytes patagonica", "albatross, mollymawk", "grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus", "killer whale, killer, orca, grampus, sea wolf, Orcinus orca", "dugong, Dugong dugon", "sea lion", "Chihuahua", "Japanese spaniel", "Maltese dog, Maltese terrier, Maltese", "Pekinese, Pekingese, Peke", "Shih-Tzu", "Blenheim spaniel", "papillon", "toy terrier", "Rhodesian ridgeback", "Afghan hound, Afghan", "basset, basset hound", "beagle", "bloodhound, sleuthhound", "bluetick", "black-and-tan coonhound", "Walker hound, Walker foxhound", "English foxhound", "redbone", "borzoi, Russian wolfhound", "Irish wolfhound", "Italian greyhound", "whippet", "Ibizan hound, Ibizan Podenco", "Norwegian elkhound, elkhound", "otterhound, otter hound", "Saluki, gazelle hound", "Scottish deerhound, deerhound", "Weimaraner", "Staffordshire bullterrier, Staffordshire bull terrier", "American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier", "Bedlington terrier", "Border terrier", "Kerry blue terrier", "Irish terrier", "Norfolk terrier", "Norwich terrier", "Yorkshire terrier", "wire-haired fox terrier", "Lakeland terrier", "Sealyham terrier, Sealyham", "Airedale, Airedale terrier", "cairn, cairn terrier", "Australian terrier", "Dandie Dinmont, Dandie Dinmont terrier", "Boston bull, Boston terrier", "miniature schnauzer", "giant schnauzer", "standard 
schnauzer", "Scotch terrier, Scottish terrier, Scottie", "Tibetan terrier, chrysanthemum dog", "silky terrier, Sydney silky", "soft-coated wheaten terrier", "West Highland white terrier", "Lhasa, Lhasa apso", "flat-coated retriever", "curly-coated retriever", "golden retriever", "Labrador retriever", "Chesapeake Bay retriever", "German short-haired pointer", "vizsla, Hungarian pointer", "English setter", "Irish setter, red setter", "Gordon setter", "Brittany spaniel", "clumber, clumber spaniel", "English springer, English springer spaniel", "Welsh springer spaniel", "cocker spaniel, English cocker spaniel, cocker", "Sussex spaniel", "Irish water spaniel", "kuvasz", "schipperke", "groenendael", "malinois", "briard", "kelpie", "komondor", "Old English sheepdog, bobtail", "Shetland sheepdog, Shetland sheep dog, Shetland", "collie", "Border collie", "Bouvier des Flandres, Bouviers des Flandres", "Rottweiler", "German shepherd, German shepherd dog, German police dog, alsatian", "Doberman, Doberman pinscher", "miniature pinscher", "Greater Swiss Mountain dog", "Bernese mountain dog", "Appenzeller", "EntleBucher", "boxer", "bull mastiff", "Tibetan mastiff", "French bulldog", "Great Dane", "Saint Bernard, St Bernard", "Eskimo dog, husky", "malamute, malemute, Alaskan malamute", "Siberian husky", "dalmatian, coach dog, carriage dog", "affenpinscher, monkey pinscher, monkey dog", "basenji", "pug, pug-dog", "Leonberg", "Newfoundland, Newfoundland dog", "Great Pyrenees", "Samoyed, Samoyede", "Pomeranian", "chow, chow chow", "keeshond", "Brabancon griffon", "Pembroke, Pembroke Welsh corgi", "Cardigan, Cardigan Welsh corgi", "toy poodle", "miniature poodle", "standard poodle", "Mexican hairless", "timber wolf, grey wolf, gray wolf, Canis lupus", "white wolf, Arctic wolf, Canis lupus tundrarum", "red wolf, maned wolf, Canis rufus, Canis niger", "coyote, prairie wolf, brush wolf, Canis latrans", "dingo, warrigal, warragal, Canis dingo", "dhole, Cuon alpinus", "African hunting dog, 
hyena dog, Cape hunting dog, Lycaon pictus", "hyena, hyaena", "red fox, Vulpes vulpes", "kit fox, Vulpes macrotis", "Arctic fox, white fox, Alopex lagopus", "grey fox, gray fox, Urocyon cinereoargenteus", "tabby, tabby cat", "tiger cat", "Persian cat", "Siamese cat, Siamese", "Egyptian cat", "cougar, puma, catamount, mountain lion, painter, panther, Felis concolor", "lynx, catamount", "leopard, Panthera pardus", "snow leopard, ounce, Panthera uncia", "jaguar, panther, Panthera onca, Felis onca", "lion, king of beasts, Panthera leo", "tiger, Panthera tigris", "cheetah, chetah, Acinonyx jubatus", "brown bear, bruin, Ursus arctos", "American black bear, black bear, Ursus americanus, Euarctos americanus", "ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus", "sloth bear, Melursus ursinus, Ursus ursinus", "mongoose", "meerkat, mierkat", "tiger beetle", "ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "ground beetle, carabid beetle", "long-horned beetle, longicorn, longicorn beetle", "leaf beetle, chrysomelid", "dung beetle", "rhinoceros beetle", "weevil", "fly", "bee", "ant, emmet, pismire", "grasshopper, hopper", "cricket", "walking stick, walkingstick, stick insect", "cockroach, roach", "mantis, mantid", "cicada, cicala", "leafhopper", "lacewing, lacewing fly", "dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "damselfly", "admiral", "ringlet, ringlet butterfly", "monarch, monarch butterfly, milkweed butterfly, Danaus plexippus", "cabbage butterfly", "sulphur butterfly, sulfur butterfly", "lycaenid, lycaenid butterfly", "starfish, sea star", "sea urchin", "sea cucumber, holothurian", "wood rabbit, cottontail, cottontail rabbit", "hare", "Angora, Angora rabbit", "hamster", "porcupine, hedgehog", "fox squirrel, eastern fox squirrel, Sciurus niger", "marmot", "beaver", "guinea pig, Cavia cobaya", "sorrel", "zebra", "hog, pig, grunter, squealer, Sus scrofa", "wild boar, boar, 
Sus scrofa", "warthog", "hippopotamus, hippo, river horse, Hippopotamus amphibius", "ox", "water buffalo, water ox, Asiatic buffalo, Bubalus bubalis", "bison", "ram, tup", "bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis", "ibex, Capra ibex", "hartebeest", "impala, Aepyceros melampus", "gazelle", "Arabian camel, dromedary, Camelus dromedarius", "llama", "weasel", "mink", "polecat, fitch, foulmart, foumart, Mustela putorius", "black-footed ferret, ferret, Mustela nigripes", "otter", "skunk, polecat, wood pussy", "badger", "armadillo", "three-toed sloth, ai, Bradypus tridactylus", "orangutan, orang, orangutang, Pongo pygmaeus", "gorilla, Gorilla gorilla", "chimpanzee, chimp, Pan troglodytes", "gibbon, Hylobates lar", "siamang, Hylobates syndactylus, Symphalangus syndactylus", "guenon, guenon monkey", "patas, hussar monkey, Erythrocebus patas", "baboon", "macaque", "langur", "colobus, colobus monkey", "proboscis monkey, Nasalis larvatus", "marmoset", "capuchin, ringtail, Cebus capucinus", "howler monkey, howler", "titi, titi monkey", "spider monkey, Ateles geoffroyi", "squirrel monkey, Saimiri sciureus", "Madagascar cat, ring-tailed lemur, Lemur catta", "indri, indris, Indri indri, Indri brevicaudatus", "Indian elephant, Elephas maximus", "African elephant, Loxodonta africana", "lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens", "giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca", "barracouta, snoek", "eel", "coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch", "rock beauty, Holocanthus tricolor", "anemone fish", "sturgeon", "gar, garfish, garpike, billfish, Lepisosteus osseus", "lionfish", "puffer, pufferfish, blowfish, globefish", "abacus", "abaya", "academic gown, academic robe, judge's robe", "accordion, piano accordion, squeeze box", "acoustic guitar", "aircraft carrier, carrier, flattop, attack aircraft carrier", "airliner", "airship, dirigible", "altar", 
"ambulance", "amphibian, amphibious vehicle", "analog clock", "apiary, bee house", "apron", "ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "assault rifle, assault gun", "backpack, back pack, knapsack, packsack, rucksack, haversack", "bakery, bakeshop, bakehouse", "balance beam, beam", "balloon", "ballpoint, ballpoint pen, ballpen, Biro", "Band Aid", "banjo", "bannister, banister, balustrade, balusters, handrail", "barbell", "barber chair", "barbershop", "barn", "barometer", "barrel, cask", "barrow, garden cart, lawn cart, wheelbarrow", "baseball", "basketball", "bassinet", "bassoon", "bathing cap, swimming cap", "bath towel", "bathtub, bathing tub, bath, tub", "beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "beacon, lighthouse, beacon light, pharos", "beaker", "bearskin, busby, shako", "beer bottle", "beer glass", "bell cote, bell cot", "bib", "bicycle-built-for-two, tandem bicycle, tandem", "bikini, two-piece", "binder, ring-binder", "binoculars, field glasses, opera glasses", "birdhouse", "boathouse", "bobsled, bobsleigh, bob", "bolo tie, bolo, bola tie, bola", "bonnet, poke bonnet", "bookcase", "bookshop, bookstore, bookstall", "bottlecap", "bow", "bow tie, bow-tie, bowtie", "brass, memorial tablet, plaque", "brassiere, bra, bandeau", "breakwater, groin, groyne, mole, bulwark, seawall, jetty", "breastplate, aegis, egis", "broom", "bucket, pail", "buckle", "bulletproof vest", "bullet train, bullet", "butcher shop, meat market", "cab, hack, taxi, taxicab", "caldron, cauldron", "candle, taper, wax light", "cannon", "canoe", "can opener, tin opener", "cardigan", "car mirror", "carousel, carrousel, merry-go-round, roundabout, whirligig", "carpenter's kit, tool kit", "carton", "car wheel", "cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM", "cassette", "cassette player", "castle", "catamaran", "CD 
player", "cello, violoncello", "cellular telephone, cellular phone, cellphone, cell, mobile phone", "chain", "chainlink fence", "chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "chain saw, chainsaw", "chest", "chiffonier, commode", "chime, bell, gong", "china cabinet, china closet", "Christmas stocking", "church, church building", "cinema, movie theater, movie theatre, movie house, picture palace", "cleaver, meat cleaver, chopper", "cliff dwelling", "cloak", "clog, geta, patten, sabot", "cocktail shaker", "coffee mug", "coffeepot", "coil, spiral, volute, whorl, helix", "combination lock", "computer keyboard, keypad", "confectionery, confectionary, candy store", "container ship, containership, container vessel", "convertible", "corkscrew, bottle screw", "cornet, horn, trumpet, trump", "cowboy boot", "cowboy hat, ten-gallon hat", "cradle", "crane", "crash helmet", "crate", "crib, cot", "Crock Pot", "croquet ball", "crutch", "cuirass", "dam, dike, dyke", "desk", "desktop computer", "dial telephone, dial phone", "diaper, nappy, napkin", "digital clock", "digital watch", "dining table, board", "dishrag, dishcloth", "dishwasher, dish washer, dishwashing machine", "disk brake, disc brake", "dock, dockage, docking facility", "dogsled, dog sled, dog sleigh", "dome", "doormat, welcome mat", "drilling platform, offshore rig", "drum, membranophone, tympan", "drumstick", "dumbbell", "Dutch oven", "electric fan, blower", "electric guitar", "electric locomotive", "entertainment center", "envelope", "espresso maker", "face powder", "feather boa, boa", "file, file cabinet, filing cabinet", "fireboat", "fire engine, fire truck", "fire screen, fireguard", "flagpole, flagstaff", "flute, transverse flute", "folding chair", "football helmet", "forklift", "fountain", "fountain pen", "four-poster", "freight car", "French horn, horn", "frying pan, frypan, skillet", "fur coat", "garbage truck, dustcart", "gasmask, respirator, gas helmet", "gas pump, 
gasoline pump, petrol pump, island dispenser", "goblet", "go-kart", "golf ball", "golfcart, golf cart", "gondola", "gong, tam-tam", "gown", "grand piano, grand", "greenhouse, nursery, glasshouse", "grille, radiator grille", "grocery store, grocery, food market, market", "guillotine", "hair slide", "hair spray", "half track", "hammer", "hamper", "hand blower, blow dryer, blow drier, hair dryer, hair drier", "hand-held computer, hand-held microcomputer", "handkerchief, hankie, hanky, hankey", "hard disc, hard disk, fixed disk", "harmonica, mouth organ, harp, mouth harp", "harp", "harvester, reaper", "hatchet", "holster", "home theater, home theatre", "honeycomb", "hook, claw", "hoopskirt, crinoline", "horizontal bar, high bar", "horse cart, horse-cart", "hourglass", "iPod", "iron, smoothing iron", "jack-o'-lantern", "jean, blue jean, denim", "jeep, landrover", "jersey, T-shirt, tee shirt", "jigsaw puzzle", "jinrikisha, ricksha, rickshaw", "joystick", "kimono", "knee pad", "knot", "lab coat, laboratory coat", "ladle", "lampshade, lamp shade", "laptop, laptop computer", "lawn mower, mower", "lens cap, lens cover", "letter opener, paper knife, paperknife", "library", "lifeboat", "lighter, light, igniter, ignitor", "limousine, limo", "liner, ocean liner", "lipstick, lip rouge", "Loafer", "lotion", "loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "loupe, jeweler's loupe", "lumbermill, sawmill", "magnetic compass", "mailbag, postbag", "mailbox, letter box", "maillot", "maillot, tank suit", "manhole cover", "maraca", "marimba, xylophone", "mask", "matchstick", "maypole", "maze, labyrinth", "measuring cup", "medicine chest, medicine cabinet", "megalith, megalithic structure", "microphone, mike", "microwave, microwave oven", "military uniform", "milk can", "minibus", "miniskirt, mini", "minivan", "missile", "mitten", "mixing bowl", "mobile home, manufactured home", "Model T", "modem", "monastery", "monitor", "moped", "mortar", "mortarboard", "mosque", 
"mosquito net", "motor scooter, scooter", "mountain bike, all-terrain bike, off-roader", "mountain tent", "mouse, computer mouse", "mousetrap", "moving van", "muzzle", "nail", "neck brace", "necklace", "nipple", "notebook, notebook computer", "obelisk", "oboe, hautboy, hautbois", "ocarina, sweet potato", "odometer, hodometer, mileometer, milometer", "oil filter", "organ, pipe organ", "oscilloscope, scope, cathode-ray oscilloscope, CRO", "overskirt", "oxcart", "oxygen mask", "packet", "paddle, boat paddle", "paddlewheel, paddle wheel", "padlock", "paintbrush", "pajama, pyjama, pj's, jammies", "palace", "panpipe, pandean pipe, syrinx", "paper towel", "parachute, chute", "parallel bars, bars", "park bench", "parking meter", "passenger car, coach, carriage", "patio, terrace", "pay-phone, pay-station", "pedestal, plinth, footstall", "pencil box, pencil case", "pencil sharpener", "perfume, essence", "Petri dish", "photocopier", "pick, plectrum, plectron", "pickelhaube", "picket fence, paling", "pickup, pickup truck", "pier", "piggy bank, penny bank", "pill bottle", "pillow", "ping-pong ball", "pinwheel", "pirate, pirate ship", "pitcher, ewer", "plane, carpenter's plane, woodworking plane", "planetarium", "plastic bag", "plate rack", "plow, plough", "plunger, plumber's helper", "Polaroid camera, Polaroid Land camera", "pole", "police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria", "poncho", "pool table, billiard table, snooker table", "pop bottle, soda bottle", "pot, flowerpot", "potter's wheel", "power drill", "prayer rug, prayer mat", "printer", "prison, prison house", "projectile, missile", "projector", "puck, hockey puck", "punching bag, punch bag, punching ball, punchball", "purse", "quill, quill pen", "quilt, comforter, comfort, puff", "racer, race car, racing car", "racket, racquet", "radiator", "radio, wireless", "radio telescope, radio reflector", "rain barrel", "recreational vehicle, RV, R.V.", "reel", "reflex camera", "refrigerator, icebox", 
"remote control, remote", "restaurant, eating house, eating place, eatery", "revolver, six-gun, six-shooter", "rifle", "rocking chair, rocker", "rotisserie", "rubber eraser, rubber, pencil eraser", "rugby ball", "rule, ruler", "running shoe", "safe", "safety pin", "saltshaker, salt shaker", "sandal", "sarong", "sax, saxophone", "scabbard", "scale, weighing machine", "school bus", "schooner", "scoreboard", "screen, CRT screen", "screw", "screwdriver", "seat belt, seatbelt", "sewing machine", "shield, buckler", "shoe shop, shoe-shop, shoe store", "shoji", "shopping basket", "shopping cart", "shovel", "shower cap", "shower curtain", "ski", "ski mask", "sleeping bag", "slide rule, slipstick", "sliding door", "slot, one-armed bandit", "snorkel", "snowmobile", "snowplow, snowplough", "soap dispenser", "soccer ball", "sock", "solar dish, solar collector, solar furnace", "sombrero", "soup bowl", "space bar", "space heater", "space shuttle", "spatula", "speedboat", "spider web, spider's web", "spindle", "sports car, sport car", "spotlight, spot", "stage", "steam locomotive", "steel arch bridge", "steel drum", "stethoscope", "stole", "stone wall", "stopwatch, stop watch", "stove", "strainer", "streetcar, tram, tramcar, trolley, trolley car", "stretcher", "studio couch, day bed", "stupa, tope", "submarine, pigboat, sub, U-boat", "suit, suit of clothes", "sundial", "sunglass", "sunglasses, dark glasses, shades", "sunscreen, sunblock, sun blocker", "suspension bridge", "swab, swob, mop", "sweatshirt", "swimming trunks, bathing trunks", "swing", "switch, electric switch, electrical switch", "syringe", "table lamp", "tank, army tank, armored combat vehicle, armoured combat vehicle", "tape player", "teapot", "teddy, teddy bear", "television, television system", "tennis ball", "thatch, thatched roof", "theater curtain, theatre curtain", "thimble", "thresher, thrasher, threshing machine", "throne", "tile roof", "toaster", "tobacco shop, tobacconist shop, tobacconist", "toilet seat", 
"torch", "totem pole", "tow truck, tow car, wrecker", "toyshop", "tractor", "trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "tray", "trench coat", "tricycle, trike, velocipede", "trimaran", "tripod", "triumphal arch", "trolleybus, trolley coach, trackless trolley", "trombone", "tub, vat", "turnstile", "typewriter keyboard", "umbrella", "unicycle, monocycle", "upright, upright piano", "vacuum, vacuum cleaner", "vase", "vault", "velvet", "vending machine", "vestment", "viaduct", "violin, fiddle", "volleyball", "waffle iron", "wall clock", "wallet, billfold, notecase, pocketbook", "wardrobe, closet, press", "warplane, military plane", "washbasin, handbasin, washbowl, lavabo, wash-hand basin", "washer, automatic washer, washing machine", "water bottle", "water jug", "water tower", "whiskey jug", "whistle", "wig", "window screen", "window shade", "Windsor tie", "wine bottle", "wing", "wok", "wooden spoon", "wool, woolen, woollen", "worm fence, snake fence, snake-rail fence, Virginia fence", "wreck", "yawl", "yurt", "web site, website, internet site, site", "comic book", "crossword puzzle, crossword", "street sign", "traffic light, traffic signal, stoplight", "book jacket, dust cover, dust jacket, dust wrapper", "menu", "plate", "guacamole", "consomme", "hot pot, hotpot", "trifle", "ice cream, icecream", "ice lolly, lolly, lollipop, popsicle", "French loaf", "bagel, beigel", "pretzel", "cheeseburger", "hotdog, hot dog, red hot", "mashed potato", "head cabbage", "broccoli", "cauliflower", "zucchini, courgette", "spaghetti squash", "acorn squash", "butternut squash", "cucumber, cuke", "artichoke, globe artichoke", "bell pepper", "cardoon", "mushroom", "Granny Smith", "strawberry", "orange", "lemon", "fig", "pineapple, ananas", "banana", "jackfruit, jak, jack", "custard apple", "pomegranate", "hay", "carbonara", "chocolate sauce, chocolate syrup", "dough", "meat loaf, meatloaf", "pizza, pizza pie", "potpie", "burrito", "red wine", "espresso", 
    "cup", "eggnog",
    "alp", "bubble", "cliff, drop, drop-off", "coral reef", "geyser",
    "lakeside, lakeshore", "promontory, headland, head, foreland",
    "sandbar, sand bar", "seashore, coast, seacoast, sea-coast",
    "valley, vale", "volcano",
    "ballplayer, baseball player", "groom, bridegroom", "scuba diver",
    "rapeseed", "daisy",
    "yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum",
    "corn", "acorn", "hip, rose hip, rosehip",
    "buckeye, horse chestnut, conker", "coral fungus", "agaric", "gyromitra",
    "stinkhorn, carrion fungus", "earthstar",
    "hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa",
    "bolete", "ear, spike, capitulum",
    "toilet tissue, toilet paper, bathroom tissue",
]

# dalmation.jpg is obtained from Getty Images under license
# (https://www.gettyimages.co.uk/eula#RF).
!wget -O dog.jpg https://storage.googleapis.com/perceiver_io/dalmation.jpg

with open('dog.jpg', 'rb') as f:
    img = imageio.imread(f)

#@title Image Utility Functions
MEAN_RGB = (0.485 * 255, 0.456 * 255, 0.406 * 255)
STDDEV_RGB = (0.229 * 255, 0.224 * 255, 0.225 * 255)

def normalize(im):
    return (im - np.array(MEAN_RGB)) / np.array(STDDEV_RGB)

def resize_and_center_crop(image):
    """Crops to center of image with padding then scales."""
    shape = image.shape
    image_height = shape[0]
    image_width = shape[1]

    padded_center_crop_size = (
        (224 / (224 + 32)) *
        np.minimum(image_height, image_width).astype(np.float32)).astype(np.int32)

    offset_height = ((image_height - padded_center_crop_size) + 1) // 2
    offset_width = ((image_width - padded_center_crop_size) + 1) // 2
    crop_window = [offset_height, offset_width,
                   padded_center_crop_size, padded_center_crop_size]
    # image = tf.image.crop_to_bounding_box(image_bytes, *crop_window)
    image = image[crop_window[0]:crop_window[0] + crop_window[2],
                  crop_window[1]:crop_window[1] + crop_window[3]]
    return cv2.resize(image, (224, 224), interpolation=cv2.INTER_CUBIC)

# Imagenet classification
# Obtain a [224, 224] crop of the image while preserving aspect ratio.
# With Fourier position encoding, no resize is needed -- the model can
# generalize to image sizes it never saw in training
centered_img = resize_and_center_crop(img)  # img
logits, _ = imagenet_classifier.apply(
    params, state, rng, CONFIGS[model_type], normalize(centered_img)[None])
_, indices = jax.lax.top_k(logits[0], 5)
probs = jax.nn.softmax(logits[0])

plt.imshow(img)
plt.axis('off')
print('Top 5 labels:')
for i in list(indices):
    print(f'{IMAGENET_LABELS[i]}: {probs[i]}')
```
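For readers without a JAX runtime, the ranking step in the last cell (`jax.nn.softmax` plus `jax.lax.top_k`) can be sketched in plain NumPy. This is a stand-in over a 1-D logits vector, not the notebook's actual code path:

```python
import numpy as np

def top_k_probs(logits, k=5):
    """Return the indices and softmax probabilities of the k best classes."""
    # Numerically stable softmax: shift by the max before exponentiating.
    z = logits - logits.max()
    probs = np.exp(z) / np.exp(z).sum()
    # argsort is ascending; take the last k entries and reverse them.
    indices = np.argsort(logits)[-k:][::-1]
    return indices, probs[indices]

logits = np.array([0.1, 2.0, -1.0, 3.5, 0.7])
indices, top_probs = top_k_probs(logits, k=2)
for i, p in zip(indices, top_probs):
    print(f"class {i}: {p:.3f}")
```

In the notebook, the indices returned by the top-k step index into `IMAGENET_LABELS` to produce the printed label strings.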
## Advanced Tutorial ### Follow along Code for all the examples is located in your `PYTHONPATH/Lib/site-packages/eonr/examples` folder. With that said, you should be able to make use of `EONR` by following and executing the commands in this tutorial using either the sample data provided or substituting in your own data. *You will find the following code included in the* `advanced_tutorial.py` *or* `advanced_tutorial.ipynb` *(for* [Jupyter notebooks](https://jupyter.org/)*) files in your* `PYTHONPATH/Lib/site-packages/eonr/examples` *folder - feel free to load that into your Python IDE to follow along.* - - - ### Calculate `EONR` for several economic scenarios In this tutorial, we will run `EONR.calculate_eonr()` in a loop, adjusting the economic scenario prior to each run. - - - ### Load modules Load `pandas` and `EONR`: ``` import os import pandas as pd import eonr print('EONR version: {0}'.format(eonr.__version__)) ``` - - - ### Load the data `EONR` uses Pandas dataframes to access and manipulate the experimental data. ``` df_data = pd.read_csv(os.path.join('data', 'minnesota_2012.csv')) df_data ``` - - - ### Set column names and units *The table containing the experimental data* **must** *have a minimum of two columns:* * Nitrogen fertilizer rate * Grain yield We'll also set *nitrogen uptake* and *available nitrogen* columns right away for calculating the socially optimum nitrogen rate. As a reminder, we are declaring the names of these columns and units because they will be passed to `EONR` later. 
```
col_n_app = 'rate_n_applied_kgha'
col_yld = 'yld_grain_dry_kgha'
col_crop_nup = 'nup_total_kgha'
col_n_avail = 'crop_n_available_kgha'

unit_currency = '$'
unit_fert = 'kg'
unit_grain = 'kg'
unit_area = 'ha'

def calc_mineralization(df_data, units_fert='kgha'):
    '''
    Calculates mineralization and adds "crop_available_n" to df
    '''
    df_trt0 = df_data[df_data['rate_n_applied_kgha']==0].copy()
    df_trt0['mineralize_n'] = (df_trt0['nup_total_kgha'] -
                               df_trt0['soil_plus_fert_n_kgha'])
    trt0_mineralize = df_trt0['mineralize_n'].mean()
    crop_n_label = 'crop_n_available_' + units_fert
    df_data[crop_n_label] = df_data['soil_plus_fert_n_kgha'] + trt0_mineralize
    return df_data

df_data = calc_mineralization(df_data)
```

- - -

### Turn `base_zero` off

You might have noticed the `base_zero` option for the `EONR` class in the [API](my_eonr.html#module-eonr.eonr).

`base_zero` is a `True`/`False` flag that determines if gross return to nitrogen should be expressed as absolute values. We will see a bit later that upon executing `EONR.calculate_eonr()`, grain yield from the input dataset is used to create a new column for gross return to nitrogen *("grtn")* by multiplying the grain yield column by the price of grain (`price_grain` variable).

If `base_zero` is `True` *(default)*, the observed yield return data are standardized so that the best-fit quadratic-plateau model passes through the y-axis at zero. This is done in two steps:

1. Fit the quadratic-plateau to the original data to determine the value of the y-intercept of the model ($\beta_0$)
2. Subtract $\beta_0$ from all data in the recently created *"grtn"* column (temporarily stored in `EONR.df_data`)

This behavior (`base_zero = True`) is the default in `EONR`. However, `base_zero` can simply be set to `False` during the initialization of `EONR`.
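The two-step standardization above can be sketched in a few lines. This is a minimal illustration with made-up numbers, using a plain quadratic via `np.polyfit` as a stand-in for the quadratic-plateau model that `EONR` actually fits:

```python
import numpy as np

# Hypothetical gross-return observations (made up for illustration; not the Minnesota data)
n_rate = np.array([0., 50., 100., 150., 200.])
grtn = np.array([400., 900., 1250., 1450., 1500.])

# Step 1: fit the model and read off its y-intercept (beta_0).
# np.polyfit returns coefficients highest degree first: [beta_2, beta_1, beta_0]
beta2, beta1, beta0 = np.polyfit(n_rate, grtn, deg=2)

# Step 2: shift all observations by beta_0 so the refitted curve passes through y = 0
grtn_base_zero = grtn - beta0
refit_beta0 = np.polyfit(n_rate, grtn_base_zero, deg=2)[2]  # ~0 after the shift
```

Because the fit is linear in its coefficients, shifting every observation by a constant shifts the fitted intercept by exactly that constant, which is why the refitted intercept lands at zero.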
We will store it in its own variable now, then pass it to `EONR` during initialization:

```
base_zero = False
```

- - -

### Initialize `EONR`

Let's set the base directory and initialize an instance of `EONR`, setting `cost_n_fert = 0`, `costs_fixed = 0`, and `price_grain = 1.0` (\\$1.00 per kg) as the default values (we will adjust them later on in the tutorial):

```
import os

base_dir = os.path.join(os.getcwd(), 'eonr_advanced_tutorial')

my_eonr = eonr.EONR(cost_n_fert=0,
                    costs_fixed=0,
                    price_grain=1.0,
                    col_n_app=col_n_app,
                    col_yld=col_yld,
                    col_crop_nup=col_crop_nup,
                    col_n_avail=col_n_avail,
                    unit_currency=unit_currency,
                    unit_grain=unit_grain,
                    unit_fert=unit_fert,
                    unit_area=unit_area,
                    base_dir=base_dir,
                    base_zero=base_zero)
```

- - -

### Calculate the *AONR*

You may be wondering why `cost_n_fert` was set to 0. Well, setting our nitrogen fertilizer cost to \$0 essentially allows us to calculate the optimum nitrogen rate ignoring the cost of the fertilizer input. This is known as the *Agronomic Optimum Nitrogen Rate (AONR)*. The AONR provides insight into the maximum achievable grain yield. Notice `price_grain` was set to `1.0` - this effectively calculates the AONR so that the maximum return to nitrogen (MRTN), which will be expressed as \$ per ha when plotting via `EONR.plot_eonr()`, is similar to units of kg per ha (the units we are using for grain yield).

Let's calculate the AONR and plot it (adjusting `y_max` so it is greater than our maximum grain yield):

```
my_eonr.calculate_eonr(df_data)
my_eonr.plot_eonr(x_min=-5, x_max=300, y_min=-100, y_max=18000)
```

We see that the **agronomic** optimum nitrogen rate was calculated as **177** kg per ha, and the MRTN is **13.579 Mg per ha** (yes, it says $13,579, but because `price_grain` was set to \\$1, the values are equivalent and the units can be substituted).
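For intuition about why a zero fertilizer cost yields the *agronomic* optimum, here is a hedged closed-form sketch for a plain quadratic response $y = \beta_0 + \beta_1 N + \beta_2 N^2$ (`EONR` itself fits a quadratic-plateau, and the coefficients below are purely illustrative, not fitted to the tutorial data):

```python
def optimum_n_rate(beta1, beta2, cost_n, price_grain):
    # Profit is price*y - cost*N for y = beta0 + beta1*N + beta2*N**2;
    # setting d(profit)/dN = 0 gives N* = (cost/price - beta1) / (2*beta2)
    return (cost_n / price_grain - beta1) / (2.0 * beta2)

beta1, beta2 = 20.0, -0.05          # hypothetical response coefficients
aonr = optimum_n_rate(beta1, beta2, cost_n=0.0, price_grain=0.157)   # fertilizer "free"
eonr = optimum_n_rate(beta1, beta2, cost_n=0.88, price_grain=0.157)  # fertilizer costs money
print(aonr)  # 200.0
```

With `cost_n = 0` the optimum sits at the yield maximum; any positive fertilizer cost pulls the optimum below the AONR.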
If you've gone through the [first tutorial](tutorial.md), you'll notice there are a few major differences in the look of this plot:

**The red line representing nitrogen fertilizer cost is missing.**

**The GRTN line does not pass through the y-axis at** $\text{y}=0$**.**

*Because* `base_zero` *was set to* `False`*, the observed data (blue points) were not standardized to "force" the best-fit model to pass through* $\text{y}=0$.

- - -

### Bootstrap confidence intervals

We will calculate the AONR again, but this time we will compute the **bootstrap** confidence intervals in addition to the **profile-likelihood** and **Wald-type** confidence intervals. To tell `EONR` to compute the **bootstrap** confidence intervals, simply set `bootstrap_ci` to `True` in the `EONR.calculate_eonr()` function:

```
my_eonr.calculate_eonr(df_data, bootstrap_ci=True)
my_eonr.plot_tau()
my_eonr.fig_tau = my_eonr.plot_modify_size(fig=my_eonr.fig_tau.fig, plotsize_x=5, plotsize_y=4.0)
```

- - -

### Set fixed costs

Fixed costs (on a per area basis) can be considered by `EONR`. Simply set the fixed costs (using `EONR.update_econ()`) before calculating the EONR:

```
costs_fixed = 12.00  # set to $12 per hectare
my_eonr.update_econ(costs_fixed=costs_fixed)
```

- - -

### Loop through several economic conditions

`EONR` computes the optimum nitrogen rate for any economic scenario that we define. The `EONR` class is designed so the economic conditions can be adjusted, calculating the optimum nitrogen rate after each adjustment. We just have to set up a simple loop to update the economic scenario (using `EONR.update_econ()`) and calculate the EONR (using `EONR.calculate_eonr()`).
We will also generate plots and save them to our base directory right away:

```
price_grain = 0.157  # set from $1 to 15.7 cents per kg
cost_n_fert_list = [0.44, 1.32, 2.20]

for cost_n_fert in cost_n_fert_list:
    # first, update fertilizer cost
    my_eonr.update_econ(cost_n_fert=cost_n_fert,
                        price_grain=price_grain)
    # second, calculate EONR
    my_eonr.calculate_eonr(df_data)
    # third, generate (and save) the plots
    my_eonr.plot_eonr(x_min=-5, x_max=300, y_min=-100, y_max=2600)
    my_eonr.plot_save(fname='eonr_mn2012_pre.png', fig=my_eonr.fig_eonr)
```

A similar loop can be made adjusting the social cost of nitrogen. `my_eonr.base_zero` is set to `True` to compare the graphs:

```
price_grain = 0.157  # keep at 15.7 cents per kg
cost_n_fert = 0.88   # set to be constant

my_eonr.update_econ(price_grain=price_grain,
                    cost_n_fert=cost_n_fert)
my_eonr.base_zero = True  # let's use base zero to compare graphs

cost_n_social_list = [0.44, 1.32, 2.20, 4.40]

for cost_n_social in cost_n_social_list:
    # first, update social cost
    my_eonr.update_econ(cost_n_social=cost_n_social)
    # second, calculate EONR
    my_eonr.calculate_eonr(df_data)
    # third, generate (and save) the plots
    my_eonr.plot_eonr(x_min=-5, x_max=300, y_min=-400, y_max=1400)
    my_eonr.plot_save(fname='eonr_mn2012_pre.png', fig=my_eonr.fig_eonr)
```

Notice that we used the same `EONR` instance (`my_eonr`) for all runs. This is convenient if there are many combinations of economic scenarios (or many experimental datasets) that you'd like to loop through. If you'd like the results to be saved separately (perhaps to separate results depending on whether a social cost is considered), that's fine too. Simply create a new instance of `EONR` and customize how you'd like.
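Stepping back to the bootstrap confidence intervals computed earlier: a generic percentile bootstrap for an estimated optimum can be sketched as follows. This illustrates the general resampling technique with made-up data and a plain quadratic, not `EONR`'s internal implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up replicated N-rate trial: 4 replicates of 5 rates, quadratic response plus noise
n_rate = np.tile(np.array([0., 50., 100., 150., 200.]), 4)
grtn = 400 + 20 * n_rate - 0.05 * n_rate**2 + rng.normal(0, 50, n_rate.size)

def estimate_optimum(x, y):
    """Argmax of a fitted quadratic: N* = -c1 / (2 * c2)."""
    c2, c1, _ = np.polyfit(x, y, 2)
    return -c1 / (2 * c2)

# Percentile bootstrap: resample (rate, return) pairs with replacement,
# re-estimate the optimum each time, and take percentiles of the estimates
boot = [estimate_optimum(n_rate[idx], grtn[idx])
        for idx in (rng.integers(0, n_rate.size, n_rate.size) for _ in range(500))]
ci_low, ci_high = np.percentile(boot, [5, 95])  # 90% percentile CI

print(ci_low, estimate_optimum(n_rate, grtn), ci_high)
```

The spread of the resampled optima is what the interval summarizes; `EONR` reports the analogous bounds in the `ci_boot_l`/`ci_boot_u` columns.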
- - -

### View results

All eight runs can be viewed in the dataframe:

```
my_eonr.df_results
```

`EONR.df_results` contains the following data columns (some columns are hidden by Jupyter in the table above):

* **price_grain** -- price of grain
* **cost_n_fert** -- cost of nitrogen fertilizer
* **cost_n_social** -- other "social" costs of nitrogen
* **price_ratio** -- *cost:grain* price ratio
* **unit_price_grain** -- units describing the price of grain
* **unit_cost_n** -- units describing cost of nitrogen (both fertilizer and social costs)
* **location** -- location of dataset
* **year** -- year of dataset
* **time_n** -- nitrogen application timing of dataset
* **model** -- the model used to fit the experimental data (e.g., "quadratic" or "quad_plateau").
* **base_zero** -- if `base_zero = True`, this is the y-intercept ($\beta_0$) of the quadratic-plateau model before standardizing the data
* **eonr** -- optimum nitrogen rate (can be *agronomic*, *economic*, or *socially* optimum; in units of `EONR.unit_nrate`)
* **eonr_bias** -- bias in reparameterized quadratic-plateau model for computation of confidence intervals
* **R*** -- the coefficient representing the generalized cost function
* **costs_at_onr** -- total costs at the optimum nitrogen rate
* **ci_level** -- confidence interval (CI) level (for subsequent confidence bounds)
* **ci_wald_l** -- lower Wald CI
* **ci_wald_u** -- upper Wald CI
* **ci_pl_l** -- lower profile-likelihood CI
* **ci_pl_u** -- upper profile-likelihood CI
* **ci_boot_l** -- lower bootstrap CI
* **ci_boot_u** -- upper bootstrap CI
* **mrtn** -- maximum return to nitrogen (in units of `EONR.unit_currency`)
* **grtn_r2_adj** -- adjusted $\text{r}^2$ value of the gross return to nitrogen (GRTN) model
* **grtn_rmse** -- root mean squared error of the GRTN
* **grtn_max_y** -- maximum y value of the GRTN (in units of `EONR.unit_rtn`)
* **grtn_crit_x** -- critical x-value of the GRTN (point where the "quadratic" part of the quadratic-plateau
model terminates and the "plateau" commences)
* **grtn_y_int** -- y-intercept ($\beta_0$) of the GRTN model
* **scn_lin_r2** -- adjusted $\text{r}^2$ value of the linear best-fit for social cost of nitrogen
* **scn_lin_rmse** -- root mean squared error of the linear best-fit for social cost of nitrogen
* **scn_exp_r2** -- adjusted $\text{r}^2$ value of the exponential best-fit for social cost of nitrogen
* **scn_exp_rmse** -- root mean squared error of the exponential best-fit for social cost of nitrogen

- - -

### Save the data

The results generated by `EONR` can be saved to the `EONR.base_dir` using the Pandas `df.to_csv()` function. A folder will be created in the base_dir whose name is determined by the **current economic scenario** of `my_eonr` (in this case "social_336_4400", corresponding to `cost_n_social > 0`, `price_ratio == 33.6`, and `cost_n_social == 4.40` for "social", "336", and "4400" in the folder name, respectively):

```
print(my_eonr.base_dir)

my_eonr.df_results.to_csv(os.path.join(os.path.split(my_eonr.base_dir)[0], 'advanced_tutorial_results.csv'), index=False)
my_eonr.df_ci.to_csv(os.path.join(os.path.split(my_eonr.base_dir)[0], 'advanced_tutorial_ci.csv'), index=False)
```
``` import sys sys.path.append("/data/yosef2/users/chenling/harmonization/MAGAN/MAGAN/") sys.path.append("/data/yosef2/users/chenling/HarmonizationSCANVI") import numpy as np import tensorflow as tf import matplotlib from utils import now from model import MAGAN from loader import Loader import matplotlib.pyplot as plt import matplotlib.cm def correspondence_loss(b1, b2): """ The correspondence loss. :param b1: a tensor representing the object in the graph of the current minibatch from domain one :param b2: a tensor representing the object in the graph of the current minibatch from domain two :returns a scalar tensor of the correspondence loss """ domain1cols = [0] domain2cols = [0] loss = tf.constant(0.) for c1, c2 in zip(domain1cols, domain2cols): loss += tf.reduce_mean((b1[:, c1] - b2[:, c2])**2) return loss import os os.chdir("/data/yosef2/users/chenling/HarmonizationSCANVI") import pickle as pkl plotname = 'DentateGyrus' from scvi.dataset.MouseBrain import DentateGyrus10X, DentateGyrusC1 from scvi.dataset.dataset import GeneExpressionDataset dataset1= DentateGyrus10X() dataset1.subsample_genes(dataset1.nb_genes) dataset2 = DentateGyrusC1() dataset2.subsample_genes(dataset2.nb_genes) gene_dataset = GeneExpressionDataset.concat_datasets(dataset1,dataset2) from scvi.dataset.dataset import SubsetGenes dataset1, dataset2, gene_dataset = SubsetGenes(dataset1, dataset2, gene_dataset, plotname) import matplotlib matplotlib.rcParams['pdf.fonttype'] = 42 matplotlib.rcParams['ps.fonttype'] = 42 import matplotlib.pyplot as plt %matplotlib inline batch = gene_dataset.batch_indices.ravel() X = gene_dataset.X scaling_factor = gene_dataset.X.mean(axis=1) norm_X = gene_dataset.X / scaling_factor.reshape(len(scaling_factor), 1) index_0 = np.where(batch == 0)[0] index_1 = np.where(batch == 1)[0] X1 = np.log(1 + norm_X[index_0]) X2 = np.log(1 + norm_X[index_1]) norm_X.sum(axis=1) loadb1 = Loader(X1, shuffle=True) loadb2 = Loader(X2, shuffle=True) # Build the tf graph magan = 
MAGAN(dim_b1=X1.shape[1], dim_b2=X2.shape[1], correspondence_loss=correspondence_loss) # Train loss=[] for i in range(10000): xb1_ = loadb1.next_batch(100) xb2_ = loadb2.next_batch(100) magan.train(xb1_, xb2_) xb1_ = loadb1.next_batch(5454) xb2_ = loadb2.next_batch(2303) lstring = magan.get_loss(xb1_, xb2_) loss.append(lstring) loss_D = [float(x.split(' ')[0]) for x in loss] loss_G = [float(x.split(' ')[1]) for x in loss] plt.figure(figsize=(4,2)) plt.plot(loss_D[:1000]) plt.savefig('Magan.loss_D.pdf') plt.figure(figsize=(4,2)) plt.plot(loss_G) plt.savefig('Magan.loss_G.pdf') loadb1 = Loader(X1, shuffle=False) loadb2 = Loader(X2, shuffle=False) xb1_ = loadb1.next_batch(len(index_0)) xb2_ = loadb2.next_batch(len(index_1)) lstring = magan.get_loss(xb1_, xb2_) print("{} {}".format(magan.get_loss_names(), lstring)) xb1 = magan.get_layer(xb1_, xb2_, 'xb1') xb2 = magan.get_layer(xb1_, xb2_, 'xb2') Gb1 = magan.get_layer(xb1_, xb2_, 'Gb1') Gb2 = magan.get_layer(xb1_, xb2_, 'Gb2') arr1 = np.zeros_like(X, dtype=float) arr1[index_0] = xb1 arr1[index_1] = Gb1 arr2 = np.zeros_like(X, dtype=float) arr2[index_0] = Gb2 arr2[index_1] = xb2 print(xb1.shape,xb2.shape,Gb1.shape,Gb2.shape) np.save('../%s/%s.magan.npy'%(plotname,'imputed0'),arr1) np.save('../%s/%s.magan.npy'%(plotname,'imputed1'),arr2) from sklearn.decomposition import PCA latent = PCA(n_components=10).fit_transform(arr1) np.save('../%s/%s.magan.npy'%(plotname,'latent0'),latent) from sklearn.decomposition import PCA latent = PCA(n_components=10).fit_transform(arr2) np.save('../%s/%s.magan.npy'%(plotname,'latent1'),latent) from umap import UMAP latent = np.load('../%s/%s.magan.npy'%(plotname,'latent0')) latent_u = UMAP(spread=2).fit_transform(latent) labels = gene_dataset.labels.ravel() plt.figure(figsize=(10, 5)) plt.subplot(121) for i,x in enumerate([0,1]): idx = (batch==x) plt.scatter(latent_u[idx, 0], latent_u[idx, 1],label=x,edgecolors='none',s=5) plt.axis("off") # plt.legend() plt.tight_layout()
plt.subplot(122) for i,x in enumerate(np.unique(labels)): idx = (labels==x) plt.scatter(latent_u[idx, 0], latent_u[idx, 1],label=x,edgecolors='none',s=5) plt.axis("off") # plt.legend() plt.tight_layout() plt.savefig('Magan.latent0_u.pdf') latent = np.load('../%s/%s.magan.npy'%(plotname,'latent1')) latent_u = UMAP(spread=2).fit_transform(latent) labels = gene_dataset.labels.ravel() plt.figure(figsize=(10, 5)) plt.subplot(121) for i,x in enumerate([0,1]): idx = (batch==x) plt.scatter(latent_u[idx, 0], latent_u[idx, 1],label=x,edgecolors='none',s=5) plt.axis("off") plt.tight_layout() plt.subplot(122) for i,x in enumerate(np.unique(labels)): idx = (labels==x) plt.scatter(latent_u[idx, 0], latent_u[idx, 1],label=x,edgecolors='none',s=5) plt.axis("off") plt.tight_layout() plt.savefig('Magan.latent1_u.pdf') plt.figure(figsize=(10, 5)) plt.subplot(121) for i,x in enumerate([0,1]): idx = (batch==x) plt.scatter(latent_u[0, 0], latent_u[0, 1],label=x,edgecolors='none') plt.axis("off") plt.legend() plt.tight_layout() plt.subplot(122) for i,x in enumerate(np.unique(labels)): idx = (labels==x) plt.scatter(latent_u[0, 0], latent_u[0, 1], label=gene_dataset.cell_types[i],edgecolors='none') plt.axis("off") plt.legend() plt.tight_layout() plt.savefig('Magan.legend.pdf') from scvi.inference.posterior import entropy_batch_mixing batch_entropy = entropy_batch_mixing(latent, batch) batch_entropy ```
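The `correspondence_loss` defined at the top of this notebook pairs column 0 of each domain's minibatch and penalizes their mean squared difference. Stripped of TensorFlow, the same computation in plain NumPy looks like this (a standalone illustration with made-up minibatches):

```python
import numpy as np

def correspondence_loss_np(b1, b2, domain1cols=(0,), domain2cols=(0,)):
    # Mean squared difference between each paired column of the two minibatches
    loss = 0.0
    for c1, c2 in zip(domain1cols, domain2cols):
        loss += np.mean((b1[:, c1] - b2[:, c2]) ** 2)
    return loss

b1 = np.array([[1.0, 5.0], [3.0, 7.0]])  # made-up domain-1 minibatch
b2 = np.array([[2.0, 9.0], [1.0, 4.0]])  # made-up domain-2 minibatch
print(correspondence_loss_np(b1, b2))  # ((1-2)**2 + (3-1)**2) / 2 = 2.5
```

Only the listed column pairs contribute, which is what anchors the two domains together during MAGAN training.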
# Detection of peaks in data > Marcos Duarte > Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) > Federal University of ABC, Brazil One way to detect peaks (local maxima) or valleys (local minima) in data is to use the property that a peak (or valley) must be greater (or smaller) than its immediate neighbors. The function `detect_peaks.py` (code at the end of this notebook) detects peaks (or valleys) based on this feature and other characteristics. The function signature is: ```python ind = detect_peaks(x, mph=None, mpd=1, threshold=0, edge='rising', kpsh=False, valley=False, show=False, ax=None, title=True) ``` The parameters `mph`, `mpd`, and `threshold` follow the convention of the Matlab function `findpeaks.m`. Let's see how to use `detect_peaks.py`; first let's import the necessary Python libraries and configure the environment: <!-- TEASER_END --> ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline import sys sys.path.insert(1, r'./../functions') # add to pythonpath from detect_peaks import detect_peaks ``` Running the function examples: ``` >>> from detect_peaks import detect_peaks >>> x = np.random.randn(100) >>> x[60:81] = np.nan >>> # detect all peaks and plot data >>> ind = detect_peaks(x, show=True) >>> print(ind) >>> x = np.sin(2*np.pi*5*np.linspace(0, 1, 200)) + np.random.randn(200)/5 >>> # set minimum peak height = 0 and minimum peak distance = 20 >>> detect_peaks(x, mph=0, mpd=20, show=True) >>> x = [0, 1, 0, 2, 0, 3, 0, 2, 0, 1, 0] >>> # set minimum peak distance = 2 >>> detect_peaks(x, mpd=2, show=True) >>> x = np.sin(2*np.pi*5*np.linspace(0, 1, 200)) + np.random.randn(200)/5 >>> # detection of valleys instead of peaks >>> detect_peaks(x, mph=-1.2, mpd=20, valley=True, show=True) >>> x = [0, 1, 1, 0, 1, 1, 0] >>> # detect both edges >>> detect_peaks(x, edge='both', show=True) >>> x = [-2, 1, -2, 2, 1, 1, 3, 0] >>> # set threshold = 2 >>> detect_peaks(x, threshold = 2, show=True) >>> x = 
[-2, 1, -2, 2, 1, 1, 3, 0] >>> fig, axs = plt.subplots(ncols=2, nrows=1, figsize=(10, 4)) >>> detect_peaks(x, show=True, ax=axs[0], threshold=0.5, title=False) >>> detect_peaks(x, show=True, ax=axs[1], threshold=1.5, title=False) ``` ## Function performance The function `detect_peaks.py` is relatively fast, but the minimum peak distance parameter (mpd) slows down the function if the data has several peaks (>1000). If your data has many peaks, try reducing their number by tuning the other parameters, or smooth the data before calling this function. Here is a simple test of its performance: ``` x = np.random.randn(10000) ind = detect_peaks(x) print('Data with %d points and %d peaks\n' %(x.size, ind.size)) print('Performance (without the minimum peak distance parameter):') print('detect_peaks(x)') %timeit detect_peaks(x) print('\nPerformance (using the minimum peak distance parameter):') print('detect_peaks(x, mpd=10)') %timeit detect_peaks(x, mpd=10) ``` ## Function `detect_peaks.py` ``` # %load ./../functions/detect_peaks.py """Detect peaks in data based on their amplitude and other features.""" from __future__ import division, print_function import numpy as np __author__ = "Marcos Duarte, https://github.com/demotu/BMC" __version__ = "1.0.6" __license__ = "MIT" def detect_peaks(x, mph=None, mpd=1, threshold=0, edge='rising', kpsh=False, valley=False, show=False, ax=None, title=True): """Detect peaks in data based on their amplitude and other features. Parameters ---------- x : 1D array_like data. mph : {None, number}, optional (default = None) detect peaks that are greater than minimum peak height (if parameter `valley` is False) or peaks that are smaller than maximum peak height (if parameter `valley` is True). mpd : positive integer, optional (default = 1) detect peaks that are at least separated by minimum peak distance (in number of data).
threshold : positive number, optional (default = 0) detect peaks (valleys) that are greater (smaller) than `threshold` in relation to their immediate neighbors. edge : {None, 'rising', 'falling', 'both'}, optional (default = 'rising') for a flat peak, keep only the rising edge ('rising'), only the falling edge ('falling'), both edges ('both'), or don't detect a flat peak (None). kpsh : bool, optional (default = False) keep peaks with same height even if they are closer than `mpd`. valley : bool, optional (default = False) if True (1), detect valleys (local minima) instead of peaks. show : bool, optional (default = False) if True (1), plot data in matplotlib figure. ax : a matplotlib.axes.Axes instance, optional (default = None). title : bool or string, optional (default = True) if True, show standard title. If False or empty string, doesn't show any title. If string, shows string as title. Returns ------- ind : 1D array_like indices of the peaks in `x`. Notes ----- The detection of valleys instead of peaks is performed internally by simply negating the data: `ind_valleys = detect_peaks(-x)` The function can handle NaN's. See this IPython Notebook [1]_. References ---------- ..
[1] http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/DetectPeaks.ipynb Examples -------- >>> from detect_peaks import detect_peaks >>> x = np.random.randn(100) >>> x[60:81] = np.nan >>> # detect all peaks and plot data >>> ind = detect_peaks(x, show=True) >>> print(ind) >>> x = np.sin(2*np.pi*5*np.linspace(0, 1, 200)) + np.random.randn(200)/5 >>> # set minimum peak height = 0 and minimum peak distance = 20 >>> detect_peaks(x, mph=0, mpd=20, show=True) >>> x = [0, 1, 0, 2, 0, 3, 0, 2, 0, 1, 0] >>> # set minimum peak distance = 2 >>> detect_peaks(x, mpd=2, show=True) >>> x = np.sin(2*np.pi*5*np.linspace(0, 1, 200)) + np.random.randn(200)/5 >>> # detection of valleys instead of peaks >>> detect_peaks(x, mph=-1.2, mpd=20, valley=True, show=True) >>> x = [0, 1, 1, 0, 1, 1, 0] >>> # detect both edges >>> detect_peaks(x, edge='both', show=True) >>> x = [-2, 1, -2, 2, 1, 1, 3, 0] >>> # set threshold = 2 >>> detect_peaks(x, threshold = 2, show=True) >>> x = [-2, 1, -2, 2, 1, 1, 3, 0] >>> fig, axs = plt.subplots(ncols=2, nrows=1, figsize=(10, 4)) >>> detect_peaks(x, show=True, ax=axs[0], threshold=0.5, title=False) >>> detect_peaks(x, show=True, ax=axs[1], threshold=1.5, title=False) Version history --------------- '1.0.6': Fix issue of when specifying ax object only the first plot was shown Add parameter to choose if a title is shown and input a title '1.0.5': The sign of `mph` is inverted if parameter `valley` is True """ x = np.atleast_1d(x).astype('float64') if x.size < 3: return np.array([], dtype=int) if valley: x = -x if mph is not None: mph = -mph # find indices of all peaks dx = x[1:] - x[:-1] # handle NaN's indnan = np.where(np.isnan(x))[0] if indnan.size: x[indnan] = np.inf dx[np.where(np.isnan(dx))[0]] = np.inf ine, ire, ife = np.array([[], [], []], dtype=int) if not edge: ine = np.where((np.hstack((dx, 0)) < 0) & (np.hstack((0, dx)) > 0))[0] else: if edge.lower() in ['rising', 'both']: ire = np.where((np.hstack((dx, 0)) <= 0) & (np.hstack((0, 
dx)) > 0))[0] if edge.lower() in ['falling', 'both']: ife = np.where((np.hstack((dx, 0)) < 0) & (np.hstack((0, dx)) >= 0))[0] ind = np.unique(np.hstack((ine, ire, ife))) # handle NaN's if ind.size and indnan.size: # NaN's and values close to NaN's cannot be peaks ind = ind[np.in1d(ind, np.unique(np.hstack((indnan, indnan-1, indnan+1))), invert=True)] # first and last values of x cannot be peaks if ind.size and ind[0] == 0: ind = ind[1:] if ind.size and ind[-1] == x.size-1: ind = ind[:-1] # remove peaks < minimum peak height if ind.size and mph is not None: ind = ind[x[ind] >= mph] # remove peaks - neighbors < threshold if ind.size and threshold > 0: dx = np.min(np.vstack([x[ind]-x[ind-1], x[ind]-x[ind+1]]), axis=0) ind = np.delete(ind, np.where(dx < threshold)[0]) # detect small peaks closer than minimum peak distance if ind.size and mpd > 1: ind = ind[np.argsort(x[ind])][::-1] # sort ind by peak height idel = np.zeros(ind.size, dtype=bool) for i in range(ind.size): if not idel[i]: # keep peaks with the same height if kpsh is True idel = idel | (ind >= ind[i] - mpd) & (ind <= ind[i] + mpd) \ & (x[ind[i]] > x[ind] if kpsh else True) idel[i] = 0 # Keep current peak # remove the small peaks and sort back the indices by their occurrence ind = np.sort(ind[~idel]) if show: if indnan.size: x[indnan] = np.nan if valley: x = -x if mph is not None: mph = -mph _plot(x, mph, mpd, threshold, edge, valley, ax, ind, title) return ind def _plot(x, mph, mpd, threshold, edge, valley, ax, ind, title): """Plot results of the detect_peaks function, see its help.""" try: import matplotlib.pyplot as plt except ImportError: print('matplotlib is not available.') else: if ax is None: _, ax = plt.subplots(1, 1, figsize=(8, 4)) no_ax = True else: no_ax = False ax.plot(x, 'b', lw=1) if ind.size: label = 'valley' if valley else 'peak' label = label + 's' if ind.size > 1 else label ax.plot(ind, x[ind], '+', mfc=None, mec='r', mew=2, ms=8, label='%d %s' % (ind.size, label)) ax.legend(loc='best', 
framealpha=.5, numpoints=1) ax.set_xlim(-.02*x.size, x.size*1.02-1) ymin, ymax = x[np.isfinite(x)].min(), x[np.isfinite(x)].max() yrange = ymax - ymin if ymax > ymin else 1 ax.set_ylim(ymin - 0.1*yrange, ymax + 0.1*yrange) ax.set_xlabel('Data #', fontsize=14) ax.set_ylabel('Amplitude', fontsize=14) if title: if not isinstance(title, str): mode = 'Valley detection' if valley else 'Peak detection' title = "%s (mph=%s, mpd=%d, threshold=%s, edge='%s')"% \ (mode, str(mph), mpd, str(threshold), edge) ax.set_title(title) # plt.grid() if no_ax: plt.show() ```
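The core idea stated at the top, that a peak is a sample greater than both of its immediate neighbours, reduces to a sign change in the first difference. A minimal sketch of just that test (without the NaN handling, edge options, and `mph`/`mpd`/`threshold` filtering of the full function; strict comparisons mean flat peaks are skipped here):

```python
import numpy as np

def naive_peaks(x):
    """Indices where the signal rises into a sample and falls after it."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    # rising before the sample (previous diff > 0) and falling after it (next diff < 0)
    return np.where((np.hstack((dx, 0)) < 0) & (np.hstack((0, dx)) > 0))[0]

x = [0, 1, 0, 2, 0, 3, 0, 2, 0, 1, 0]
print(naive_peaks(x))  # [1 3 5 7 9]
```

This mirrors the `edge=None` branch inside `detect_peaks`; everything else in the full function is about making this basic test robust.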
# CNN implementation on MNIST dataset in Keras - Detailed explanation
<hr>
First version - 25/07/2018
Last version - 24/11/2018
<hr>

1. **Introduction**
2. **Data pre-processing**
    2.1. Load data
    2.2. Check shape, data type
    2.3. Extract xtrain, ytrain
    2.4. Mean and std of classes
    2.5. Check nulls and missing values
    2.6. Visualization
    2.7. Normalization
    2.8. Reshape
    2.9. One hot encoding of label
    2.10. Split training and validation sets
3. **CNN**
    3.1. Define model architecture
    3.2. Compile the model
    3.3. Set other parameters
    3.4. Fit model
    3.5. Plot loss and accuracy
    3.6. Plot confusion matrix
    3.7. Plot errors
4. **Predict and save to csv**

# 1. **Introduction**

This is my first CNN kernel and as such, I believe the [Digit Recognizer dataset/competition](https://www.kaggle.com/c/digit-recognizer) is a very suitable set of images for a beginner CNN project, considering the image size is homogeneous across all images (not common in real-world problems), the size is small (28x28) so no resizing is required, the images are in grayscale, and they are already in a csv, which can be easily read into a dataframe.

<img src="https://www.codeproject.com/KB/AI/1233183/dMRUT6k.png" ></img>

Given the comfort that this dataset provides and taking inspiration from very popular kernels such as [yassineghouzam's kernel](https://www.kaggle.com/yassineghouzam/introduction-to-cnn-keras-0-997-top-6) and [poonaml's kernel](https://www.kaggle.com/poonaml/deep-neural-network-keras-way) among others, I've created my own kernel joining what I have found most useful from each kernel, as well as adding what I have learnt in the process and other notes that may be helpful for others or for future me.

The kernel consists of 3 main parts:

* *Data preparation*: Firstly, even if the input data is already quite clean as mentioned before, it still needs some preparation and pre-processing in order to be in an appropriate format to then later be fed to the NN.
This includes data separation, reshaping and visualization, which might give insight to the data scientist as to the nature of the images.
* *CNN*: Afterwards, the NN is defined (this is where Keras comes in), the convolutional steps added, NN parameters initialized, and the model trained. This part takes the most time in a ML project.
* *Evaluation*: Once the model is trained, it's interesting to evaluate the model performance by examining the progress of the loss and extracting some conclusions, such as whether the model is overfitting or there is high variance, for instance.

If you find some errors in theoretical concepts, comments of any kind or suggestions, please do let me know :)

```
# import libraries
import numpy as np # linear algebra, matrix multiplications
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
```

# 2. **Data pre-processing**

## 2.1 Load data

* *train*: this is the data used to train the CNN. The image data and their corresponding classes are provided. The CNN learns the weights that create the mapping from the image data to their corresponding class.
* *test*: this is the data used to test the CNN. Only the image data is provided. The prediction is submitted to the competition and, depending on the accuracy, a score is obtained.

```
train = pd.read_csv("../input/train.csv")
test = pd.read_csv("../input/test.csv")
```

## 2.2. Check shape, data type

* *train*: the train dataframe contains data from 42k images. The data from each image is stretched out in 1D with 28*28 = 784 pixels. The first column is the label/class it belongs to, the digit it represents.
* *test*: the test dataframe contains data from 28k images. This data shall be fed to the CNN as new data that it has never seen before. Same as in the train dataset, image data is stretched out in 1D with 784 pixels. There is no label information; that is the goal of the competition, predicting labels as well as possible.
```
print(train.shape)
ntrain = train.shape[0]
print(test.shape)
ntest = test.shape[0]

train.head(10)

# check data type
print(train.dtypes[:5]) # all int64, otherwise do train = train.astype('int64')
print(test.dtypes[:5]) # all int64, otherwise do test = test.astype('int64')
```

## 2.3. Extract xtrain, ytrain

The CNN will be fed xtrain and it will learn the weights to map xtrain to ytrain

```
# array containing labels of each image
ytrain = train["label"]
print("Shape of ytrain: ", ytrain.shape)

# dataframe containing all pixels (the label column is dropped)
xtrain = train.drop("label", axis=1)

# the images are in square form, so dim*dim = 784
from math import sqrt
dim = int(sqrt(xtrain.shape[1]))
print("The images are {}x{} squares.".format(dim, dim))
print("Shape of xtrain: ", xtrain.shape)

ytrain.head(5)
```

## 2.4. Mean and std of the classes

```
import seaborn as sns
sns.set(style='white', context='notebook', palette='deep')

# plot how many images there are in each class
sns.countplot(ytrain)
print(ytrain.shape)
print(type(ytrain))

# array with each class and its number of images
vals_class = ytrain.value_counts()
print(vals_class)

# mean and std
cls_mean = np.mean(vals_class)
cls_std = np.std(vals_class,ddof=1)
print("The mean amount of elements per class is", cls_mean)
print("The standard deviation in the element per class distribution is", cls_std)

# 68-95-99.7 rule: about 68% of the data should lie within one cls_std of the mean, and so on
# https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule
if cls_std > cls_mean * (0.6827 / 2):
    print("The standard deviation is high")

# if the data is skewed then we won't be able to use accuracy, as its results will be misleading, and we may use the F-beta score instead.
```

> Summary
>
> Shape of *xtrain* is: (42000, 784)
> Shape of *ytrain* is: (42000, )
> Shape of *test* is: (28000, 784)
>
> number of classes = 10; the distribution of the pictures per class has a mean of 4200 images and a std of 237 images.
The digit 1 has the most representation (4684 images) and the digit 5 the least (3795 images). This data can be seen by printing *vals_class*.

This corresponds to a small standard deviation (5.64%), so there is no class imbalance. If there were, other techniques would have to be considered, but that is outside the scope of this notebook.

## 2.6. Check nulls and missing values

```python
df.isnull()
```
returns a boolean df with True where a value is NaN and False otherwise.

```python
df.isnull().any()
```
returns a Series with one entry per column, saying whether that column contains any NaN value.

```python
df.isnull().any().any()
```
returns a single bool, True if any of the `df.isnull().any()` entries is True.

```
def check_nan(df):
    print(df.isnull().any().describe())
    print("There are missing values" if df.isnull().any().any() else "There are no missing values")
    if df.isnull().any().any():
        print(df.isnull().sum(axis=0))
    print()

check_nan(xtrain)
check_nan(test)
```

## 2.7. Visualization

The first nine images in the dataset (which are not ordered by digit) are plotted, just for visualization. There is only one color channel (grayscale), and the pixel values are close to binary: most are either black (value 0) or white (255), with few intermediate gray values. This makes the classification problem easier. Imagine that the CNN received colored digits: either solid, gradient, or digits with many colors. Probably some part of the neural network would focus on learning to tell the digits apart by looking at the colors, when the actual difference between the digits is in their shape.
```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline

# convert train dataset to (num_images, img_rows, img_cols) format in order to plot it
xtrain_vis = xtrain.values.reshape(ntrain, dim, dim)

# https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplot.html
# subplot(2,3,3) = subplot(233)
# a grid of 3x3 is created, then plots are inserted in some of these slots
for i in range(0, 9): # how many imgs will show from the 3x3 grid
    plt.subplot(330 + (i+1)) # open next subplot
    plt.imshow(xtrain_vis[i], cmap=plt.get_cmap('gray'))
    plt.title(ytrain[i]);
```

## 2.8. Normalization

Pixels are represented in the range [0-255], but the NN converges faster with smaller values in the range [0-1], so they are normalized to this range.

```
# Normalize the data
xtrain = xtrain / 255.0
test = test / 255.0
```

## 2.9. Reshape

```
# reshape of image data to (nimg, img_rows, img_cols, 1)
def df_reshape(df):
    print("Previous shape, pixels are in 1D vector:", df.shape)
    df = df.values.reshape(-1, dim, dim, 1)
    # -1 means the dimension is inferred: 42000 for xtrain, 28000 for test
    print("After reshape, pixels are a 28x28x1 3D matrix:", df.shape)
    return df

xtrain = df_reshape(xtrain) # numpy.ndarray type
test = df_reshape(test) # numpy.ndarray type
```

> Note

In real-world problems, the dimensions of images could diverge from this particular 28x28x1 set in two ways:

* Images are usually much bigger

In this case all images are 28x28x1, but in another problem I'm working on I have images of 3120x4160x3, so much bigger and in RGB. Usually images are resized to much smaller dimensions; in my case I'm resizing them to 64x64x3, but they can be made much smaller depending on the problem. In this MNIST dataset there is no such problem since the dimensions are already small.
* Images don't usually have the same dimensions

Images of different dimensions are a problem since the dense layers at the end of the CNN have a fixed number of neurons, which cannot be changed dynamically. This means the network expects fixed image dimensions, so all images must be resized to the same dimensions before training. There is another option, namely using an FCN (fully convolutional network), which consists solely of convolutional layers and a very big pooling at the end, so each image can be of any size; but this architecture isn't as popular as the CNN + FC (fully connected) layers, which is the one I'm familiar with.

There are various methods to make images have the same dimensions:

* resize to a fixed dimension
* add padding to some images and resize
* ...

In my other problem I have scanned pictures, so I trim the whitespace and resize afterwards. This being a beginner-friendly dataset, all digits are the same size, binarized and well centered, so there is no need to worry about resizing.

## 2.10. One hot encoding of label

At this point in the notebook the labels vary in the range [0-9], which is intuitive, but in order to use the loss defined for the NN later, which in this case is categorical_crossentropy (the reason is explained in section 3), the targets should be in categorical format (one-hot vectors):

ex : 2 -> [0,0,1,0,0,0,0,0,0,0]

ytrain before

    0    1
    1    0
    2    1
    3    4
    4    0

where the first column is the index,

ytrain after

    [[0. 1. 0. ... 0. 0. 0.]
     [1. 0. 0. ... 0. 0. 0.]
     [0. 1. 0. ... 0. 0. 0.]
     ...
     [0. 0. 0. ... 1. 0. 0.]
     [0. 0. 0. ... 0. 0. 0.]
     [0. 0. 0. ... 0. 0. 1.]]

```
from keras.utils.np_utils import to_categorical

print(type(ytrain))

# number of classes, in this case 10
nclasses = ytrain.max() - ytrain.min() + 1

print("Shape of ytrain before: ", ytrain.shape) # (42000,)

ytrain = to_categorical(ytrain, num_classes = nclasses)

print("Shape of ytrain after: ", ytrain.shape) # (42000, 10), also numpy.ndarray type
print(type(ytrain))
```

## 2.11.
Split training and validation sets

The available data is 42k images. If the NN is trained with all 42k images, it might overfit and respond poorly to new data. Overfitting means that the NN doesn't generalize for the digits; it just learns the particularities of those 42k images. When faced with new, slightly different digits, the performance decreases considerably. This is not a good outcome, since the goal of the NN is to learn from the training set digits so that it does well on **new digits**.

In order to avoid submitting the predictions and risking a bad performance, and to determine whether the NN overfits, a small percentage of the train data is separated and named validation data. The ratio of the split can vary from 10% in small datasets to 1% in cases with 1M images. The NN is then trained with the remainder of the training data, and in each epoch the NN is tested against the validation data so we can see its performance. That way we can watch how the loss and accuracy metrics vary during training, and in the end determine whether there is overfitting and take action (more on this later).

For example, the results I had after the 20th epoch with a certain CNN architecture which turned out to overfit:

> loss: 0.0066 - acc: 0.9980 - val_loss: 0.0291 - val_acc: 0.9940

In this example, and without getting much into detail, the __training loss__ is very low while the __val_loss__ is 4 times higher, and the __training accuracy__ is a little higher than the __val_acc__. The accuracy difference is not that big, partly because we are talking about 0.998 vs 0.994, which is exceptionally high, but the difference in loss suggests an overfitting problem.

Coming back to the general idea, the **val_acc** is the important metric. The NN might do very well with trained data, but the goal is that the NN learns to generalize instead of learning the training data "by heart".
If the NN does well with val data, it's probable that it generalizes well to a certain extent and will do well with the test data (more on this in section 3 regarding CNNs).

__random_state__ in train_test_split fixes the seed of the pseudo-random split: if the images were ordered by class, shuffling guarantees a pseudo-random split, and the seed means that every time this pseudo-randomization is applied, the split is the same, so results are reproducible.

__stratify__ in train_test_split preserves the class proportions in the split, so that no label is overrepresented in the val set.

> Note: stratify only works with sklearn version > 0.17

```
from sklearn.model_selection import train_test_split

# fix random seed for reproducibility
seed = 2
np.random.seed(seed)

# percentage of xtrain which will be xval
split_pct = 0.1

# Split the train and the validation set
xtrain, xval, ytrain, yval = train_test_split(xtrain,
                                              ytrain,
                                              test_size=split_pct,
                                              random_state=seed,
                                              shuffle=True,
                                              stratify=ytrain
                                              )

print(xtrain.shape, ytrain.shape, xval.shape, yval.shape)
```

> Summary

The available data is now divided as follows:

* **Train data**: images (xtrain) and labels (ytrain), 90% of the available data
* **Validation data**: images (xval) and labels (yval), 10% of the available data

# 3. **CNN**

In this section the CNN is defined, including architecture, optimizers, metrics, learning rate reductions, data augmentation... Then it is compiled and fit to the training set.

```
from keras import backend as K

# for the architecture
from keras.models import Sequential
from keras.layers import Dense, Dropout, Lambda, Flatten, BatchNormalization
from keras.layers import Conv2D, MaxPool2D, AvgPool2D

# optimizer, data generator and learning rate reductor
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau
```

## 3.1.
Define model architecture

Below is an example CNN architecture:

![CNN architecture](https://cdn-images-1.medium.com/max/1000/1*xlOZHo8svfDWDyxFFnMurQ.png)

My final CNN architecture is:

> In &rarr; [ [Conv2D &rarr; relu]\*2 &rarr; MaxPool2D &rarr; Dropout ]\*2 &rarr; Flatten &rarr; Dense &rarr; Dropout &rarr; Out

I'd like to encourage everyone who wants to learn about CNNs to begin with a simpler one, such as

> In &rarr; [Conv2D &rarr; relu] &rarr; MaxPool2D &rarr; Flatten &rarr; Dense &rarr; Out

then check the performance and keep adding layers or tweaking the parameters until you reach an architecture (which may or may not be like mine) with a __val_acc__ of around 0.996. Trying to improve beyond that takes much more time and is really about the details, but of course feel free to try it. I just encourage you to build your own model and do your own tests, instead of taking an already well-performing model and using that.

In my case I started with the simple architecture and kept a log where I wrote down the loss and accuracy results. I changed one thing at a time, checked the performance and how it changed with respect to the previous version, wrote down the changes I had made and how the result changed, and made further changes based on that.

More info on CNN architectures here: [How to choose CNN Architecture MNIST](https://www.kaggle.com/cdeotte/how-to-choose-cnn-architecture-mnist)

### Architecture layers

You can read about the theory of CNNs on the Internet from people more knowledgeable than me, who surely explain it much better. So I will skip the theory for the Conv2D, MaxPool2D, Flatten and Dense layers and focus on smaller details.

* Conv2D
    * __filters__: usually the first convolutional layers have fewer filters, and there are more filters deeper down the CNN. A power of 2 is usually set; in this case 16 offered poorer performance, and I didn't want to make a big CNN with 64 or 128 filters for digit classification.
    * __kernel_size__: this is the filter size; usually (3,3) or (5,5) is set. I advise setting one, building the architecture, and then changing it to see whether it affects the performance, though it usually doesn't.
    * __padding__: two options
        * valid padding: no padding, the image shrinks after convolution: n - f + 1
        * same padding: with f = 3, p = (f-1)/2 = 1 pixel on each side, so the image doesn't shrink after convolution: (n+2) - 3 + 1 = n
    * __activation__: ReLU is represented mathematically by max(0,X) and offers good performance in CNNs (source: the Internet)

* MaxPool2D: the goal is to reduce variance/overfitting and reduce computational complexity, since it makes the image smaller. Two pooling options:
    * MaxPool2D: extracts the most important features, like edges
    * AvgPool2D: extracts smooth features

  My personal conclusion is that for binarized images, with noticeable edge differences, MaxPool performs better.

* Dropout: you can read the theory on the Internet; it's a useful tool to reduce overfitting. The net becomes less sensitive to the specific weights of neurons, so it is more capable of generalization and less likely to overfit to the train data.
The optimal dropout value in Conv layers is 0.2, and if you want to implement it in the dense layers, its optimal value is 0.5: [Dropout in ML](https://medium.com/@amarbudhiraja/https-medium-com-amarbudhiraja-learning-less-to-learn-better-dropout-in-deep-machine-learning-74334da4bfc5)

```
model = Sequential()

dim = 28
nclasses = 10

model.add(Conv2D(filters=32, kernel_size=(5,5), padding='same', activation='relu', input_shape=(dim,dim,1)))
model.add(Conv2D(filters=32, kernel_size=(5,5), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))
model.add(Dropout(0.2))

model.add(Conv2D(filters=64, kernel_size=(5,5), padding='same', activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(5,5), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))
model.add(Dropout(0.2))

model.add(Flatten())
model.add(Dense(120, activation='relu'))
model.add(Dense(84, activation='relu'))
model.add(Dense(nclasses, activation='softmax'))

model.summary()
```

The model summary displays each layer with the shape of its output as well as the number of parameters it needs. The first dense layer is the one with the most parameters, since it maps the 3136 outputs of the Flatten layer to the 120 neurons of the Dense layer. Since the layer is fully connected, the number of parameters is 120 * 3136 + 120. The number of trainable parameters is roughly half a million, which is not that much considering the architecture has medium size and the input dimensions (28,28,1) are small.

## 3.2. Compile the model

* **Optimizer**: it represents the gradient descent algorithm, whose goal is to minimize the cost function to approach the minimum point. The **Adam** optimizer is one of the best-performing algorithms: [Adam: A Method for Stochastic Optimization](https://arxiv.org/abs/1412.6980v8). The default learning rate for the Adam optimizer is 0.001. Other optimizer choices are RMSprop or SGD.
* **Loss function**: it is a measure of the overall loss in the network after assigning values to the parameters during the forward phase, so it indicates how well the parameters were chosen. The categorical_crossentropy loss requires the labels to be encoded as one-hot vectors, which is why that step was taken back in section 2.10.
* **Metrics**: this refers to the metric used to judge the network's performance, the most common one being 'accuracy', but there are other metrics, such as precision, recall or F1 score. The choice depends on the problem itself: high recall means a low number of false negatives, high precision means a low number of false positives, and the F1 score is a trade-off between them: [Precision-Recall](http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html).

Depending on the problem, accuracy may not be the best metric. Suppose a binary classification problem where there are many more 0 values than 1, so it's crucial that the predicted 1's are mostly correct. A network that just outputs 0 every time would get very high accuracy, but the model still wouldn't perform well. Take the popular example:

> A ML company has built a tool to identify terrorists among the population and they claim to have 99.99% accuracy. When inspecting their product, it turns out they just output 0 in every case.

Since there is only one terrorist for every 10000 people (this is made up; I actually have no idea what the real ratio is, all I know is that it's very low), the company gets a very high accuracy, but there's no need for a ML tool for that. With the class imbalance being so high, accuracy is not a good metric anymore and other options should be considered.

```
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

## 3.3.
Set other parameters

### Learning rate annealer

This is a useful tool which reduces the learning rate when a monitored value plateaus. In this case the monitored value is __val_acc__. When there is no improvement in __val_acc__ for 3 epochs (patience), the learning rate is multiplied by 0.5 (factor). It stops decreasing once the learning rate reaches min_lr.

```
lr_reduction = ReduceLROnPlateau(monitor='val_acc',
                                 patience=3,
                                 verbose=1,
                                 factor=0.5,
                                 min_lr=0.00001)
```

### Data augmentation

Data augmentation is a technique used to artificially make the training set bigger. There are a number of options for this; the most common ones include rotating images, zooming in a small range, and shifting images horizontally and vertically.

Beware that activating some features may confuse the network: when taking img1 and flipping it, the result may be very similar to img2, which has a different label. With the digits 6 and 9, for example, if you take either and flip it vertically and horizontally, it becomes the other. So if you flip a 9 along both axes and tell the network that the digit is still a 9, when it actually looks very similar to the images of the digit 6, the performance will drop considerably. So take into account the images and how activating these features may affect the labeling.
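To make the flip warning concrete: flipping an image both vertically and horizontally is just reversing the pixel array along both axes (a 180-degree rotation of the grid), which is exactly how a 6's shape can end up looking like a 9's. A minimal numpy sketch on a toy 3x3 "image" (hypothetical values, purely for illustration):

```python
import numpy as np

# toy 3x3 "image"; 1s mark the stroke
img = np.array([[1, 0, 0],
                [1, 1, 0],
                [0, 0, 1]])

# vertical + horizontal flip = reversing both axes = rotating the grid 180 degrees
flipped = np.flipud(np.fliplr(img))

print(flipped)
```

This is why `horizontal_flip` and `vertical_flip` are kept `False` in the generator: the augmented image would keep its original label while resembling a different digit.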
```
datagen = ImageDataGenerator(
        featurewise_center=False,            # set input mean to 0 over the dataset
        samplewise_center=False,             # set each sample mean to 0
        featurewise_std_normalization=False, # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,                 # apply ZCA whitening
        rotation_range=30,                   # randomly rotate images in the range (degrees, 0 to 180)
        zoom_range=0.1,                      # randomly zoom image
        width_shift_range=0.1,               # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,              # randomly shift images vertically (fraction of total height)
        horizontal_flip=False,               # randomly flip images
        vertical_flip=False)                 # randomly flip images

datagen.fit(xtrain)
```

### Epochs and batch_size

* **Epochs**: based on my experiments, the loss and accuracy reach a plateau at around the 10th epoch, so I usually set it to 15.
* **Batch_size**: I skip the theory, which you can read on the Internet. I recommend trying different values and observing the change in the loss and accuracy; in my case a batch_size of 16 turned out to be disastrous, and the best case occurred when I set it to 64.

```
epochs = 15
batch_size = 64
```

## 3.4 Fit the model

Since there is data augmentation, the fitting function changes from fit (when there is no data augmentation) to fit_generator, and the first input argument is slightly different. Otherwise you can specify the verbosity, number of epochs, validation data if any, any callbacks you want to include...

This is one of the most time-consuming cells in the notebook, and its running time depends on the number of epochs specified, the number of trainable parameters in the network, and the input dimensions. Changing the batch_size also contributes to changes in time: the bigger the batch_size, the faster the epoch. On a GPU such as the one Kaggle offers, the training is done in 16s per epoch, adding up to a total of 400s.
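The `steps_per_epoch` argument used when fitting is just the number of batches the generator yields per epoch. With the 90/10 split from section 2.11 and the batch size above, a quick sanity check (assuming the 42000-image figure from earlier):

```python
# 42000 images total, 10% held out for validation (section 2.11)
n_total = 42000
n_train = int(n_total * 0.9)       # 37800 training images
batch_size = 64

# number of batches drawn from datagen.flow in one epoch
steps_per_epoch = n_train // batch_size
print(steps_per_epoch)
```

The leftover images that don't fill a final batch are simply not seen in that epoch; since the generator produces augmented data indefinitely, this is harmless.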
> Note: remember to create and compile the model all over again whenever you change something, such as batch_size or epochs or anything related to the CNN. If you don't, and just run the fit cell, it will continue training the old network.

```
history = model.fit_generator(datagen.flow(xtrain, ytrain, batch_size=batch_size),
                              epochs=epochs,
                              validation_data=(xval, yval),
                              verbose=1,
                              steps_per_epoch=xtrain.shape[0] // batch_size,
                              callbacks=[lr_reduction])
```

## 3.5. Plot loss and accuracy

After training the model, it's useful to plot the loss and accuracy for training and validation to see the progress and detect problems. In this particular case with this particular network, the training loss decreases, which means the network is learning, and there is no substantial difference between the training loss and validation loss, which indicates no overfitting.

At these levels, where the loss is so low and the accuracy so high, there really is no bias or variance problem; but if you wanted to improve the results further, you could tackle a bias problem, in other words, a training loss that is too high. The recommended solutions are making a bigger network and training for a longer time. Feel free to tweak the network or the epochs.

```
fig, ax = plt.subplots(2,1)

ax[0].plot(history.history['loss'], color='b', label="Training loss")
ax[0].plot(history.history['val_loss'], color='r', label="Validation loss")
ax[0].grid(color='black', linestyle='-', linewidth=0.25)
legend = ax[0].legend(loc='best', shadow=True)

ax[1].plot(history.history['acc'], color='b', label="Training accuracy")
ax[1].plot(history.history['val_acc'], color='r', label="Validation accuracy")
ax[1].grid(color='black', linestyle='-', linewidth=0.25)
legend = ax[1].legend(loc='best', shadow=True)
```

## 3.6. Plot confusion matrix

I imported this from [yassineghouzam's kernel](https://www.kaggle.com/yassineghouzam/introduction-to-cnn-keras-0-997-top-6).
The confusion matrix is a nclasses x nclasses matrix (nclasses is the number of classes/labels in your classification problem). The vertical axis shows the actual or true labels, while the horizontal axis shows the predicted labels, that is, the labels that the network has predicted.

In an ideal case, the matrix would be zero everywhere except on the diagonal. This would happen if the network predicted the correct label every time, but it rarely happens. A more common situation is that most values lie on the diagonal (many occurrences of correctly labeled images), while there are some scattered wrongly labeled images.

In this case the wrong values seem to be randomly distributed, which gives no information about how to proceed. If there were a bigger number of visible errors, such as the digit 4 being mistaken for the digit 9 several times, it would be intuitive to understand what is happening, since depending on how someone writes the digit 4, it might look very similar to a 9.

The case of the most popular digit, the digit 1, is also noticeable (by popular I mean there are 463 images corresponding to the digit 1, more than any other digit). In this case, the matrix shows that if the network predicts the digit 2, it is always correct (the second column is all 0 except at digit 2), so the network has perfect precision for that class. However, the digit 2 is not always correctly labeled (second row), so the network doesn't have perfect recall.

All in all, the more images available for a class, the fewer errors the network usually makes. In an unrelated project I'm working with a small and imbalanced dataset where one class is very under-represented, and I'm having difficulties predicting it correctly.
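The per-class precision and recall described above can be read directly off a confusion matrix: precision for a class comes from its column, recall from its row. A numpy sketch on a hypothetical 3-class matrix (toy numbers, not this model's actual results):

```python
import numpy as np

# rows = true labels, columns = predicted labels (same convention as the plot in this section)
cm = np.array([[50,  2,  0],
               [ 0, 45,  5],
               [ 3,  0, 47]])

# precision[k]: correct predictions of class k / everything predicted as k (column sum)
precision = np.diag(cm) / cm.sum(axis=0)

# recall[k]: correct predictions of class k / all true instances of k (row sum)
recall = np.diag(cm) / cm.sum(axis=1)

print(precision)
print(recall)
```

A column that is zero everywhere except on the diagonal gives precision 1.0 for that class, which is exactly the digit-2 situation described above; its row, however, may still contain errors, so recall can be below 1.0 at the same time.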
```
from sklearn.metrics import confusion_matrix
import itertools

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

# Predict the values from the validation dataset
ypred_onehot = model.predict(xval)

# Convert predicted classes from one hot vectors to labels: [0 0 1 0 0 ...] --> 2
ypred = np.argmax(ypred_onehot, axis=1)

# Convert validation observations from one hot vectors to labels
ytrue = np.argmax(yval, axis=1)

# compute the confusion matrix
confusion_mtx = confusion_matrix(ytrue, ypred)

# plot the confusion matrix
plot_confusion_matrix(confusion_mtx, classes=range(nclasses))
```

## 3.7. Plot errors

Another useful approach is to plot the errors, i.e. the wrongly labeled images, hoping it provides some intuition about what the network might be doing wrong. The cell outputs just 6 error images; you can change the range in the code to see other images.

After running the cell, almost all images are very confusing and it would be difficult for a human to label them correctly. The third digit could be a 1 or a 7, the fourth a 0 or a 6, the last could be either a 3 or an 8... Even humans could make mistakes when labelling these images.
Generally speaking, a network might have high bias (high training loss), and you could spend hours trying to decrease it, but the network may have already reached the best human-level result. If the best digit-recognizer can only achieve 90% accuracy (assume that this is true for the sake of the example), the network won't be able to do much better. So when the network reaches this point, it's usually the limit of what it can do.

```
errors = (ypred - ytrue != 0) # array of bools, True when there is an error, False when the image is correct

ypred_er = ypred_onehot[errors]
ypred_classes_er = ypred[errors]
ytrue_er = ytrue[errors]
xval_er = xval[errors]

def display_errors(errors_index, img_errors, pred_errors, obs_errors):
    """ This function shows 6 images with their predicted and real labels"""
    n = 0
    nrows = 2
    ncols = 3
    fig, ax = plt.subplots(nrows, ncols, sharex=True, sharey=True)
    for row in range(nrows):
        for col in range(ncols):
            error = errors_index[n]
            ax[row,col].imshow((img_errors[error]).reshape((28,28)))
            ax[row,col].set_title("Predicted label :{}\nTrue label :{}".format(pred_errors[error], obs_errors[error]))
            n += 1

# Probabilities of the wrong predicted numbers
ypred_er_prob = np.max(ypred_er, axis=1)

# Predicted probabilities of the true values in the error set
true_prob_er = np.diagonal(np.take(ypred_er, ytrue_er, axis=1))

# Difference between the probability of the predicted label and the true label
delta_pred_true_er = ypred_er_prob - true_prob_er

# Sorted list of the delta prob errors
sorted_delta_er = np.argsort(delta_pred_true_er)

# Top 6 errors. You can change the range to see other images
most_important_er = sorted_delta_er[-6:]

# Show the top 6 errors
display_errors(most_important_er, xval_er, ypred_classes_er, ytrue_er)
```

# 4. **Predict and save to csv**

Once you are happy with your network, use the test data to create the prediction; this cell creates the csv format necessary to submit to the [Digit Recognizer](https://www.kaggle.com/c/digit-recognizer) competition.
> Remember that the csv is only created when you Commit the kernel, not when you run the cell in the editor. Make sure you commit the kernel and that it is successful; then on the kernel page (clicking the left arrows on the top left) there will be a tab called __Output__ where you can find your csv and submit it directly to the competition.

```
predictions = model.predict_classes(test, verbose=1)

submissions = pd.DataFrame({"ImageId": list(range(1, len(predictions)+1)),
                            "Label": predictions})

submissions.to_csv("mnist2908.csv", index=False, header=True)
```

And that's it! I hope the explanations were informative and the code was clean and well-formatted. If there are any comments or suggestions whatsoever, if something is not clear, or if you have any (positive) criticism towards the kernel, don't hesitate to tell me!
---
```
import torch
import os
import cv2
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from torchvision import transforms

transform_data = transforms.Compose(
    [
        transforms.ToPILImage(),
        transforms.RandomVerticalFlip(),
        transforms.RandomHorizontalFlip(),
        transforms.RandomCrop((112)),
        transforms.ToTensor(),
    ]
)

def load_data(img_size=112):
    data = []
    labels = {}
    index = -1
    for label in os.listdir('./data/'):
        index += 1
        labels[label] = index
    print(len(labels))
    X = []
    y = []
    for label in labels:
        for file in os.listdir(f'./data/{label}/'):
            path = f'./data/{label}/{file}'
            img = cv2.imread(path)
            img = cv2.resize(img, (img_size, img_size))
            data.append([np.array(transform_data(np.array(img))), labels[label]])
            X.append(np.array(transform_data(np.array(img))))
            y.append(labels[label])
    # shuffle X and y together with one permutation so images stay aligned with
    # their labels (shuffling only `data` would leave the X/y split non-random)
    perm = np.random.permutation(len(X))
    X = [X[i] for i in perm]
    y = [y[i] for i in perm]
    np.random.shuffle(data)
    np.save('./data.npy', data)
    VAL_SPLIT = 0.25
    VAL_SPLIT = int(len(X) * VAL_SPLIT)
    X_train = X[:-VAL_SPLIT]
    y_train = y[:-VAL_SPLIT]
    X_test = X[-VAL_SPLIT:]
    y_test = y[-VAL_SPLIT:]
    X = torch.from_numpy(np.array(X))
    y = torch.from_numpy(np.array(y))
    X_train = torch.from_numpy(np.array(X_train))
    X_test = torch.from_numpy(np.array(X_test))
    y_train = torch.from_numpy(np.array(y_train))
    y_test = torch.from_numpy(np.array(y_test))
    return X, y, X_train, X_test, y_train, y_test

X, y, X_train, X_test, y_train, y_test = load_data()
```

## Modelling

```
import torch.nn as nn
import torch.nn.functional as F

class BaseLine(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 5)
        self.conv2 = nn.Conv2d(32, 64, 5)
        self.conv2batchnorm = nn.BatchNorm2d(64)
        self.conv3 = nn.Conv2d(64, 128, 5)
        self.fc1 = nn.Linear(128*10*10, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 50)
        self.relu = nn.ReLU()

    def forward(self, X):
        preds = F.max_pool2d(self.relu(self.conv1(X)), (2,2))
        preds = F.max_pool2d(self.relu(self.conv2batchnorm(self.conv2(preds))), (2,2))
        preds = F.max_pool2d(self.relu(self.conv3(preds)), (2,2))
        preds = preds.view(-1, 128*10*10)
        preds = self.relu(self.fc1(preds))
        preds = self.relu(self.fc2(preds))
        preds = self.relu(self.fc3(preds))
        return preds

device = torch.device('cuda')

from torchvision import models
# model = BaseLine().to(device)
# model = model.to(device)
model = models.resnet18(pretrained=True).to(device)
in_f = model.fc.in_features
model.fc = nn.Linear(in_f, 50).to(device)  # move the new head to the GPU as well
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

PROJECT_NAME = 'Car-Brands-Images-Clf'
import wandb

EPOCHS = 100
BATCH_SIZE = 32

from tqdm import tqdm

def get_loss(criterion, y, model, X):
    model.to('cuda')
    preds = model(X.view(-1, 3, 112, 112).to('cuda').float())
    # no backward() here: this is evaluation only, so gradients must not accumulate
    loss = criterion(preds, y.to('cuda').long())
    return loss.item()

def test(net, X, y):
    device = 'cuda'
    net.to(device)
    correct = 0
    total = 0
    net.eval()
    with torch.no_grad():
        for i in range(len(X)):
            real_class = y[i].to(device)  # labels are integer class indices, not one-hot
            net_out = net(X[i].view(-1, 3, 112, 112).to(device).float())
            net_out = net_out[0]
            predicted_class = torch.argmax(net_out)
            if predicted_class == real_class:
                correct += 1
            total += 1
    net.train()
    return round(correct/total, 3)

# wandb.init(project=PROJECT_NAME,name='transfer-learning')
# for _ in tqdm(range(EPOCHS)):
#     for i in range(0,len(X_train),BATCH_SIZE):
#         X_batch = X_train[i:i+BATCH_SIZE].view(-1,3,112,112).to(device)
#         y_batch = y_train[i:i+BATCH_SIZE].to(device)
#         model.to(device)
#         preds = model(X_batch)
#         loss = criterion(preds,y_batch)
#         optimizer.zero_grad()
#         loss.backward()
#         optimizer.step()
#     wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,y_test,model,X_test),'accuracy':test(model,X_train,y_train),'val_accuracy':test(model,X_test,y_test)})
# TL vs Custom Model best = TL

EPOCHS = 2
BATCH_SIZE = 32
model =
models.resnet18(pretrained=False, num_classes=50).to(device) criterion = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(model.parameters(),lr=0.1) wandb.init(project=PROJECT_NAME,name=f'models.resnet18') for _ in tqdm(range(EPOCHS),leave=False): for i in tqdm(range(0,len(X_train),BATCH_SIZE),leave=False): X_batch = X_train[i:i+BATCH_SIZE].view(-1,3,112,112).to(device) y_batch = y_train[i:i+BATCH_SIZE].to(device) model.to(device) preds = model(X_batch) preds = preds.to(device) loss = criterion(preds,y_batch) optimizer.zero_grad() loss.backward() optimizer.step() wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,y_test,model,X_test),'accuracy':test(model,X_train,y_train),'val_accuracy':test(model,X_test,y_test)}) preds y_batch ```
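The training cells above walk `X_train` in fixed-size slices (`X_train[i:i+BATCH_SIZE]`). That batching pattern can be sketched framework-free; this is a minimal illustration of the indexing, not the notebook's exact loop:

```python
def iter_batches(data, batch_size):
    # Yield consecutive slices of at most batch_size items,
    # mirroring the X_train[i:i + BATCH_SIZE] indexing above.
    # The final batch may be shorter than batch_size.
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

batches = list(iter_batches(list(range(10)), 4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Slicing past the end of a Python sequence is safe, which is why the loop above needs no special-casing for the last, shorter batch.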
github_jupyter
## What is Tensorflow? TensorFlow is an open source software library for numerical computation using data flow graphs TensorFlow programs are usually structured into a construction phase, that assembles a graph, and an execution phase that uses a session to execute ops in the graph A TF program often has 2 phases: - Assemble a graph - Use a session to execute operations in the graph ## Type of Tensor in Tensorflow The main types of tensors are: - tf.Variable / tf.get_variable - tf.constant - tf.placeholder ## Basic Code Structure - Graphs - Constants are fixed value tensors - not trainable - Variables are tensors initialized in a session - trainable - Placeholders are tensors of values that are unknown during the graph construction, but passed as input during a session - Ops are functions on tensors ## Activity: first construct a multiplication and then execute it in Tensorflow ``` import tensorflow as tf input1 = tf.placeholder(tf.float32) input2 = tf.placeholder(tf.float32) output = tf.multiply(input1, input2) with tf.Session() as sess: print(sess.run(output, feed_dict={input1: [7.], input2: [2.]})) ``` ## Visualization of Tensors and ops <img src="tensorflow_graph_tensor_ops.png" width="500" height="500"> ## Activity: Write a linear regression with Tensorflow Change the indentation of `print(sess.run(W))` and see the result ``` from __future__ import print_function import tensorflow as tf import numpy import matplotlib.pyplot as plt rng = numpy.random # Parameters learning_rate = 0.01 training_epochs = 1000 display_step = 50 # Training Data train_X = numpy.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167, 7.042,10.791,5.313,7.997,5.654,9.27,3.1]) train_Y = numpy.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221, 2.827,3.465,1.65,2.904,2.42,2.94,1.3]) n_samples = train_X.shape[0] # tf Graph Input X = tf.placeholder("float") Y = tf.placeholder("float") # Set model weights W = tf.Variable(rng.randn(), name="weight") b = 
tf.Variable(rng.randn(), name="bias") # Construct a linear model pred = tf.add(tf.multiply(X, W), b) # Mean squared error cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples) # Gradient descent # Note, minimize() knows to modify W and b because Variable objects are trainable=True by default optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) # Initialize the variables (i.e. assign their default value) init = tf.global_variables_initializer() # Start training with tf.Session() as sess: # Run the initializer sess.run(init) # Fit all training data for epoch in range(training_epochs): for (x, y) in zip(train_X, train_Y): sess.run(optimizer, feed_dict={X: x, Y: y}) print(sess.run(W)) ``` ## Activity: Build a NN for Stock Market Prediction with Tensorflow ``` import tensorflow as tf import numpy as np import pandas as pd from sklearn.preprocessing import MinMaxScaler import matplotlib.pyplot as plt # Import data data = pd.read_csv('data_stocks.csv') # Drop date variable data = data.drop(['DATE'], 1) # Dimensions of dataset n = data.shape[0] p = data.shape[1] # Make data a np.array data = data.values # Training and test data train_start = 0 train_end = int(np.floor(0.8*n)) test_start = train_end + 1 test_end = n data_train = data[np.arange(train_start, train_end), :] data_test = data[np.arange(test_start, test_end), :] # Scale data scaler = MinMaxScaler(feature_range=(-1, 1)) scaler.fit(data_train) data_train = scaler.transform(data_train) data_test = scaler.transform(data_test) # Build X and y X_train = data_train[:, 1:] y_train = data_train[:, 0] X_test = data_test[:, 1:] y_test = data_test[:, 0] # Number of stocks in training data n_stocks = X_train.shape[1] # Neurons n_neurons_1 = 1024 n_neurons_2 = 512 n_neurons_3 = 256 n_neurons_4 = 128 # Session net = tf.InteractiveSession() # Placeholder X = tf.placeholder(dtype=tf.float32, shape=[None, n_stocks]) Y = tf.placeholder(dtype=tf.float32, shape=[None]) # Initializers sigma = 1 
weight_initializer = tf.variance_scaling_initializer(mode="fan_avg", distribution="uniform", scale=sigma) bias_initializer = tf.zeros_initializer() # Hidden weights W_hidden_1 = tf.Variable(weight_initializer([n_stocks, n_neurons_1])) bias_hidden_1 = tf.Variable(bias_initializer([n_neurons_1])) W_hidden_2 = tf.Variable(weight_initializer([n_neurons_1, n_neurons_2])) bias_hidden_2 = tf.Variable(bias_initializer([n_neurons_2])) W_hidden_3 = tf.Variable(weight_initializer([n_neurons_2, n_neurons_3])) bias_hidden_3 = tf.Variable(bias_initializer([n_neurons_3])) W_hidden_4 = tf.Variable(weight_initializer([n_neurons_3, n_neurons_4])) bias_hidden_4 = tf.Variable(bias_initializer([n_neurons_4])) # Output weights W_out = tf.Variable(weight_initializer([n_neurons_4, 1])) bias_out = tf.Variable(bias_initializer([1])) # Hidden layer hidden_1 = tf.nn.relu(tf.add(tf.matmul(X, W_hidden_1), bias_hidden_1)) hidden_2 = tf.nn.relu(tf.add(tf.matmul(hidden_1, W_hidden_2), bias_hidden_2)) hidden_3 = tf.nn.relu(tf.add(tf.matmul(hidden_2, W_hidden_3), bias_hidden_3)) hidden_4 = tf.nn.relu(tf.add(tf.matmul(hidden_3, W_hidden_4), bias_hidden_4)) # Output layer (transpose!) 
out = tf.transpose(tf.add(tf.matmul(hidden_4, W_out), bias_out)) # Cost function mse = tf.reduce_mean(tf.squared_difference(out, Y)) # Optimizer opt = tf.train.AdamOptimizer().minimize(mse) # Init net.run(tf.global_variables_initializer()) # Setup plot plt.ion() fig = plt.figure() ax1 = fig.add_subplot(111) line1, = ax1.plot(y_test) line2, = ax1.plot(y_test * 0.5) plt.show() # Fit neural net batch_size = 256 mse_train = [] mse_test = [] # Run epochs = 10 for e in range(epochs): # Shuffle training data shuffle_indices = np.random.permutation(np.arange(len(y_train))) X_train = X_train[shuffle_indices] y_train = y_train[shuffle_indices] # Minibatch training for i in range(0, len(y_train) // batch_size): start = i * batch_size batch_x = X_train[start:start + batch_size] batch_y = y_train[start:start + batch_size] # Run optimizer with batch net.run(opt, feed_dict={X: batch_x, Y: batch_y}) # Show progress if np.mod(i, 50) == 0: # MSE train and test mse_train.append(net.run(mse, feed_dict={X: X_train, Y: y_train})) mse_test.append(net.run(mse, feed_dict={X: X_test, Y: y_test})) print('MSE Train: ', mse_train[-1]) print('MSE Test: ', mse_test[-1]) # Prediction pred = net.run(out, feed_dict={X: X_test}) line2.set_ydata(pred) plt.title('Epoch ' + str(e) + ', Batch ' + str(i)) plt.pause(0.01) ``` ## While loop in Tensorflow ``` import tensorflow as tf i = tf.constant(0) c = lambda i: tf.less(i, 10) b = lambda i: tf.add(i, 1) r = tf.while_loop(c, b, [i]) with tf.Session() as sess: print(sess.run(r)) import tensorflow as tf a, b = tf.while_loop(lambda x, y: x < 30, lambda x, y: (x * 3, y * 2), [2, 3]) # Run the while loop and get the resulting values. 
with tf.Session() as sess: print(sess.run([a, b])) ``` ## Subgraph in Tensorflow <img src="tensorflow_subgraph.png" width="500" height="500"> ``` with tf.device("/gpu:0"): # Setup operations with tf.Session() as sess: # Run your code ``` ## Without specifing which operations on which hardware ``` import tensorflow as tf c = [] a = tf.get_variable("a", [2, 2], initializer=tf.random_uniform_initializer(-1, 1)) b = tf.get_variable("b", [2, 2], initializer=tf.random_uniform_initializer(-1, 1)) c.append(tf.matmul(a, b)) c.append(a + b) d = tf.add_n(c) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) print(sess.run(c)) print(sess.run(d)) ``` ## specifing which operations on which hardware ``` # https://jhui.github.io/2017/03/07/TensorFlow-GPU/ import tensorflow as tf c = [] a = tf.get_variable(f"a", [2, 2], initializer=tf.random_uniform_initializer(-1, 1)) b = tf.get_variable(f"b", [2, 2], initializer=tf.random_uniform_initializer(-1, 1)) with tf.device('/gpu:0'): c.append(tf.matmul(a, b)) with tf.device('/gpu:1'): c.append(a + b) with tf.device('/cpu:0'): d = tf.add_n(c) sess = tf.Session(config=tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)) init = tf.global_variables_initializer() sess.run(init) print(sess.run(d)) ``` ## Tensor manipulation in TF <img src="tensorflow_scatter_gather.png" width="500" height="500"> ``` import tensorflow as tf a = tf.Variable(initial_value=tf.constant([1, 2, 3, 4, 5, 6, 7, 8])) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) update = tf.scatter_nd_update(a, indices, updates) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(sess.run(update)) import tensorflow as tf # Zeros matrix num = tf.get_variable('num111111', shape=[5, 3], initializer=tf.zeros_initializer(), dtype=tf.float32) updates = tf.ones([2, 3], dtype=tf.float32) num = tf.scatter_nd_update(num, [[0], [4]], updates) with tf.Session() as sess: 
sess.run(tf.global_variables_initializer()) print(sess.run(num)) import tensorflow as tf # Zeros matrix num = tf.get_variable('num1', shape=[5, 3], initializer=tf.zeros_initializer(), dtype=tf.float32) # Looping variable i = tf.constant(0, dtype=tf.int32) def body(i, num, j): # Update values updates = tf.ones([1, 3], dtype=tf.float32) for i in range(sess.run(tf.add(i, j))): num = tf.scatter_nd_update(num, [[i]], updates) return num sess = tf.Session() sess.run(tf.global_variables_initializer()) print(sess.run(body(i, num, 2))) import tensorflow as tf # Zeros matrix num = tf.get_variable('numb', shape=[5, 3], initializer=tf.zeros_initializer(), dtype=tf.float32) # Looping variable #i = tf.constant(0, dtype=tf.int32) def body(num, j): # Update values updates = tf.ones([1, 3], dtype=tf.float32) for i in range(j): num = tf.scatter_nd_update(num, [[i]], updates) return num sess = tf.Session() sess.run(tf.global_variables_initializer()) print(sess.run(body(num, 2))) ``` ## Resources: - http://www.cs.tau.ac.il/~joberant/teaching/advanced_nlp_spring_2018/files/tensorflow_tutorial.pdf
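The `tf.scatter_nd_update` calls above overwrite selected rows of a tensor in place. The same semantics can be sketched with plain Python lists; this illustrates the operation on the notebook's first example, not TensorFlow's implementation:

```python
def scatter_update(values, indices, updates):
    # Copy values, then overwrite each position named in `indices`
    # with the matching entry from `updates`.
    out = list(values)
    for idx, upd in zip(indices, updates):
        out[idx] = upd
    return out

print(scatter_update([1, 2, 3, 4, 5, 6, 7, 8], [4, 3, 1, 7], [9, 10, 11, 12]))
# [1, 11, 3, 10, 9, 6, 7, 12]
```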
#KNN ``` from __future__ import division import numpy as np import matplotlib.pyplot as plt from operator import itemgetter from tabulate import tabulate from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.neighbors import KNeighborsClassifier from sklearn.pipeline import Pipeline from sklearn.grid_search import GridSearchCV from sklearn.grid_search import RandomizedSearchCV from sklearn.cross_validation import StratifiedKFold from sklearn.cross_validation import ShuffleSplit from sklearn.metrics import confusion_matrix, classification_report import sys, math, time # private functions sys.path.append('/home/george/Dropbox/MNIST/src') from MNIST_utilities import load_all_MNIST, \ plot_confusion_matrix, \ print_imgs, \ plot_learning_curve %matplotlib inline #%qtconsole ``` #Load the MNIST data ``` # load MNIST here trainX, trainY, testX, testY = \ load_all_MNIST(portion=1.0) ``` ##Note that distance measures can be significantly affected by variable scale ``` # ... 
but in this case the test-set predictions were much worse #scaler = StandardScaler() #trainX = scaler.fit_transform(trainX) #testX = scaler.transform(testX) ``` #Perform a grid search to find the best KNN parameters ``` t0 = time.time() knn = KNeighborsClassifier(weights='distance', p=2) pipe = Pipeline(steps=[ ('knn', knn)]) #Parameters of pipelines can be set using ‘__’ separated parameter names: search_grid = dict(knn__n_neighbors = np.arange(1,8+1)) # ---------------------------------------------------------------------------- # you can't randomize if the number of grid points is less than the iterations n_iter = 100 grid_points = 1 for value in search_grid.itervalues(): grid_points *= len(value) print("Total points in the search grid: {}".format(grid_points)) if grid_points <= n_iter: estimator = GridSearchCV(estimator = pipe, param_grid = search_grid, cv = StratifiedKFold(y = trainY, n_folds = 5), n_jobs=-1, pre_dispatch=10, verbose=1) else: estimator = RandomizedSearchCV(estimator = pipe, param_distributions = search_grid, n_iter = n_iter, cv = StratifiedKFold(y = trainY, n_folds = 5), n_jobs=-1, pre_dispatch=10, verbose=1) # ---------------------------------------------------------------------------- estimator.fit(trainX, trainY) print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60)) ``` ##Analyze the results of the grid search ``` # what proportion of parameter combinations # had an accuracy below 98% (anything 98% or below is not a contender) # -------------------------------------------------------------------- mean_score_list = [score.mean_validation_score for score in estimator.grid_scores_] print("\nProportion of scores below 98%: {0:.2f}\n". \ format(sum(np.array(mean_score_list)<0.98)/len(mean_score_list))) # what do the top 10 parameter combinations look like? 
# ---------------------------------------------------- for score in sorted(estimator.grid_scores_, key=itemgetter(1), reverse=True)[:10]: print score ``` #Predict the test data ``` target_names = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"] predicted_values = estimator.predict(testX) y_true, y_pred = testY, predicted_values #print(classification_report(y_true, y_pred, target_names=target_names)) cm = confusion_matrix(y_true, y_pred) print(cm) model_accuracy = sum(cm.diagonal())/len(testY) model_misclass = 1 - model_accuracy print("\nModel accuracy: {0}, model misclass rate: {1}".format(model_accuracy, model_misclass)) plot_confusion_matrix(cm, target_names) ``` ##Print a random sample of predictions ``` print_imgs(images = testX, actual_labels = y_true, predicted_labels = y_pred, starting_index = np.random.randint(0, high=testY.shape[0]-36, size=1)[0], size = 6) ``` ##Learning Curves 1. do we predict the training data well? (flat red line hugs the 1.0 line) 2. does the prediction improve with more data? (green line increases from left to right) ``` t0 = time.time() parm_list = "" for i, param in enumerate(estimator.best_params_): if i % 3 == 0: parm_list += "\n" param_val = estimator.best_params_[param] parm_list += param + "=" + str(param_val) + " " plot_learning_curve(estimator = estimator.best_estimator_, title = "KNN" + parm_list, X = trainX, y = trainY, ylim = (0.85, 1.01), cv = ShuffleSplit(n = trainX.shape[0], n_iter = 5, test_size = 0.2, random_state = 0), n_jobs = -1) plt.show() print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60)) ```
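The search cell above chooses between `GridSearchCV` and `RandomizedSearchCV` by counting the points in the search grid. That count is just the product of the per-parameter candidate counts; a sketch using `dict.values()`, the Python 3 spelling of the notebook's `itervalues()`:

```python
def count_grid_points(search_grid):
    # Exhaustive grid size = product of the number of candidate
    # values for each hyperparameter.
    total = 1
    for values in search_grid.values():
        total *= len(values)
    return total

print(count_grid_points({"knn__n_neighbors": range(1, 9)}))                    # 8
print(count_grid_points({"knn__n_neighbors": range(1, 9), "knn__p": [1, 2]}))  # 16
```

With only 8 grid points against `n_iter = 100`, the notebook correctly falls through to exhaustive `GridSearchCV`.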
# Interactions and ANOVA Note: This script is based heavily on Jonathan Taylor's class notes https://web.stanford.edu/class/stats191/notebooks/Interactions.html Download and format data: ``` %matplotlib inline from urllib.request import urlopen import numpy as np np.set_printoptions(precision=4, suppress=True) import pandas as pd pd.set_option("display.width", 100) import matplotlib.pyplot as plt from statsmodels.formula.api import ols from statsmodels.graphics.api import interaction_plot, abline_plot from statsmodels.stats.anova import anova_lm try: salary_table = pd.read_csv("salary.table") except: # recent pandas can read URL without urlopen url = "http://stats191.stanford.edu/data/salary.table" fh = urlopen(url) salary_table = pd.read_table(fh) salary_table.to_csv("salary.table") E = salary_table.E M = salary_table.M X = salary_table.X S = salary_table.S ``` Take a look at the data: ``` plt.figure(figsize=(6, 6)) symbols = ["D", "^"] colors = ["r", "g", "blue"] factor_groups = salary_table.groupby(["E", "M"]) for values, group in factor_groups: i, j = values plt.scatter(group["X"], group["S"], marker=symbols[j], color=colors[i - 1], s=144) plt.xlabel("Experience") plt.ylabel("Salary") ``` Fit a linear model: ``` formula = "S ~ C(E) + C(M) + X" lm = ols(formula, salary_table).fit() print(lm.summary()) ``` Have a look at the created design matrix: ``` lm.model.exog[:5] ``` Or since we initially passed in a DataFrame, we have a DataFrame available in ``` lm.model.data.orig_exog[:5] ``` We keep a reference to the original untouched data in ``` lm.model.data.frame[:5] ``` Influence statistics ``` infl = lm.get_influence() print(infl.summary_table()) ``` or get a dataframe ``` df_infl = infl.summary_frame() df_infl[:5] ``` Now plot the residuals within the groups separately: ``` resid = lm.resid plt.figure(figsize=(6, 6)) for values, group in factor_groups: i, j = values group_num = i * 2 + j - 1 # for plotting purposes x = [group_num] * len(group) plt.scatter( x, 
resid[group.index], marker=symbols[j], color=colors[i - 1], s=144, edgecolors="black", ) plt.xlabel("Group") plt.ylabel("Residuals") ``` Now we will test some interactions using anova or f_test ``` interX_lm = ols("S ~ C(E) * X + C(M)", salary_table).fit() print(interX_lm.summary()) ``` Do an ANOVA check ``` from statsmodels.stats.api import anova_lm table1 = anova_lm(lm, interX_lm) print(table1) interM_lm = ols("S ~ X + C(E)*C(M)", data=salary_table).fit() print(interM_lm.summary()) table2 = anova_lm(lm, interM_lm) print(table2) ``` The design matrix as a DataFrame ``` interM_lm.model.data.orig_exog[:5] ``` The design matrix as an ndarray ``` interM_lm.model.exog interM_lm.model.exog_names infl = interM_lm.get_influence() resid = infl.resid_studentized_internal plt.figure(figsize=(6, 6)) for values, group in factor_groups: i, j = values idx = group.index plt.scatter( X[idx], resid[idx], marker=symbols[j], color=colors[i - 1], s=144, edgecolors="black", ) plt.xlabel("X") plt.ylabel("standardized resids") ``` Looks like one observation is an outlier. 
``` drop_idx = abs(resid).argmax() print(drop_idx) # zero-based index idx = salary_table.index.drop(drop_idx) lm32 = ols("S ~ C(E) + X + C(M)", data=salary_table, subset=idx).fit() print(lm32.summary()) print("\n") interX_lm32 = ols("S ~ C(E) * X + C(M)", data=salary_table, subset=idx).fit() print(interX_lm32.summary()) print("\n") table3 = anova_lm(lm32, interX_lm32) print(table3) print("\n") interM_lm32 = ols("S ~ X + C(E) * C(M)", data=salary_table, subset=idx).fit() table4 = anova_lm(lm32, interM_lm32) print(table4) print("\n") ``` Replot the residuals ``` resid = interM_lm32.get_influence().summary_frame()["standard_resid"] plt.figure(figsize=(6, 6)) resid = resid.reindex(X.index) for values, group in factor_groups: i, j = values idx = group.index plt.scatter( X.loc[idx], resid.loc[idx], marker=symbols[j], color=colors[i - 1], s=144, edgecolors="black", ) plt.xlabel("X[~[32]]") plt.ylabel("standardized resids") ``` Plot the fitted values ``` lm_final = ols("S ~ X + C(E)*C(M)", data=salary_table.drop([drop_idx])).fit() mf = lm_final.model.data.orig_exog lstyle = ["-", "--"] plt.figure(figsize=(6, 6)) for values, group in factor_groups: i, j = values idx = group.index plt.scatter( X[idx], S[idx], marker=symbols[j], color=colors[i - 1], s=144, edgecolors="black", ) # drop NA because there is no idx 32 in the final model fv = lm_final.fittedvalues.reindex(idx).dropna() x = mf.X.reindex(idx).dropna() plt.plot(x, fv, ls=lstyle[j], color=colors[i - 1]) plt.xlabel("Experience") plt.ylabel("Salary") ``` From our first look at the data, the difference between Master's and PhD in the management group is different than in the non-management group. This is an interaction between the two qualitative variables management,M and education,E. We can visualize this by first removing the effect of experience, then plotting the means within each of the 6 groups using interaction.plot. 
``` U = S - X * interX_lm32.params["X"] plt.figure(figsize=(6, 6)) interaction_plot( E, M, U, colors=["red", "blue"], markers=["^", "D"], markersize=10, ax=plt.gca() ) ``` ## Minority Employment Data ``` try: jobtest_table = pd.read_table("jobtest.table") except: # do not have data already url = "http://stats191.stanford.edu/data/jobtest.table" jobtest_table = pd.read_table(url) factor_group = jobtest_table.groupby(["MINORITY"]) fig, ax = plt.subplots(figsize=(6, 6)) colors = ["purple", "green"] markers = ["o", "v"] for factor, group in factor_group: ax.scatter( group["TEST"], group["JPERF"], color=colors[factor], marker=markers[factor], s=12 ** 2, ) ax.set_xlabel("TEST") ax.set_ylabel("JPERF") min_lm = ols("JPERF ~ TEST", data=jobtest_table).fit() print(min_lm.summary()) fig, ax = plt.subplots(figsize=(6, 6)) for factor, group in factor_group: ax.scatter( group["TEST"], group["JPERF"], color=colors[factor], marker=markers[factor], s=12 ** 2, ) ax.set_xlabel("TEST") ax.set_ylabel("JPERF") fig = abline_plot(model_results=min_lm, ax=ax) min_lm2 = ols("JPERF ~ TEST + TEST:MINORITY", data=jobtest_table).fit() print(min_lm2.summary()) fig, ax = plt.subplots(figsize=(6, 6)) for factor, group in factor_group: ax.scatter( group["TEST"], group["JPERF"], color=colors[factor], marker=markers[factor], s=12 ** 2, ) fig = abline_plot( intercept=min_lm2.params["Intercept"], slope=min_lm2.params["TEST"], ax=ax, color="purple", ) fig = abline_plot( intercept=min_lm2.params["Intercept"], slope=min_lm2.params["TEST"] + min_lm2.params["TEST:MINORITY"], ax=ax, color="green", ) min_lm3 = ols("JPERF ~ TEST + MINORITY", data=jobtest_table).fit() print(min_lm3.summary()) fig, ax = plt.subplots(figsize=(6, 6)) for factor, group in factor_group: ax.scatter( group["TEST"], group["JPERF"], color=colors[factor], marker=markers[factor], s=12 ** 2, ) fig = abline_plot( intercept=min_lm3.params["Intercept"], slope=min_lm3.params["TEST"], ax=ax, color="purple", ) fig = abline_plot( 
intercept=min_lm3.params["Intercept"] + min_lm3.params["MINORITY"], slope=min_lm3.params["TEST"], ax=ax, color="green", ) min_lm4 = ols("JPERF ~ TEST * MINORITY", data=jobtest_table).fit() print(min_lm4.summary()) fig, ax = plt.subplots(figsize=(8, 6)) for factor, group in factor_group: ax.scatter( group["TEST"], group["JPERF"], color=colors[factor], marker=markers[factor], s=12 ** 2, ) fig = abline_plot( intercept=min_lm4.params["Intercept"], slope=min_lm4.params["TEST"], ax=ax, color="purple", ) fig = abline_plot( intercept=min_lm4.params["Intercept"] + min_lm4.params["MINORITY"], slope=min_lm4.params["TEST"] + min_lm4.params["TEST:MINORITY"], ax=ax, color="green", ) # is there any effect of MINORITY on slope or intercept? table5 = anova_lm(min_lm, min_lm4) print(table5) # is there any effect of MINORITY on intercept table6 = anova_lm(min_lm, min_lm3) print(table6) # is there any effect of MINORITY on slope table7 = anova_lm(min_lm, min_lm2) print(table7) # is it just the slope or both? 
table8 = anova_lm(min_lm2, min_lm4) print(table8) ``` ## One-way ANOVA ``` try: rehab_table = pd.read_csv("rehab.table") except: url = "http://stats191.stanford.edu/data/rehab.csv" rehab_table = pd.read_table(url, delimiter=",") rehab_table.to_csv("rehab.table") fig, ax = plt.subplots(figsize=(8, 6)) fig = rehab_table.boxplot("Time", "Fitness", ax=ax, grid=False) rehab_lm = ols("Time ~ C(Fitness)", data=rehab_table).fit() table9 = anova_lm(rehab_lm) print(table9) print(rehab_lm.model.data.orig_exog) print(rehab_lm.summary()) ``` ## Two-way ANOVA ``` try: kidney_table = pd.read_table("./kidney.table") except: url = "http://stats191.stanford.edu/data/kidney.table" kidney_table = pd.read_csv(url, delim_whitespace=True) ``` Explore the dataset ``` kidney_table.head(10) ``` Balanced panel ``` kt = kidney_table plt.figure(figsize=(8, 6)) fig = interaction_plot( kt["Weight"], kt["Duration"], np.log(kt["Days"] + 1), colors=["red", "blue"], markers=["D", "^"], ms=10, ax=plt.gca(), ) ``` You have things available in the calling namespace available in the formula evaluation namespace ``` kidney_lm = ols("np.log(Days+1) ~ C(Duration) * C(Weight)", data=kt).fit() table10 = anova_lm(kidney_lm) print( anova_lm(ols("np.log(Days+1) ~ C(Duration) + C(Weight)", data=kt).fit(), kidney_lm) ) print( anova_lm( ols("np.log(Days+1) ~ C(Duration)", data=kt).fit(), ols("np.log(Days+1) ~ C(Duration) + C(Weight, Sum)", data=kt).fit(), ) ) print( anova_lm( ols("np.log(Days+1) ~ C(Weight)", data=kt).fit(), ols("np.log(Days+1) ~ C(Duration) + C(Weight, Sum)", data=kt).fit(), ) ) ``` ## Sum of squares Illustrates the use of different types of sums of squares (I,II,II) and how the Sum contrast can be used to produce the same output between the 3. Types I and II are equivalent under a balanced design. 
Do not use Type III with non-orthogonal contrasts, i.e., Treatment ``` sum_lm = ols("np.log(Days+1) ~ C(Duration, Sum) * C(Weight, Sum)", data=kt).fit() print(anova_lm(sum_lm)) print(anova_lm(sum_lm, typ=2)) print(anova_lm(sum_lm, typ=3)) nosum_lm = ols( "np.log(Days+1) ~ C(Duration, Treatment) * C(Weight, Treatment)", data=kt ).fit() print(anova_lm(nosum_lm)) print(anova_lm(nosum_lm, typ=2)) print(anova_lm(nosum_lm, typ=3)) ```
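The `anova_lm` tables above report F statistics. For the one-way case, F is the ratio of the between-group mean square to the within-group mean square; a minimal sketch on hypothetical toy data (not the rehab or kidney datasets):

```python
def one_way_anova_f(groups):
    # F = (SSB / (k - 1)) / (SSW / (n - k)) for k groups, n observations.
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares: group means vs the grand mean.
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    # Within-group sum of squares: observations vs their group mean.
    ssw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

print(one_way_anova_f([[1, 2, 3], [2, 3, 4], [4, 5, 6]]))  # 7.0
```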
# Model development and training In this notebook we will develop a Recurrent Neural Network (RNN) using LSTM nodes, so as to predict the next character, given a sequence of 100 characters. First of all, we import the numpy and keras modules, important for storing data and defining the model respectively. ``` import numpy as np import keras from keras.models import Sequential from keras.layers import Dense, LSTM, Dropout, Activation from keras.optimizers import RMSprop, Adam from keras.callbacks import ModelCheckpoint from keras.utils import np_utils SEQ_LENGTH = 100 ``` We define the model now. The model is given an input of 100 character sequences and it outputs the respective probabilities with which a character can succeed the input sequence. The model consists of 3 hidden layers. The first two hidden layers consist of 256 LSTM cells, and the second layer is fully connected to the third layer. The number of neurons in the third layer is same as the number of unique characters in the training set. The neurons in the third layer, use softmax activation so as to convert their outputs into respective probabilities. The loss used is Categorical cross entropy and the optimizer used is Adam. ``` def buildmodel(VOCABULARY): model = Sequential() model.add(LSTM(256, input_shape = (SEQ_LENGTH, 1), return_sequences = True)) model.add(Dropout(0.2)) model.add(LSTM(256)) model.add(Dropout(0.2)) model.add(Dense(VOCABULARY, activation = 'softmax')) model.compile(loss = 'categorical_crossentropy', optimizer = 'adam') return model ``` Next, we download the training data. The children book "Alice's Adventures in Wonderland" written by Lewis Caroll has been used as the training data in this project. The ebook can be downloaded from [here](http://www.gutenberg.org/ebooks/11?msg=welcome_stranger) in Plain Text UTF-8 format. The downloaded book has been stored in the root directory with the name 'wonderland.txt'. 
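The output layer described above uses softmax to turn the network's raw scores into a probability distribution over the vocabulary. A minimal standalone sketch of that transformation:

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, exponentiate,
    # then normalize so the outputs sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(probs.index(max(probs)))  # 2 -- the largest logit gets the highest probability
```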
We open this book using the open command and convert all characters to lowercase (so as to reduce the number of characters in the vocabulary, making it easier for the model to learn). ``` file = open('wonderland.txt', encoding = 'utf8') raw_text = file.read() #you need to read further characters as well raw_text = raw_text.lower() ``` Next, we store all the distinct characters occurring in the book in the chars variable. We also remove some of the rare characters (stored in bad_chars) from the book. The final vocabulary of the book is printed at the end of the code segment. ``` chars = sorted(list(set(raw_text))) print(chars) bad_chars = ['#', '*', '@', '_', '\ufeff'] for i in range(len(bad_chars)): raw_text = raw_text.replace(bad_chars[i],"") chars = sorted(list(set(raw_text))) print(chars) ``` Next, we summarize the entire book to find that it consists of a total of 163,721 characters (which is relatively small) and that the final number of characters in the vocabulary is 56. ``` text_length = len(raw_text) char_length = len(chars) VOCABULARY = char_length print("Text length = " + str(text_length)) print("No. of characters = " + str(char_length)) ```
``` char_to_int = dict((c, i) for i, c in enumerate(chars)) input_strings = [] output_strings = [] for i in range(len(raw_text) - SEQ_LENGTH): X_text = raw_text[i: i + SEQ_LENGTH] X = [char_to_int[char] for char in X_text] input_strings.append(X) Y = raw_text[i + SEQ_LENGTH] output_strings.append(char_to_int[Y]) ``` Now, the input_strings and output_strings lists are converted into a numpy array of the required dimensions, so that they can be fed to the model for training. ``` length = len(input_strings) input_strings = np.array(input_strings) input_strings = np.reshape(input_strings, (input_strings.shape[0], input_strings.shape[1], 1)) input_strings = input_strings/float(VOCABULARY) output_strings = np.array(output_strings) output_strings = np_utils.to_categorical(output_strings) print(input_strings.shape) print(output_strings.shape) ``` Now, finally the model is built and then fitted for training. The model is trained for 50 epochs and a batch size of 128 sequences has been used. After every epoch, the current model state is saved if the model has the least loss encountered till that time. We can see that the training loss decreases in almost every epoch till 50 epochs and probably there is further scope to improve the model. The total training time is ~90 hours on a CPU and ~12 hours if performed on a GPU. ``` model = buildmodel(VOCABULARY) filepath="saved_models/weights-improvement-{epoch:02d}-{loss:.4f}.hdf5" checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min') callbacks_list = [checkpoint] history = model.fit(input_strings, output_strings, epochs = 50, batch_size = 128, callbacks = callbacks_list) ``` Now that our model has been trained, we can use it for generating different texts as well as predicting the next character, which is what we will do in the next notebooks.
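The windowing loop above is the heart of the dataset construction. It can be condensed into one standalone helper; this sketch uses a hypothetical toy string, not the actual book text:

```python
def make_windows(text, seq_length):
    # Integer-encode every length-seq_length window of text,
    # pairing it with the character that immediately follows it.
    char_to_int = {c: i for i, c in enumerate(sorted(set(text)))}
    inputs, targets = [], []
    for i in range(len(text) - seq_length):
        inputs.append([char_to_int[c] for c in text[i:i + seq_length]])
        targets.append(char_to_int[text[i + seq_length]])
    return inputs, targets

X, y = make_windows("abcabc", 3)
print(X[0], y[0])  # [0, 1, 2] 0 -- window "abc" predicts the next character "a"
```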
# Goals * sentiment analysis on twitter thread interactions. * train a model to be able to predict conversation sentiment. * based on the first tweet? * proactive measures for dealing with customers. ``` !pip install -q kaggle from google.colab import files files.upload() !mkdir ~/.kaggle !cp kaggle.json ~/.kaggle/ !chmod 600 ~/.kaggle/kaggle.json !kaggle datasets download -d thoughtvector/customer-support-on-twitter #in drive https://drive.google.com/drive/u/1/folders/1Sh4w-8e1p2Yl_9QrAvpgRf4nmeY_KW-z !pip install cloudmesh-installer !pip install cloudmesh-common ``` ## Import libraries ``` import time from cloudmesh.common.StopWatch import StopWatch from cloudmesh.common.Benchmark import Benchmark from cloudmesh.common.Shell import Shell import zipfile #dealing with data import matplotlib.pyplot as plt import pandas as pd import numpy as np import tensorflow as tf import keras from keras.preprocessing.text import Tokenizer from keras.preprocessing import sequence import sklearn from sklearn.model_selection import train_test_split #nl libraries import string import nltk #natural language tool kit from nltk.sentiment.vader import SentimentIntensityAnalyzer nltk.download("vader_lexicon") nltk.download('punkt') ``` ## Download and reduce the data working with ``` StopWatch.start("get_data") zfile=zipfile.ZipFile('/content/customer-support-on-twitter.zip') data=pd.read_csv(zfile.open('twcs/twcs.csv')) StopWatch.stop("get_data") print(data.shape) data.head() StopWatch.start("manageability") df=data.drop(['inbound','created_at'],axis=1) df["text"]=df["text"].astype(str) df.sample(frac=1) df=df.head(650) StopWatch.stop("manageability") StopWatch.start("removing_companies") #companies are not anon so they have actual letters def companies(df): idx=[] for id in range(len(df["author_id"])): #assuming at least one vowel in each company name vowels=['a','e','i','o', 'u'] for vowel in vowels: if vowel in df["author_id"][id]: idx.append(id) break return df.drop(idx) #we only 
want the consumer side customers=companies(df) StopWatch.stop("removing_companies") ``` # Analyze sentiment for individual tweets ``` StopWatch.start("sentiment_score") sent_analyzer=SentimentIntensityAnalyzer() #analyze the raw tweets customers["sentiment"]=customers["text"].apply(lambda x: sent_analyzer.polarity_scores(x)["compound"]) StopWatch.stop("sentiment_score") ``` # Build a new dataframe ``` # #show distribution of overall tweets and by customers cust_average=customers.groupby("author_id") cust_sent_average=cust_average.sentiment.mean() cust_sent_average=pd.DataFrame({'author_id':cust_sent_average.index, 'sentiment_average': cust_sent_average.values}) #getting the first tweet for each author customers = customers[pd.isnull(customers.in_response_to_tweet_id)] #new dataframe with the essentials customer_first=pd.DataFrame({'author_id':customers['author_id'], 'tweet':customers['text'], 'first_sentiment':customers['sentiment'], 'classification': 'Na'}) #add the overall sentiment binary for the entire thread for author in customer_first['author_id']: sentiment=cust_sent_average.loc[cust_sent_average.author_id==author, 'sentiment_average'] if sentiment.values < 0: customer_first.loc[customer_first.author_id==author,'classification'] = 0 else: customer_first.loc[customer_first.author_id==author,'classification'] = 1 #add the sentiment from the first tweet sent by author for author in customer_first['author_id']: sent=customer_first.loc[customer_first.author_id==author, 'first_sentiment'] if sent.values < 0: customer_first.loc[customer_first.author_id==author,'first_sentiment'] = 0 else: customer_first.loc[customer_first.author_id==author,'first_sentiment'] = 1 customer_first.head() ``` # Show sentiment distribution ``` fig=plt.figure() axs0=plt.subplot(211) axs0.hist(customers["sentiment"]) axs0.set_title("Sentiment distribution of all customer Tweets") axs1=plt.subplot(223) axs1.hist(customer_first["first_sentiment"]) axs1.set_title("First average sentiment dist")
axs2=plt.subplot(224) axs2.hist(customer_first["classification"]) axs2.set_title("Average sentiment dist") plt.tight_layout() fig.savefig("customer_dist.png") fig.show() nltk.download('averaged_perceptron_tagger') def encode_tweets(df): count=0 encoded=[] embedding={} for sentence in df.tweet: encode=[] token=nltk.word_tokenize(sentence.lower()) token=[ele for word_tuple in nltk.pos_tag(token) for ele in word_tuple ] for word in token: if word not in embedding: embedding[word]=count count+=1 encode.append(embedding[word]) encoded.append(encode) return (encoded) #encode the natural text into something the network will be able to read encoded=encode_tweets(customer_first) #replace the raw text with the encoded text customer_first['tweet']=encoded customer_first['tweet']=np.reshape(customer_first['tweet'].values, (-1,1)) ``` # Dividing train and test data ``` #train/test split (75/25 by default) #fit data into the models #split the data into train and test subsets train_text, test_text, train_sent, test_sent=train_test_split(customer_first['tweet'], customer_first['classification']) num_classes=3 train_text=train_text.values test_text=test_text.values #pad the text for uniformity on length train_text=sequence.pad_sequences(train_text, maxlen=225) test_text=sequence.pad_sequences(test_text, maxlen=225) train_sent=train_sent.values test_sent=test_sent.values train_sent=keras.utils.to_categorical(train_sent) test_sent=keras.utils.to_categorical(test_sent) test_text.shape ``` # CNN model ``` #CNN libraries import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation, Dropout from keras.layers import Conv1D, MaxPooling1D, Flatten, AveragePooling1D, Embedding from tensorflow.keras.utils import to_categorical, plot_model ``` # Building the model ``` word_dict=1000000 num_labels=2 input_shape=test_text.shape batch_size=50 kernel_size=3 pool_size=2 filters=64 dropout=0.2 epochs=10
StopWatch.start("cnn_model_building") cnn_model=Sequential() cnn_model.add(Embedding(word_dict, input_shape[1], input_length=input_shape[1])) cnn_model.add(Conv1D(filters=filters, kernel_size=kernel_size, activation='relu', input_shape=input_shape, padding='same')) cnn_model.add(MaxPooling1D(pool_size)) cnn_model.add(Conv1D(filters=filters, kernel_size=kernel_size, activation='relu', input_shape=input_shape, padding='same')) cnn_model.add(MaxPooling1D(pool_size)) cnn_model.add(Conv1D(filters=filters, kernel_size=kernel_size, activation='relu', input_shape=input_shape, padding='same')) cnn_model.add(MaxPooling1D(pool_size)) cnn_model.add(Conv1D(filters=filters, kernel_size=kernel_size, activation='relu', input_shape=input_shape, padding='same')) cnn_model.add(Flatten()) cnn_model.add(Dropout(dropout)) cnn_model.add(Dense(num_labels)) cnn_model.add(Activation('softmax')) cnn_model.summary() plot_model(cnn_model, to_file='cnn_model.png', show_shapes=True) StopWatch.stop("cnn_model_building") ``` # Compile ``` StopWatch.start("cnn_compile") cnn_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) StopWatch.stop("cnn_compile") ``` # Model Fit ``` StopWatch.start("cnn_train") cnn_model.fit(train_text, train_sent, batch_size=batch_size,epochs=epochs) StopWatch.stop("cnn_train") ``` # Predicting ``` StopWatch.start("predict") predicted=cnn_model.predict(test_text) StopWatch.stop("predict") ``` # Evaluate ``` StopWatch.start("cnn_evaluate") #evaluate on the raw test inputs, not the predictions cnn_loss, cnn_accuracy=cnn_model.evaluate(test_text, test_sent, batch_size = batch_size) print("CNN Accuracy: %.1f%%" %(100.0*cnn_accuracy)) StopWatch.stop("cnn_evaluate") StopWatch.benchmark() ```
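The integer-encoding scheme inside `encode_tweets` above — every previously unseen token gets the next unused id — can be sketched without the NLTK tokenization and POS-tagging machinery. This is an illustrative simplification using whitespace tokenization; `encode_texts` is a hypothetical name, not part of the notebook:

```python
def encode_texts(texts):
    """Assign each new token the next unused integer id and
    encode every text as a list of those ids."""
    vocab = {}
    encoded = []
    for text in texts:
        ids = []
        for token in text.lower().split():
            if token not in vocab:
                vocab[token] = len(vocab)
            ids.append(vocab[token])
        encoded.append(ids)
    return encoded, vocab

encoded, vocab = encode_texts(["my order is late", "late again my order"])
print(encoded)  # -> [[0, 1, 2, 3], [3, 4, 0, 1]]
```

Because ids are assigned in order of first appearance, the same word always maps to the same integer across tweets, which is what lets the padded sequences feed a shared `Embedding` layer.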
##### Copyright 2020 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Recommending movies: retrieval <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/recommenders/examples/basic_retrieval"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/basic_retrieval.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/basic_retrieval.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/basic_retrieval.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Real-world recommender systems are often composed of two stages: 1. The retrieval stage is responsible for selecting an initial set of hundreds of candidates from all possible candidates. The main objective of this model is to efficiently weed out all candidates that the user is not interested in. Because the retrieval model may be dealing with millions of candidates, it has to be computationally efficient. 2. 
The ranking stage takes the outputs of the retrieval model and fine-tunes them to select the best possible handful of recommendations. Its task is to narrow down the set of items the user may be interested in to a shortlist of likely candidates. In this tutorial, we're going to focus on the first stage, retrieval. If you are interested in the ranking stage, have a look at our [ranking](basic_ranking) tutorial. Retrieval models are often composed of two sub-models: 1. A query model computing the query representation (normally a fixed-dimensionality embedding vector) using query features. 2. A candidate model computing the candidate representation (an equally-sized vector) using the candidate features. The outputs of the two models are then multiplied together to give a query-candidate affinity score, with higher scores expressing a better match between the candidate and the query. In this tutorial, we're going to build and train such a two-tower model using the Movielens dataset. We're going to: 1. Get our data and split it into a training and test set. 2. Implement a retrieval model. 3. Fit and evaluate it. 4. Export it for efficient serving by building an approximate nearest neighbours (ANN) index. ## The dataset The Movielens dataset is a classic dataset from the [GroupLens](https://grouplens.org/datasets/movielens/) research group at the University of Minnesota. It contains a set of ratings given to movies by a set of users, and is a workhorse of recommender system research. The data can be treated in two ways: 1. It can be interpreted as expressing which movies the users watched (and rated), and which they did not. This is a form of implicit feedback, where users' watches tell us which things they prefer to see and which they'd rather not see. 2. It can also be seen as expressing how much the users liked the movies they did watch.
This is a form of explicit feedback: given that a user watched a movie, we can tell roughly how much they liked it by looking at the rating they have given. In this tutorial, we are focusing on a retrieval system: a model that predicts a set of movies from the catalogue that the user is likely to watch. Often, implicit data is more useful here, and so we are going to treat Movielens as an implicit system. This means that every movie a user watched is a positive example, and every movie they have not seen is an implicit negative example. ## Imports Let's first get our imports out of the way. ``` !pip install -q tensorflow-recommenders !pip install -q --upgrade tensorflow-datasets !pip install -q scann import os import pprint import tempfile from typing import Dict, Text import numpy as np import tensorflow as tf import tensorflow_datasets as tfds import tensorflow_recommenders as tfrs ``` ## Preparing the dataset Let's first have a look at the data. We use the MovieLens dataset from [Tensorflow Datasets](https://www.tensorflow.org/datasets). Loading `movielens/100k-ratings` yields a `tf.data.Dataset` object containing the ratings data and loading `movielens/100k-movies` yields a `tf.data.Dataset` object containing only the movies data. Note that since the MovieLens dataset does not have predefined splits, all data are under the `train` split. ``` # Ratings data. ratings = tfds.load("movielens/100k-ratings", split="train") # Features of all the available movies. movies = tfds.load("movielens/100k-movies", split="train") ``` The ratings dataset returns a dictionary of movie id, user id, the assigned rating, timestamp, movie information, and user information: ``` for x in ratings.take(1).as_numpy_iterator(): pprint.pprint(x) ``` The movies dataset contains the movie id, movie title, and data on what genres it belongs to. Note that the genres are encoded with integer labels.
``` for x in movies.take(1).as_numpy_iterator(): pprint.pprint(x) ``` In this example, we're going to focus on the ratings data. Other tutorials explore how to use the movie information data as well to improve the model quality. We keep only the `user_id` and `movie_title` fields in the dataset. ``` ratings = ratings.map(lambda x: { "movie_title": x["movie_title"], "user_id": x["user_id"], }) movies = movies.map(lambda x: x["movie_title"]) ``` To fit and evaluate the model, we need to split it into a training and evaluation set. In an industrial recommender system, this would most likely be done by time: the data up to time $T$ would be used to predict interactions after $T$. In this simple example, however, let's use a random split, putting 80% of the ratings in the train set, and 20% in the test set. ``` tf.random.set_seed(42) shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False) train = shuffled.take(80_000) test = shuffled.skip(80_000).take(20_000) ``` Let's also figure out unique user ids and movie titles present in the data. This is important because we need to be able to map the raw values of our categorical features to embedding vectors in our models. To do that, we need a vocabulary that maps a raw feature value to an integer in a contiguous range: this allows us to look up the corresponding embeddings in our embedding tables. ``` movie_titles = movies.batch(1_000) user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"]) unique_movie_titles = np.unique(np.concatenate(list(movie_titles))) unique_user_ids = np.unique(np.concatenate(list(user_ids))) unique_movie_titles[:10] ``` ## Implementing a model Choosing the architecture of our model is a key part of modelling. Because we are building a two-tower retrieval model, we can build each tower separately and then combine them in the final model. ### The query tower Let's start with the query tower.
The first step is to decide on the dimensionality of the query and candidate representations: ``` embedding_dimension = 32 ``` Higher values will correspond to models that may be more accurate, but will also be slower to fit and more prone to overfitting. The second is to define the model itself. Here, we're going to use Keras preprocessing layers to first convert user ids to integers, and then convert those to user embeddings via an `Embedding` layer. Note that we use the list of unique user ids we computed earlier as a vocabulary: ``` user_model = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=unique_user_ids, mask_token=None), # We add an additional embedding to account for unknown tokens. tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension) ]) ``` A simple model like this corresponds exactly to a classic [matrix factorization](https://ieeexplore.ieee.org/abstract/document/4781121) approach. While defining a subclass of `tf.keras.Model` for this simple model might be overkill, we can easily extend it to an arbitrarily complex model using standard Keras components, as long as we return an `embedding_dimension`-wide output at the end. ### The candidate tower We can do the same with the candidate tower. ``` movie_model = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=unique_movie_titles, mask_token=None), tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension) ]) ``` ### Metrics In our training data we have positive (user, movie) pairs. To figure out how good our model is, we need to compare the affinity score that the model calculates for this pair to the scores of all the other possible candidates: if the score for the positive pair is higher than for all other candidates, our model is highly accurate. To do this, we can use the `tfrs.metrics.FactorizedTopK` metric. 
The metric has one required argument: the dataset of candidates that are used as implicit negatives for evaluation. In our case, that's the `movies` dataset, converted into embeddings via our movie model: ``` metrics = tfrs.metrics.FactorizedTopK( candidates=movies.batch(128).map(movie_model) ) ``` ### Loss The next component is the loss used to train our model. TFRS has several loss layers and tasks to make this easy. In this instance, we'll make use of the `Retrieval` task object: a convenience wrapper that bundles together the loss function and metric computation: ``` task = tfrs.tasks.Retrieval( metrics=metrics ) ``` The task itself is a Keras layer that takes the query and candidate embeddings as arguments, and returns the computed loss: we'll use that to implement the model's training loop. ### The full model We can now put it all together into a model. TFRS exposes a base model class (`tfrs.models.Model`) which streamlines building models: all we need to do is to set up the components in the `__init__` method, and implement the `compute_loss` method, taking in the raw features and returning a loss value. The base model will then take care of creating the appropriate training loop to fit our model. ``` class MovielensModel(tfrs.Model): def __init__(self, user_model, movie_model): super().__init__() self.movie_model: tf.keras.Model = movie_model self.user_model: tf.keras.Model = user_model self.task: tf.keras.layers.Layer = task def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor: # We pick out the user features and pass them into the user model. user_embeddings = self.user_model(features["user_id"]) # And pick out the movie features and pass them into the movie model, # getting embeddings back. positive_movie_embeddings = self.movie_model(features["movie_title"]) # The task computes the loss and the metrics.
return self.task(user_embeddings, positive_movie_embeddings) ``` The `tfrs.Model` base class is simply a convenience class: it allows us to compute both training and test losses using the same method. Under the hood, it's still a plain Keras model. You could achieve the same functionality by inheriting from `tf.keras.Model` and overriding the `train_step` and `test_step` functions (see [the guide](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit) for details): ``` class NoBaseClassMovielensModel(tf.keras.Model): def __init__(self, user_model, movie_model): super().__init__() self.movie_model: tf.keras.Model = movie_model self.user_model: tf.keras.Model = user_model self.task: tf.keras.layers.Layer = task def train_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor: # Set up a gradient tape to record gradients. with tf.GradientTape() as tape: # Loss computation. user_embeddings = self.user_model(features["user_id"]) positive_movie_embeddings = self.movie_model(features["movie_title"]) loss = self.task(user_embeddings, positive_movie_embeddings) # Handle regularization losses as well. regularization_loss = sum(self.losses) total_loss = loss + regularization_loss gradients = tape.gradient(total_loss, self.trainable_variables) self.optimizer.apply_gradients(zip(gradients, self.trainable_variables)) metrics = {metric.name: metric.result() for metric in self.metrics} metrics["loss"] = loss metrics["regularization_loss"] = regularization_loss metrics["total_loss"] = total_loss return metrics def test_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor: # Loss computation. user_embeddings = self.user_model(features["user_id"]) positive_movie_embeddings = self.movie_model(features["movie_title"]) loss = self.task(user_embeddings, positive_movie_embeddings) # Handle regularization losses as well.
regularization_loss = sum(self.losses) total_loss = loss + regularization_loss metrics = {metric.name: metric.result() for metric in self.metrics} metrics["loss"] = loss metrics["regularization_loss"] = regularization_loss metrics["total_loss"] = total_loss return metrics ``` In these tutorials, however, we stick to using the `tfrs.Model` base class to keep our focus on modelling and abstract away some of the boilerplate. ## Fitting and evaluating After defining the model, we can use standard Keras fitting and evaluation routines to fit and evaluate the model. Let's first instantiate the model. ``` model = MovielensModel(user_model, movie_model) model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1)) ``` Then shuffle, batch, and cache the training and evaluation data. ``` cached_train = train.shuffle(100_000).batch(8192).cache() cached_test = test.batch(4096).cache() ``` Then train the model: ``` model.fit(cached_train, epochs=3) ``` As the model trains, the loss is falling and a set of top-k retrieval metrics is updated. These tell us whether the true positive is in the top-k retrieved items from the entire candidate set. For example, a top-5 categorical accuracy metric of 0.2 would tell us that, on average, the true positive is in the top 5 retrieved items 20% of the time. Note that, in this example, we evaluate the metrics during training as well as evaluation. Because this can be quite slow with large candidate sets, it may be prudent to turn metric calculation off in training, and only run it in evaluation. Finally, we can evaluate our model on the test set: ``` model.evaluate(cached_test, return_dict=True) ``` Test set performance is much worse than training performance. This is due to two factors: 1. Our model is likely to perform better on the data that it has seen, simply because it can memorize it. This overfitting phenomenon is especially strong when models have many parameters. 
It can be mitigated by model regularization and use of user and movie features that help the model generalize better to unseen data. 2. The model is re-recommending some of users' already watched movies. These known-positive watches can crowd test movies out of the top K recommendations. The second phenomenon can be tackled by excluding previously seen movies from test recommendations. This approach is relatively common in the recommender systems literature, but we don't follow it in these tutorials. If not recommending past watches is important, we should expect appropriately specified models to learn this behaviour automatically from past user history and contextual information. Additionally, it is often appropriate to recommend the same item multiple times (say, an evergreen TV series or a regularly purchased item). ## Making predictions Now that we have a model, we would like to be able to make predictions. We can use the `tfrs.layers.factorized_top_k.BruteForce` layer to do this. ``` # Create a model that takes in raw query features, and index = tfrs.layers.factorized_top_k.BruteForce(model.user_model) # recommends movies out of the entire movies dataset. index.index(movies.batch(100).map(model.movie_model), movies) # Get recommendations. _, titles = index(tf.constant(["42"])) print(f"Recommendations for user 42: {titles[0, :3]}") ``` Of course, the `BruteForce` layer is going to be too slow to serve a model with many possible candidates. The following section shows how to speed this up by using an approximate retrieval index. ## Model serving After the model is trained, we need a way to deploy it. In a two-tower retrieval model, serving has two components: - a serving query model, taking in features of the query and transforming them into a query embedding, and - a serving candidate model.
This most often takes the form of an approximate nearest neighbours (ANN) index which allows fast approximate lookup of candidates in response to a query produced by the query model. In TFRS, both components can be packaged into a single exportable model, giving us a model that takes the raw user id and returns the titles of top movies for that user. This is done via exporting the model to a `SavedModel` format, which makes it possible to serve using [TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving). To deploy a model like this, we simply export the `BruteForce` layer we created above: ``` # Export the query model. with tempfile.TemporaryDirectory() as tmp: path = os.path.join(tmp, "model") # Save the index. index.save(path) # Load it back; can also be done in TensorFlow Serving. loaded = tf.keras.models.load_model(path) # Pass a user id in, get top predicted movie titles back. scores, titles = loaded(["42"]) print(f"Recommendations: {titles[0][:3]}") ``` We can also export an approximate retrieval index to speed up predictions. This will make it possible to efficiently surface recommendations from sets of tens of millions of candidates. To do so, we can use the `scann` package. This is an optional dependency of TFRS, and we installed it separately at the beginning of this tutorial by calling `!pip install -q scann`. Once installed we can use the TFRS `ScaNN` layer: ``` scann_index = tfrs.layers.factorized_top_k.ScaNN(model.user_model) scann_index.index(movies.batch(100).map(model.movie_model), movies) ``` This layer will perform _approximate_ lookups: this makes retrieval slightly less accurate, but orders of magnitude faster on large candidate sets. ``` # Get recommendations. _, titles = scann_index(tf.constant(["42"])) print(f"Recommendations for user 42: {titles[0, :3]}") ``` Exporting it for serving is as easy as exporting the `BruteForce` layer: ``` # Export the query model. 
with tempfile.TemporaryDirectory() as tmp: path = os.path.join(tmp, "model") # Save the index. scann_index.save( path, options=tf.saved_model.SaveOptions(namespace_whitelist=["Scann"]) ) # Load it back; can also be done in TensorFlow Serving. loaded = tf.keras.models.load_model(path) # Pass a user id in, get top predicted movie titles back. scores, titles = loaded(["42"]) print(f"Recommendations: {titles[0][:3]}") ``` To learn more about using and tuning fast approximate retrieval models, have a look at our [efficient serving](https://tensorflow.org/recommenders/examples/efficient_serving) tutorial. ## Next steps This concludes the retrieval tutorial. To expand on what is presented here, have a look at: 1. Learning multi-task models: jointly optimizing for ratings and clicks. 2. Using movie metadata: building a more complex movie model to alleviate cold-start. ``` ```
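Conceptually, the `BruteForce` layer used above scores every candidate by a dot product with the query embedding and keeps the top k. A minimal pure-Python sketch of that idea, with made-up titles and 2-d embeddings (the real layer is vectorized and runs inside the TensorFlow graph):

```python
def brute_force_top_k(query, candidates, k=3):
    """Score each (title, embedding) candidate by dot product with the
    query embedding and return the k best titles, highest score first."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    scored = sorted(candidates, key=lambda c: dot(query, c[1]), reverse=True)
    return [title for title, _ in scored[:k]]

query = [1.0, 0.0]
candidates = [("A", [0.9, 0.1]), ("B", [-0.2, 1.0]), ("C", [0.5, 0.5])]
print(brute_force_top_k(query, candidates, k=2))  # -> ['A', 'C']
```

This exhaustive scan is exact but O(number of candidates) per query, which is precisely why ScaNN's approximate index wins once the catalogue reaches millions of items.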
<h1> Using Machine Learning APIs </h1> First, visit <a href="http://console.cloud.google.com/apis">API console</a>, choose "Credentials" on the left-hand menu. Choose "Create Credentials" and generate an API key for your application. You should probably restrict it by IP address to prevent abuse, but for now, just leave that field blank and delete the API key after trying out this demo. Copy-paste your API Key here: ``` APIKEY="CHANGE-THIS-KEY" # Replace with your API key ``` <b> Note: Make sure you generate an API Key and replace the value above. The sample key will not work.</b> From the same API console, choose "Dashboard" on the left-hand menu and "Enable API". Enable the following APIs for your project (search for them) if they are not already enabled: <ol> <li> Google Translate API </li> <li> Google Cloud Vision API </li> <li> Google Natural Language API </li> <li> Google Cloud Speech API </li> </ol> Finally, because we are calling the APIs from Python (clients in many other languages are available), let's install the Python package (it's not installed by default on Datalab) ``` !pip install --upgrade google-api-python-client ``` <h2> Invoke Translate API </h2> ``` # running Translate API from googleapiclient.discovery import build service = build('translate', 'v2', developerKey=APIKEY) # use the service inputs = ['is it really this easy?', 'amazing technology', 'wow'] outputs = service.translations().list(source='en', target='fr', q=inputs).execute() # print outputs for input, output in zip(inputs, outputs['translations']): print("{0} -> {1}".format(input, output['translatedText'])) ``` <h2> Invoke Vision API </h2> The Vision API can work off an image in Cloud Storage or embedded directly into a POST message. I'll use Cloud Storage and do OCR on this image: <img src="https://storage.googleapis.com/cloud-training-demos/vision/sign2.jpg" width="200" />. 
That photograph is from http://www.publicdomainpictures.net/view-image.php?image=15842 ``` # Running Vision API import base64 IMAGE="gs://cloud-training-demos/vision/sign2.jpg" vservice = build('vision', 'v1', developerKey=APIKEY) request = vservice.images().annotate(body={ 'requests': [{ 'image': { 'source': { 'gcs_image_uri': IMAGE } }, 'features': [{ 'type': 'TEXT_DETECTION', 'maxResults': 3, }] }], }) responses = request.execute(num_retries=3) print(responses) foreigntext = responses['responses'][0]['textAnnotations'][0]['description'] foreignlang = responses['responses'][0]['textAnnotations'][0]['locale'] print(foreignlang, foreigntext) ``` <h2> Translate sign </h2> ``` inputs=[foreigntext] outputs = service.translations().list(source=foreignlang, target='en', q=inputs).execute() # print(outputs) for input, output in zip(inputs, outputs['translations']): print("{0} -> {1}".format(input, output['translatedText'])) ``` <h2> Sentiment analysis with Language API </h2> Let's evaluate the sentiment of some famous quotes using Google Cloud Natural Language API. ``` lservice = build('language', 'v1beta1', developerKey=APIKEY) quotes = [ 'To succeed, you must have tremendous perseverance, tremendous will.', 'It’s not that I’m so smart, it’s just that I stay with problems longer.', 'Love is quivering happiness.', 'Love is of all passions the strongest, for it attacks simultaneously the head, the heart, and the senses.', 'What difference does it make to the dead, the orphans and the homeless, whether the mad destruction is wrought under the name of totalitarianism or in the holy name of liberty or democracy?', 'When someone you love dies, and you’re not expecting it, you don’t lose her all at once; you lose her in pieces over a long time — the way the mail stops coming, and her scent fades from the pillows and even from the clothes in her closet and drawers. 
' ] for quote in quotes: response = lservice.documents().analyzeSentiment( body={ 'document': { 'type': 'PLAIN_TEXT', 'content': quote } }).execute() polarity = response['documentSentiment']['polarity'] magnitude = response['documentSentiment']['magnitude'] print('POLARITY=%s MAGNITUDE=%s for %s' % (polarity, magnitude, quote)) ``` <h2> Speech API </h2> The Speech API can work on streaming data, audio content encoded and embedded directly into the POST message, or on a file on Cloud Storage. Here I'll pass in this <a href="https://storage.googleapis.com/cloud-training-demos/vision/audio.raw">audio file</a> in Cloud Storage. ``` sservice = build('speech', 'v1beta1', developerKey=APIKEY) response = sservice.speech().syncrecognize( body={ 'config': { 'encoding': 'LINEAR16', 'sampleRate': 16000 }, 'audio': { 'uri': 'gs://cloud-training-demos/vision/audio.raw' } }).execute() print(response) print(response['results'][0]['alternatives'][0]['transcript']) print('Confidence=%f' % response['results'][0]['alternatives'][0]['confidence']) ``` <h2> Clean up </h2> Remember to delete the API key by visiting <a href="http://console.cloud.google.com/apis">API console</a>. If necessary, commit all your notebooks to git. If you are running Datalab on a Compute Engine VM or delegating to one, remember to stop or shut it down so that you are not charged. ## Challenge Exercise Here are a few portraits from the Metropolitan Museum of Art, New York (they are part of a [BigQuery public dataset](https://bigquery.cloud.google.com/dataset/bigquery-public-data:the_met) ): * gs://cloud-training-demos/images/met/APS6880.jpg * gs://cloud-training-demos/images/met/DP205018.jpg * gs://cloud-training-demos/images/met/DP290402.jpg * gs://cloud-training-demos/images/met/DP700302.jpg Use the Vision API to identify which of these images depict happy people and which ones depict unhappy people. 
Hint (highlight to see): <p style="color:white">You will need to look for joyLikelihood and/or sorrowLikelihood from the response.</p> Copyright 2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
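For the challenge exercise above, one way to compare `joyLikelihood` and `sorrowLikelihood` is to rank the Vision API's likelihood strings and pick whichever emotion scores higher. The sketch below assumes the documented ordering of the `Likelihood` enum; `classify_mood` is a hypothetical helper, and the tie-breaking rule is arbitrary:

```python
# Ordering of the Vision API's Likelihood enum, from least to most likely.
LIKELIHOOD_ORDER = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE",
                    "LIKELY", "VERY_LIKELY"]

def classify_mood(face_annotation):
    """Return 'happy', 'unhappy', or 'neutral' by comparing the joy and
    sorrow likelihood strings on a single faceAnnotations entry."""
    joy = LIKELIHOOD_ORDER.index(face_annotation.get("joyLikelihood", "UNKNOWN"))
    sorrow = LIKELIHOOD_ORDER.index(face_annotation.get("sorrowLikelihood", "UNKNOWN"))
    if joy > sorrow:
        return "happy"
    if sorrow > joy:
        return "unhappy"
    return "neutral"

print(classify_mood({"joyLikelihood": "VERY_LIKELY",
                     "sorrowLikelihood": "VERY_UNLIKELY"}))  # -> happy
```

To use it on the portraits, request the `FACE_DETECTION` feature instead of `TEXT_DETECTION` and feed each entry of `responses['responses'][0]['faceAnnotations']` to the helper.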
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import pydicom # Makes it so that any changes in pymedphys are automatically # propagated into the notebook without needing a kernel reset. from IPython.lib.deepreload import reload %load_ext autoreload %autoreload 2 import pymedphys from pymedphys._mosaiq.helpers import FIELD_TYPES data_paths = pymedphys.zip_data_paths("tel-dicom-pairs.zip") dir_name = [path.parent.name for path in data_paths if 'BreastAndBoost' in path.parent.name][0] dir_name def get_file_type(input_paths, file_type, exact_name=False): if exact_name: paths = [path for path in input_paths if path.name == file_type] else: paths = [path for path in input_paths if file_type in path.name] assert len(paths) == 1 return paths[0] current_paths = [path for path in data_paths if path.parent.name == dir_name] current_paths tel_path_fraction_1 = get_file_type(current_paths, "tel.1", exact_name=True) tel_path_fraction_2 = get_file_type(current_paths, "rxB_tel.1", exact_name=True) dcm_path = get_file_type(current_paths, "dcm") tel_path_fraction_1 dcm_path delivery_dcm = pymedphys.Delivery.from_dicom( pydicom.read_file(str(dcm_path), force=True), fraction_number=1 ) delivery_monaco = pymedphys.Delivery.from_monaco(tel_path_fraction_1) plt.plot(delivery_monaco.gantry) plt.plot(delivery_dcm.gantry) dcm_mudensity = delivery_dcm.mudensity() monaco_mudensity = delivery_monaco.mudensity() GRID = pymedphys.mudensity.grid() pymedphys.mudensity.display(GRID, dcm_mudensity) pymedphys.mudensity.display(GRID, monaco_mudensity) diff = monaco_mudensity - dcm_mudensity pymedphys.mudensity.display(GRID, diff) def get_patient_fields(cursor, patient_id): """Returns all of the patient fields for a given Patient ID.
""" patient_id = str(patient_id) patient_field_results = pymedphys.mosaiq.execute( cursor, """ SELECT TxField.FLD_ID, TxField.Field_Label, TxField.Field_Name, TxField.Version, TxField.Meterset, TxField.Type_Enum, Site.Site_Name FROM Ident, TxField, Site WHERE TxField.Pat_ID1 = Ident.Pat_ID1 AND TxField.SIT_Set_ID = Site.SIT_Set_ID AND Ident.IDA = %(patient_id)s """, {"patient_id": patient_id}, ) table = pd.DataFrame( data=patient_field_results, columns=[ "field_id", "field_label", "field_name", "field_version", "monitor_units", "field_type", "site", ], ) table.drop_duplicates(inplace=True) table["field_type"] = [FIELD_TYPES[item] for item in table["field_type"]] return table mosaiq_sql_host = 'physics-server:31433' patient_id = '200054' with pymedphys.mosaiq.connect(mosaiq_sql_host) as cursor: patient_fields = get_patient_fields(cursor, patient_id) patient_fields with pymedphys.mosaiq.connect(mosaiq_sql_host) as cursor: mosaiq_delivery = pymedphys.Delivery.from_mosaiq(cursor, 7115) mosaiq_mudensity = mosaiq_delivery.mudensity() pymedphys.mudensity.display(GRID, mosaiq_mudensity) pymedphys.mudensity.display(GRID, mosaiq_mudensity - dcm_mudensity) # delivery_dcm.gantry len(delivery_monaco.gantry) len(delivery_dcm.gantry) len(delivery_monaco.mlc) len(delivery_dcm.mlc) monaco_mlc = np.array(delivery_monaco.mlc) dcm_mlc = np.array(delivery_dcm.mlc) diff = monaco_mlc - dcm_mlc np.shape(diff) monaco_mlc[0,:,:] dcm_mlc[0,:,:] diff[0,:,:] np.where(np.abs(diff) > 0.09) ```
github_jupyter
# Benchmarking Nearest Neighbor Searches in Python

*This notebook originally appeared as a* [*blog post*](http://jakevdp.github.com/blog/2013/04/29/benchmarking-nearest-neighbor-searches-in-python/) *by Jake Vanderplas on* [*Pythonic Perambulations*](http://jakevdp.github.com/)

<!-- PELICAN_BEGIN_SUMMARY -->

I recently submitted a scikit-learn [pull request](https://github.com/scikit-learn/scikit-learn/pull/1732) containing a brand new ball tree and kd-tree for fast nearest neighbor searches in python. In this post I want to highlight some of the features of the new ball tree and kd-tree code that's part of this pull request, compare it to what's available in the ``scipy.spatial.cKDTree`` implementation, and run a few benchmarks showing the performance of these methods on various data sets.

<!-- PELICAN_END_SUMMARY -->

My first-ever open source contribution was a C++ Ball Tree code, with a SWIG python wrapper, that I submitted to scikit-learn. A [Ball Tree](https://en.wikipedia.org/wiki/Ball_tree) is a data structure that can be used for fast high-dimensional nearest-neighbor searches: I'd written it for some work I was doing on nonlinear dimensionality reduction of astronomical data (work that eventually led to [these](http://adsabs.harvard.edu/abs/2009AJ....138.1365V) [two](http://adsabs.harvard.edu/abs/2011AJ....142..203D) papers), and thought that it might find a good home in the scikit-learn project, which Gael and others had just begun to bring out of hibernation.

After a short time, it became clear that the C++ code was not performing as well as it could be. I spent a bit of time writing a Cython adaptation of the Ball Tree, which is what currently resides in the [``sklearn.neighbors``](http://scikit-learn.org/0.13/modules/neighbors.html) module. Though this implementation is fairly fast, it still has several weaknesses:

- It only works with a Minkowski distance metric (of which Euclidean is a special case).
  In general, a ball tree can be written to handle any true metric (i.e. one which obeys the triangle inequality).
- It implements only the single-tree approach, not the potentially faster dual-tree approach in which a ball tree is constructed for both the training and query sets.
- It implements only nearest-neighbors queries, and not any of the other tasks that a ball tree can help optimize: e.g. kernel density estimation, N-point correlation function calculations, and other so-called [Generalized N-body Problems](http://www.fast-lab.org/nbodyproblems.html).

I had started running into these limits when creating astronomical data analysis examples for [astroML](http://www.astroML.org), the Python library for Astronomy and Machine Learning that I released last fall. I'd been thinking about it for a while, and finally decided it was time to invest the effort into updating and enhancing the Ball Tree. It took me longer than I planned (in fact, some of my [first posts](http://jakevdp.github.io/blog/2012/08/08/memoryview-benchmarks/) on this blog last August came out of the benchmarking experiments aimed at this task), but just a couple weeks ago I finally got things working and submitted a [pull request](https://github.com/scikit-learn/scikit-learn/pull/1732) to scikit-learn with the new code.

## Features of the New Ball Tree and KD Tree

The new code is actually more than simply a new ball tree: it's written as a generic *N* dimensional binary search tree, with specific methods added to implement a ball tree and a kd-tree on top of the same core functionality.
The new trees have a lot of very interesting and powerful features:

- The ball tree works with any of the following distance metrics, which match those found in the module ``scipy.spatial.distance``:

  ``['euclidean', 'minkowski', 'manhattan', 'chebyshev', 'seuclidean', 'mahalanobis', 'wminkowski', 'hamming', 'canberra', 'braycurtis', 'matching', 'jaccard', 'dice', 'kulsinski', 'rogerstanimoto', 'russellrao', 'sokalmichener', 'sokalsneath', 'haversine']``

  Alternatively, the user can specify a callable Python function to act as the distance metric. While this will be quite a bit slower than using one of the optimized metrics above, it adds nice flexibility.
- The kd-tree works with only the first four of the above metrics. This limitation is primarily because the distance bounds are less efficiently calculated for metrics which are not axis-aligned.
- Both the ball tree and kd-tree implement k-neighbor and bounded neighbor searches, and can use either a single tree or dual tree approach, with either a breadth-first or depth-first tree traversal. Naive nearest neighbor searches scale as $\mathcal{O}[N^2]$; the tree-based methods here scale as $\mathcal{O}[N \log N]$.
- Both the ball tree and kd-tree have their memory pre-allocated entirely by ``numpy``: this not only leads to code that's easier to debug and maintain (no memory errors!), but means that either data structure can be serialized using Python's ``pickle`` module. This is a very important feature in some contexts, most notably when estimators are being sent between multiple machines in a parallel computing framework.
- Both the ball tree and kd-tree implement fast kernel density estimation (KDE), which can be used within any of the valid distance metrics. The supported kernels are

  ``['gaussian', 'tophat', 'epanechnikov', 'exponential', 'linear', 'cosine']``

  the combination of these kernel options with the distance metric options above leads to an extremely large number of effective kernel forms.
  Naive KDE scales as $\mathcal{O}[N^2]$; the tree-based methods here scale as $\mathcal{O}[N \log N]$.
- Both the ball tree and kd-tree implement fast 2-point correlation functions. A correlation function is a statistical measure of the distribution of data (related to the Fourier power spectrum of the density distribution). Naive 2-point correlation calculations scale as $\mathcal{O}[N^2]$; the tree-based methods here scale as $\mathcal{O}[N \log N]$.

## Comparison with cKDTree

As mentioned above, there is another nearest neighbor tree available in SciPy: ``scipy.spatial.cKDTree``. There are a number of things which distinguish the ``cKDTree`` from the new kd-tree described here:

- Like the new kd-tree, ``cKDTree`` implements only the first four of the metrics listed above.
- Unlike the new ball tree and kd-tree, ``cKDTree`` uses explicit dynamic memory allocation at the construction phase. This means that the trained tree object cannot be pickled, and must be re-constructed rather than serialized.
- Because of the flexibility gained through the use of dynamic node allocation, ``cKDTree`` can implement more sophisticated building methods: it uses the "sliding midpoint rule" to ensure that nodes do not become too long and thin. One side-effect of this, however, is that for certain distributions of points, you can end up with a large proliferation of the number of nodes, which may lead to a huge memory footprint (even memory errors in some cases) and potentially inefficient searches.
- The ``cKDTree`` builds its nodes covering the entire $N$-dimensional data space. This leads to relatively efficient build times because node bounds do not need to be recomputed at each level. However, the resulting tree is not as compact as it could be, which potentially leads to slower query times. The new ball tree and kd tree code shrinks nodes to only cover the part of the volume which contains points.
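The serialization point above is easy to check for yourself. Here is a minimal sketch (using the scikit-learn ``BallTree`` with the haversine metric) showing that a built tree survives a pickle round-trip and returns identical query results:

```python
import pickle

import numpy as np
from sklearn.neighbors import BallTree

rng = np.random.RandomState(0)
# (lat, lon) pairs in radians, as the haversine metric expects
X = rng.uniform(-1, 1, size=(100, 2))

tree = BallTree(X, leaf_size=15, metric='haversine')
restored = pickle.loads(pickle.dumps(tree))  # serialize and rebuild

dist, ind = tree.query(X[:5], k=3)
dist2, ind2 = restored.query(X[:5], k=3)

# The round-tripped tree answers queries identically.
assert np.allclose(dist, dist2) and np.array_equal(ind, ind2)
```

Attempting the same `pickle.dumps` on a `scipy.spatial.cKDTree` of the era discussed here would fail for the reason given above: its nodes were dynamically allocated C structures.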
With these distinctions, I thought it would be interesting to do some benchmarks and get a detailed comparison of the performance of the three trees. Note that the ``cKDTree`` has just recently been re-written and extended, and is much faster than its previous incarnation. For that reason, I've run these benchmarks with the current bleeding-edge scipy.

## Preparing the Benchmarks

But enough words. Here we'll create some scripts to run these benchmarks. There are several variables that will affect the computation time for a neighbors query:

- **The number of points** $N$: for a brute-force search, the query will scale as $\mathcal{O}[N^2]$. Tree methods usually bring this down to $\mathcal{O}[N \log N]$.
- **The dimension of the data**, $D$: both brute-force and tree-based methods will scale approximately as $\mathcal{O}[D]$. For high dimensions, however, the [curse of dimensionality](http://en.wikipedia.org/wiki/Curse_of_dimensionality) can make this scaling much worse.
- **The desired number of neighbors**, $k$: $k$ does not affect build time, but affects query time in a way that is difficult to quantify.
- **The tree leaf size**, ``leaf_size``: The leaf size of a tree roughly specifies the number of points at which the tree switches to brute-force, and encodes the tradeoff between the cost of accessing a node, and the cost of computing the distance function.
- **The structure of the data**: though data structure and distribution do not affect brute-force queries, they can have a large effect on the query times of tree-based methods.
- **Single/Dual tree query**: A single-tree query searches for neighbors of one point at a time. A dual tree query builds a tree on both sets of points, and traverses both trees at the same time. This can lead to significant speedups in some cases.
- **Breadth-first vs Depth-first search**: This determines how the nodes are traversed. In practice, it seems not to make a significant difference, so it won't be explored here.
- **The chosen metric**: some metrics are slower to compute than others. The metric may also affect the structure of the data, the geometry of the tree, and thus the query and build times.

In reality, query times depend on all seven of these variables in a fairly complicated way. For that reason, I'm going to show several rounds of benchmarks where these variables are modified while holding the others constant. We'll do all our tests here with the most common Euclidean distance metric, though others could be substituted if desired.

We'll start by doing some imports to get our IPython notebook ready for the benchmarks. Note that at present, you'll have to install scikit-learn off [my development branch](https://github.com/jakevdp/scikit-learn/tree/new_ball_tree) for this to work. In the future, the new KDTree and BallTree will be part of a scikit-learn release.

```
%pylab inline
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neighbors import KDTree, BallTree
```

### Data Sets

For spatial tree benchmarks, it's important to use various realistic data sets. In practice, data rarely looks like a uniform distribution, so running benchmarks on such a distribution will not lead to accurate expectations of the algorithm performance. For this reason, we'll test three datasets side-by-side: a uniform distribution of points, a set of pixel values from images of hand-written digits, and a set of flux observations from astronomical spectra.

```
# Uniform random distribution
uniform_N = np.random.random((10000, 4))
uniform_D = np.random.random((1797, 128))

# Digits distribution
from sklearn.datasets import load_digits
digits = load_digits()
print(digits.images.shape)

# We need more than 1797 digits, so let's stack the central
# regions of the images to inflate the dataset.
digits_N = np.vstack([digits.images[:, 2:4, 2:4],
                      digits.images[:, 2:4, 4:6],
                      digits.images[:, 4:6, 2:4],
                      digits.images[:, 4:6, 4:6],
                      digits.images[:, 4:6, 5:7],
                      digits.images[:, 5:7, 4:6]])
digits_N = digits_N.reshape((-1, 4))[:10000]

# For the dimensionality test, we need up to 128 dimensions, so
# we'll combine some of the images.
digits_D = np.hstack((digits.data,
                      np.vstack((digits.data[:1000], digits.data[1000:]))))

# The edge pixels are all basically zero. For the dimensionality tests
# to be reasonable, we want the low-dimension case to probe interior pixels
digits_D = np.hstack([digits_D[:, 28:], digits_D[:, :28]])

# The spectra can be downloaded with astroML: see http://www.astroML.org
from astroML.datasets import fetch_sdss_corrected_spectra
spectra = fetch_sdss_corrected_spectra()['spectra']
spectra.shape

# Take sections of spectra and stack them to reach N=10000 samples
spectra_N = np.vstack([spectra[:, 500:504],
                       spectra[:, 504:508],
                       spectra[:2000, 508:512]])

# Take a central region of the spectra for the dimensionality study
spectra_D = spectra[:1797, 400:528]

print(uniform_N.shape, uniform_D.shape)
print(digits_N.shape, digits_D.shape)
print(spectra_N.shape, spectra_D.shape)
```

We now have three datasets with similar sizes. Just for the sake of visualization, let's visualize two dimensions from each as a scatter-plot:

```
titles = ['Uniform', 'Digits', 'Spectra']
datasets_D = [uniform_D, digits_D, spectra_D]
datasets_N = [uniform_N, digits_N, spectra_N]

fig, ax = plt.subplots(1, 3, figsize=(12, 3.5))
for axi, title, dataset in zip(ax, titles, datasets_D):
    axi.plot(dataset[:, 1], dataset[:, 2], '.k')
    axi.set_title(title, size=14)
```

We can see how different the structure is between these three sets. The uniform data is randomly and densely distributed throughout the space. The digits data actually comprise discrete values between 0 and 16, and more-or-less fill certain regions of the parameter space.
The spectra display strongly-correlated values, such that they occupy a very small fraction of the total parameter volume.

### Benchmarking Scripts

Now we'll create some scripts that will help us to run the benchmarks. Don't worry about these details for now -- you can simply scroll down past these and get to the plots.

```
from time import time

def average_time(executable, *args, **kwargs):
    """Compute the average time over N runs"""
    N = 5
    t = 0
    for i in range(N):
        t0 = time()
        res = executable(*args, **kwargs)
        t1 = time()
        t += (t1 - t0)
    return res, t * 1. / N

TREE_DICT = dict(cKDTree=cKDTree, KDTree=KDTree, BallTree=BallTree)
colors = dict(cKDTree='black', KDTree='red', BallTree='blue',
              brute='gray', gaussian_kde='black')

def bench_knn_query(tree_name, X, N, D, leaf_size, k,
                    build_args=None, query_args=None):
    """Run benchmarks for the k-nearest neighbors query"""
    Tree = TREE_DICT[tree_name]

    if build_args is None:
        build_args = {}
    if query_args is None:
        query_args = {}

    NDLk = np.broadcast(N, D, leaf_size, k)

    t_build = np.zeros(NDLk.size)
    t_query = np.zeros(NDLk.size)

    for i, (N, D, leaf_size, k) in enumerate(NDLk):
        XND = X[:N, :D]

        if tree_name == 'cKDTree':
            build_args['leafsize'] = leaf_size
        else:
            build_args['leaf_size'] = leaf_size

        tree, t_build[i] = average_time(Tree, XND, **build_args)
        res, t_query[i] = average_time(tree.query, XND, k, **query_args)

    return t_build, t_query

def plot_scaling(data, estimate_brute=False, suptitle='', **kwargs):
    """Plot the scaling comparisons for different tree types"""
    # Find the iterable key
    iterables = [key for (key, val) in kwargs.items() if hasattr(val, '__len__')]
    if len(iterables) != 1:
        raise ValueError("A single iterable argument must be specified")
    x_key = iterables[0]
    x = kwargs[x_key]

    # Set some defaults
    if 'N' not in kwargs:
        kwargs['N'] = data.shape[0]
    if 'D' not in kwargs:
        kwargs['D'] = data.shape[1]
    if 'leaf_size' not in kwargs:
        kwargs['leaf_size'] = 15
    if 'k' not in kwargs:
        kwargs['k'] = 5

    fig, ax = plt.subplots(1, 2, figsize=(10, 4),
                           subplot_kw=dict(yscale='log', xscale='log'))

    for tree_name in ['cKDTree', 'KDTree', 'BallTree']:
        t_build, t_query = bench_knn_query(tree_name, data, **kwargs)
        ax[0].plot(x, t_build, color=colors[tree_name], label=tree_name)
        ax[1].plot(x, t_query, color=colors[tree_name], label=tree_name)

        if tree_name != 'cKDTree':
            t_build, t_query = bench_knn_query(
                tree_name, data,
                query_args=dict(breadth_first=True, dualtree=True),
                **kwargs)
            ax[0].plot(x, t_build, color=colors[tree_name], linestyle='--')
            ax[1].plot(x, t_query, color=colors[tree_name], linestyle='--')

    if estimate_brute:
        Nmin = np.min(kwargs['N'])
        Dmin = np.min(kwargs['D'])
        kmin = np.min(kwargs['k'])

        # get a baseline brute force time by setting the leaf size large,
        # ensuring a brute force calculation over the data
        _, t0 = bench_knn_query('KDTree', data, N=Nmin, D=Dmin,
                                leaf_size=2 * Nmin, k=kmin)

        # use the theoretical scaling: O[N^2 D]
        if x_key == 'N':
            exponent = 2
        elif x_key == 'D':
            exponent = 1
        else:
            exponent = 0

        t_brute = t0 * (np.array(x, dtype=float) / np.min(x)) ** exponent
        ax[1].plot(x, t_brute, color=colors['brute'],
                   label='brute force (est.)')

    for axi in ax:
        axi.grid(True)
        axi.set_xlabel(x_key)
        axi.set_ylabel('time (s)')
        axi.legend(loc='upper left')
        axi.set_xlim(np.min(x), np.max(x))

    info_str = ', '.join([key + '={' + key + '}' for key in ['N', 'D', 'k']
                          if key != x_key])
    ax[0].set_title('Tree Build Time ({0})'.format(info_str.format(**kwargs)))
    ax[1].set_title('Tree Query Time ({0})'.format(info_str.format(**kwargs)))

    if suptitle:
        fig.suptitle(suptitle, size=16)

    return fig, ax
```

## Benchmark Plots

Now that all the code is in place, we can run the benchmarks. For all the plots, we'll show the build time and query time side-by-side. Note the scales on the graphs below: overall, the build times are usually a factor of 10-100 faster than the query times, so the differences in build times are rarely worth worrying about.
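The build-vs-query asymmetry noted above can be reproduced on a small scale with a released scikit-learn ``KDTree`` (a minimal sketch, absolute timings will vary by machine):

```python
from time import time

import numpy as np
from sklearn.neighbors import KDTree

X = np.random.random((2000, 4))

t0 = time()
tree = KDTree(X, leaf_size=15)  # build once
t_build = time() - t0

t0 = time()
dist, ind = tree.query(X, k=5)  # query every point
t_query = time() - t0

# For an all-points query, query time typically dominates build time.
print(t_build, t_query)
```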
A note about legends: we'll show **single-tree approaches as a solid line**, and we'll show **dual-tree approaches as dashed lines**. In addition, where it's relevant, we'll estimate the brute force scaling for ease of comparison.

### Scaling with Leaf Size

We will start by exploring the scaling with the ``leaf_size`` parameter: recall that the leaf size controls the minimum number of points in a given node, and effectively adjusts the tradeoff between the cost of node traversal and the cost of a brute-force distance estimate.

```
leaf_size = 2 ** np.arange(10)
for title, dataset in zip(titles, datasets_N):
    fig, ax = plot_scaling(dataset, N=2000, leaf_size=leaf_size,
                           suptitle=title)
```

Note that with larger leaf size, the build time decreases: this is because fewer nodes need to be built. For the query times, we see a distinct minimum. For very small leaf sizes, the query slows down because the algorithm must access many nodes to complete the query. For very large leaf sizes, the query slows down because there are too many pairwise distance computations. If we were to use a less efficient metric function, the balance between these would change and a larger leaf size would be warranted.

This benchmark motivates our setting the leaf size to 15 for the remaining tests.

### Scaling with Number of Neighbors

Here we'll plot the scaling with the number of neighbors $k$. This should not affect the build time, because $k$ does not enter there. It will, however, affect the query time:

```
k = 2 ** np.arange(1, 10)
for title, dataset in zip(titles, datasets_N):
    fig, ax = plot_scaling(dataset, N=4000, k=k, suptitle=title,
                           estimate_brute=True)
```

Naively you might expect linear scaling with $k$, but for large $k$ that is not the case. Because a priority queue of the nearest neighbors must be maintained, the scaling is super-linear for large $k$. We also see that brute force has no dependence on $k$ (all distances must be computed in any case).
This means that if $k$ is very large, a brute force approach will win out (though the exact value for which this is true depends on $N$, $D$, the structure of the data, and all the other factors mentioned above).

Note that although the cKDTree build time is a factor of ~3 faster than the others, the absolute time difference is less than two milliseconds: a difference which is orders of magnitude smaller than the query time. This is due to the shortcut mentioned above: the ``cKDTree`` doesn't take the time to shrink the bounds of each node.

### Scaling with the Number of Points

This is where things get interesting: the scaling with the number of points $N$:

```
N = (10 ** np.linspace(2, 4, 10)).astype(int)
for title, dataset in zip(titles, datasets_N):
    plot_scaling(dataset, N=N, estimate_brute=True, suptitle=title)
```

We have set *d* = 4 and *k* = 5 in each case for ease of comparison. Examining the graphs, we see some common traits: all the tree algorithms seem to be scaling as approximately $\mathcal{O}[N\log N]$, and both kd-trees are beating the ball tree. Somewhat surprisingly, the dual tree approaches are slower than the single-tree approaches. For 10,000 points, the speedup over brute force is around a factor of 50, and this speedup will get larger as $N$ further increases.

Additionally, the comparison of datasets is interesting. Even for this low dimensionality, the tree methods tend to be slightly faster for structured data than for uniform data. Surprisingly, the ``cKDTree`` performance gets *worse* for highly structured data. I believe this is due to the use of the sliding midpoint rule: it works well for evenly distributed data, but for highly structured data can lead to situations where there are many very sparsely-populated nodes.

### Scaling with the Dimension

As a final benchmark, we'll plot the scaling with dimension.
```
D = 2 ** np.arange(8)
for title, dataset in zip(titles, datasets_D):
    plot_scaling(dataset, D=D, estimate_brute=True, suptitle=title)
```

As we increase the dimension, we see something interesting. For more broadly-distributed data (uniform and digits), the dual-tree approach begins to out-perform the single-tree approach, by as much as a factor of 2.

In the bottom-right panel, we again see a strong effect of the cKDTree's shortcut in construction: because it builds nodes which span the entire volume of parameter space, most of these nodes are quite empty, especially as the dimension is increased. This leads to queries which are quite a bit slower for sparse data in high dimensions, and overwhelms by a factor of 100 any computational savings at construction.

## Conclusion

In a lot of ways, the plots here are their own conclusion. But in general, this exercise convinces me that the new Ball Tree and KD Tree in scikit-learn are at the very least equal to the scipy implementation, and in some cases much better:

- All three trees scale in the expected way with the number and dimension of the data.
- All three trees beat brute force by orders of magnitude in all but the most extreme circumstances.
- The ``cKDTree`` seems to be less optimal for highly-structured data, which is the kind of data that is generally of interest.
- The ``cKDTree`` has the further disadvantage of using dynamically allocated nodes, which cannot be serialized. The pre-allocation of memory for the new ball tree and kd tree solves this problem.

On top of this, the new ball tree and kd tree have several other advantages, including more flexibility in traversal methods, more available metrics, and more available query types (e.g. KDE and 2-point correlation).

One thing that still puzzles me is the fact that the dual tree approaches don't offer much of an improvement over single tree.
The literature on the subject would make me expect otherwise ([FastLab](http://www.fast-lab.org/nbodyproblems.html), for example, quotes near-linear-time queries for dual tree approaches), so perhaps there's some efficiency I've missed.

In a later post, I plan to go into more detail and explore and benchmark some of the new functionalities added: the kernel density estimation and 2-point correlation function methods. Until then, I hope you've found this post interesting, and I hope you find this new code useful!

This post was written entirely in the IPython notebook. You can [download](http://jakevdp.github.com/downloads/notebooks/TreeBench.ipynb) this notebook, or see a static view [here](http://nbviewer.ipython.org/url/jakevdp.github.com/downloads/notebooks/TreeBench.ipynb).
github_jupyter
<a href="https://colab.research.google.com/github/Isiumlord/GlowUpDataEngineerStudy/blob/main/PythonNotebooks/Variavel-TiposDeValores.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

><small>*If you cannot see the outputs of the code cells, I suggest viewing this directly on the Colab platform. Just click the "Open in Colab" button on the left.*</small>

#VARIABLES

Let's see how variables look inside code.

```
# Examples of variables
nome = "Maria"
numero = 1
decimal = 7.1
condicao = True

a = 0
b = 2
conta = a + b
```

#TYPES OF VALUES

TESTING VARIABLES TO FIND OUT THEIR TYPE

To test the variables we will use the `type()` function: it reads a variable and returns the type of value that variable holds.

```
# Testing a variable
nome = "Maria"
type(nome)

# Testing a variable
numero = 1
type(numero)

# Testing a variable
decimal = 7.1
type(decimal)

# Testing a variable
condicao = True
type(condicao)

# Testing a variable
conta = a + b
type(conta)
```

Did you notice that the `type()` function returned a different result for each variable? That is because each example variable holds a different type of value.

* str (short for string) = text
* int = whole number
* float = number with decimal places
* bool (short for boolean) = True or False

<br><br>

#BEHAVIOR

Now that we know how to identify variable types, let's use them. Remember, at the beginning it was said that we should know the value types because, when doing arithmetic such as 1+1, "string" and "int" do not compute together. So, shall we see if that's true?

```
# Variables with values
a = "1"
b = 1

# Calculation
calculo = a + b

# Answer
print(calculo)
```

An error occurred.
* "string" + "int" does not compute, because word + number does not compute. <br>

Python tells you this: the operation is not supported. But what if we do:
* "int" + "int"?
* Or "string" + "string"?
<br> Let's try:

```
# Variables with values
a = 1
b = "2"

# Calculations
calculo1 = a + a
calculo2 = b + b

# Answers
print(calculo1)
print(calculo2)
```

There we go, both computed:
* "int" + "int" = *1 + 1 = 2.* <br> Everything normal in the world of mathematics. <br>

But...
* "string" + "string" resulted in 22. Why? <br> *Before you think this result is twenty-two, I'll say it up front: it is not twenty-two.*

>When we do "string" + "string", we are actually CONCATENATING. <br> So our result is not a sum, but a UNION. Where:
* "string" + "string" = 2 + 2 resulted in 2 and 2. <br>

>It's as if you got two toys shaped like the number 2: even if you place one next to the other and form the number twenty-two, you still only have two toys shaped like the number 2, not twenty-two toys. <br>

To better picture this situation, let's test "string" + "string" one more time.

```
# Variables with values
a = "Você"
b = "&"
c = "Python"

# Calculations
calculo1 = a + b + c
calculo2 = c + b + a

# Results
print(calculo1)
print(calculo2)
```

See:
* In calculo1, the calculation joined the words "Você" + "&" + "Python", generating the new word "Você&Python".
* In calculo2, the same thing happens; the words just swap places. <br>

This is the result of a CONCATENATION.

That's it for now. [Click to learn more](https://github.com/Isiumlord/GlowUpDataEngineerStudy/blob/main/Python.md).
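Building on the examples above: when you actually want the arithmetic sum (or, conversely, the concatenation), the usual fix is an explicit conversion with `int()` or `str()`. A small sketch (variable names are just illustrative):

```python
a = 1
b = "2"

# Convert the string to an integer to add the numbers...
soma = a + int(b)
print(soma)  # 3

# ...or convert the integer to a string to concatenate.
texto = str(a) + b
print(texto)  # 12
```

Note that `int("abc")` raises a `ValueError`, so this only works when the string really contains a number.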
github_jupyter
## 2.5 Creating a Simple Document

LaTeX is not only well suited to producing long documents; it is also very convenient for short ones such as manuscripts and homework. Among the various LaTeX document classes, the most commonly used is article. The following shows how to create a simple document.

### 2.5.1 Adding the title, date, and author

Title, date, and author information is generally added before `\begin{document}`, in the following format:

```tex
% leave an argument empty to indicate it is blank
\title{title}
\author{author name}
\date{date}
```

To display this information, use the `\maketitle` command after `\begin{document}`.

**Example 1**: Using the article document class, add a title, date, and author to a simple document.

```tex
\documentclass[a4paper, 12pt]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{palatino}

\title{LaTeX cook-book}% title
\author{Author}% author
\date{2021/12/31}% date

\begin{document}

\maketitle % display command

Hello, LaTeXers! This is our first LaTeX document.

\end{document}
```

Compiling the code above produces the document shown in Figure 2-5-1.

<p align="center">
<img align="middle" src="graphics/example2_4_1.png" width="450" />
</p>

<center><b>Figure 2-5-1</b> The compiled document</center>

### 2.5.2 Starting to create the document

In LaTeX, the `\begin{document}` command acts as a dividing line: everything before it is collectively called the preamble, and this code sets global parameters. The code between `\begin{document}` and `\end{document}` is regarded as the document body, and the actual content of the document goes between these two commands.

#### Setting up sections

Sections are an important expression of a document's logical structure; both Chinese and English papers follow rigorous formats, with chapters, sections, and paragraphs clearly delineated. In LaTeX, the sectioning commands differ slightly between document classes: `\chapter` is only defined in the book and report classes, while sections in the article class can be created with simple commands such as `\section{name}` and `\subsection{name}`.

**Example 2**: Building on Example 1, use `\section{name}` and `\subsection{name}` to create two levels of headings.

```tex
\documentclass[a4paper, 12pt]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{palatino}

\title{LaTeX cook-book}
\author{Author}
\date{2021/12/31}

\begin{document}

\maketitle

\section{Introduction}% first-level heading

\subsection{Hellow LaTeXers}% second-level heading

Hello, LaTeXers! This is our first LaTeX document.
\end{document}
```

Compiling the code above produces the document shown in Figure 2-5-2.

<p align="center">
<img align="middle" src="graphics/example2_4_2.png" width="450" />
</p>

<center><b>Figure 2-5-2</b> The compiled document</center>

#### Adding paragraphs

Paragraphs are the foundation of an article. In LaTeX, you can type text directly into the document as a paragraph, or use `\paragraph{name}` and `\subparagraph{name}` to insert titled paragraphs and subparagraphs.

**Example 3**: Building on Example 2, use `\paragraph{name}` and `\subparagraph{name}` to insert titled paragraphs.

```tex
\documentclass[a4paper, 12pt]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{palatino}

\title{LaTeX cook-book}
\author{Xinyu Chen}
\date{2021/07/06}

\begin{document}

\maketitle

\section{Introduction}% first-level heading

\subsection{Hellow LaTeXers}% second-level heading

\paragraph{PA} Hello, LaTeXers! This is our first LaTeX document.

\subparagraph{Pa1} This document is our starting point for learning LaTeX and writing with it. It would not be difficult.

\end{document}
```

Compiling the code above produces the document shown in Figure 2-5-3.

<p align="center">
<img align="middle" src="graphics/example2_4_3.png" width="450" />
</p>

<center><b>Figure 2-5-3</b> The compiled document</center>

#### Generating a table of contents

In LaTeX, one simple command generates the document's table of contents: `\tableofcontents`. Wherever the command is placed, a table of contents is automatically created there. By default, the command builds the table of contents from the user-defined section headings. The table of contents contains headings at the `\subsubsection` level and above, while paragraph and subparagraph information does not appear in it. Note that starred sectioning commands (with `*`) also do not appear in the table of contents. If you want the body text and the table of contents on different pages, use the `\newpage` or `\clearpage` command after `\tableofcontents`.

**Example 4**: Use `\tableofcontents` to create a table of contents for a simple document.

```tex
\documentclass[12pt]{article}

\begin{document}

\tableofcontents

\section{LaTeX1}

\subsection{1.1}
The text of 1.1

\subsection{1.2}
The text of 1.2

\subsection{1.3}
The text of 1.3

\section{LaTeX2}

\subsection{2.1}
The text of 2.1

\subsection{2.2}
The text of 2.2

\subsection{2.3}
The text of 2.3

\section*{LaTeX3}

\end{document}
```

Compiling the code above produces the document shown in Figure 2-5-4.

<p align="center">
<table>
<tr>
<td><img align="middle" src="graphics/example2_4_4_1.png" width="300"></td>
<td><img align="middle" src="graphics/example2_4_4_2.png" width="300"></td>
</tr>
</table>
</p>

<center><b>Figure 2-5-4</b> The compiled document</center>

Similar to setting the depth of section numbering, we can also specify the depth of the table of contents with the counter command `\setcounter`. For example:

- `\setcounter{tocdepth}{0}`
%目录层次仅包括\part - `\setcounter{tocdepth}{1}` % 目录层次深入到\section - `\setcounter{tocdepth}{2}` % 目录层次深入到\subsection - `\setcounter{tocdepth}{3}` % 目录层次深入到\subsubsection,默认值 除此之外,我们还可以在章节前面添加`\addtocontents{toc}{\setcounter{tocdepth}{}}`命令对每个章节设置不同深度的目录。另外还有一些其他的目录格式调整命令,如果我们想让创建的目录在文档中独占一页,只需要在目录生成命令前后添加`\newpage`;如果我们需要让目录页面不带有全文格式,只需要在生成目录命令后面加上`\thispagestyle{empty}`命令;如果我们想设置目录页之后设置页码为1,则需要在生成目录命令后面加上`\setcounter{page}{1}`命令。 如果我们想要创建图目录或表目录,分别使用`\listoffigures`、`\listoftables`命令即可,与创建章节目录的过程类似,这两个命令会根据文档中图表的标题产生图表目录,但不同之处在于,图目录或表目录中所有标题均属于同一层次。 ### 练习题 > 打开LaTeX在线系统[https://www.overleaf.com](https://www.overleaf.com/project)或本地安装好的LaTeX编辑器,创建名为LaTeX_practice的项目,并同时新建一个以`.tex`为拓展名的源文件,完成以下几道练习题。 [1] 使用`mathpazo`工具包中提供的默认字体创建一个简单文档。 > `mathpazo`工具包提供的字体是在Palatino字体基础上定义出来的。 ```tex \documentclass[a4paper, 12pt]{article} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} % 请在此处申明使用mathpazo工具包 \begin{document} Hello, LaTeXers! This is our first LaTeX document. \end{document} ``` 【回放】[**2.4 一些基本命令**](https://nbviewer.jupyter.org/github/xinychen/latex-cookbook/blob/main/chapter-2/section4.ipynb) 【继续】[**2.6 制作中文文档**](https://nbviewer.jupyter.org/github/xinychen/latex-cookbook/blob/main/chapter-2/section6.ipynb) ### License <div class="alert alert-block alert-danger"> <b>This work is released under the MIT license.</b> </div>
``` a = np.array([1, -1, 1, -1]) b = np.array([1, 1, -1, 1]) np.sum(a!=b) import itertools def KendallTau(y_pred, y_true): a = np.array(y_pred) b = np.array(y_true) n = len(y_pred) score = (np.sum(a==b)-np.sum(a!=b))/n return score a = np.array([1, 10, 100, 1000, 10000, 10000000]) b = np.array([1, 10, 100, 1000, 10000, 10000000]) def CreateRankedLabels(a): pw = list(itertools.combinations(a,2)) labels = [1 if item[0]>item[1] else -1 for item in pw] return labels a_labels = CreateRankedLabels(a) b_labels = CreateRankedLabels(b) KendallTau(a_labels,b_labels) import pandas as pd import numpy as np import sklearn as sk import math import itertools from scipy import stats from sklearn.model_selection import KFold from sklearn.model_selection import GridSearchCV from sklearn.linear_model import LinearRegression, Ridge, Lasso, HuberRegressor from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor from sklearn.kernel_ridge import KernelRidge from sklearn.svm import SVC, SVR from sklearn.preprocessing import PolynomialFeatures %%writefile ../../src/models/model_utils.py # %load ../../src/models/model_utils.py # %%writefile ../../src/models/model_utils.py """ Author: Jim Clauwaert Created in the scope of my PhD """ import pandas as pd import numpy as np import sklearn as sk import math import itertools from scipy import stats from sklearn.model_selection import KFold from sklearn.model_selection import GridSearchCV from sklearn.linear_model import LinearRegression, Ridge, Lasso, HuberRegressor from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor from sklearn.kernel_ridge import KernelRidge from sklearn.svm import SVC, SVR from sklearn.preprocessing import PolynomialFeatures def CreateRankedLabels(a): pw = list(itertools.combinations(a,2)) labels = [1 if item[0]>item[1] else -1 for item in pw] 
return labels def GetParameterSet(parLabel, parRange): """Retrieve a set of parameter values used for training of a model in sklearn. Parameters ----------- parLabel : 1-dimensional numpy array (str) numpy array holding a set of parameter labels. Valid labels include: [alpha, gamma, C, coef0, epsilon, max_depth, min_samples, max_features] parRange : 1-dimensional numpy array (int) numpy array with the amount of parameters returned for every parameter label. parLabel and parRange must be of the same dimension. Returns -------- parSet : Dictionary Dictionary containing a set of parameters for every label """ if parLabel[0] in ['max_depth','min_samples_split', 'max_features']: parameters = [np.zeros(parRange[u],dtype=np.int) for u in range(len(parRange))] else: parameters = [np.zeros(parRange[u]) for u in range(len(parRange))] for i in range(len(parLabel)): if parLabel[i] == "alpha": parameters[i][:] = [math.pow(10,(u - np.around(parRange[i]/2))) for u in range(parRange[i])] elif parLabel[i] == "gamma": parameters[i][:] = [math.pow(10,(u - np.around(parRange[i]/2))) for u in range(parRange[i])] elif parLabel[i] == "C": parameters[i][:] = [math.pow(10,(u - np.around(parRange[i]/2))) for u in range(parRange[i])] elif parLabel[i] == "coef0": parameters[i][:] = [math.pow(10,(u - np.around(parRange[i]/2))) for u in range(parRange[i])] elif parLabel[i] == "epsilon": parameters[i][:] = [0+2/parRange[i]*u for u in range(parRange[i])] elif parLabel[i] == "max_depth": parameters[i][:] = [int(u+1) for u in range(parRange[i])] elif parLabel[i] == 'min_samples_split': parameters[i][:] = [int(u+2) for u in range(parRange[i])] elif parLabel[i] == 'max_features': parameters[i][:] = [int(u+2) for u in range(parRange[i])] else: return print("Not a valid parameter") parSet = {parLabel[u]:parameters[u] for u in range(len(parLabel))} return parSet def EvaluateParameterSet(X_train, X_test, y_train, y_test, parModel, parSet): """Evaluate the scores of a set of parameters for a given model. 
Parameters ----------- X_train: Training dataset features X_test: Test dataset features y_train Training dataset labels y_test Test dataset labels parModel: Dictionary parSet : Dictionary Dictionary holding the parameter label and values over which the model has to be evaluated. This can be created through the function GetParameterSet. Accepted keys are: [alpha, gamma, C, coef0, epsilon, max_depth, min_samples, max_features] Returns -------- scores: 1-dimensional numpy array: int Fitted scores of the model with each of the parametersSets optimalPar: int Optimal parameter value for a given parameter label """ scores = [] for i in range(len(parSet[parLabel])): parSetIt = {parLabel:parSet[parLabel][i]} model = SelectModel(**parModel,**parEvalIt) model.fit(X_train,y_train) scores = np.append(model.score(X_test,y_test)) optimalPar = parSet[parLabel][np.argmax(scores)] return scores, optimalPar def EvaluateScore(X_train, X_test, y_train, y_test, parModel, scoring='default', pw=False): """Evaluates the score of a model given for a given test and training data Parameters ----------- X_train, X_test: DataFrame Test and training data of the features y_train, y_test: 1-dimensional numpy array Test and training data of the labels parModel: dictionary Parameters indicating the model and some of its features Returns -------- score: int Score of the test data on the model y_pred: 1-dimensional array An array giving the predicted labels for a given test set """ model = SelectModel(**parModel) model.fit(X_train,y_train) y_pred = model.predict(X_test) if scoring == 'default': score = model.score(X_test,y_test) elif scoring == 'kt': if pw is True: score = KendallTau(y_pred, y_test) if pw is False: y_pred_pw = CreateRankedLabels(y_pred) y_test_pw = CreateRankedLabels(y_test) score = KendallTau(y_pred_pw, y_test_pw) elif scoring == 'spearman': score = stats.spearmanr(y_test, y_pred)[0] else: raise("Scoring type not defined. 
Possible options are: 'default', 'kt', and 'spearman'") return score, y_pred def KendallTau(y_pred, y_true): a = np.array(y_pred) b = np.array(y_true) n = len(y_pred) score = (np.sum(a==b)-np.sum(a!=b))/n return score def LearningCurveInSample(dfDataset, featureBox, y ,parModel, scoring='default', k=5, pw=False, step=1): """Calculates the learning curve of a dataset for a given model Parameters ----------- dfDataset: Dataframe Dataframe holding sequences, featureBox: Dataframe Test dataset features y: 1-dimensional numpy array parModel: Dictionary k: int pw: Boolean step: int Returns -------- scores: 1-dimensional numpy array: int Fitted scores of the model with each of the parametersSets optimalPar: int Optimal parameter value for a given parameter label """ X = featureBox.values if pw is True: temp = np.unique(dfDataset[['ID_1', 'ID_2']].values) dfId = pd.Series(temp[:-(len(temp)%k)]) else: dfId = dfDataset['ID'][:-(len(dfDataset)%k)] lenId = len(dfId) Id = dfId.values indexId = np.array(range(lenId)) scores = np.array([]) it=0 for i in range(k): boolTest = np.logical_and(indexId>=i*lenId/k,indexId<(i+1)*lenId/k) test = Id[boolTest] train = Id[np.invert(boolTest)] if pw is True: indexTest = (dfDataset['ID_1'].isin(test) | dfDataset['ID_2'].isin(test)).values else: indexTest = dfDataset['ID'].isin(test).values dfDatasetTrain = dfDataset[np.invert(indexTest)] X_train, y_train = featureBox[np.invert(indexTest)], y[np.invert(indexTest)] X_test, y_test = featureBox[indexTest], y[indexTest] for j in range((len(train)-5)//step): print("\rProgress {:2.1%}".format(it/k+(j/len(train)/k*step)), end='') trainInner = train[:(j*step)+5] if pw is True: indexTrainInner = (dfDatasetTrain['ID_1'].isin(trainInner) & dfDatasetTrain['ID_2'].isin(trainInner)).values else: indexTrainInner = (dfDatasetTrain['ID'].isin(trainInner)).values X_trainInner, y_trainInner = X_train[indexTrainInner], y_train[indexTrainInner] score, y_pred = EvaluateScore(X_trainInner, X_test, y_trainInner, 
y_test, {**parModel}, scoring, pw) scores = np.append(scores,score) it+=1 scores = scores.reshape((k,-1)) return scores def LearningCurveInSampleEnriched(dfDataset, featureBox, enrichBox, y, y_enrich ,parModel, scoring='default', k=5, pw=True, step=1): """Calculates the learning curve of an enriched dataset for a given model Parameters ----------- dfDataset: Dataframe Dataframe holding sequences, featureBox: Dataframe Test dataset features y: 1-dimensional numpy array parModel: Dictionary k: int pw: Boolean step: int Returns -------- scores: 1-dimensional numpy array: int Fitted scores of the model with each of the parametersSets optimalPar: int Optimal parameter value for a given parameter label """ if pw is True: temp = np.unique(dfDataset[['ID_1', 'ID_2']].values) dfId = pd.Series(temp[:-(len(temp)%k)]) else: dfId = dfDataset['ID'][:-(len(dfDataset)%k)] lenId = len(dfId) Id = dfId.values indexId = np.array(range(lenId)) scores = np.array([]) it=0 for i in range(k): boolTest = np.logical_and(indexId>=i*lenId/k,indexId<(i+1)*lenId/k) test = Id[boolTest] train = Id[np.invert(boolTest)] if pw is True: indexTest = (dfDataset['ID_1'].isin(test) | dfDataset['ID_2'].isin(test)).values else: indexTest = dfDataset['ID'].isin(test).values dfDatasetTrain = dfDataset[np.invert(indexTest)] X_train = featureBox[np.invert(indexTest)] y_train = y[np.invert(indexTest)] X_test, y_test = featureBox[indexTest], y[indexTest] for j in range((len(train))//step): print("\rProgress {:2.1%}".format(it/k+(j/len(train)/k*step)), end='') trainInner = train[:(j*step)] if pw is True: indexTrainInner = (dfDatasetTrain['ID_1'].isin(trainInner) & dfDatasetTrain['ID_2'].isin(trainInner)).values else: indexTrainInner = (dfDatasetTrain['ID'].isin(trainInner)).values X_trainInner = np.vstack((enrichBox,X_train[indexTrainInner])) y_trainInner = np.append(y_enrich, y_train[indexTrainInner]) score, y_pred = EvaluateScore(X_trainInner, X_test, y_trainInner, y_test, {**parModel}, scoring, pw) scores = 
np.append(scores,score) it+=1 scores = scores.reshape((k,-1)) return scores def LearningCurveOutOfSample(dfDataset, featureBox, y , dataList, parModel, scoring='default', pw=False, step=1): """Calculates the learning curve of a dataset for a given model Parameters ----------- dfDataset: Dataframe Dataframe holding sequences, featureBox: Dataframe Test dataset features y: 1-dimensional numpy array parModel: Dictionary k: int pw: Boolean step: int Returns -------- scores: 1-dimensional numpy array: int Fitted scores of the model with each of the parametersSets optimalPar: int Optimal parameter value for a given parameter label """ if pw is True: temp = np.unique(dfDataset[['ID_1', 'ID_2']].values) dfId = pd.Series(temp) else: dfId = dfDataset['ID'] lenId = len(dfId) Id = dfId.values indexId = np.array(range(lenId)) scores = np.zeros(shape=(len(dataList),(lenId-5)//step)) for i in range((lenId-5)//step): print("\rProgress {:2.1%}".format(i/lenId*step), end='') train = Id[:((i*step)+5)] if pw is True: indexTrain = (dfDataset['ID_1'].isin(train) & dfDataset['ID_2'].isin(train)).values else: indexTrain = dfDataset['ID'].isin(train).values X_train, y_train = featureBox[indexTrain], y[indexTrain] for j in range(len(dataList)): score, y_pred = EvaluateScore(X_train, dataList[j][1].values, y_train, dataList[j][2], {**parModel}, scoring, pw) scores[j,i] = score return scores def LearningCurveOutOfSampleEnriched(dfDataset, featureBox, enrichBox, y, y_enrich, dataOutList, parModel, scoring='default', pw=True, step=1): if pw is True: temp = np.unique(dfDataset[['ID_1', 'ID_2']].values) dfId = pd.Series(temp) else: dfId = dfDataset['ID'] lenId = len(dfId) Id = dfId.values indexId = np.array(range(lenId)) scores = np.zeros(shape=(len(dataOutList),(lenId)//step)) for i in range((lenId)//step): print("\rProgress {:2.1%}".format(i/lenId*step), end='') train = Id[:(i*step)] if pw is True: indexTrain = (dfDataset['ID_1'].isin(train) & dfDataset['ID_2'].isin(train)).values else: 
indexTrain = dfDataset['ID'].isin(train).values X_train = np.vstack((enrichBox ,featureBox[indexTrain])) y_train = np.append(y_enrich, y[indexTrain]) for j in range(len(dataOutList)): score, y_pred = EvaluateScore(X_train, dataOutList[j][1].values, y_train, dataOutList[j][2], {**parModel}, scoring, pw) if pw is True: scores[j,i] = score return scores def SelectModel(modelType, poly=None, kernel=None, alpha=0.1, gamma=0.1, epsilon=0.1, coef0=1, fitInt=True, normalize=True, max_depth=None, max_features=None, min_samples_split = 2, n_estimators = 50, C=1, n_jobs=12): """ Initializes the correct model for a given set of parameters. Parameters ----------- modelType: str Type of model. Possible values are: ['ridge', 'SVC', 'SVR', OLS', 'lasso', 'huber', 'treeReg', 'treeClass', 'forestReg', 'forestClass'] other parameters include (further information can be found on sklearn): poly: int kernel: str alpha: int gamma: int epsilon: int coef0: int fit_intercept= Bool normalize = Bool max_depth = int max_features = int min_samples_split = int n_estimators = int C = int n_jobs= int Returns ------- model: Class sklearn-type model """ if kernel: if modelType == "ridge": model = KernelRidge(alpha=alpha, gamma=gamma, kernel=kernel, coef0=coef0) if modelType == "SVC": model = SVC(C=C, kernel=kernel, gamma=gamma, coef0=coef0, degree=poly) if modelType == "SVR": model = SVR(C=C, kernel=kernel, gamma=gamma, coef0=coef0, epsilon=epsilon, degree=poly) elif poly: if modelType == "OLS": model = make_pipeline(PolynomialFeatures(poly), LinearRegression(fit_intercept=fit_intercept, normalize=normalize)) if modelType == "ridge": model = make_pipeline(PolynomialFeatures(poly), Ridge(alpha= alpha, normalize=normalize)) if modelType == "lasso": model = make_pipeline(PolynomialFeatures(poly), Lasso(alpha= alpha, normalize=normalize)) if modelType == "huber": model = make_pipeline(PolynomialFeatures(poly), HuberRegressor(fit_intercept=fitInt, epsilon=epsilon, alpha=alpha)) else: if modelType == 
"OLS": model = LinearRegression(fit_intercept=fitInt, normalize=normalize) if modelType == "ridge": model = Ridge(alpha= alpha, normalize=normalize) if modelType == "lasso": model = Lasso(alpha= alpha, normalize=normalize) if modelType == "huber": model = HuberRegressor(fit_intercept=fitInt, alpha=alpha, epsilon=epsilon) if modelType == "treeReg": model = DecisionTreeRegressor(max_depth= max_depth, max_features=max_features, min_samples_split = min_samples_split) if modelType == "treeClass": model = DecisionTreeClassifier(max_depth = max_depth, max_features=max_features, min_samples_split = min_samples_split) if modelType == "forestReg": model = RandomForestRegressor(n_estimators = n_estimators, max_depth = max_depth, max_features= max_features, min_samples_split = min_samples_split, n_jobs=n_jobs) if modelType == "forestClass": model = RandomForestClassifier(n_estimators = n_estimators, max_depth = max_depth, max_features= max_features, min_samples_split = min_samples_split, n_jobs=n_jobs) return model def SetupModel(modelInit, parOptional={}): #model selection and hyperparameters modelType = modelInit[0] kernel = modelInit[1] poly= modelInit[2] parModel = {"modelType":modelType, "poly":poly, "kernel":kernel, **parOptional } return parModel GetParameterSet(['alpha', 'gamma'],[10, 10]) import sys sys.path.append("../../src/") import features.feature_utils as fu import models.model_utils as mu import plots.plot_utils as pu import pandas as pd import math import numpy as np import matplotlib.pyplot as plt from scipy import stats from sklearn.model_selection import GridSearchCV %matplotlib inline model = ['forestClass',None,None] modelOpt = {'n_estimators':10} pw = True step = 2 seqRegions = [[0,12],[-6,11]] data = '../../data/interim/pw_hammer_prom_lib.csv' dataEnrich = ['../../data/interim/pw_mutalik_prom_lib.csv'] dataOut = ['../../data/interim/pw_anderson_prom_lib.csv','../../data/interim/pw_brewster_prom_lib.csv' ,'../../data/interim/pw_inbio_prom_lib.csv'] 
dataOutLabels = ['anderson','brewster','inbio'] def LearningCurveOutOfSampleEnriched(dfDataset, featureBox, enrichBox, y, y_enrich, dataOutList, parModel, pw=False, step=1): if pw is True: temp = np.unique(dfDataset[['ID_1', 'ID_2']].values) dfId = pd.Series(temp) else: dfId = dfDataset['ID'] lenId = len(dfId) Id = dfId.values indexId = np.array(range(lenId)) scores = np.zeros(shape=(len(dataOut),(lenId-5)//step)) for i in range((lenId)//step): print("\rProgress {:2.1%}".format(i/len(Id)), end='') train = Id[:(i*step)] if pw is True: indexTrain = (dfDataset['ID_1'].isin(train) & dfDataset['ID_2'].isin(train)).values else: indexTrain = dfDataset['ID'].isin(train).values X_train = np.vstack((enrichBoxFull ,featureBox[indexTrain])) y_train = np.append(y_enrich, y[indexTrain]) for j in range(len(dataOut)): score, y_pred = EvaluateScore(X_train, dataOutList[j][1].values, y_train, dataOutList[j][2], {**parModel}) if pw is True: scores[j,i] = score else: scores[j,i] = abs(stats.spearmanr(dataOutList[j][0]['mean_score'],y_pred)[0]) return scores parModel = SetupModel(model,modelOpt) dfDataset , featureBox = fu.CreateFeaturesFromData(data, seqRegions, pw, shuffle=True) data enrichBoxList = [] y_enrich = [] for e in dataEnrich: dfEnrich, enrichBox = fu.CreateFeaturesFromData(e, seqRegions, pw) y = dfEnrich['rank'] enrichBoxList.append(enrichBox) y_enrich.append(y) #dataEnrichList.append((dfEnrich, enrichBox, y)) enrichBox = np.vstack((enrichBoxList[:])) dataOutList = [] for d in dataOut: dfOut, outBox = fu.CreateFeaturesFromData(d, seqRegions, pw) y = dfOut['rank'] dataOutList.append((dfOut, outBox, y)) X = featureBox.values y = dfDataset['rank'] scores = LearningCurveOutOfSampleEnriched(dfDataset, featureBox, enrichBox, y, y_enrich, dataOutList, parModel, pw, step) fig, ax = plt.subplots(1,1, figsize=(8,6)) ax.set_title("Learning curve out of sample score") colors = ['bo','ro','yo','go','wo','mo','co','ko','bo','co'] for j in range(len(dataOut)): 
ax.plot(range((lenId)//step),scores[j,:], colors[j], label=dataOutLabels[j]) ax.legend(loc='center left', bbox_to_anchor=(1, 0.5)) ax.set_xlabel("Step") ax.set_ylabel("Score") model = ['forestClass',None,None] modelOpt = {'n_estimators':10} k= 5 pw = True step = 1 seqRegions = [[-7,12],[-6,11]] data = '../../data/interim/pw_hammer_prom_lib.csv' dataEnrich= '../../data/interim/pw_mutalik_prom_lib.csv' dfDataset , featureBox = fu.CreateFeaturesFromData(data, seqRegions, pw, shuffle=True) dfEnrich, enrichBox = fu.CreateFeaturesFromData(dataEnrich, seqRegions, pw, shuffle=True) X_enrich = enrichBox.values y_enrich = dfEnrich['rank'] X = featureBox.values y = dfDataset['rank'] parModel = mu.SetupModel(model, modelOpt) scores if pw is True: temp = np.unique(dfDataset[['ID_1', 'ID_2']].values) dfId = pd.Series(temp[:-(len(temp)%k)]) else: dfId = dfDataset['ID'][:-(len(dfDataset)%k)] lenId = len(dfId) Id = dfId.values indexId = np.array(range(lenId)) scores = np.array([]) it=0 for i in range(k): boolTest = np.logical_and(indexId>=i*lenId/k,indexId<(i+1)*lenId/k) test = Id[boolTest] train = Id[np.invert(boolTest)] if pw is True: indexTest = (dfDataset['ID_1'].isin(test) | dfDataset['ID_2'].isin(test)).values else: indexTest = dfDataset['ID'].isin(test).values dfDatasetTrain = dfDataset[np.invert(indexTest)] X_train = featureBox[np.invert(indexTest)] y_train = y[np.invert(indexTest)] X_test, y_test = featureBox[indexTest], y[indexTest] for j in range((len(train))//step): print("\rProgress {:2.1%}".format(it/k+(j/len(train)/k)), end='') trainInner = train[:(j*step)] if pw is True: indexTrainInner = (dfDatasetTrain['ID_1'].isin(trainInner) & dfDatasetTrain['ID_2'].isin(trainInner)).values else: indexTrainInner = (dfDatasetTrain['ID'].isin(trainInner)).values X_trainInner = np.vstack((enrichBox,X_train[indexTrainInner])) y_trainInner = np.append(y_enrich, y_train[indexTrainInner]) score, y_pred = mu.EvaluateScore(X_trainInner, X_test, y_trainInner, y_test, {**parModel}) scores 
= np.append(scores,score) it+=1 scores = scores.reshape((k,-1)) fig, (ax1,ax2) = plt.subplots(2,1, figsize=(10,8),sharex=True) ax1.set_title("Learning curve in sample score of enriched dataset") for i in range(k): colors = ['bo','ro','yo','go','wo','mo','co','ko','bo','co'] ax1.plot(range(len(scores[i,:])),scores[i,:], colors[i]) ax1.set_xlabel("Step") ax1.set_ylabel("Score") meanScores=np.mean(scores,axis=0) stdScores=np.std(scores,axis=0) ax2.errorbar(range(len(meanScores)), meanScores[:], stdScores[:]) ax2.set_xlabel("Step") ax2.set_ylabel("Score") fig, (ax1,ax2) = plt.subplots(2,1, figsize=(10,8),sharex=True) ax1.set_title("Learning curve in sample score, stepwise increase of promoter in train library") for i in range(k): colors = ['bo','ro','yo','go','wo','mo','co','ko','bo','co'] ax1.plot(range(len(scores[i,:])),scores[i,:], colors[i]) ax1.set_xlabel("Step") ax1.set_ylabel("Score") meanScores=np.mean(scores,axis=0) stdScores=np.std(scores,axis=0) ax2.errorbar(range(len(meanScores)), meanScores[:], stdScores[:]) ax2.set_xlabel("Step") ax2.set_ylabel("Score") dfDataset = pd.read_csv('../../data/interim/pw_mutalik_prom_lib.csv') dfDatasetAligned = fu.AlignSequences(dfDataset, pw=True) dfDatasetShuffled , featureBox = fu.PositionalFeaturesPW(dfDatasetAligned, [[-7,12],[-6,11]], shuffle=True) X = featureBox.values y = dfDatasetShuffled['rank'] """parSet = GetParameterSet(parLabel, parRange) model = SelectModel(**parModel) GS = GridSearchCV(model, parSet, cv=k, n_jobs=n_jobs) GS.fit(X,y)""" GS.best_estimator_ np.unique(dfDatasetShuffled[['ID_1']].values).size scores len(train)*5 ```
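The pairwise Kendall-tau score used throughout this notebook can be restated in a few self-contained lines. The sketch below mirrors `CreateRankedLabels` and `KendallTau` from the cells above (the snake_case names are mine), using only the standard library:

```python
import itertools

def create_ranked_labels(a):
    # pairwise comparison labels: +1 if the first element of the pair is
    # larger, -1 otherwise (mirrors CreateRankedLabels above)
    return [1 if x > y else -1 for x, y in itertools.combinations(a, 2)]

def kendall_tau(y_pred, y_true):
    # (agreements - disagreements) / number of pairs, as in KendallTau above
    agree = sum(p == t for p, t in zip(y_pred, y_true))
    return (2 * agree - len(y_pred)) / len(y_pred)

# identical rankings agree on every pair; fully reversed rankings on none
same = kendall_tau(create_ranked_labels([1, 2, 3]), create_ranked_labels([1, 2, 3]))
flip = kendall_tau(create_ranked_labels([1, 2, 3]), create_ranked_labels([3, 2, 1]))
# same == 1.0, flip == -1.0
```

This makes the score's range explicit: it varies from -1 (every pairwise ordering predicted backwards) to +1 (every pairwise ordering predicted correctly).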
# MDM demos

Make sure to replace the following variables with the corresponding ones in your environment.

- athena_output_bucket
- APIUrl
- ML_endpoint

```
from pyathena import connect
import pandas as pd
import matplotlib.pyplot as plt
import altair as alt
from vega_datasets import data

athena_output_bucket = 'ml-prediction-pipeline-athenaquerybucket-q07u8q4i0x4n'
region = 'us-east-1'
connection = connect(s3_staging_dir='s3://{}/'.format(athena_output_bucket), region_name=region)

APIUrl = 'https://gxhy3hhbcb.execute-api.us-east-1.amazonaws.com/prod/{}'
ML_endpoint = "ml-endpoint-weather-0731"
```

## Real-time forecast for a specific meter

```
# Invoke API gateway to send forecast request via Lambda to Sagemaker endpoint
# if using notebook, Sagemaker role needs to have API gateway invoke permission
import json
from os import environ
import boto3
import sys
import urllib3

def getForecast(meter_id, start, end, period, withweather):
    # Access API to get cluster endpoint name and temporary credentials
    http = urllib3.PoolManager()
    forecastAPIUrl = APIUrl.format('get-forecast')
    event_payload = {
        "Meter_id": meter_id,
        "Data_start": start,
        "Data_end": end,
        "Forecast_period": period,
        "With_weather_data": withweather,
        "ML_endpoint_name": ML_endpoint
    }
    response = http.request('POST', forecastAPIUrl, body=json.dumps(event_payload))
    return response.data

resp = getForecast('MAC004734', "2013-05-01", "2013-10-01", 7, 1)

# convert response to dataframe and visualize
data = json.loads(json.loads(resp))
df = pd.read_json(json.dumps(data))
df.plot()
```

## Get forecast from batch forecast result, can be one or many meters

```python
meter_range = ['MAC000002', 'MAC000010']
# meter_id is a text column, so the range bounds must be quoted in the SQL
query = '''select meter_id, datetime, consumption
from "meter-data".forecast
where meter_id between '{}' and '{}';'''.format(meter_range[0], meter_range[1])
df = pd.read_sql(query, connection)
```

## Get Anomaly for a specific meter

```
def plot_anomalies(forecasted):
    interval = alt.Chart(forecasted).mark_area(interpolate="basis", color='#7FC97F').encode(
        x=alt.X('ds:T', title='date'),
        y='yhat_upper',
        y2='yhat_lower',
        tooltip=['ds', 'consumption', 'yhat_lower', 'yhat_upper', 'temperature', 'apparenttemperature']
    ).interactive().properties(
        title='Anomaly Detection'
    )

    fact = alt.Chart(forecasted).mark_line(color='#774009').encode(
        x='ds:T',
        y=alt.Y('consumption', title='consumption')
    ).interactive()

    apparenttemperature = alt.Chart(forecasted).mark_line(color='#40F9F9').encode(
        x='ds:T',
        y='apparenttemperature'
    )

    anomalies = alt.Chart(forecasted[forecasted.anomaly != 0]).mark_circle(size=30, color='Red').encode(
        x='ds:T',
        y=alt.Y('consumption', title='consumption'),
        tooltip=['ds', 'consumption', 'yhat_lower', 'yhat_upper', 'temperature', 'apparenttemperature'],
        size=alt.Size('importance', legend=None)
    ).interactive()

    return alt.layer(interval, fact, apparenttemperature, anomalies)\
              .properties(width=870, height=450)\
              .configure_title(fontSize=20)

def getAnomaly(meter_id, start, end, outlier_only):
    # Access API to get cluster endpoint name and temporary credentials
    http = urllib3.PoolManager()
    anomalyAPIUrl = APIUrl.format('get-anomaly')
    event_payload = {
        "Meter_id": meter_id,
        "Data_start": start,
        "Data_end": end,
        "Outlier_only": outlier_only
    }
    print(json.dumps(event_payload))
    response = http.request('POST', anomalyAPIUrl, body=json.dumps(event_payload))
    return response.data

# Call rest API to get anomaly
resp = getAnomaly('MAC000005', "2013-01-01", "2013-12-31", 0)

# convert response to dataframe and visualize
data = json.loads(json.loads(resp))
df = pd.read_json(json.dumps(data))
plot_anomalies(df)
```

## Get outage

```
def getOutage(start, end):
    # Access API to get cluster endpoint name and temporary credentials
    http = urllib3.PoolManager()
    outageAPIUrl = 'https://l6hvunf6r9.execute-api.us-east-1.amazonaws.com/prod'
    event_payload = {
        "startDateTime": start,
        "endDateTime": end
    }
    print(json.dumps(event_payload))
    response = http.request('POST', outageAPIUrl, body=json.dumps(event_payload))
    return response.data

# Call rest API to get outage
resp = getOutage("2013-01-03 09:00:01", "2013-01-03 10:59:59")

data = json.loads(resp)
df = pd.DataFrame(data['Items'])
df_result = df[['meter_id', 'lat', 'long']].drop_duplicates()
df_result

from vega_datasets import data
counties = alt.topo_feature(data.us_10m.url, 'counties')

# New York state background
# County id code starts with state id. 36 is NY state
map_newyork = (
    alt.Chart(data=counties)
    .mark_geoshape(
        stroke='black',
        strokeWidth=1
    )
    .transform_calculate(state_id="(datum.id / 1000)|0")
    .transform_filter((alt.datum.state_id) == 36)
    .encode(color=alt.value('lightgray'))
    .properties(
        width=800,
        height=640
    )
)

# meter positions on background
points = alt.Chart(df_result.head(500)).mark_circle().encode(
    longitude='long:Q',
    latitude='lat:Q',
    color=alt.value('orange'),
    tooltip=['meter_id']
).properties(
    title='Power outage in New York'
)

map_newyork + points
```
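The batch-forecast cell above interpolates meter IDs directly into the SQL string. Because the IDs come in as plain strings, a lightweight validation guard before interpolation keeps that pattern safe. A minimal sketch, assuming the `MAC` plus six digits ID format seen in the sample data:

```python
import re

METER_ID_PATTERN = re.compile(r"^MAC\d{6}$")  # assumed ID format, e.g. MAC000002

def build_range_query(meter_start, meter_end):
    # refuse anything that does not look like a meter ID before it
    # reaches the SQL string (a lightweight guard against injection)
    for meter in (meter_start, meter_end):
        if not METER_ID_PATTERN.match(meter):
            raise ValueError(f"unexpected meter id: {meter!r}")
    return ('select meter_id, datetime, consumption '
            'from "meter-data".forecast '
            f"where meter_id between '{meter_start}' and '{meter_end}';")

query = build_range_query('MAC000002', 'MAC000010')
# query can then be passed to pd.read_sql(query, connection) as in the cell above
```

The same idea extends to any other user-supplied value that ends up in an Athena query string.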
# Data Preparation Workflow ## Set Up the Jupyter Notebook for Analysis Note: We have our new package called swat - SAS Scripting Wrapper for Analytics Transfer - available on GitHub via pip install <br> ``` # Import necessary packages and modules import swat %matplotlib inline # Set the connection by specifying the hostname, port, username, and password conn = swat.CAS(hostname, port, username, password) # Get the hmeq csv file from SAS support documentation and lift into memory castbl = conn.read_csv('http://support.sas.com/documentation/onlinedoc/viya/exampledatasets/hmeq.csv', casout = 'hmeq') castbl.replace = True ``` ## Examine the first few rows ``` # Assign the variable name df to the new CASTable object df = conn.CASTable('hmeq') # Perform the head method to return the first 5 rows df.head() ``` ## Create new features ``` # How much of their mortgage have they paid off? df['MORTPAID'] = df['VALUE'] - df['MORTDUE'] df.head() # What percent of the time does this happen? df.query('MORTPAID < 0')['MORTPAID'].count()/len(df) ``` ## Examine numeric variable distribution ``` # Use the pandas/matplotlib method for plotting a histogram of all numeric variables df.hist(figsize = (15, 10)); ``` ## Examine summary statistics ``` # Use the pandas describe method, then switch rows and columns summary = df.describe(include = 'all').transpose() summary ``` ## Is there an issue with missing data? 
``` # Create percent missing column for plotting summary['pctmiss'] = (len(df) - summary['count'])/len(df) # Make a bar graph using pandas/matplotlib functionality summary.query('pctmiss > 0')['pctmiss'].plot('bar', title = 'Pct Missing Values', figsize = (10, 6), color = 'c'); ``` ## The variables need to be imputed for missing values ``` # This is using the CAS action impute - really nice method for imputing all variables at once ## Impute the median for numeric, most common for categorical df.impute( methodContinuous = 'MEDIAN', methodNominal = 'MODE', inputs = list(summary.index[1:]), # exclude target column copyAllVars = True, casOut = castbl ) ``` ## Create partition indicator so models won't overfit ``` # Load the sampling actionset conn.loadactionset('sampling') # Do a simple random sample with a 70/30 split df.srs(samppct = 30, partind = True, seed = 1, output = dict(casout = castbl, copyvars = 'all')) # What percentage is in each split? castbl['_PartInd_'].groupby('_PartInd_').count()/len(castbl) ``` ## Make sure the dataset looks good ``` castbl.head() ``` ## Promote the table to public memory ``` # This allows me to share my data across sessions and with other colleagues conn.promote(name = castbl, targetlib = 'public', target = 'hmeq_prepped') ``` ## Ensure that everything worked as intended ``` # Verify that hmeq_prepped has made it into the public caslib conn.tableinfo(caslib = 'public')['TableInfo'].query('Name == "HMEQ_PREPPED"') # End the session conn.endsession() ```
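The percent-missing computation above relies on swat's pandas-style interface; the same summary can be reproduced with plain pandas for readers without a CAS session. The toy values below are illustrative, not taken from the hmeq data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"LOAN": [1100, 1300, 1500, 1700],
                   "MORTDUE": [25860.0, np.nan, 13500.0, np.nan]})

# describe() reports non-missing counts, so the missing fraction is
# (total rows - count) / total rows, exactly as in the cell above
summary = df.describe(include="all").transpose()
summary["pctmiss"] = (len(df) - summary["count"]) / len(df)
# MORTDUE is missing 2 of 4 values -> pctmiss 0.5; LOAN -> 0.0
```

Because swat's CASTable mirrors the pandas API, the cell above and this local version produce the same `pctmiss` column.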
```
from importlib import reload
import os
import json

import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset
import pandas as pd
import matplotlib.pyplot as plt
from itertools import product as iter_product

import src, src.debias, src.models, src.ranking, src.datasets, src.data_utils

if torch.cuda.device_count() > 1:
    use_device_id = int(input(f"Choose cuda index, from [0-{torch.cuda.device_count()-1}]: ").strip())
else:
    use_device_id = 0
use_device = "cuda:" + str(use_device_id) if torch.cuda.is_available() else "cpu"
if not torch.cuda.is_available():
    input("CUDA isn't available, so using cpu. Please press any key to confirm this isn't an error: \n")
print("Using device", use_device)
torch.cuda.set_device(use_device_id)

with open(src.PATHS.TRAINED_MODELS.METADATA, mode="r") as _runs_metafile:
    runs_metadata = json.load(_runs_metafile)
with open(src.PATHS.TRAINED_MODELS.TEST_PROMPTS, mode="r") as _test_promptsfile:
    test_prompts_data = json.load(_test_promptsfile)

for module in [src, src.debias, src.models, src.ranking, src.datasets, src.data_utils]:
    reload(module)

eval_dss = {"gender": [("FairFace", "val"), ("UTKFace", "val"), ("COCOGender", "val"), ],
            # ("CelebA", "val")  # has 200k images so takes looong to compute and we don't focus on it anyway
            "race": [("FairFace", "val"), ("UTKFace", "val")]}
clip_arch = "openai/CLIP/ViT-B/16"
evaluations = ["maxskew", "ndkl", "clip_audit"]
perf_evaluations = ["cifar10", "flickr1k", "cifar100"]  # flickr1k, cifar100, cifar10
all_experiment_results = pd.DataFrame()
clip_audit_results = pd.DataFrame()
batch_sz = 256

try:
    with torch.cuda.device(use_device_id):
        for run_id, run_metadata in list(runs_metadata.items()):
            experiment_results = pd.DataFrame()
            if int(run_id) != 91:
                continue
            print(run_id, run_metadata)
            n_debias_tokens = 2 if int(run_id) < 100 else 0
            model_save_name = f"best_ndkl_oai-clip-vit-b-16_neptune_run_OXVLB-{run_id}_model_e{run_metadata['epoch']}_step_{run_metadata['step']}.pt"
            model, preprocess, tokenizer, model_alias = src.models.DebiasCLIP.from_cfg(src.Dotdict({
                "CLIP_ARCH": clip_arch, "DEVICE": use_device, "num_debias_tokens": n_debias_tokens}))
            model_alias = model_save_name
            model.load_state_dict(torch.load(os.path.join(src.PATHS.TRAINED_MODELS.BASE, model_save_name),
                                             map_location=use_device), strict=True)
            model = model.eval().to(use_device)
            debias_class = run_metadata["debias_class"]
            test_prompts = test_prompts_data[debias_class]
            test_prompts_df = pd.DataFrame({"prompt": test_prompts})
            test_prompts_df["group"] = debias_class

            if "clip_audit" in evaluations:
                ca_prompts = test_prompts_data["clip_audit"]
                ca_ds = src.datasets.FairFace(iat_type="race", lazy=True, _n_samples=None,
                                              transforms=preprocess, mode="val")
                ca_dl = DataLoader(ca_ds, batch_size=batch_sz, shuffle=False, num_workers=8)  # Shuffling ISN'T(!) reflected in the cache
                ca_res = src.ranking.do_clip_audit(ca_dl, ca_prompts, model, model_save_name,
                                                   tokenizer, preprocess, use_device, use_templates=True)
                for k, v in {"model_name": model_save_name, "dataset": "FairFaceVal",
                             "evaluation": "clip_audit"}.items():
                    ca_res[k] = v
                clip_audit_results = clip_audit_results.append(ca_res, ignore_index=True)

            for perf_eval in perf_evaluations:
                perf_res = {"model_name": model_save_name, "dataset": perf_eval, "evaluation": perf_eval,
                            "mean": src.debias.run_perf_eval(perf_eval, model, tokenizer, preprocess, use_device)}
                experiment_results = experiment_results.append(pd.DataFrame([perf_res]), ignore_index=True)

            n_imgs = None  # First run populates cache, thus run with None first, later runs can reduce number
            for dset_name, dset_mode in eval_dss[debias_class]:
                ds = getattr(src.datasets, dset_name)(lazy=True, _n_samples=n_imgs,
                                                      transforms=preprocess, mode=dset_mode)
                dl = DataLoader(ds, batch_size=batch_sz, shuffle=False, num_workers=8)  # Shuffling ISN'T(!) reflected in the cache
                for evaluation in evaluations:
                    if evaluation == "clip_audit":
                        continue
                    model.eval()
                    _res = src.debias.run_bias_eval(evaluation, test_prompts_df, model, model_save_name,
                                                    tokenizer, dl, use_device, cache_suffix="")
                    _res = src.debias.mean_of_bias_eval(_res, evaluation, "dem_par")
                    res = {}
                    for key, val in _res.items():
                        for rename in ["mean_", "std_"]:
                            if key.startswith(rename):
                                res[rename[:-1]] = val
                                break
                        else:
                            res[key] = val
                    res["model_name"] = model_save_name
                    res["dataset"] = dset_name + dset_mode.capitalize()
                    res["evaluation"] = evaluation
                    experiment_results = experiment_results.append(pd.DataFrame([res]), ignore_index=True)

            experiment_results["debias_class"] = debias_class
            experiment_results["train_ds"] = run_metadata["train_ds"]
            all_experiment_results = all_experiment_results.append(experiment_results)
            del model, preprocess, tokenizer
finally:
    result_name = f"exp_test_bias_results.csv"
    ca_result_name = f"exp_test_clip_audit_results.csv"
    all_experiment_results.to_csv(os.path.join(src.PATHS.PLOTS.BASE, result_name))
    clip_audit_results.to_csv(os.path.join(src.PATHS.PLOTS.BASE, ca_result_name))

display(clip_audit_results)
display(all_experiment_results)
```
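The `mean_`/`std_` key handling in the loop above leans on Python's `for`/`else` construct: the `else` branch runs only when the inner loop finishes without hitting `break`. A standalone sketch of just that renaming logic (the example keys are hypothetical, not actual output of the evaluation code):

```python
def normalize_keys(raw):
    """Collapse 'mean_<metric>' / 'std_<metric>' keys to plain 'mean' / 'std',
    passing every other key through unchanged. The for/else ensures the
    pass-through branch runs only when no prefix matched."""
    res = {}
    for key, val in raw.items():
        for prefix in ["mean_", "std_"]:
            if key.startswith(prefix):
                res[prefix[:-1]] = val
                break
        else:
            res[key] = val
    return res

# Hypothetical metric dict, just to exercise both branches.
print(normalize_keys({"mean_ndkl": 0.12, "std_ndkl": 0.03, "n_imgs": 500}))
# → {'mean': 0.12, 'std': 0.03, 'n_imgs': 500}
```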
# Bayesian Regression Using NumPyro

In this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following:

- Write a simple model using the `sample` NumPyro primitive.
- Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest.
- Learn about inference utilities such as `Predictive` and `log_likelihood`.
- Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC, e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc.

## Tutorial Outline:

1. [Dataset](#Dataset)
2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate)
    - [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate)
    - [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters)
    - [Posterior Predictive Distribution](#Posterior-Predictive-Distribution)
    - [Predictive Utility With Effect Handlers](#Predictive-Utility-With-Effect-Handlers)
    - [Model Predictive Density](#Model-Predictive-Density)
    - [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage)
    - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](#Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage)
    - [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)
3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error)
    - [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)
4. [References](#References)

```
%reset -s -f
import os

from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS

plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
    set_matplotlib_formats('svg')

assert numpyro.__version__.startswith('0.3.0')
```

## Dataset

For this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](#References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.

```
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
```

Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.

```
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
```

From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouses` and plot the results.

```
sns.regplot('WaffleHouses', 'Divorce', dset);
```

This is an example of a spurious association.
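Spurious associations like this typically arise through a shared confounder. A small synthetic sketch (invented numbers, not the WaffleDivorce data) shows how two variables that never influence each other can still correlate strongly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
confounder = rng.normal(size=n)            # e.g. "southern-ness" of a state
x = 2.0 * confounder + rng.normal(size=n)  # stands in for Waffle Houses
y = 1.5 * confounder + rng.normal(size=n)  # stands in for divorce rate

# x and y have no causal link, yet they correlate strongly
# because both are driven by the shared confounder.
r = np.corrcoef(x, y)[0, 1]
print(f"corr(x, y) = {r:.2f}")
```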
We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)], which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial.

## Regression Model to Predict Divorce Rate

Let us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).

```
standardize = lambda x: (x - x.mean()) / x.std()

dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
```

We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following:

- In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC, which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](#Posterior-Predictive-Distribution) on new data.
- In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](#References)], [[4](#References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data.
- The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](#Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.

```
def model(marriage=None, age=None, divorce=None):
    a = numpyro.sample('a', dist.Normal(0., 0.2))
    M, A = 0., 0.
    if marriage is not None:
        bM = numpyro.sample('bM', dist.Normal(0., 0.5))
        M = bM * marriage
    if age is not None:
        bA = numpyro.sample('bA', dist.Normal(0., 0.5))
        A = bA * age
    sigma = numpyro.sample('sigma', dist.Exponential(1.))
    mu = a + M + A
    numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
```

### Model 1: Predictor - Marriage Rate

We first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for the `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](#References)] for more details on the NUTS algorithm) to run inference on this simple model.

The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which, given the parameter values, returns the potential energy (or negative log joint density).
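To make "potential energy = negative log joint density" concrete, here is a hand-rolled version for a toy model with a standard-normal prior on `mu` and a normal likelihood (an illustration of the idea only, not NumPyro's internal implementation):

```python
import numpy as np

def neg_log_joint(mu, data, sigma=1.0):
    """Potential energy for the toy model: mu ~ Normal(0, 1); data_i ~ Normal(mu, sigma).
    It is just -(log prior + log likelihood), with no terms dropped."""
    log_prior = -0.5 * mu**2 - 0.5 * np.log(2 * np.pi)
    log_lik = np.sum(-0.5 * ((data - mu) / sigma) ** 2
                     - np.log(sigma) - 0.5 * np.log(2 * np.pi))
    return -(log_prior + log_lik)

data = np.array([0.9, 1.1, 1.0])
# The potential is lowest (joint density highest) near the posterior mode,
# which for this conjugate toy model sits at sum(data) / (n + 1) = 0.75.
print(neg_log_joint(0.75, data), neg_log_joint(5.0, data))
```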
Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.MCMC) that has the following methods:

- `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase.
- `print_summary()`: prints diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic.
- `get_samples()`: gets samples from the posterior distribution.

Note the following:

- JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed.
- We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.HMC) class.

```
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)

num_warmup, num_samples = 1000, 2000

# Run NUTS.
kernel = NUTS(model)
mcmc = MCMC(kernel, num_warmup, num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
```

#### Posterior Distribution over the Regression Parameters

We notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability, which is 0.8 by default. We were able to successfully adapt our step size to achieve this target in the warmup phase.

During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.

At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [gelman rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The value for these diagnostics indicates that the chain has converged to the target distribution.
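The Gelman-Rubin diagnostic compares between-chain and within-chain variance; values near 1 suggest the chains agree on the target. A simplified, non-split version of the statistic, for intuition only (in practice, use NumPyro's `gelman_rubin` from the diagnostics module linked above):

```python
import numpy as np

def gelman_rubin(chains):
    """chains: array of shape (n_chains, n_samples). Classic (non-split) R-hat."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 1000))                        # 4 chains on the same target
stuck = mixed + np.array([0.0, 0.0, 0.0, 3.0])[:, None]   # one chain exploring elsewhere
# Mixed chains give R-hat near 1; the shifted chain inflates it well above 1.
print(gelman_rubin(mixed), gelman_rubin(stuck))
```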
In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (in our case, a `dict` since `init_samples` was a `dict`) containing samples from the posterior distribution for each of the latent parameters in the model.

To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.

```
def plot_regression(x, y_mean, y_hpdi):
    # Sort values for plotting by x axis
    idx = jnp.argsort(x)
    marriage = x[idx]
    mean = y_mean[idx]
    hpdi = y_hpdi[:, idx]
    divorce = dset.DivorceScaled.values[idx]

    # Plot
    fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
    ax.plot(marriage, mean)
    ax.plot(marriage, divorce, 'o')
    ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
    return ax

# Compute empirical posterior distribution over mu
posterior_mu = jnp.expand_dims(samples_1['a'], -1) + \
               jnp.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values

mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
```

We can see from the plot that the CI broadens towards the tails where the data is relatively sparse, as can be expected.

#### Posterior Predictive Distribution

Let us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.html#numpyro.infer.util.Predictive) utility for this purpose. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.

```
from numpyro.infer import Predictive

rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']

df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
```

#### Predictive Utility With Effect Handlers

To remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.

```
def predict(rng_key, post_samples, model, *args, **kwargs):
    model = handlers.condition(handlers.seed(model, rng_key), post_samples)
    model_trace = handlers.trace(model).get_trace(*args, **kwargs)
    return model_trace['obs']['value']

# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model,
                                                   marriage=dset.MarriageScaled.values))
```

Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function.

- The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution, but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model.
- The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC.
- The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density.

It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the `mcmc` function) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions. Note that if we didn't use `vmap`, we would have to use a native for loop over each sample, which is much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.

```
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)

mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = mean_pred
df.head()

hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
```

We have used the same `plot_regression` function as earlier.
We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit.

#### Model Predictive Density

Likewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[8](#References)] which is given by

$$
\log \prod_{i=1}^{n} \int p(y_i | \theta) p_{post}(\theta) d\theta \approx \sum_{i=1}^n \log \frac{\sum_s p(y_i | \theta^{s})}{S}
= \sum_{i=1}^n \left(\log \sum_s p(y_i | \theta^{s}) - \log(S)\right).
$$

Here, $i$ indexes the observed data points $y$, $s$ indexes the posterior samples over the latent parameters $\theta$, and $S$ is the number of posterior samples. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.

```
def log_likelihood(rng_key, params, model, *args, **kwargs):
    model = handlers.condition(model, params)
    model_trace = handlers.trace(model).get_trace(*args, **kwargs)
    obs_node = model_trace['obs']
    return obs_node['fn'].log_prob(obs_node['value'])

def log_pred_density(rng_key, params, model, *args, **kwargs):
    n = list(params.values())[0].shape[0]
    log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
    log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
    return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
```

Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.html#log-likelihood) utility function that can be used directly for computing `log likelihood` as in the first function for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll out your own inference utilities using NumPyro's effect handling stack.
```
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(
    rng_key_, samples_1, model,
    marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)))
```

### Model 2: Predictor - Median Age of Marriage

We will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following:

- Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate.
- We get a higher log likelihood as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.

```
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()

posterior_mu = jnp.expand_dims(samples_2['a'], -1) + \
               jnp.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values

mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');

rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_, age=dset.AgeScaled.values)['obs']

mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');

rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(
    rng_key_, samples_2, model,
    age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)))
```

### Model 3: Predictor - Marriage Rate and Median Age of Marriage

Finally, we will also model divorce rate as depending on both marriage rate as well as the median age of marriage. Note that the model's posterior predictive density is similar to Model 2, which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.

```
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
         divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()

rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
    log_pred_density(rng_key_, samples_3, model,
                     marriage=dset.MarriageScaled.values,
                     age=dset.AgeScaled.values,
                     divorce=dset.DivorceScaled.values)
))
```

### Divorce Rate Residuals by State

The regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.

```
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
                                             marriage=dset.MarriageScaled.values,
                                             age=dset.AgeScaled.values)['obs']

y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)

# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
               marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o', ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
          title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);

# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean

ax[1].plot(jnp.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
               marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
```

The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.

Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors that we are missing out in our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section.

## Regression Model with Measurement Error

Note that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate measurement error given by the `Divorce SE` variable in the dataset.
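The mechanism being set up here is essentially inverse-variance weighting: an observation's pull on the estimate scales with its precision. A minimal numpy sketch of that idea, reduced to a single weighted mean with made-up numbers (not the regression model itself):

```python
import numpy as np

obs = np.array([1.0, 1.2, 4.0])   # third value is an apparent outlier...
se = np.array([0.1, 0.1, 2.0])    # ...but it also carries a large measurement error

# Precision-weighted mean: low-SE observations dominate the estimate,
# so the noisy outlier barely moves it.
w = 1.0 / se**2
weighted = np.sum(w * obs) / np.sum(w)
unweighted = obs.mean()
print(f"unweighted={unweighted:.2f}  precision-weighted={weighted:.2f}")
```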
Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 15 of [[1](#References)].

To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).

```
def model_se(marriage, age, divorce_sd, divorce=None):
    a = numpyro.sample('a', dist.Normal(0., 0.2))
    bM = numpyro.sample('bM', dist.Normal(0., 0.5))
    M = bM * marriage
    bA = numpyro.sample('bA', dist.Normal(0., 0.5))
    A = bA * age
    sigma = numpyro.sample('sigma', dist.Exponential(1.))
    mu = a + M + A
    divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
    numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)

# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / jnp.std(dset.Divorce.values)

rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
         divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
```

### Effect of Incorporating Measurement Noise on Residuals

Notice that our values for the regression coefficients are very similar to Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.

```
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
                                                marriage=dset.MarriageScaled.values,
                                                age=dset.AgeScaled.values,
                                                divorce_sd=dset.DivorceScaledSD.values)['obs']

sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))

# Plot Residuals
ax.plot(jnp.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
            marker='o', ms=5, mew=4, ls='none', alpha=0.8)

# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx], ls='none', color='orange', alpha=0.9)

# Plot earlier mean residual
ax.plot(jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
        ls='none', marker='o', ms=6, color='black', alpha=0.6)

ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
                  'Black marker \nshows residuals from the previous model (Model 3). '
                  'Measurement \nerror is indicated by orange bar.');
```

The plot above shows the residuals for each of the states, along with the measurement noise given by the inner error bar. The black dots are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.

To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.

```
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
    ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual',
       title='Mean residuals (Model 4: red, Model 3: blue)');
```

The plot above shows what has happened in more detail - the regression line itself has moved to ensure a better fit for observations with low measurement noise (left of the plot), where the residuals have shrunk very close to 0. That is to say that data points with low measurement error have a concomitantly higher contribution in determining the regression line. On the other hand, for states with high measurement error (right of the plot), incorporating measurement noise allows us to move our posterior distribution mass closer to the observations, resulting in a shrinkage of residuals as well.

## References

1. McElreath, R. (2016). Statistical Rethinking: A Bayesian Course with Examples in R and Stan. CRC Press.
2. Stan Development Team. [Stan User's Guide](https://mc-stan.org/docs/2_19/stan-users-guide/index.html)
3. Goodman, N.D., and Stuhlmueller, A. (2014). [The Design and Implementation of Probabilistic Programming Languages](http://dippl.org/)
4. Pyro Development Team. [Poutine: A Guide to Programming with Effect Handlers in Pyro](http://pyro.ai/examples/effect_handlers.html)
5. Hoffman, M.D., and Gelman, A. (2011). The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo.
6. Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo.
7. JAX Development Team (2018). [Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more](https://github.com/google/jax)
8. Gelman, A., Hwang, J., and Vehtari, A. [Understanding predictive information criteria for Bayesian models](https://arxiv.org/pdf/1307.5928.pdf)
```
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# **Purchase Prediction with AutoML Tables**

<table align="left">
<td>
<a href="https://colab.sandbox.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/notebooks/samples/tables/purchase_prediction/purchase_prediction.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/notebooks/samples/tables/purchase_prediction/purchase_prediction.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub
</a>
</td>
</table>

## **Overview**

One of the most common use cases in Marketing is to predict the likelihood of conversion. Conversion could be defined by the marketer as taking a certain action like making a purchase, signing up for a free trial, subscribing to a newsletter, etc. Knowing the likelihood that a marketing lead or prospect will 'convert' can enable the marketer to target the lead with the right marketing campaign. This could take the form of remarketing, targeted email campaigns, online offers or other treatments.

Here we demonstrate how you can use BigQuery and AutoML Tables to build a supervised binary classification model for purchase prediction. 
### **Dataset**

The model uses a real dataset from the [Google Merchandise store](https://www.googlemerchandisestore.com/) consisting of Google Analytics web sessions. The goal is to predict whether a visitor to the online Google Merchandise Store will make a purchase on the website during a given Google Analytics session. Past web interactions of the user on the store website, in addition to information like browser details and geography, are used to make this prediction. This is framed as a binary classification problem: label a user during a session as either true (makes a purchase) or false (does not make a purchase).

**Dataset Details**

The dataset consists of a set of tables corresponding to Google Analytics sessions being tracked on the Google Merchandise Store. Each table is a single day of GA sessions. More details on the schema and how to access the data on BigQuery are available [here](https://support.google.com/analytics/answer/3437719?hl=en&ref_topic=3416089).

### **Costs**

This tutorial uses billable components of Google Cloud Platform (GCP):

* Cloud AI Platform
* Cloud Storage
* BigQuery
* AutoML Tables

Learn about [Cloud AI Platform pricing](https://cloud.google.com/ml-engine/docs/pricing), [Cloud Storage pricing](https://cloud.google.com/storage/pricing), [BigQuery pricing](https://cloud.google.com/bigquery/pricing) and [AutoML Tables pricing](https://cloud.google.com/automl-tables/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.

## Set up your local development environment

**If you are using Colab or AI Platform Notebooks**, your environment already meets all the requirements to run this notebook, and you can skip this step. If you are using **AI Platform Notebooks**, make sure the machine configuration type is **4 vCPU, 15 GB RAM** or above.

**Otherwise**, make sure your environment meets this notebook's requirements. 
You need the following:

* The Google Cloud SDK
* Git
* Python 3
* virtualenv
* Jupyter notebook running in a virtual environment with Python 3

The Google Cloud guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:

1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)
2. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)
3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3.
4. Activate that environment and run `pip install jupyter` in a shell to install Jupyter.
5. Run `jupyter notebook` in a shell to launch Jupyter.
6. Open this notebook in the Jupyter Notebook Dashboard.

## **Set up your GCP project**

**The following steps are required, regardless of your notebook environment.**

1. [Select or create a GCP project.](https://console.cloud.google.com/cloud-resource-manager) When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the AI Platform APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [Enable AutoML API.](https://console.cloud.google.com/apis/library/automl.googleapis.com?q=automl)

## **PIP Install Packages and dependencies**

Install additional dependencies not installed in the Notebook environment.

```
! pip install --upgrade --quiet --user google-cloud-automl
! pip install --upgrade --quiet --user google-cloud-bigquery
! pip install --upgrade --quiet --user google-cloud-storage
! pip install --upgrade --quiet --user matplotlib
! pip install --upgrade --quiet --user pandas
! pip install --upgrade --quiet --user pandas-gbq
! pip install --upgrade --quiet --user gcsfs
```

**Note:** Try installing with `sudo` if the above commands throw any permission errors. `Restart` the kernel to allow `automl_v1beta1` to be imported in Jupyter Notebooks.

```
from IPython.core.display import HTML
HTML("<script>Jupyter.notebook.kernel.restart()</script>")
```

## **Set up your GCP Project Id**

Enter your `Project Id` in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.

```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
COMPUTE_REGION = "us-central1"
```

## **Authenticate your GCP account**

**If you are using AI Platform Notebooks**, your environment is already authenticated. Skip this step.

Otherwise, follow these steps:

1. In the GCP Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).
2. From the **Service account** drop-down list, select **New service account**.
3. In the **Service account name** field, enter a name.
4. From the **Role** drop-down list, select **AutoML > AutoML Admin**, **Storage > Storage Admin** and **BigQuery > BigQuery Admin**.
5. Click *Create*. A JSON file that contains your key downloads to your local environment.

**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.

```
import sys

# Upload the downloaded JSON file that contains your key.
if 'google.colab' in sys.modules:
  from google.colab import files
  keyfile_upload = files.upload()
  keyfile = list(keyfile_upload.keys())[0]
  %env GOOGLE_APPLICATION_CREDENTIALS $keyfile
  !gcloud auth activate-service-account --key-file $keyfile
```

***If you are running the notebook locally***, enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.

```
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
%env GOOGLE_APPLICATION_CREDENTIALS /path/to/service/account
! gcloud auth activate-service-account --key-file '/path/to/service/account'
```

## **Create a Cloud Storage bucket**

**The following steps are required, regardless of your notebook environment.**

When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. AI Platform runs the code from this package. In this tutorial, AI Platform also saves the trained model that results from your job in the same bucket. You can then create an AI Platform model version based on this output in order to serve online predictions.

Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the `COMPUTE_REGION` variable, which is used for operations throughout the rest of this notebook. Make sure to [choose a region where Cloud AI Platform services are available](https://cloud.google.com/ml-engine/docs/tensorflow/regions). You may not use a Multi-Regional Storage bucket for training with AI Platform.

```
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
```

**Only if your bucket doesn't exist**: Run the following cell to create your Cloud Storage bucket. Make sure the Storage > Storage Admin role is enabled.

```
! gsutil mb -p $PROJECT_ID -l $COMPUTE_REGION gs://$BUCKET_NAME
```

Finally, validate access to your Cloud Storage bucket by examining its contents:

```
! gsutil ls -al gs://$BUCKET_NAME
```

## **Import libraries and define constants**

Import relevant packages. 
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

# AutoML library.
from google.cloud import automl_v1beta1 as automl
import google.cloud.automl_v1beta1.proto.data_types_pb2 as data_types

from google.cloud import bigquery
from google.cloud import storage
import matplotlib.pyplot as plt
import datetime
import pandas as pd
import numpy as np
from sklearn import metrics
```

Populate the following cell with the necessary constants and run it to initialize constants.

```
#@title Constants { vertical-output: true }

# A name for the AutoML Tables dataset to create.
DATASET_DISPLAY_NAME = 'purchase_prediction' #@param {type: 'string'}
# A name for the file to hold the nested data.
NESTED_CSV_NAME = 'FULL.csv' #@param {type: 'string'}
# A name for the file to hold the unnested data.
UNNESTED_CSV_NAME = 'FULL_unnested.csv' #@param {type: 'string'}
# A name for the input train data.
TRAINING_CSV = 'training_unnested_balanced_FULL' #@param {type: 'string'}
# A name for the input validation data.
VALIDATION_CSV = 'validation_unnested_FULL' #@param {type: 'string'}
# A name for the AutoML Tables model to create.
MODEL_DISPLAY_NAME = 'model_1' #@param {type:'string'}

assert all([
    PROJECT_ID,
    COMPUTE_REGION,
    DATASET_DISPLAY_NAME,
    MODEL_DISPLAY_NAME,
])
```

Initialize the clients for AutoML, AutoML Tables, BigQuery and Storage.

```
# Initialize the clients.
automl_client = automl.AutoMlClient()
tables_client = automl.TablesClient(project=PROJECT_ID, region=COMPUTE_REGION)
bq_client = bigquery.Client()
storage_client = storage.Client()
```

## **Test the set up**

To test whether your project setup and authentication steps were successful, run the following cell to list the datasets in this project. If no dataset has previously been imported into AutoML Tables, you should expect an empty result.

```
# List the datasets. 
list_datasets = tables_client.list_datasets()
datasets = { dataset.display_name: dataset.name for dataset in list_datasets }
datasets
```

You can also print the list of your models by running the following cell. If no model has previously been trained using AutoML Tables, you should expect an empty result.

```
# List the models.
list_models = tables_client.list_models()
models = { model.display_name: model.name for model in list_models }
models
```

## **Transformation and Feature Engineering Functions**

The data cleaning and transformation step was by far the most involved. It includes a few sections that create an AutoML Tables dataset, pull the Google Merchandise Store data from BigQuery, transform the data, and save it multiple times to CSV files in Google Cloud Storage. The dataset is made viewable in the AutoML Tables UI; it will eventually hold the training data after that training data is cleaned and transformed.

Only around 1% of the rows in this dataset have a positive label value of True, i.e., cases where a transaction was made. This is a class imbalance problem. There are several ways to handle class imbalance; we chose to oversample the positive class by random over-sampling. This resulted in an artificial increase in the number of sessions with the positive label of a true transaction value.

There were also many columns with either all missing or all constant values. These columns would not add any signal to our model, so we dropped them. There were also columns with NaN rather than 0 values. For instance, rather than having a count of 0, a column might have a null value. So we added code to change some of these null values to 0, specifically in our target column, in which null values were not allowed by AutoML Tables. However, AutoML Tables can handle null values for the features.

**Feature Engineering**

The dataset had rich information on customer location and behavior; however, it can be improved by performing feature engineering. 
Moreover, there was a concern about data leakage. The decision to do feature engineering therefore had two contributing motivations: remove data leakage without too much loss of useful data, and improve the signal in our data.

**Weekdays**

The date seemed like a useful piece of information to include, as it could capture seasonal effects. Unfortunately, we only had one year of data, so seasonality on an annual scale would be difficult (read: impossible) to incorporate. Fortunately, we could try to detect seasonal effects on a micro scale, with perhaps equally informative results. We ended up creating a new column of weekdays out of dates, to denote which day of the week the session was held on. This new feature turned out to have some useful predictive power when added as a variable into our model.

**Data Leakage**

The marginal gain from adding a weekday feature was overshadowed by the concern of data leakage in our training data. In the initial naive models we trained, we got outstanding results. So outstanding that we knew something must be going on. As it turned out, quite a few features functioned as proxies for the feature we were trying to predict: some of the features we conditioned on to build the model had an almost 1:1 correlation with the target feature.

Intuitively, this made sense. One feature that exhibited this behavior was the number of page views a customer made during a session. By conditioning on page views in a session, we could very reliably predict which customer sessions a purchase would be made in. At first this seems like the golden ticket: we can reliably predict whether or not a purchase is made! The catch: the full page view information can only be collected at the end of the session, by which point we would also have whether or not a transaction was made. 
Seen from this perspective, collecting page views at the same time as collecting the transaction information would make it pointless to predict the transaction information using the page views information, as we would already have both.

One solution was to drop page views as a feature entirely. This would safely stop the data leakage, but we would lose some critically useful information. Another solution (the one we ended up going with) was to track the page view information of all previous sessions for a given customer, and use it to inform the current session. This way, we could use the page view information, but only the information that we would have before the session even began. So we created a new column called `previous_views`, and populated it with the total count of all previous page views made by the customer in all previous sessions. We then deleted the page views feature, to stop the data leakage.

Our rationale for this change can be boiled down to the concise heuristic: only use the information that is available to us on the first click of the session. Applying this reasoning, we performed similar data engineering on other features which we found to be proxies for the label feature. We also refined our objective in the process: for a visit to the Google Merchandise Store, what is the probability that a customer will make a purchase, and can we calculate this probability the moment the customer arrives? By clarifying the question, we both made the result more powerful/useful and eliminated the data leakage that threatened to make the predictive power trivial.

```
def balanceTable(table):
    # Class count.
    count_class_false, count_class_true = table.totalTransactionRevenue\
        .value_counts()
    # Divide by class.
    table_class_false = table[table["totalTransactionRevenue"]==False]
    table_class_true = table[table["totalTransactionRevenue"]==True]
    # Random over-sampling.
    table_class_true_over = table_class_true.sample(
        count_class_false, replace=True)
    table_test_over = pd.concat([table_class_false, table_class_true_over])
    return table_test_over

def partitionTable(table, dt=20170500):
    # The AutoML Tables model could be training on future data and implicitly
    # learning about past data in the testing dataset, which would cause data
    # leakage. To prevent this, we train only with the first 9 months of data
    # (table1) and validate with the last three months of data (table2).
    table1 = table[table["date"]<=dt].copy(deep=False)
    table2 = table[table["date"]>dt].copy(deep=False)
    return table1, table2

def N_updatePrevCount(table, new_column, old_column):
    table = table.fillna(0)
    table[new_column] = 1
    table.sort_values(by=['fullVisitorId','date'])
    table[new_column] = table.groupby(['fullVisitorId'])[old_column].apply(
        lambda x: x.cumsum())
    table.drop([old_column], axis=1, inplace=True)
    return table

def N_updateDate(table):
    table['weekday'] = 1
    table['date'] = pd.to_datetime(table['date'].astype(str), format='%Y%m%d')
    table['weekday'] = table['date'].dt.dayofweek
    return table

def change_transaction_values(table):
    table['totalTransactionRevenue'] = table['totalTransactionRevenue'].fillna(0)
    table['totalTransactionRevenue'] = table['totalTransactionRevenue'].apply(
        lambda x: x!=0)
    return table

def saveTable(table, csv_file_name, bucket_name):
    table.to_csv(csv_file_name, index=False)
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(csv_file_name)
    blob.upload_from_filename(filename=csv_file_name)
```

## **Getting training data**

If you are using **Colab**, the available memory may not be sufficient to generate the nested and unnested data using the queries below. 
In this case, you can directly download the unnested data **FULL_unnested.csv** from [here](https://storage.cloud.google.com/cloud-ml-data/automl-tables/notebooks/trial_for_c4m/FULL_unnested.csv) and upload the file manually to the GCS bucket that was created in the previous steps (`BUCKET_NAME`).

If you are using an **AI Platform Notebook or local environment**, run the following code:

```
# Save table.
query = """
        SELECT date, device, geoNetwork, totals, trafficSource, fullVisitorId
        FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`
        WHERE _TABLE_SUFFIX BETWEEN
          FORMAT_DATE('%Y%m%d', DATE_SUB('2017-08-01', INTERVAL 366 DAY))
          AND FORMAT_DATE('%Y%m%d', DATE_SUB('2017-08-01', INTERVAL 1 DAY))
        """
df = bq_client.query(query).to_dataframe()
print(df.iloc[:3])
saveTable(df, NESTED_CSV_NAME, BUCKET_NAME)

# Unnest the data.
nested_gcs_uri = 'gs://{}/{}'.format(BUCKET_NAME, NESTED_CSV_NAME)
table = pd.read_csv(nested_gcs_uri, low_memory=False)
column_names = ['device', 'geoNetwork', 'totals', 'trafficSource']
for name in column_names:
    print(name)
    table[name] = table[name].apply(lambda i: dict(eval(i)))
    temp = table[name].apply(pd.Series)
    table = pd.concat([table, temp], axis=1).drop(name, axis=1)

# Need to drop a column.
table.drop(['adwordsClickInfo'], axis=1, inplace=True)
saveTable(table, UNNESTED_CSV_NAME, BUCKET_NAME)
```

### **Run the Transformations**

```
# Run the transformations. 
unnested_gcs_uri = 'gs://{}/{}'.format(BUCKET_NAME, UNNESTED_CSV_NAME)
table = pd.read_csv(unnested_gcs_uri, low_memory=False)

consts = ['transactionRevenue', 'transactions', 'adContent', 'browserSize',
          'campaignCode', 'cityId', 'flashVersion', 'javaEnabled', 'language',
          'latitude', 'longitude', 'mobileDeviceBranding', 'mobileDeviceInfo',
          'mobileDeviceMarketingName', 'mobileDeviceModel', 'mobileInputSelector',
          'networkLocation', 'operatingSystemVersion', 'screenColors',
          'screenResolution', 'screenviews', 'sessionQualityDim', 'timeOnScreen',
          'visits', 'uniqueScreenviews', 'browserVersion', 'referralPath',
          'fullVisitorId', 'date']

table = N_updatePrevCount(table, 'previous_views', 'pageviews')
table = N_updatePrevCount(table, 'previous_hits', 'hits')
table = N_updatePrevCount(table, 'previous_timeOnSite', 'timeOnSite')
table = N_updatePrevCount(table, 'previous_Bounces', 'bounces')
table = change_transaction_values(table)

table1, table2 = partitionTable(table)
table1 = N_updateDate(table1)
table2 = N_updateDate(table2)
table1.drop(consts, axis=1, inplace=True)
table2.drop(consts, axis=1, inplace=True)

saveTable(table2, '{}.csv'.format(VALIDATION_CSV), BUCKET_NAME)
table1 = balanceTable(table1)

# training_unnested_FULL.csv = the first 9 months of data.
saveTable(table1, '{}.csv'.format(TRAINING_CSV), BUCKET_NAME)
```

## **Import Training Data**

Select a dataset display name and pass your table source information to create a new dataset.

#### **Create Dataset**

```
# Create dataset.
dataset = tables_client.create_dataset(
    dataset_display_name=DATASET_DISPLAY_NAME)
dataset_name = dataset.name
dataset
```

#### **Import Data**

```
# Read the data source from GCS.
dataset_gcs_input_uris = ['gs://{}/{}.csv'.format(BUCKET_NAME, TRAINING_CSV)]
import_data_response = tables_client.import_data(
    dataset=dataset,
    gcs_input_uris=dataset_gcs_input_uris
)
print('Dataset import operation: {}'.format(import_data_response.operation))

# Synchronous check of operation status. 
# Wait until import is done.
print('Dataset import response: {}'.format(import_data_response.result()))

# Verify the status by checking the example_count field.
dataset = tables_client.get_dataset(dataset_name=dataset_name)
dataset
```

## **Review the specs**

Run the following command to see table specs such as row count.

```
# List table specs.
list_table_specs_response = tables_client.list_table_specs(dataset=dataset)
table_specs = [s for s in list_table_specs_response]

# List column specs.
list_column_specs_response = tables_client.list_column_specs(dataset=dataset)
column_specs = {s.display_name: s for s in list_column_specs_response}

# Print features and data_type.
features = [(key, data_types.TypeCode.Name(value.data_type.type_code))
            for key, value in column_specs.items()]
print('Feature list:\n')
for feature in features:
    print(feature[0], ':', feature[1])

# Table schema pie chart.
type_counts = {}
for column_spec in column_specs.values():
    type_name = data_types.TypeCode.Name(column_spec.data_type.type_code)
    type_counts[type_name] = type_counts.get(type_name, 0) + 1

plt.pie(x=type_counts.values(), labels=type_counts.keys(), autopct='%1.1f%%')
plt.axis('equal')
plt.show()
```

## **Update dataset: assign a label column and enable nullable columns**

AutoML Tables automatically detects your data column type. Depending on the type of your label column, AutoML Tables chooses to run a classification or regression model. If your label column contains only numerical values, but they represent categories, change your label column type to categorical by updating your schema.

### **Update a column: set to not nullable**

```
# Update column. 
column_spec_display_name = 'totalTransactionRevenue' #@param {type: 'string'}

update_column_response = tables_client.update_column_spec(
    dataset=dataset,
    column_spec_display_name=column_spec_display_name,
    nullable=False,
)
update_column_response
```

**Tip:** You can use kwarg `type_code='CATEGORY'` in the preceding `update_column_spec(..)` call to convert the column data type from `FLOAT64` to `CATEGORY`.

### **Update dataset: assign a target column**

```
# Assign target column.
column_spec_display_name = 'totalTransactionRevenue' #@param {type: 'string'}

update_dataset_response = tables_client.set_target_column(
    dataset=dataset,
    column_spec_display_name=column_spec_display_name,
)
update_dataset_response
```

## **Creating a model**

#### **Train a model**

To create the datasets for training, testing and validation, we first had to consider what kind of data we were dealing with. The data we had tracks all customer sessions with the Google Merchandise Store over a year. AutoML Tables does its own training and testing, and provides a nice UI to view the results in. For the training and testing dataset, then, we simply used the oversampled, balanced dataset created by the transformations described above.

But we first partitioned the dataset to include the first 9 months in one table and the last 3 in another. This allowed us to train and test with an entirely different dataset than the one we used to validate. Moreover, we held off on oversampling for the validation dataset, so as not to bias the data that we would ultimately use to judge the success of our model. The decision to divide the sessions along time was made to avoid the model training on future data to predict past data. (This can also be avoided with a datetime variable in the dataset and by toggling a button in the UI.)

Training the model may take one hour or more. The following cell keeps running until the training is done. 
If your Colab times out, use `tables_client.list_models()` to check whether your model has been created. Then use the model name to continue to the next steps. Run the following command to retrieve your model, replacing `model_name` with its actual value:

    model = tables_client.get_model(model_name=model_name)

Note that we train on the first 9 months of data and validate using the last 3.

For demonstration purposes, the following command sets the budget as 1 node hour (`'train_budget_milli_node_hours': 1000`). You can increase that number up to a maximum of 72 hours (`'train_budget_milli_node_hours': 72000`) for the best model performance. Even with a budget of 1 node hour (the minimum possible budget), training a model can take more than the specified node hours.

You can also select the objective to optimize your model training by setting `optimization_objective`. This solution uses the default optimization objective. Refer to this [link](https://cloud.google.com/automl-tables/docs/train#opt-obj) for more details.

```
# The number of hours to train the model.
model_train_hours = 1 #@param {type:'integer'}

create_model_response = tables_client.create_model(
    MODEL_DISPLAY_NAME,
    dataset=dataset,
    train_budget_milli_node_hours=model_train_hours*1000,
)
operation_id = create_model_response.operation.name
print('Create model operation: {}'.format(create_model_response.operation))

# Wait until model training is done.
model = create_model_response.result()
model_name = model.name
model
```

## **Make a prediction**

In this section, we take our validation data prediction results and plot the Precision-Recall curve and the ROC curve for both the false and true predictions.

There are two different prediction modes: online and batch. The following cell shows you how to make a batch prediction. 
```
#@title Start batch prediction { vertical-output: true }

batch_predict_gcs_input_uris = ['gs://{}/{}.csv'.format(BUCKET_NAME, VALIDATION_CSV)] #@param {type:'string'}
batch_predict_gcs_output_uri_prefix = 'gs://{}'.format(BUCKET_NAME) #@param {type:'string'}

batch_predict_response = tables_client.batch_predict(
    model=model,
    gcs_input_uris=batch_predict_gcs_input_uris,
    gcs_output_uri_prefix=batch_predict_gcs_output_uri_prefix,
)
print('Batch prediction operation: {}'.format(batch_predict_response.operation))

# Wait until batch prediction is done.
batch_predict_result = batch_predict_response.result()
batch_predict_response.metadata
```

## **Evaluate your prediction**

The following cell creates a Precision-Recall curve and a ROC curve for both the true and false classifications.

```
def invert(x):
    return 1-x

def switch_label(x):
    return(not x)

batch_predict_results_location = batch_predict_response.metadata\
    .batch_predict_details.output_info\
    .gcs_output_directory
table = pd.read_csv('{}/tables_1.csv'.format(batch_predict_results_location))
y = table["totalTransactionRevenue"]
scores = table["totalTransactionRevenue_True_score"]
scores_invert = table['totalTransactionRevenue_False_score']

# code for ROC curve, for true values.
fpr, tpr, thresholds = metrics.roc_curve(y, scores)
roc_auc = metrics.auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange', lw=lw,
         label='ROC curve (area=%0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic for True')
plt.legend(loc="lower right")
plt.show()

# code for ROC curve, for false values. 
plt.figure()
lw = 2
label_invert = y.apply(switch_label)
fpr, tpr, thresholds = metrics.roc_curve(label_invert, scores_invert)
roc_auc = metrics.auc(fpr, tpr)  # recompute the AUC for the inverted labels
plt.plot(fpr, tpr, color='darkorange', lw=lw,
         label='ROC curve (area=%0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic for False')
plt.legend(loc="lower right")
plt.show()

# code for PR curve, for true values.
precision, recall, thresholds = metrics.precision_recall_curve(y, scores)
plt.figure()
lw = 2
plt.plot(recall, precision, color='darkorange', lw=lw,
         label='Precision recall curve for True')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision Recall Curve for True')
plt.legend(loc="lower right")
plt.show()

# code for PR curve, for false values.
precision, recall, thresholds = metrics.precision_recall_curve(
    label_invert, scores_invert)
print(precision.shape)
print(recall.shape)
plt.figure()
lw = 2
plt.plot(recall, precision, color='darkorange',
         label='Precision recall curve for False')
plt.xlim([0.0, 1.1])
plt.ylim([0.0, 1.1])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision Recall Curve for False')
plt.legend(loc="lower right")
plt.show()
```

## **Cleaning up**

To clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.

```
# Delete model resource.
tables_client.delete_model(model_name=model_name)

# Delete dataset resource.
tables_client.delete_dataset(dataset_name=dataset_name)

# Delete Cloud Storage objects that were created.
! gsutil -m rm -r gs://$BUCKET_NAME

# If the training model is still running, cancel it.
automl_client.transport._operations_client.cancel_operation(operation_id)
```
# Saving and Loading Models

In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data.

```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms

import helper
import fc_model

# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# Download and load the training data
trainset = datasets.FashionMNIST('F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

# Download and load the test data
testset = datasets.FashionMNIST('F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```

Here we can see one of the images.

```
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
```

# Train a network

To make things more concise here, I moved the model architecture and training code from the last part to a file called `fc_model`. Importing this, we can easily create a fully-connected network with `fc_model.Network`, and train the network using `fc_model.train`. I'll use this model (once it's trained) to demonstrate how we can save and load models.

```
# Create the network, define the criterion and optimizer
model = fc_model.Network(784, 10, [512, 256, 128])
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=2)
```

## Saving and loading networks

As you can imagine, it's impractical to train a network every time you need to use it. 
Instead, we can save trained networks, then load them later to train more or use them for predictions.

The parameters for PyTorch networks are stored in a model's `state_dict`. We can see the state dict contains the weight and bias matrices for each of our layers.

```
print("Our model: \n\n", model, '\n')
print("The state dict keys: \n\n", model.state_dict().keys())
```

The simplest thing to do is simply save the state dict with `torch.save`. For example, we can save it to a file `'checkpoint.pth'`.

```
torch.save(model.state_dict(), 'checkpoint.pth')
```

Then we can load the state dict with `torch.load`.

```
state_dict = torch.load('checkpoint.pth')
print(state_dict.keys())
```

And to load the state dict into the network, you do `model.load_state_dict(state_dict)`.

```
model.load_state_dict(state_dict)
```

Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails.

```
# Try this
model = fc_model.Network(784, 10, [400, 200, 100])
# This will throw an error because the tensor sizes are wrong!
model.load_state_dict(state_dict)
```

This means we need to rebuild the model exactly as it was when trained. Information about the model architecture needs to be saved in the checkpoint, along with the state dict. To do this, you build a dictionary with all the information you need to completely rebuild the model.

```
checkpoint = {'input_size': 784,
              'output_size': 10,
              'hidden_layers': [each.out_features for each in model.hidden_layers],
              'state_dict': model.state_dict()}

torch.save(checkpoint, 'checkpoint.pth')
```

Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints.
```
def load_checkpoint(filepath):
    checkpoint = torch.load(filepath)
    model = fc_model.Network(checkpoint['input_size'],
                             checkpoint['output_size'],
                             checkpoint['hidden_layers'])
    model.load_state_dict(checkpoint['state_dict'])
    return model

model = load_checkpoint('checkpoint.pth')
print(model)
```
```
import gym
import itertools
import gym_tic_tac_toe
import plotting
from plotting import EpisodeStats
from collections import defaultdict
from copy import deepcopy
import numpy as np
import operator

# Example of Q dictionary used in q_learn function
#
# Q = {
#     "0000000001": {
#         0: 0,
#         1: 0,
#         2: 0,
#         # ...
#         8: 0
#     },
#     # ...
#     "1-11-1-11-11-10-1": {
#         8: 0
#     },
# }

# How long does training take?
# What classification rate / accuracy is reached?
# How does it depend on the size of the training set?

def hash_state(state):
    board = state['board']
    move = state['on_move']
    return ''.join(str(b) for b in board) + str(move)

def hash_action(action):
    return action[1]

def get_best_action_idx(Q, state_hash, action_hashes):
    if state_hash in Q:
        # argmax over the Q-values of the currently available actions
        best_action_idx = int(np.argmax([Q[state_hash].get(ah, 0) for ah in action_hashes]))
    else:
        Q[state_hash] = dict((ah, 0) for ah in action_hashes)
        best_action_idx = np.random.choice(len(action_hashes))
    return best_action_idx

def create_policy(Q, epsilon):
    def get_action_probs(state_hash, action_hashes):
        num_actions = len(action_hashes)
        action_probs = np.ones(num_actions, dtype=float) * epsilon / num_actions
        best_action_idx = get_best_action_idx(Q, state_hash, action_hashes)
        action_probs[best_action_idx] += (1.0 - epsilon)
        return action_probs
    return get_action_probs

def q_learn(num_episodes, discount_factor=1.0, alpha=0.6, epsilon=0.1, print_log=False):
    Q = {}
    env = gym.make('tic_tac_toe-v1')
    stats = plotting.EpisodeStats(
        episode_lengths=np.zeros(num_episodes),
        episode_rewards=np.zeros(num_episodes))
    policy = create_policy(Q, epsilon)

    for ith_episode in range(num_episodes):
        if ith_episode % 1000 == 0:
            print(ith_episode)
        env.reset()
        state = deepcopy(env.state)

        for t in itertools.count():
            state_hash = hash_state(state)
            actions = env.move_generator()
            action_hashes = [hash_action(act) for act in actions]
            action_probabilities = policy(state_hash, action_hashes)
            action_idx = np.random.choice(np.arange(len(actions)), p=action_probabilities)
            action = actions[action_idx]
            action_hash = hash_action(action)

            next_state, reward, done, _ = env.step(action)

            stats.episode_rewards[ith_episode] += reward
            stats.episode_lengths[ith_episode] = t

            next_state_hash = hash_state(next_state)
            next_action_hashes = [hash_action(act) for act in env.move_generator()]
            if len(next_action_hashes) == 0:
                next_max = 0  # terminal state: no next action to bootstrap from
            else:
                best_next_action_idx = get_best_action_idx(Q, next_state_hash, next_action_hashes)
                best_next_action_hash = next_action_hashes[best_next_action_idx]
                next_max = Q[next_state_hash][best_next_action_hash]

            old_value = Q[state_hash][action_hash]
            td_target = reward + discount_factor * next_max
            td_delta = td_target - old_value
            new_value = (1 - alpha) * old_value + alpha * td_target
            Q[state_hash][action_hash] = new_value

            if print_log:
                print('\n\n----------- STATE -------------')
                env.render()
                print(state_hash)
                print(actions)
                print(action_hashes)
                print(action_probabilities)
                print(action_idx)
                print(action)
                print(action_hash)
                print(reward)
                print(td_target)
                print(td_delta)
                print(done)

            if done:
                break

            state = deepcopy(next_state)

    return Q, stats

def play_game(Q, player=-1):
    env = gym.make('tic_tac_toe-v1')
    state = env.reset()
    env.render()
    on_move = state['on_move']
    reward = 0
    done = False
    while not done:
        on_move = state['on_move']
        if player == on_move:
            print('Pick a move index')
            moves = env.move_generator()
            print(list(enumerate(moves)))
            idx = int(input())
            action = moves[idx]
        else:
            actions = Q[hash_state(state)].items()
            print(actions)
            best_action_hash = max(actions, key=operator.itemgetter(1))
            print(best_action_hash)
            best_action_hash = best_action_hash[0]
            action = [on_move, best_action_hash]
        state, reward, done, _ = env.step(action)
        env.render()
    if reward == 0:
        print("Draw!")
    elif on_move == player:
        print('You won!')
    else:
        print('AI won!')
    return env

(Q, stats) = q_learn(30000, print_log=False)

play_game(Q)
```
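The epsilon-greedy scheme inside `create_policy` can be illustrated in isolation. This is a minimal, self-contained sketch in plain Python (no environment or `Q` dictionary needed); `q_values` here is a hypothetical list of Q-values for the legal actions:

```python
def epsilon_greedy_probs(q_values, epsilon=0.1):
    # Every action gets epsilon/n of the probability mass;
    # the greedy (highest-Q) action gets the remaining 1 - epsilon on top.
    n = len(q_values)
    probs = [epsilon / n] * n
    best = max(range(n), key=lambda i: q_values[i])
    probs[best] += 1.0 - epsilon
    return probs

probs = epsilon_greedy_probs([0.2, 0.5, 0.1], epsilon=0.1)
# probs sums to 1.0 and the second (greedy) action dominates
```

With `epsilon = 0.1` and three actions, each action keeps roughly 0.033 of exploration mass while the greedy action ends up with about 0.933, matching what `get_action_probs` computes with NumPy.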
# Adding "Open In Callysto" Buttons to notebooks

```
import os
import json
import pandas as pd

def button_code_generator(notebook_path, notebook_filename):
    #notebook_path = notebook_path.strip('./')#.replace('./','',1)
    notebook_path = 'ColonizingMars/'
    notebook_filename = notebook_filename.strip('./')
    button_image = 'https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true'
    repo_path = 'https%3A%2F%2Fgithub.com%2Fcallysto%2Fhackathon&branch=master'
    a = '<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo='
    size_etc = '" width="123" height="24" alt="Open in Callysto"/></a>'
    button_code = a+repo_path+'&subPath='+notebook_path+notebook_filename+'&depth=1" target="_parent"><img src="'+button_image+size_etc
    return button_code

def replace_first_cell(notebook_name_and_path, first_cell_code):
    original_file = open(notebook_name_and_path, 'r')
    notebook_contents = json.load(original_file)
    original_file.close()
    del notebook_contents['cells'][0]
    notebook_contents['cells'].insert(0, first_cell_code)
    with open(notebook_name_and_path, 'w') as notebook_file:
        json.dump(notebook_contents, notebook_file)
```

## Create Notebooks DataFrame

```
df = pd.DataFrame(columns=['Notebook', 'Button Code'])
for root, dirs, files in os.walk("."):
    for filename in files:
        if filename.endswith('.ipynb'):
            if not 'checkpoint' in filename:
                notebook_name_and_path = os.path.join(root, filename)
                button_code = button_code_generator(root, filename)
                df = df.append({'Notebook':notebook_name_and_path, 'Button Code':button_code}, ignore_index=True)

df

df['Button Code'][3]
```

## Iterate through the DataFrame

Replace the first cell in each notebook with the banner and button.
```
for i, row in df.iterrows():
    notebook_name_and_path = row['Notebook']
    banner_code = '![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true)'
    first_cell_code = {'cell_type': 'markdown',
                       'metadata': {},
                       'source': [banner_code, '\n', '\n', row['Button Code']]}
    if notebook_name_and_path != './add-open-in-callysto-button.ipynb':
        replace_first_cell(notebook_name_and_path, first_cell_code)
```

## Check our Work

```
df2 = pd.DataFrame(columns=['Name','First Cell'])
for root, dirs, files in os.walk("."):
    for filename in files:
        if filename.endswith('.ipynb'):
            if not 'checkpoint' in filename:
                notebook_name = filename[:-6]
                notebook_name_and_path = os.path.join(root, filename)
                notebook = json.load(open(notebook_name_and_path))
                first_cell = notebook['cells'][0]['source']#[0]]
                button_code = button_code_generator(root, filename)
                df2 = df2.append({'Name':notebook_name,'First Cell':first_cell}, ignore_index=True)

for i, row in df2.iterrows():
    print(i, row['Name'])
    print(row['First Cell'])
    print('')
```
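The hard-coded `repo_path` string in `button_code_generator` is just the repository URL percent-encoded for use as a query parameter. Rather than writing it by hand, it can be generated with the standard library; this sketch is illustrative and not part of the original notebook:

```python
from urllib.parse import quote

repo = 'https://github.com/callysto/hackathon'
encoded = quote(repo, safe='')  # safe='' encodes every reserved character, including '/'
repo_path = encoded + '&branch=master'
# encoded == 'https%3A%2F%2Fgithub.com%2Fcallysto%2Fhackathon'
```

Generating the encoded string this way keeps the button generator correct if the repository URL ever changes.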
```
from pylab import *
%matplotlib inline

import caffe
import os
os.chdir('/home/mckc/image class/')

from caffe import layers as L, params as P

def lenet(data_location, batch_size):
    # our version of LeNet: a series of linear and simple nonlinear transformations
    n = caffe.NetSpec()
    n.data, n.label = L.ImageData(batch_size=batch_size, source=data_location,
                                  transform_param=dict(scale=1./255, mirror=True, crop_size=224),
                                  ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.fc1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.fc1, in_place=True)
    n.score = L.InnerProduct(n.relu1, num_output=2, weight_filler=dict(type='xavier'))
    n.loss = L.SoftmaxWithLoss(n.score, n.label)
    return n.to_proto()

with open('lenet_auto_train.prototxt', 'w') as f:
    f.write(str(lenet('caffe_train.txt', 2)))

with open('lenet_auto_test.prototxt', 'w') as f:
    f.write(str(lenet('caffe_validate.txt', 2)))

!cat lenet_auto_train.prototxt

from caffe.proto import caffe_pb2

def solver(train_net_path, test_net_path):
    s = caffe_pb2.SolverParameter()

    # Specify locations of the train and test networks.
    s.train_net = train_net_path
    s.test_net.append(test_net_path)

    s.test_interval = 10     # Test after every 10 training iterations.
    s.test_iter.append(250)  # Test 250 "batches" each time we test.

    s.max_iter = 10000       # number of times to update the net (training iterations)

    # Set the initial learning rate for stochastic gradient descent (SGD).
    s.base_lr = 0.0001

    # Set `lr_policy` to define how the learning rate changes during training.
    # Here, we 'step' the learning rate by multiplying it by a factor `gamma`
    # every `stepsize` iterations.
    s.lr_policy = 'step'
    s.gamma = 0.1
    #s.stepsize = 5000

    # Set other optimization parameters. Setting a non-zero `momentum` takes a
    # weighted average of the current gradient and previous gradients to make
    # learning more stable. L2 weight decay regularizes learning, to help prevent
    # the model from overfitting.
    s.momentum = 0.9
    s.weight_decay = 5e-4

    # Display the current training loss and accuracy every 1000 iterations.
    s.display = 1000

    # Snapshots are files used to store networks we've trained.
    # Here, we'll snapshot every 100 iterations.
    # For larger networks that take longer to train, you may want to set
    # snapshot < max_iter to save the network and training state to disk during
    # optimization, preventing disaster in case of machine crashes, etc.
    s.snapshot = 100
    s.snapshot_prefix = "lenet"
    #s.snapshot_prefix = 'examples/hdf5_classification/data/train'

    # We'll train on the CPU for fair benchmarking against scikit-learn.
    # Changing to GPU should result in much faster training!
    #s.solver_mode = caffe_pb2.SolverParameter.CPU

    return s

solver_path = 'logreg_solver.prototxt'
with open(solver_path, 'w') as f:
    f.write(str(solver('lenet_auto_train.prototxt', 'lenet_auto_test.prototxt')))

!cat logreg_solver.prototxt

caffe.set_device(0)
caffe.set_mode_gpu()

### load the solver and create train and test nets
solver = None  # ignore this workaround for lmdb data (can't instantiate two solvers on the same data)
#solver = caffe.SGDSolver('lenet_auto_solver.prototxt')
solver = caffe.SGDSolver('logreg_solver.prototxt')

# each output is (batch size, feature dim, spatial dim)
[(k, v.data.shape) for k, v in solver.net.blobs.items()]

# just print the weight sizes (we'll omit the biases)
[(k, v[0].data.shape) for k, v in solver.net.params.items()]

solver.net.forward()           # train net
solver.test_nets[0].forward()  # test net (there can be more than one)

solver.step(1)

for i in range(10):
    solver.step(1)
    print(solver.net.forward())

imshow(solver.net.params['conv1'][0].diff[:, 0].reshape(4, 5, 5, 5)
       .transpose(0, 2, 1, 3).reshape(4*5, 5*5), cmap='gray')
axis('off')
```
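The `'step'` learning-rate policy configured in the solver multiplies `base_lr` by `gamma` once every `stepsize` iterations. A quick plain-Python sketch of the schedule, using the commented-out `stepsize = 5000` as an assumed example value:

```python
def step_lr(base_lr, gamma, stepsize, iteration):
    # Caffe's 'step' policy: lr = base_lr * gamma ** floor(iteration / stepsize)
    return base_lr * (gamma ** (iteration // stepsize))

# With base_lr=0.0001, gamma=0.1, stepsize=5000:
# iterations 0-4999 train at 1e-4, iterations 5000-9999 at 1e-5.
```

Since `stepsize` is commented out in the solver above, the learning rate in that configuration never actually steps down within `max_iter = 10000` iterations.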
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">

*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*

*The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*

<!--NAVIGATION-->
< [In-Depth: Kernel Density Estimation](05.13-Kernel-Density-Estimation.ipynb) | [Contents](Index.ipynb) | [Further Machine Learning Resources](05.15-Learning-More.ipynb) >

<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.14-Image-Features.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>

# Application: A Face Detection Pipeline

This chapter has explored a number of the central concepts and algorithms of machine learning. But moving from these concepts to real-world application can be a challenge. Real-world datasets are noisy and heterogeneous, may have missing features, and data may be in a form that is difficult to map to a clean ``[n_samples, n_features]`` matrix. Before applying any of the methods discussed here, you must first extract these features from your data: there is no formula for how to do this that applies across all domains, and thus this is where you as a data scientist must exercise your own intuition and expertise.
One interesting and compelling application of machine learning is to images, and we have already seen a few examples of this where pixel-level features are used for classification. In the real world, data is rarely so uniform and simple pixels will not be suitable: this has led to a large literature on *feature extraction* methods for image data (see [Feature Engineering](05.04-Feature-Engineering.ipynb)).

In this section, we will take a look at one such feature extraction technique, the [Histogram of Oriented Gradients](https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients) (HOG), which transforms image pixels into a vector representation that is sensitive to broadly informative image features regardless of confounding factors like illumination. We will use these features to develop a simple face detection pipeline, using machine learning algorithms and concepts we've seen throughout this chapter.

We begin with the standard imports:

```
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
```

## HOG Features

The Histogram of Gradients is a straightforward feature extraction procedure that was developed in the context of identifying pedestrians within images. HOG involves the following steps:

1. Optionally pre-normalize images. This leads to features that resist dependence on variations in illumination.
2. Convolve the image with two filters that are sensitive to horizontal and vertical brightness gradients. These capture edge, contour, and texture information.
3. Subdivide the image into cells of a predetermined size, and compute a histogram of the gradient orientations within each cell.
4. Normalize the histograms in each cell by comparing to the block of neighboring cells. This further suppresses the effect of illumination across the image.
5. Construct a one-dimensional feature vector from the information in each cell.
A fast HOG extractor is built into the Scikit-Image project, and we can try it out relatively quickly and visualize the oriented gradients within each cell:

```
from skimage import data, color, feature
import skimage.data

image = color.rgb2gray(data.chelsea())
hog_vec, hog_vis = feature.hog(image, visualize=True)

fig, ax = plt.subplots(1, 2, figsize=(12, 6),
                       subplot_kw=dict(xticks=[], yticks=[]))
ax[0].imshow(image, cmap='gray')
ax[0].set_title('input image')

ax[1].imshow(hog_vis)
ax[1].set_title('visualization of HOG features');
```

## HOG in Action: A Simple Face Detector

Using these HOG features, we can build up a simple facial detection algorithm with any Scikit-Learn estimator; here we will use a linear support vector machine (refer back to [In-Depth: Support Vector Machines](05.07-Support-Vector-Machines.ipynb) if you need a refresher on this). The steps are as follows:

1. Obtain a set of image thumbnails of faces to constitute "positive" training samples.
2. Obtain a set of image thumbnails of non-faces to constitute "negative" training samples.
3. Extract HOG features from these training samples.
4. Train a linear SVM classifier on these samples.
5. For an "unknown" image, pass a sliding window across the image, using the model to evaluate whether that window contains a face or not.
6. If detections overlap, combine them into a single window.

Let's go through these steps and try it out:

### 1. Obtain a set of positive training samples

Let's start by finding some positive training samples that show a variety of faces. We have one easy set of data to work with—the Labeled Faces in the Wild dataset, which can be downloaded by Scikit-Learn:

```
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people()
positive_patches = faces.images
positive_patches.shape
```

This gives us a sample of 13,000 face images to use for training.

### 2. Obtain a set of negative training samples

Next we need a set of similarly sized thumbnails which *do not* have a face in them. One way to do this is to take any corpus of input images, and extract thumbnails from them at a variety of scales. Here we can use some of the images shipped with Scikit-Image, along with Scikit-Learn's ``PatchExtractor``:

```
from skimage import data, transform

imgs_to_use = ['camera', 'text', 'coins', 'moon',
               'page', 'clock', 'immunohistochemistry',
               'chelsea', 'coffee', 'hubble_deep_field']
images = [color.rgb2gray(getattr(data, name)())
          for name in imgs_to_use]

#The next cell crashes on Jupyter in Docker - low memory
from sklearn.feature_extraction.image import PatchExtractor

def extract_patches(img, N, scale=1.0, patch_size=positive_patches[0].shape):
    extracted_patch_size = tuple((scale * np.array(patch_size)).astype(int))
    extractor = PatchExtractor(patch_size=extracted_patch_size,
                               max_patches=N, random_state=0)
    patches = extractor.transform(img[np.newaxis])
    if scale != 1:
        patches = np.array([transform.resize(patch, patch_size)
                            for patch in patches])
    return patches

negative_patches = np.vstack([extract_patches(im, 1000, scale)
                              for im in images for scale in [0.5, 1.0, 2.0]])
negative_patches.shape
```

We now have 30,000 suitable image patches which do not contain faces. Let's take a look at a few of them to get an idea of what they look like:

```
fig, ax = plt.subplots(6, 10)
for i, axi in enumerate(ax.flat):
    axi.imshow(negative_patches[500 * i], cmap='gray')
    axi.axis('off')
```

Our hope is that these would sufficiently cover the space of "non-faces" that our algorithm is likely to see.

### 3. Combine sets and extract HOG features

Now that we have these positive samples and negative samples, we can combine them and compute HOG features.
This step takes a little while, because the HOG features involve a nontrivial computation for each image:

```
from itertools import chain

X_train = np.array([feature.hog(im)
                    for im in chain(positive_patches,
                                    negative_patches)])
y_train = np.zeros(X_train.shape[0])
y_train[:positive_patches.shape[0]] = 1

X_train.shape
```

We are left with 43,000 training samples in 1,215 dimensions, and we now have our data in a form that we can feed into Scikit-Learn!

### 4. Training a support vector machine

Next we use the tools we have been exploring in this chapter to create a classifier of thumbnail patches. For such a high-dimensional binary classification task, a linear support vector machine is a good choice. We will use Scikit-Learn's ``LinearSVC``, because in comparison to ``SVC`` it often has better scaling for large numbers of samples.

First, though, let's use a simple Gaussian naive Bayes to get a quick baseline:

```
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

cross_val_score(GaussianNB(), X_train, y_train)
```

We see that on our training data, even a simple naive Bayes algorithm gets us upwards of 90% accuracy. Let's try the support vector machine, with a grid search over a few choices of the C parameter:

```
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV

grid = GridSearchCV(LinearSVC(), {'C': [1.0, 2.0, 4.0, 8.0]})
grid.fit(X_train, y_train)
grid.best_score_

grid.best_params_
```

Let's take the best estimator and re-train it on the full dataset:

```
model = grid.best_estimator_
model.fit(X_train, y_train)
```

### 5. Find faces in a new image

Now that we have this model in place, let's grab a new image and see how the model does.
We will use one portion of the astronaut image for simplicity (see discussion of this in [Caveats and Improvements](#Caveats-and-Improvements)), and run a sliding window over it and evaluate each patch:

```
test_image = skimage.data.astronaut()
test_image = skimage.color.rgb2gray(test_image)
test_image = skimage.transform.rescale(test_image, 0.5)
test_image = test_image[:160, 40:180]

plt.imshow(test_image, cmap='gray')
plt.axis('off');
```

Next, let's create a window that iterates over patches of this image, and compute HOG features for each patch:

```
def sliding_window(img, patch_size=positive_patches[0].shape,
                   istep=2, jstep=2, scale=1.0):
    Ni, Nj = (int(scale * s) for s in patch_size)
    for i in range(0, img.shape[0] - Ni, istep):
        for j in range(0, img.shape[1] - Nj, jstep):  # note: Nj here, not Ni
            patch = img[i:i + Ni, j:j + Nj]
            if scale != 1:
                patch = transform.resize(patch, patch_size)
            yield (i, j), patch

indices, patches = zip(*sliding_window(test_image))
patches_hog = np.array([feature.hog(patch) for patch in patches])
patches_hog.shape
```

Finally, we can take these HOG-featured patches and use our model to evaluate whether each patch contains a face:

```
labels = model.predict(patches_hog)
labels.sum()
```

We see that out of nearly 2,000 patches, we have found 30 detections. Let's use the information we have about these patches to show where they lie on our test image, drawing them as rectangles:

```
fig, ax = plt.subplots()
ax.imshow(test_image, cmap='gray')
ax.axis('off')

Ni, Nj = positive_patches[0].shape
indices = np.array(indices)

for i, j in indices[labels == 1]:
    ax.add_patch(plt.Rectangle((j, i), Nj, Ni, edgecolor='red',
                               alpha=0.3, lw=2, facecolor='none'))
```

All of the detected patches overlap and found the face in the image! Not bad for a few lines of Python.

## Caveats and Improvements

If you dig a bit deeper into the preceding code and examples, you'll see that we still have a bit of work before we can claim a production-ready face detector.
There are several issues with what we've done, and several improvements that could be made. In particular:

### Our training set, especially for negative features, is not very complete

The central issue is that there are many face-like textures that are not in the training set, and so our current model is very prone to false positives. You can see this if you try out the above algorithm on the *full* astronaut image: the current model leads to many false detections in other regions of the image.

We might imagine addressing this by adding a wider variety of images to the negative training set, and this would probably yield some improvement. Another way to address this is to use a more directed approach, such as *hard negative mining*. In hard negative mining, we take a new set of images that our classifier has not seen, find all the patches representing false positives, and explicitly add them as negative instances in the training set before re-training the classifier.

### Our current pipeline searches only at one scale

As currently written, our algorithm will miss faces that are not approximately 62×47 pixels. This can be straightforwardly addressed by using sliding windows of a variety of sizes, and re-sizing each patch using ``skimage.transform.resize`` before feeding it into the model. In fact, the ``sliding_window()`` utility used here is already built with this in mind.

### We should combine overlapped detection patches

For a production-ready pipeline, we would prefer not to have 30 detections of the same face, but to somehow reduce overlapping groups of detections down to a single detection. This could be done via an unsupervised clustering approach (MeanShift Clustering is one good candidate for this), or via a procedural approach such as *non-maximum suppression*, an algorithm common in machine vision.
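Non-maximum suppression is simple to sketch in plain Python. The following is an illustrative greedy implementation, not code from the book; the boxes are hypothetical `(x1, y1, x2, y2)` tuples with matching confidence scores:

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.3):
    # Greedy NMS: repeatedly keep the highest-scoring remaining box and
    # discard every box that overlaps it by more than iou_threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_threshold]
    return keep

# two heavily overlapping detections plus one far away
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = non_max_suppression(boxes, scores)
# kept == [0, 2]: the weaker overlapping box is suppressed
```

Here the second box overlaps the first heavily (IoU ≈ 0.68), so only the strongest of the pair survives, while the distant third box is untouched.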
### The pipeline should be streamlined

Once we address these issues, it would also be nice to create a more streamlined pipeline for ingesting training images and predicting sliding-window outputs. This is where Python as a data science tool really shines: with a bit of work, we could take our prototype code and package it with a well-designed object-oriented API that gives the user the ability to use this easily. I will leave this as a proverbial "exercise for the reader".

### More recent advances: Deep Learning

Finally, I should add that HOG and other procedural feature extraction methods for images are no longer state-of-the-art techniques. Instead, many modern object detection pipelines use variants of deep neural networks: one way to think of neural networks is that they are an estimator which determines optimal feature extraction strategies from the data, rather than relying on the intuition of the user.

An intro to these deep neural net methods is conceptually (and computationally!) beyond the scope of this section, although open tools like Google's [TensorFlow](https://www.tensorflow.org/) have recently made deep learning approaches much more accessible than they once were. As of the writing of this book, deep learning in Python is still relatively young, and so I can't yet point to any definitive resource. That said, the list of references in the following section should provide a useful place to start!

<!--NAVIGATION-->
< [In-Depth: Kernel Density Estimation](05.13-Kernel-Density-Estimation.ipynb) | [Contents](Index.ipynb) | [Further Machine Learning Resources](05.15-Learning-More.ipynb) >

<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.14-Image-Features.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
## Dependencies

```
import json
from tweet_utility_scripts import *
from transformers import TFDistilBertModel, DistilBertConfig
from tokenizers import BertWordPieceTokenizer
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input, Dropout, GlobalAveragePooling1D, GlobalMaxPooling1D, Concatenate
```

# Load data

```
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')

print('Test samples: %s' % len(test))
display(test.head())
```

# Model parameters

```
input_base_path = '/kaggle/input/30-tweet-train-distilbert-base-norm-smooth-sigmoid/'
with open(input_base_path + 'config.json') as json_file:
    config = json.load(json_file)

config

base_path = '/kaggle/input/qa-transformers/distilbert/'
tokenizer_path = input_base_path + 'vocab.txt'
model_path_list = glob.glob(input_base_path + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep="\n")
```

# Tokenizer

```
tokenizer = BertWordPieceTokenizer(tokenizer_path, lowercase=True)
```

# Pre process

```
test['text'].fillna('', inplace=True)
test["text"] = test["text"].apply(lambda x: x.lower())

x_test = get_data_test(test, tokenizer, config['MAX_LEN'])
```

# Model

```
module_config = DistilBertConfig.from_pretrained(config['config_path'], output_hidden_states=False)

def model_fn(MAX_LEN):
    input_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
    attention_mask = Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
    token_type_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='token_type_ids')

    base_model = TFDistilBertModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
    sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids})
    last_state = sequence_output[0]

    x = GlobalAveragePooling1D()(last_state)

    y_start = Dense(MAX_LEN, activation='sigmoid', name='y_start')(x)
    y_end = Dense(MAX_LEN, activation='sigmoid', name='y_end')(x)

    model = Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[y_start, y_end])

    return model
```

# Make predictions

```
NUM_TEST_IMAGES = len(test)

test_start_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN']))
test_end_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN']))

for model_path in model_path_list:
    print(model_path)
    model = model_fn(config['MAX_LEN'])
    model.load_weights(model_path)

    test_preds = model.predict(x_test)
    test_start_preds += test_preds[0] / len(model_path_list)
    test_end_preds += test_preds[1] / len(model_path_list)
```

# Post process

```
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)

test['text_len'] = test['text'].apply(lambda x: len(x))
test["end"].clip(0, test["text_len"], inplace=True)
test["start"].clip(0, test["end"], inplace=True)

test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
test["selected_text"].fillna('', inplace=True)
```

# Visualize predictions

```
display(test.head(10))
```

# Test set predictions

```
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test["selected_text"]
submission.to_csv('submission.csv', index=False)
submission.head(10)
```
## Dependencies

```
import json, warnings, shutil

# NumPy, pandas, TensorFlow, and seaborn are used throughout; import them
# explicitly in case the utility scripts' star imports do not provide them.
import numpy as np
import pandas as pd
import tensorflow as tf
import seaborn as sns

from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard, ModelCheckpoint

SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
```

# Load data

```
database_base_path = '/kaggle/input/tweet-dataset-split-roberta-base-96/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
display(k_fold.head())

# Unzip files
!tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_1.tar.gz
!tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_2.tar.gz
!tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_3.tar.gz
# !tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_4.tar.gz
# !tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_5.tar.gz
```

# Model parameters

```
vocab_path = database_base_path + 'vocab.json'
merges_path = database_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'

config = {
    "MAX_LEN": 96,
    "BATCH_SIZE": 32,
    "EPOCHS": 5,
    "LEARNING_RATE": 3e-5,
    "ES_PATIENCE": 1,
    "question_size": 4,
    "N_FOLDS": 3,
    "base_model_path": base_path + 'roberta-base-tf_model.h5',
    "config_path": base_path + 'roberta-base-config.json'
}

with open('config.json', 'w') as json_file:
    json.dump(config, json_file)
```

# Model

```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)

def model_fn(MAX_LEN):
    input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
    attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')

    base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
    sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
    last_state = sequence_output[0]

    x_start = layers.Dropout(.1)(last_state)
    x_start = layers.Conv1D(1, 1)(x_start)
    x_start = layers.Flatten()(x_start)
    y_start = layers.Activation('sigmoid', name='y_start')(x_start)

    x_end = layers.Dropout(.1)(last_state)
    x_end = layers.Conv1D(1, 1)(x_end)
    x_end = layers.Flatten()(x_end)
    y_end = layers.Activation('sigmoid', name='y_end')(x_end)

    model = Model(inputs=[input_ids, attention_mask], outputs=[y_start, y_end])
    model.compile(optimizers.Adam(lr=config['LEARNING_RATE']),
                  loss=losses.BinaryCrossentropy(),
                  metrics=[metrics.BinaryAccuracy()])
    return model
```

# Tokenizer

```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
                                  lowercase=True, add_prefix_space=True)
tokenizer.save('./')
```

# Train

```
history_list = []
AUTO = tf.data.experimental.AUTOTUNE

for n_fold in range(config['N_FOLDS']):
    n_fold += 1
    print('\nFOLD: %d' % (n_fold))

    # Load data
    base_data_path = 'fold_%d/' % (n_fold)
    x_train = np.load(base_data_path + 'x_train.npy')
    y_train = np.load(base_data_path + 'y_train.npy')
    x_valid = np.load(base_data_path + 'x_valid.npy')
    y_valid = np.load(base_data_path + 'y_valid.npy')

    ### Delete data dir
    shutil.rmtree(base_data_path)

    # Train model
    model_path = 'model_fold_%d.h5' % (n_fold)
    model = model_fn(config['MAX_LEN'])
    es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
                       restore_best_weights=True, verbose=1)
    checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min',
                                 save_best_only=True, save_weights_only=True)

    history = model.fit(list(x_train), list(y_train),
                        validation_data=(list(x_valid), list(y_valid)),
                        batch_size=config['BATCH_SIZE'],
                        callbacks=[checkpoint, es],
                        epochs=config['EPOCHS'],
                        verbose=2).history

    history_list.append(history)

    # Make predictions
    train_preds = model.predict(list(x_train))
    valid_preds = model.predict(list(x_valid))

    k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'train', 'start_fold_%d' % (n_fold)] = train_preds[0].argmax(axis=-1)
    k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'train', 'end_fold_%d' % (n_fold)] = train_preds[1].argmax(axis=-1)
    k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'validation', 'start_fold_%d' % (n_fold)] = valid_preds[0].argmax(axis=-1)
    k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'validation', 'end_fold_%d' % (n_fold)] = valid_preds[1].argmax(axis=-1)

    k_fold['end_fold_%d' % (n_fold)] = k_fold['end_fold_%d' % (n_fold)].astype(int)
    k_fold['start_fold_%d' % (n_fold)] = k_fold['start_fold_%d' % (n_fold)].astype(int)
    k_fold['end_fold_%d' % (n_fold)].clip(0, k_fold['text_len'], inplace=True)
    k_fold['start_fold_%d' % (n_fold)].clip(0, k_fold['end_fold_%d' % (n_fold)], inplace=True)

    k_fold['prediction_fold_%d' % (n_fold)] = k_fold.apply(lambda x: decode(x['start_fold_%d' % (n_fold)], x['end_fold_%d' % (n_fold)], x['text'], config['question_size'], tokenizer), axis=1)
    k_fold['prediction_fold_%d' % (n_fold)].fillna('', inplace=True)
    k_fold['jaccard_fold_%d' % (n_fold)] = k_fold.apply(lambda x: jaccard(x['text'], x['prediction_fold_%d' % (n_fold)]), axis=1)
```

# Model loss graph

```
sns.set(style="whitegrid")

for n_fold in range(config['N_FOLDS']):
    print('Fold: %d' % (n_fold+1))
    plot_metrics(history_list[n_fold])
```

# Model evaluation

```
display(evaluate_model_kfold(k_fold, config['N_FOLDS']).style.applymap(color_map))
```

# Visualize predictions

```
display(k_fold[[c for c in k_fold.columns
                if not (c.startswith('textID') or c.startswith('text_len')
                        or c.startswith('selected_text_len') or c.startswith('text_wordCnt')
                        or c.startswith('selected_text_wordCnt') or c.startswith('fold_')
                        or c.startswith('start_fold_') or c.startswith('end_fold_'))]].head(15))
```
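The `jaccard_fold_*` columns rely on a `jaccard` helper from `tweet_utility_scripts`, which isn't shown here. A minimal word-level Jaccard similarity — the metric used by the Tweet Sentiment Extraction competition — might look like the sketch below (the exact helper in the utility scripts may differ):

```python
def jaccard(str1, str2):
    """Word-level Jaccard similarity: |A ∩ B| / |A ∪ B| of the two word sets."""
    a = set(str1.lower().split())
    b = set(str2.lower().split())
    c = a.intersection(b)
    # Guard against two empty strings, which would otherwise divide by zero
    union = len(a) + len(b) - len(c)
    return 1.0 if union == 0 else float(len(c)) / union

print(jaccard("the cat sat", "the cat stood"))  # 2 shared words of 4 total -> 0.5
```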
#### New to Plotly?

Plotly's Python library is free and open source! [Get started](https://plotly.com/python/getting-started/) by downloading the client and [reading the primer](https://plotly.com/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plotly.com/python/getting-started/#initialization-for-online-plotting) or [offline](https://plotly.com/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plotly.com/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!

#### Imports

The tutorial below imports [NumPy](http://www.numpy.org/), [Pandas](https://plotly.com/pandas/intro-to-pandas-tutorial/), and [SciPy](https://www.scipy.org/).

```
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.tools import FigureFactory as FF

import numpy as np
import pandas as pd
import scipy
```

#### Import Data

We will import a dataset to perform our discrete frequency analysis on. We will look at the consumption of alcohol by country in 2010.

```
data = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/2010_alcohol_consumption_by_country.csv')
df = data[0:10]

table = FF.create_table(df)
py.iplot(table, filename='alcohol-data-sample')
```

#### Probability Distribution

We can produce a histogram plot of the data with the y-axis representing the probability distribution of the data.

```
x = data['alcohol'].values.tolist()

trace = go.Histogram(x=x, histnorm='probability',
                     xbins=dict(start=np.min(x), size=0.25, end=np.max(x)),
                     marker=dict(color='rgb(25, 25, 100)'))

layout = go.Layout(
    title="Histogram with Probability Distribution"
)

fig = go.Figure(data=go.Data([trace]), layout=layout)
py.iplot(fig, filename='histogram-prob-dist')
```

#### Frequency Counts

```
trace = go.Histogram(x=x,
                     xbins=dict(start=np.min(x), size=0.25, end=np.max(x)),
                     marker=dict(color='rgb(25, 25, 100)'))

layout = go.Layout(
    title="Histogram with Frequency Count"
)

fig = go.Figure(data=go.Data([trace]), layout=layout)
py.iplot(fig, filename='histogram-discrete-freq-count')
```

#### Percentage

```
trace = go.Histogram(x=x, histnorm='percent',
                     xbins=dict(start=np.min(x), size=0.25, end=np.max(x)),
                     marker=dict(color='rgb(50, 50, 125)'))

layout = go.Layout(
    title="Histogram with Percentage"
)

fig = go.Figure(data=go.Data([trace]), layout=layout)
py.iplot(fig, filename='histogram-percentage')
```

#### Cumulative Distribution Function

We can also take the cumulative sum of our dataset and then plot the cumulative distribution function, or `CDF`, as a scatter plot.

```
cumsum = np.cumsum(x)

trace = go.Scatter(x=[i for i in range(len(cumsum))],
                   y=10*cumsum/np.linalg.norm(cumsum),
                   marker=dict(color='rgb(150, 25, 120)'))
layout = go.Layout(
    title="Cumulative Distribution Function"
)

fig = go.Figure(data=go.Data([trace]), layout=layout)
py.iplot(fig, filename='cdf-dataset')

from IPython.display import display, HTML

display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))

! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
    'python-Discrete-Frequency.ipynb', 'python/discrete-frequency/', 'Discrete Frequency | plotly',
    'Learn how to perform discrete frequency analysis using Python.',
    title='Discrete Frequency in Python. | plotly',
    name='Discrete Frequency',
    language='python',
    page_type='example_index', has_thumbnail='false', display_as='statistics', order=3,
    ipynb='~notebook_demo/110')
```
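Note that the y-values used in the CDF cell (`10*cumsum/np.linalg.norm(cumsum)`) rescale the cumulative sum rather than compute a true empirical CDF. For reference, a standard empirical CDF — the fraction of observations at or below each value — can be computed with plain NumPy (a sketch, independent of Plotly):

```python
import numpy as np

def empirical_cdf(values):
    """Return (sorted_values, cdf) where cdf[i] is the fraction of
    observations <= sorted_values[i]."""
    xs = np.sort(np.asarray(values, dtype=float))
    cdf = np.arange(1, len(xs) + 1) / len(xs)
    return xs, cdf

xs, cdf = empirical_cdf([3.0, 1.0, 2.0, 4.0])
print(xs)   # [1. 2. 3. 4.]
print(cdf)  # [0.25 0.5  0.75 1.  ]
```

The resulting `(xs, cdf)` pairs can be passed directly to `go.Scatter(x=xs, y=cdf)` to plot a CDF that ends at exactly 1.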
# MatPlotLib Basics

## Draw a line graph

```
%matplotlib inline

from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(-3, 3, 0.01)

plt.plot(x, norm.pdf(x))
plt.show()
```

## Multiple Plots on One Graph

```
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
```

## Save it to a File

```
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.savefig('MyPlot.png', format='png')
```

## Adjust the Axes

```
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
```

## Add a Grid

```
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
```

## Change Line Types and Colors

```
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r:')
plt.show()
```

## Labeling Axes and Adding a Legend

```
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.xlabel('Greebles')
plt.ylabel('Probability')
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r:')
plt.legend(['Sneetches', 'Gacks'], loc=4)
plt.show()
```

## XKCD Style :)

```
plt.xkcd()

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.xticks([])
plt.yticks([])
ax.set_ylim([-30, 10])

data = np.ones(100)
data[70:] -= np.arange(30)

plt.annotate(
    'THE DAY I REALIZED\nI COULD COOK BACON\nWHENEVER I WANTED',
    xy=(70, 1), arrowprops=dict(arrowstyle='->'), xytext=(15, -10))

plt.plot(data)

plt.xlabel('time')
plt.ylabel('my overall health')
```

## Pie Chart

```
# Remove XKCD mode:
plt.rcdefaults()

values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
explode = [0, 0, 0.2, 0, 0]
labels = ['India', 'United States', 'Russia', 'China', 'Europe']
plt.pie(values, colors=colors, labels=labels, explode=explode)
plt.title('Student Locations')
plt.show()
```

## Bar Chart

```
values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
plt.bar(range(0, 5), values, color=colors)
plt.show()
```

## Scatter Plot

```
from pylab import randn

X = randn(500)
Y = randn(500)
plt.scatter(X, Y)
plt.show()
```

## Histogram

```
incomes = np.random.normal(27000, 15000, 10000)
plt.hist(incomes, 50)
plt.show()
```

## Box & Whisker Plot

Useful for visualizing the spread & skew of data. The red line represents the median of the data, and the box represents the bounds of the 1st and 3rd quartiles. So, half of the data exists within the box. The dotted-line "whiskers" indicate the range of the data - except for outliers, which are plotted outside the whiskers. Outliers lie more than 1.5X the interquartile range beyond the quartiles.

This example below creates uniformly distributed random numbers between -40 and 60, plus a few outliers above 100 and below -100:

```
uniformSkewed = np.random.rand(100) * 100 - 40
high_outliers = np.random.rand(10) * 50 + 100
low_outliers = np.random.rand(10) * -50 - 100
data = np.concatenate((uniformSkewed, high_outliers, low_outliers))
plt.boxplot(data)
plt.show()
```

## Activity

Try creating a scatter plot representing random data on age vs. time spent watching TV. Label the axes.
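One possible solution to the activity above — a sketch using randomly generated data (the exact distributions and ranges are up to you):

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
ages = np.random.uniform(10, 80, 200)              # ages between 10 and 80
tv_hours = np.random.normal(3, 1.5, 200).clip(0)   # hours/day, no negatives

plt.scatter(ages, tv_hours)
plt.xlabel('Age')
plt.ylabel('Hours of TV per day')
plt.show()
```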
# Imports

```
import sys
sys.path.append("..")

import numpy as np
import pandas as pd
from src import FlairDataset
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm_notebook

%matplotlib inline

path = "../data/external/real_fake_disaster/"
dataset = FlairDataset.csv_classification(
    data_folder=path,
    filename='data',
    column_mapping=['text', 'label'])
```

# Tag Prediction

```
from flair.data import Sentence
from flair.models import SequenceTagger

# load the NER and POS taggers
tagger_ner = SequenceTagger.load('ner-fast')
tagger_pos = SequenceTagger.load('pos-fast')

test = dataset.train_data[:10][['label', 'text']]
test

def ner_tag(row):
    sentence = Sentence(row['text'], use_tokenizer=True)
    tagger_ner.predict(sentence)
    row['ner_tag'] = sentence.to_tagged_string()
    return row

def pos_tag(row):
    sentence = Sentence(row['text'], use_tokenizer=True)
    tagger_pos.predict(sentence)
    row['pos_tag'] = sentence.to_tagged_string()
    return row

test = test.apply(ner_tag, axis=1)
test
test = test.apply(pos_tag, axis=1)

# Used with flair's get_spans.
# Tags are not in a BILOU-like format; we get the score for each tag.
def ner_tags_updated(rows):
    sentence = Sentence(rows["text"], use_tokenizer=True)
    tagger_ner.predict(sentence)
    text = sentence.to_tokenized_string().split(" ")
    entity_tagged = sentence.get_spans('ner')
    tagged_text = [ent.text for ent in entity_tagged]
    tagged_label = [ent.tag for ent in entity_tagged]
    tagged_score = [ent.score for ent in entity_tagged]
    corpus = []
    cleaned_ner_tag = []
    score = []
    for i in text:
        if i in tagged_text:
            corpus.append(i)
            index = tagged_text.index(i)
            cleaned_ner_tag.append(tagged_label[index])
            score.append(round(tagged_score[index], 2))
        else:
            corpus.append(i)
            cleaned_ner_tag.append("NA")
            score.append(np.NaN)
    rows["updated_ner_corpus"] = corpus
    rows["updated_cleaned_ner"] = cleaned_ner_tag
    rows["updated_ner_score"] = score
    return rows

test = test.apply(ner_tags_updated, axis=1)

import re

def clean_tags(rows):
    for j in ['<', '>']:
        rows = str(rows).replace(j, "")
    rows = re.sub(' +', ' ', str(rows))
    rows = str(rows).strip()
    return rows

def extract_pos_tags(rows):
    text = rows['pos_tag'].split(" ")
    corpus = [i for i in text if not i.strip().startswith("<")]
    tags = [clean_tags(i) for i in text if i.strip().startswith("<")]
    if len(corpus) == len(tags):
        rows['pos_corpus'] = corpus
        rows['cleaned_pos_tags'] = tags
    return rows

test = test.apply(extract_pos_tags, axis=1)
test

# Used with Sentence.to_tagged_string()
def extract_ner_tags(rows):
    text = rows['ner_tag'].split(" ")
    tot_words = len(text)
    words = []
    tags = []
    for i, wd in enumerate(text):
        if wd.startswith("<"):
            continue
        if i+1 < tot_words:
            if text[i+1].startswith("<"):
                words.append(wd)
                tags.append(clean_tags(text[i+1]))
            else:
                words.append(wd)
                tags.append("NA")
        else:
            if not text[i].startswith("<"):
                words.append(wd)
                tags.append("NA")
    if len(words) == len(tags):
        rows['ner_corpus'] = words
        rows['cleaned_ner_tags'] = tags
    return rows

test = test.apply(extract_ner_tags, axis=1)
test
test.ner_corpus == test.pos_corpus
```

# Understanding the CoNLL data format for training our own corpus

## Loading the CoNLL dataset

The data file contains one word per line, with empty lines representing sentence boundaries.

```
with open('../data/external/pos_tag_retraining/conll.train', 'r') as f:
    txt = f.read()

txt.split("\n")[:10]
```

## Preprocessing

```
txt = txt.split("\n")
txt = [x for x in txt if x != '-DOCSTART- -X- -X- O']
txt[:10]

# Initialize empty list for storing words
words = []
# Initialize empty list for storing sentences
corpus = []

for i in tqdm_notebook(txt):
    if i == '':
        ## previous words form a sentence ##
        corpus.append(' '.join(words))
        ## Refresh word list ##
        words = []
    else:
        ## word at index 0 ##
        words.append(i.split()[0])

corpus[:10]
corpus = [x for x in corpus if x != '']

# Initialize empty list for storing word POS tags
w_pos = []
# Initialize empty list for storing sentence POS sequences
POS = []

for i in tqdm_notebook(txt):
    ## blank line = new sentence ##
    if i == '':
        ## previous tags form a sentence's POS sequence ##
        POS.append(' '.join(w_pos))
        ## Refresh tag list ##
        w_pos = []
    else:
        ## POS tag at index 1 ##
        w_pos.append(i.split()[1])

POS = [x for x in POS if x != '']
```

## Flair Prediction

```
f_pos = []

for i in tqdm_notebook(corpus[:10]):
    sentence = Sentence(i)
    tagger_pos.predict(sentence)
    ## append tagged sentence ##
    f_pos.append(sentence.to_tagged_string())

f_pos[:1]

for i in tqdm_notebook(range(len(f_pos))):
    ## for every word in the ith sentence ##
    for j in corpus[i].split():
        ## remove that word from the ith tagged sentence ##
        f_pos[i] = str(f_pos[i]).replace(j, "", 1)

f_pos[:1]
f_pos = [clean_tags(i) for i in tqdm_notebook(f_pos)]
f_pos[:1]
corpus[:1], f_pos[:1]
f_pos[:1][0].split(" ")[2]
f_pos[0]
corpus[0]
```

# Converting our dataset to the CoNLL data format

```
test.columns

# pos_tags
train_corpus = ""
for row in tqdm_notebook(test.itertuples()):
    words = ""
    corpus = row.pos_corpus
    pos_tags = row.cleaned_pos_tags
    ner_tags = row.cleaned_ner_tags
    for j, word in enumerate(corpus):
        txt_tag = str(word) + " " + pos_tags[j] + " " + ner_tags[j]
        words = words + "\n" + txt_tag
    train_corpus = train_corpus + "\n" + words

train_corpus = train_corpus[2:]
print(train_corpus)
```

# Retraining POS Tag

```
tagger = SequenceTagger.load("pos-fast")

with open("../data/interim/pos_train.txt", 'w') as f:
    f.write(train_corpus)

from flair.data import Corpus
from flair.datasets import ColumnCorpus

columns = {0: 'text', 1: 'pos', 2: 'ner'}
data_folder = "../data/interim"
corpus: Corpus = ColumnCorpus(data_folder, columns, train_file='pos_train.txt')

tag_type = 'pos'
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
vars(tag_dictionary)

from flair.trainers import ModelTrainer

trainer: ModelTrainer = ModelTrainer(tagger, corpus)
trainer.train('../models/retraining/pos/flair_pos_test',
              train_with_dev=False,
              max_epochs=1)
```

# Retraining NER Tags

```
tagger = SequenceTagger.load("ner-fast")
corpus: Corpus = ColumnCorpus(data_folder, columns, train_file='pos_train.txt')

tag_type = 'ner'
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
vars(tag_dictionary)

from flair.trainers import ModelTrainer

trainer: ModelTrainer = ModelTrainer(tagger, corpus)
trainer.train('../models/retraining/ner/flair_ner_test',
              train_with_dev=False,
              max_epochs=1)
```
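The three-column CoNLL-style file written for retraining (word, POS tag, NER tag per line, blank lines between sentences) can be read back into `(word, pos, ner)` triples with plain Python — a minimal parser sketch, independent of flair:

```python
def parse_conll(text):
    """Parse three-column CoNLL-style text into a list of sentences,
    each a list of (word, pos, ner) tuples."""
    sentences, current = [], []
    for line in text.split("\n"):
        if not line.strip():          # blank line = sentence boundary
            if current:
                sentences.append(current)
                current = []
            continue
        word, pos, ner = line.split()[:3]
        current.append((word, pos, ner))
    if current:
        sentences.append(current)
    return sentences

sample = "Forest NNP LOC\nfire NN NA\n\nAll DT NA\nsafe JJ NA"
print(parse_conll(sample))
# [[('Forest', 'NNP', 'LOC'), ('fire', 'NN', 'NA')], [('All', 'DT', 'NA'), ('safe', 'JJ', 'NA')]]
```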
```
import osmnx as ox, pandas as pd, time, geopandas as gpd, os
%matplotlib inline

ox.config(use_cache=True,
          log_file=True,
          log_console=True,
          log_filename='calculate_stats_neighborhoods',
          data_folder='G:/Geoff/osmnx/neighborhoods',
          cache_folder='G:/Geoff/osmnx/cache/neighborhoods')

data_folder = 'G:/Geoff/osmnx/neighborhoods'
```

## Make a DataFrame of all the cities that have .graphml files saved in the folder

```
places = []
for state_folder in os.listdir(data_folder):
    for city_folder in os.listdir('{}/{}'.format(data_folder, state_folder)):
        for nhood_file in os.listdir('{}/{}/{}'.format(data_folder, state_folder, city_folder)):
            if '.graphml' in nhood_file:
                data = {}
                data['state_fips'] = state_folder.split('_')[0]
                data['state'] = state_folder.split('_')[1]
                data['geoid'] = city_folder.split('_')[0]
                data['city'] = city_folder.replace('{}_'.format(data['geoid']), '')
                data['nhood'] = nhood_file.replace('.graphml', '').replace('-', ' ')
                data['path'] = '{}/{}/{}'.format(data_folder, state_folder, city_folder)
                data['file'] = nhood_file
                places.append(data)

df = pd.DataFrame(places)
df.head()
```

## Load graph and calculate stats for each neighborhood

```
def load_graph_get_stats(row):
    try:
        start_time = time.time()
        G = ox.load_graphml(filename=row['file'], folder=row['path'])
        nhood_area_m = float(G.graph['nhood_area_m'])
        stats = ox.basic_stats(G, area=nhood_area_m)
        stats['nhood'] = row['nhood']
        stats['city'] = row['city']
        stats['state'] = row['state']
        stats['geoid'] = row['geoid']

        # calculate/drop the extended stats that have values per node
        extended_stats = ox.extended_stats(G)
        se = pd.Series(extended_stats)
        se = se.drop(['avg_neighbor_degree', 'avg_weighted_neighbor_degree',
                      'clustering_coefficient', 'clustering_coefficient_weighted',
                      'degree_centrality', 'pagerank'])
        extended_stats_clean = se.to_dict()
        for key in extended_stats_clean:
            stats[key] = extended_stats_clean[key]

        stats['area_km'] = nhood_area_m / 1e6
        stats['area'] = nhood_area_m
        stats['time'] = time.time() - start_time
        return pd.Series(stats)
    except Exception as e:
        print('{}, {}, {} failed: {}'.format(row['nhood'], row['city'], row['state'], e))
        return pd.Series()

#sample = list(range(0, len(df), int(len(df)/100)))
#stats = df.iloc[sample].apply(load_graph_get_stats, axis=1)
stats = df.apply(load_graph_get_stats, axis=1)

stats.to_csv('stats_every_nhood.csv', encoding='utf-8', index=False)
print(len(stats))
stats['time'].sum()
```
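The folder-name parsing above assumes a layout like `{state_fips}_{state}/{geoid}_{city}/{nhood}.graphml`. A small self-contained check of that parsing logic, using hypothetical folder names since the actual `G:/Geoff/...` data isn't available here:

```python
# Hypothetical example names, mirroring the split/replace logic used above
state_folder = '06_california'
city_folder = '0667000_san-francisco'
nhood_file = 'noe-valley.graphml'

state_fips = state_folder.split('_')[0]
state = state_folder.split('_')[1]
geoid = city_folder.split('_')[0]
city = city_folder.replace('{}_'.format(geoid), '')
nhood = nhood_file.replace('.graphml', '').replace('-', ' ')

print(state_fips, state, geoid, city, nhood)
# 06 california 0667000 san-francisco noe valley
```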
# [LEGALST-123] Lab 18: Regular Expressions

This lab will cover the basics of regular expressions: finding, extracting and manipulating pieces of text based on specific patterns within strings.

*Estimated Time: 45 minutes*

### Table of Contents

[The Data](#section data)<br>
[Overview](#section context)<br>
0 - [Matching with Regular Expressions](#section 0)<br>
1 - [Introduction to Essential RegEx](#section 1)<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1 - [Special Characters](#subsection 1)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2 - [Quantifiers](#subsection 2)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3 - [Sets](#subsection 3)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4 - [Special Sequences](#subsection 4)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5 - [Groups and Logical OR](#subsection 5)
2 - [Python RegEx Methods](#section 2)<br>
3 - [Valuation Extraction](#section 3)<br>

## The Data <a id='data'></a>

You will again be working with the Old Bailey data set to practice matching and manipulating pieces of the textual data.

## Overview <a id='context'></a>

Regular expression operations ("RegEx") are a very flexible version of the text search function you find in most text processing software. In those regular search functions, you press `ctrl+F` (or `command+F`) and type in the search phrase you are looking for, e.g. "Congress". If your software finds an exact match for your search phrase ("Congress"), it jumps to its position in the text and you can take it from there.

Thinking a bit more abstractly, "Congress" is nothing other than a very specific search. In it, we ask the search function to report the position where it finds a capital "C" followed by seven lowercase letters ("o", "n", "g", "r", "e", "s", "s"), all in a specific order. Depending on your text, it may have been sufficient to have your search function look for all words starting with the capital letter "C", or for those words starting with "C" and ending with "ess".

This kind of flexibility is exactly what RegEx provides. RegEx is more flexible than the customary search function, as it does not restrict you to spelling out the literal word, number or phrase you are looking for. Rather, in RegEx you can describe the necessary characteristics for a match. You enter these characteristics using rules and special characters that make RegEx what it is.

Regular expressions are useful in a variety of applications, and can be used in different programs and programming languages. We will start by learning the general components of regular expressions, using a simple online tool, Regex101. Then, at the end of the workshop, we'll learn how to use regular expressions to explore devaluation in the Old Bailey dataset - we will look at how often defendants had the amount they were charged with stealing reduced when they were sentenced, by matching valuations in the text such as 'value 8s 6p'.

__IT IS IMPORTANT to have an experimental mindset as you go through today's practice problems.__ Practice and curiosity are the keys to success! Each individual character expression may produce a simple pattern, but you will need to explore different combinations to match more and more complicated sets of strings. Feel free to go beyond what the questions ask and test different expressions as you work through this notebook.

__Dependencies__: Run the cell below. We will go over what this Python library does in the Python RegEx Methods section of this lab.

```
import re
```

---

## Introduction to Essential RegEx <a id='section 1'></a>

### 0. Matching with Regular Expressions <a id='subsection 0'></a>

Before we dive into the different character expressions and their meanings, let's explore what it looks like to match some basic expressions. Open up [Regex101](https://regex101.com/r/Una9U7/4), an online Python regular expression editor. This editor allows us to input any test string and practice using regular expressions while receiving verification and tips in real time. There should already be an excerpt from the Old Bailey set (edited, for the sake of practice problems) in the `Test String` box.

You can think of the `Regular Expression` field like the familiar `ctrl+F` search box. Try typing the following, one at a time, into the `Regular Expression` field:

~~~ {.input}
1. lowercase letter: d
2. uppercase letter: D
3. the word: lady
4. the word: Lady
5. the word: our
6. the word: Our
7. a single space
8. a single period
~~~

__Question 1:__ What do you notice?

__Your Answer:__

*Write your answer here:*

Note that:

1. RegEx is case sensitive: it matches _exactly_ what you tell it to match.
2. RegEx looks for the exact order of each character you input into the expression. In the entire text, it found 'our' in 'Hon`our`able' and 'F`our`score'. However, nowhere in the text was there the exact sequence of letters O-u-r starting with a capital 'O', so 'Our' doesn't match anything.
3. The space character ` ` highlights all the single spaces in the text.
4. The period character `.` matches all the characters in the text, not just the periods... why?

This last question takes us to what are called __special characters__.

---

### 1. Special Characters <a id='subsection 1'></a>

Strings are composed of characters, and we are writing patterns to match specific sequences of characters. Various characters have special meaning in regular expressions. When we use these characters in an expression, we aren't matching the identical character, we're using the character as a placeholder for some other character(s) or part(s) of a string.

~~~ {.input}
.    any single character except newline character
^    start of string
$    end of entire string
\n   new line
\r   carriage return
\t   tab
~~~

Note: if you want to actually match a character that happens to be a special character, you have to escape it with a backslash `\`.
__Question 2:__ Try typing the following special characters into the `Regular Expression` field on the same Regex101 site. What happens when you type:

1. `Samuel` vs. `^Samuel` vs. `Samuel$`?
2. `.` vs. `\.`?
3. `the` vs. `th.` vs. `th..`?

__Your Answer:__

*Write your answer here:*

1.
2.
3.

---

### 2. Quantifiers <a id='subsection 2'></a>

Some special characters refer to optional characters, to a specific number of characters, or to an open-ended number of characters matching the preceding pattern.

~~~ {.input}
*       0 or more of the preceding character/expression
+       1 or more of the preceding character/expression
?       0 or 1 of the preceding character/expression
{n}     n copies of the preceding character/expression
{n,m}   n to m copies of the preceding character/expression
~~~

__Question 3:__ For this question, click [here](https://regex101.com/r/ssAUXx/1) to open another Regex101 page. What do the expressions `of`, `of*`, `of+`, `of?`, `of{1}`, `of{1,2}` match? Remember that the quantifier only applies to the character *immediately* preceding it. For example, the `*` in `of*` applies only to the `f`, so the expression looks for a pattern starting with __exactly one__ `o` and __0 or more__ `f`'s.

__Your Answer:__

*Write your answer here:*

---

### 3. Sets <a id='subsection 3'></a>

A set by itself is merely a __collection__ of characters the computer may choose from to match a __single__ character in a pattern. We can define these sets of characters using `square brackets []`. Within a set of square brackets, you may list characters individually, e.g. `[aeiou]`, or in a range, e.g. `[A-Z]` (note that all regular expressions are case sensitive). You can also create a complement set by excluding certain characters, using `^` as the first character in the set. The set `[^A-Za-z]` will match any character except a letter. All other special characters lose their special meaning inside a set, so the set `[.?]` will look for a literal period or question mark.

The set will match only one character contained within that set, so to find sequences of multiple characters from the same set, use a quantifier like `+` or a specific number or number range `{n,m}`.

~~~ {.input}
[0-9]      any numeric character
[a-z]      any lowercase alphabetic character
[A-Z]      any uppercase alphabetic character
[aeiou]    any vowel (i.e. any character within the brackets)
[0-9a-z]   to combine sets, list them one after another
[^...]     exclude specific characters
~~~

__Question 4:__ Let's switch back to the excerpt from the Old Bailey data set (link [here](https://regex101.com/r/Una9U7/2) for convenience). Can you write a regular expression that matches __all consonants__ in the text string?

__Your Answer:__

```
# YOUR EXPRESSION HERE
```

---

### 4. Special Sequences <a id='subsection 4'></a>

If we wanted to define a set of all 26 letters of the alphabet, we would have to write an extremely long expression inside square brackets. Fortunately, there are several special characters that denote special sequences. These begin with a `\` followed by a letter. Note that the uppercase version is usually the complement of the lowercase version.

~~~ {.input}
\d   Any digit
\D   Any non-digit character
\w   Any alphanumeric character [0-9a-zA-Z_]
\W   Any non-alphanumeric character
\s   Any whitespace (space, tab, new line)
\S   Any non-whitespace character
\b   Matches the beginning or end of a word (does not consume a character)
\B   Matches only when the position is not the beginning or end of a word (does not consume a character)
~~~

__Question 5:__ Write a regular expression that matches all numbers (without punctuation marks or spaces) in the Old Bailey excerpt. Make sure you are matching whole numbers (i.e. `250`) as opposed to individual digits within the number (i.e. `2`, `5`, `0`).

__Your Answer:__

```
# YOUR EXPRESSION HERE
```

__Question 6:__ Write a regular expression that matches all patterns with __at least__ 2 and __at most__ 3 digit and/or whitespace characters in the Old Bailey excerpt.

__Your Answer:__

```
# YOUR EXPRESSION HERE
```

---

### 5. Groups and Logical OR <a id='subsection 5'></a>

Parentheses are used to designate groups of characters, to aid in logical conditions, and to allow the contents of certain groups to be retrieved separately. The pipe character `|` serves as a logical OR operator, matching the expression before or after the pipe. Group parentheses can be used to indicate which elements of the expression are being operated on by the `|`.

~~~ {.input}
|             Logical OR operator
(...)         Matches whatever regular expression is inside the parentheses, and notes the start and end of a group
(this|that)   Matches the expression "this" or the expression "that"
~~~

__Question 7:__ Write an expression that matches groups of `Samuel` or `Prisoner` in the Old Bailey excerpt.

__Your Answer:__

```
# YOUR EXPRESSION HERE
```

---

## Python RegEx Methods <a id='section 2'></a>

So how do we actually use RegEx for analysis in Python? Python has a RegEx library called `re` that contains various methods with which we can manipulate text using RegEx. The following are some useful methods we may use for text analysis:

- ``re.findall(pattern, string)``: Checks whether your pattern appears somewhere inside your text (including the start). If so, it returns all phrases that matched your pattern, but not their positions.
- ``re.sub(pattern, repl, string)``: Returns the string obtained by replacing the leftmost non-overlapping occurrences of pattern in string with the replacement repl.
- ``re.split(pattern, string)``: Splits string by the occurrences of pattern. If capturing parentheses are used in pattern, then the text of all groups in the pattern is also returned as part of the resulting list.
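Here is a quick demonstration of these three methods on a toy string (not one of the lab's questions — just to show the call signatures):

```python
import re

text = "value 30s. and val. 6 d. were recorded"

# findall: return every substring matching the pattern
print(re.findall(r"\d+", text))       # ['30', '6']

# sub: replace every match with a replacement string
print(re.sub(r"\d+", "#", text))      # 'value #s. and val. # d. were recorded'

# split: break the string wherever the pattern matches
print(re.split(r"\s+", "a  b   c"))   # ['a', 'b', 'c']
```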
We will only be using the `.findall()` method for the purposes of today's lab, so don't worry if the functionality of each method isn't clear right now. If you are curious about all the module content within the `re` library, take a look at the [documentation for `re`](https://docs.python.org/2/library/re.html) on your own time! --- ## Extracting Valuation from Old Bailey <a id='section 3'></a> Let's apply our new RegEx knowledge to extract all valuation information from the text! The next cell simply assigns a long string containing three separate theft cases to a variable called `old_bailey`. Within the text are valuations which indicate the worth of the items stolen. We will use this string, what we can observe about the format of valuation notes in the text, and what we just learned about regular expressions to __find all instances of valuations in the text__. Valuations will look something like: `val. 4 s. 6 d.` *Note:* British Currency before 1971 was divided into pounds (`l`), shillings (`s`), and pennies (`d`) - that's what the letters after the values represent. We want to make sure to keep the values and units together when extracting valuations. __STEP 1__: We will first write expression(s) that will match the valuations. Take a moment to look for a pattern you notice across the valuations: ``` old_bailey = """"Samuel Davis, of the Parish of St. James Westminster, was indicted for feloniously Stealing 58 Diamonds set in Silver gilt, value 250 l. the Goods of the Honourable Catherine Lady Herbert, on the 28th of July last. 
It appeared that the Jewels were put up in a Closet, which was lockt, and the Prisoner being a Coachman in the House, took his opportunity to take them; the Lady, when missing them, offered a Reward of Fourscore Pounds to any that could give any notice of it; upon enquiry, the Lady heard that a Diamond was sold on London-Bridge, and they described the Prisoner who sold it, and pursuing him, found the Prisoner at East-Ham, with all his Goods bundled up ready to be gone, and in his Trunk found all the Diamonds but one, which was found upon him in the Role of his Stocking, when searcht before the Justice. He denied the Fact, saying, He found them upon a great Heap of Rubbish, but could not prove it; and that being but a weak Excuse, the Jury found him guilty. John Emory, was indicted for stealing eleven crown pieces, twenty four half crowns, one Spanish piece, val. 4 s. 6 d. one silk purse, and 4 s. 6 d. in silver, the goods of Ann Kempster, in the dwelling house of Walter Jones. December 17. Acquitted. He was a second time indicted for stealing one pair of stockings, val. 6 d. the goods of John Hilliard . GEORGE MORGAN was indicted for that he, about the hour of ten in the night of the 10th of December , being in the dwelling-house of George Brookes , feloniously did steal two hundred and three copper halfpence, five china bowls, value 30s. a tea-caddie, value 5s. a pound of green tea, value 8s. four glass rummers, value 2s. and a wooden drawer, called a till, value 6d. the property of the said George, and that he having committed the said felony about the hour of twelve at night, burglariously did break the dwelling-house of the said George to get out of the same.""" ``` You might notice that there are multiple ways in which valuations are noted. It can take the form: ~~~ {.input} value 30s. val. 6 d. 4 s. 6 d. ~~~ ...and so on. 
Fortunately, we only care about the values and the associated units, so the omission or abbreviation of the word `value` can be ignored - we only care about:

~~~ {.input}
30s.
6 d.
4 s. 6 d.
~~~

Unfortunately, we can see that the format is still not consistent. The first one has no space between the number and unit, but the second and third do. The first and second have a single number and unit, but the third has two of each.

How might you write an expression that would account for the variations in how valuations are written? Can you write a single regular expression that would match all the different forms of valuations exactly? Or do we need to have a few different expressions to account for these differences, look for each pattern individually, and combine them somehow in the end?

Real data is messy. When manipulating real data, you will inevitably encounter inconsistencies and you will need to ask yourself questions such as the above. You will have to figure out how to clean and/or work with the mess.

With that in mind, click [here](https://regex101.com/r/2lal6d/1) to open up a new Regex101 with `old_bailey` already in the Test String. We will compose a regular expression, in three parts, that will account for all forms of valuations in the string above.

__PART 1: Write an expression__ that matches __all__ valuations of the form `30s.` AND `6 d.`, but does not match _anything else_ (e.g. your expression should not match any dates). Try not to look at the hints on your first attempt! Save this expression __as a string__ in `exp1`.

_Hint1:_ Notice the structure of valuations. It begins with a number, then an _optional_ space, then a single letter followed by a period.

_Hint2:_ What _quantifier_ allows you to choose _0 or more of the previous character_?

_Hint3:_ If you are still stuck, look back to the practice problems and see that we've explored/written expressions to match all components of this expression! It's just a matter of putting it together.
```
#Your Expression Here
exp1 = ...
```

__PART 2:__ For the third case we found above, there are multiple values and units in the valuation. What can you add to what you came up with above so that we have another expression that matches this specific case? Save this expression as a string in `exp2`.

```
#Your Expression Here
exp2 = ...
```

__PART 3:__ Now that you have expressions that account for the different valuation formats, combine them into one long expression that looks for (_hint_) one expression __OR__ the other. Be careful about the order in which you ask the computer to look for patterns (i.e. should it look for the shorter expression first, or the longer expression first?). Save this final expression as a string in `final`.

```
#Your Expression Here
final = ...
```

__STEP 2:__ Now that you have the right regular expression that would match our valuations, how would you use it to _extract_ all instances of valuations from the text saved in `old_bailey`? Remember, you need to input your regular expression as a __string__ into the method.

```
#Your Expression Here
```

__Congratulations!!__ You've successfully extracted all valuations from our sample. When you are extracting valuations from a larger text for your devaluation exploration, keep in mind all the possible variations in valuation that may not have been covered by our example above. You now have all the skills necessary to tweak the expression to account for such minor variations -- Good Luck!

---

## Bibliography

- The Python Standard Library. (2018, February). Regular Expression Operations. https://docs.python.org/2/library/re.html
- Bloy, Marjie. (2006, June). British Currency Before 1971. http://www.victorianweb.org/economics/currency.html
- The Proceedings of the Old Bailey. https://www.oldbaileyonline.org/

---

Notebook Developed by: Keiko Kamei

Data Science Modules: http://data.berkeley.edu/education/modules
# Advanced visualization tutorial

`mendeleev` supports two interactive plotting backends:

1. [Plotly](https://plotly.com/) (default)
2. [Bokeh](http://bokeh.pydata.org/en/latest/)

## Note

Depending on whether your environment is the classic Jupyter Notebook or JupyterLab, you might have to do additional configuration steps, so if you're not getting the expected results, see the Plotly or Bokeh documentation.

## Accessing lower level plotting functions

There are two plotting functions, one for each of the backends:

- `periodic_table_plotly` in `mendeleev.vis.plotly`
- `periodic_table_bokeh` in `mendeleev.vis.bokeh`

that you can use to customize the visualizations even further. Both functions take the same keyword arguments as the `periodic_table` function, but they also require a `DataFrame` with periodic table data. You can get the default data using the `create_vis_dataframe` function.

Let's start with an example using the `plotly` backend.

```
from mendeleev.vis import create_vis_dataframe, periodic_table_plotly

elements = create_vis_dataframe()
periodic_table_plotly(elements)
```

## Custom coloring scheme

You can also add a custom color map by assigning a color to each of the elements. This can be done by creating a custom column in the `DataFrame` and then using the `colorby` argument to specify which column contains the colors. Let's try to color the elements according to the block they belong to.

```
import seaborn as sns
from matplotlib import colors

blockcmap = {b: colors.rgb2hex(c) for b, c in zip(['s', 'p', 'd', 'f'], sns.color_palette('deep'))}

elements['block_color'] = elements['block'].map(blockcmap)
periodic_table_plotly(elements, colorby='block_color')
```

## Custom properties

You can also create your own properties to be displayed using [pandas'](http://pandas.pydata.org/) awesome methods for manipulating data. For example let's consider the difference of electronegativity between every element and the Oxygen atom.
To calculate the values we will use the Allen scale this time and call our new value `ENX-ENO`.

```
elements.loc[:, 'ENX-ENO'] = elements.loc[elements['symbol'] == 'O', 'en_allen'].values - elements.loc[:, 'en_allen']

periodic_table_plotly(elements, attribute='ENX-ENO', colorby='attribute', cmap='viridis', title='Allen Electronegativity wrt. Oxygen')
```

As a second example let's consider a difference between the `covalent_radius_slater` and `covalent_radius_pyykko` values.

```
elements['cov_rad_diff'] = elements['atomic_radius'] - elements['covalent_radius_pyykko']

periodic_table_plotly(elements, attribute='cov_rad_diff', colorby='attribute', title='Covalent Radii Difference', cmap='viridis')
```

## Bokeh backend

We can also use the `Bokeh` backend in the same way, but we need to take a few extra steps to render the result in a notebook.

```
from bokeh.plotting import show, output_notebook
from mendeleev.vis import periodic_table_bokeh
```

First we need to enable notebook output

```
output_notebook()

fig = periodic_table_bokeh(elements)
show(fig)

fig = periodic_table_bokeh(elements, attribute="atomic_radius", colorby="attribute")
show(fig)
```
<a href="https://colab.research.google.com/github/magenta/ddsp/blob/main/ddsp/colab/demos/timbre_transfer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ##### Copyright 2021 Google LLC. Licensed under the Apache License, Version 2.0 (the "License"); ``` # Copyright 2021 Google LLC. All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== ``` # DDSP Timbre Transfer Demo This notebook is a demo of timbre transfer using DDSP (Differentiable Digital Signal Processing). The model here is trained to generate audio conditioned on a time series of fundamental frequency and loudness. * [DDSP ICLR paper](https://openreview.net/forum?id=B1x1ma4tDr) * [Audio Examples](http://goo.gl/magenta/ddsp-examples) This notebook extracts these features from input audio (either uploaded files, or recorded from the microphone) and resynthesizes with the model. <img src="https://magenta.tensorflow.org/assets/ddsp/ddsp_cat_jamming.png" alt="DDSP Tone Transfer" width="700"> By default, the notebook will download pre-trained models. You can train a model on your own sounds by using the [Train Autoencoder Colab](https://github.com/magenta/ddsp/blob/main/ddsp/colab/demos/train_autoencoder.ipynb). Have fun! And please feel free to hack this notebook to make your own creative interactions. 
### Instructions for running:

* Make sure to use a GPU runtime, click: __Runtime >> Change Runtime Type >> GPU__
* Press ▶️ on the left of each of the cells
* View the code: Double-click any of the cells
* Hide the code: Double click the right side of the cell

```
#@title #Install and Import

#@markdown Install ddsp, define some helper functions, and download the model. This transfers a lot of data and _should take a minute or two_.

print('Installing from pip package...')
!pip install -qU ddsp==1.6.5

# Ignore a bunch of deprecation warnings
import warnings
warnings.filterwarnings("ignore")

import copy
import os
import time

import crepe
import ddsp
import ddsp.training
from ddsp.colab.colab_utils import (
    auto_tune, get_tuning_factor, download, play, record, specplot, upload,
    DEFAULT_SAMPLE_RATE)
from ddsp.training.postprocessing import (
    detect_notes, fit_quantile_transform
)
import gin
from google.colab import files
import librosa
import matplotlib.pyplot as plt
import numpy as np
import pickle
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds

# Helper Functions
sample_rate = DEFAULT_SAMPLE_RATE  # 16000

print('Done!')

#@title Record or Upload Audio
#@markdown * Either record audio from microphone or upload audio from file (.mp3 or .wav)
#@markdown * Audio should be monophonic (single instrument / voice)
#@markdown * Extracts fundamental frequency (f0) and loudness features.

record_or_upload = "Record"  #@param ["Record", "Upload (.mp3 or .wav)"]

record_seconds = 5 #@param {type:"number", min:1, max:10, step:1}

if record_or_upload == "Record":
  audio = record(seconds=record_seconds)
else:
  # Load audio sample here (.mp3 or .wav file)
  # Just use the first file.
  filenames, audios = upload()
  audio = audios[0]

if len(audio.shape) == 1:
  audio = audio[np.newaxis, :]
print('\nExtracting audio features...')

# Plot.
specplot(audio)
play(audio)

# Setup the session.
ddsp.spectral_ops.reset_crepe()

# Compute features.
start_time = time.time() audio_features = ddsp.training.metrics.compute_audio_features(audio) audio_features['loudness_db'] = audio_features['loudness_db'].astype(np.float32) audio_features_mod = None print('Audio features took %.1f seconds' % (time.time() - start_time)) TRIM = -15 # Plot Features. fig, ax = plt.subplots(nrows=3, ncols=1, sharex=True, figsize=(6, 8)) ax[0].plot(audio_features['loudness_db'][:TRIM]) ax[0].set_ylabel('loudness_db') ax[1].plot(librosa.hz_to_midi(audio_features['f0_hz'][:TRIM])) ax[1].set_ylabel('f0 [midi]') ax[2].plot(audio_features['f0_confidence'][:TRIM]) ax[2].set_ylabel('f0 confidence') _ = ax[2].set_xlabel('Time step [frame]') #@title Load a model #@markdown Run for ever new audio input model = 'Violin' #@param ['Violin', 'Flute', 'Flute2', 'Trumpet', 'Tenor_Saxophone', 'Upload your own (checkpoint folder as .zip)'] MODEL = model def find_model_dir(dir_name): # Iterate through directories until model directory is found for root, dirs, filenames in os.walk(dir_name): for filename in filenames: if filename.endswith(".gin") and not filename.startswith("."): model_dir = root break return model_dir if model in ('Violin', 'Flute', 'Flute2', 'Trumpet', 'Tenor_Saxophone'): # Pretrained models. PRETRAINED_DIR = '/content/pretrained' # Copy over from gs:// for faster loading. !rm -r $PRETRAINED_DIR &> /dev/null !mkdir $PRETRAINED_DIR &> /dev/null GCS_CKPT_DIR = 'gs://ddsp/models/timbre_transfer_colab/2021-07-08' model_dir = os.path.join(GCS_CKPT_DIR, 'solo_%s_ckpt' % model.lower()) !gsutil cp $model_dir/* $PRETRAINED_DIR &> /dev/null model_dir = PRETRAINED_DIR gin_file = os.path.join(model_dir, 'operative_config-0.gin') else: # User models. UPLOAD_DIR = '/content/uploaded' !mkdir $UPLOAD_DIR uploaded_files = files.upload() for fnames in uploaded_files.keys(): print("Unzipping... 
{}".format(fnames)) !unzip -o "/content/$fnames" -d $UPLOAD_DIR &> /dev/null model_dir = find_model_dir(UPLOAD_DIR) gin_file = os.path.join(model_dir, 'operative_config-0.gin') # Load the dataset statistics. DATASET_STATS = None dataset_stats_file = os.path.join(model_dir, 'dataset_statistics.pkl') print(f'Loading dataset statistics from {dataset_stats_file}') try: if tf.io.gfile.exists(dataset_stats_file): with tf.io.gfile.GFile(dataset_stats_file, 'rb') as f: DATASET_STATS = pickle.load(f) except Exception as err: print('Loading dataset statistics from pickle failed: {}.'.format(err)) # Parse gin config, with gin.unlock_config(): gin.parse_config_file(gin_file, skip_unknown=True) # Assumes only one checkpoint in the folder, 'ckpt-[iter]`. ckpt_files = [f for f in tf.io.gfile.listdir(model_dir) if 'ckpt' in f] ckpt_name = ckpt_files[0].split('.')[0] ckpt = os.path.join(model_dir, ckpt_name) # Ensure dimensions and sampling rates are equal time_steps_train = gin.query_parameter('F0LoudnessPreprocessor.time_steps') n_samples_train = gin.query_parameter('Harmonic.n_samples') hop_size = int(n_samples_train / time_steps_train) time_steps = int(audio.shape[1] / hop_size) n_samples = time_steps * hop_size # print("===Trained model===") # print("Time Steps", time_steps_train) # print("Samples", n_samples_train) # print("Hop Size", hop_size) # print("\n===Resynthesis===") # print("Time Steps", time_steps) # print("Samples", n_samples) # print('') gin_params = [ 'Harmonic.n_samples = {}'.format(n_samples), 'FilteredNoise.n_samples = {}'.format(n_samples), 'F0LoudnessPreprocessor.time_steps = {}'.format(time_steps), 'oscillator_bank.use_angular_cumsum = True', # Avoids cumsum accumulation errors. 
] with gin.unlock_config(): gin.parse_config(gin_params) # Trim all input vectors to correct lengths for key in ['f0_hz', 'f0_confidence', 'loudness_db']: audio_features[key] = audio_features[key][:time_steps] audio_features['audio'] = audio_features['audio'][:, :n_samples] # Set up the model just to predict audio given new conditioning model = ddsp.training.models.Autoencoder() model.restore(ckpt) # Build model by running a batch through it. start_time = time.time() _ = model(audio_features, training=False) print('Restoring model took %.1f seconds' % (time.time() - start_time)) #@title Modify conditioning #@markdown These models were not explicitly trained to perform timbre transfer, so they may sound unnatural if the incoming loudness and frequencies are very different then the training data (which will always be somewhat true). #@markdown ## Note Detection #@markdown You can leave this at 1.0 for most cases threshold = 1 #@param {type:"slider", min: 0.0, max:2.0, step:0.01} #@markdown ## Automatic ADJUST = True #@param{type:"boolean"} #@markdown Quiet parts without notes detected (dB) quiet = 20 #@param {type:"slider", min: 0, max:60, step:1} #@markdown Force pitch to nearest note (amount) autotune = 0 #@param {type:"slider", min: 0.0, max:1.0, step:0.1} #@markdown ## Manual #@markdown Shift the pitch (octaves) pitch_shift = 0 #@param {type:"slider", min:-2, max:2, step:1} #@markdown Adjust the overall loudness (dB) loudness_shift = 0 #@param {type:"slider", min:-20, max:20, step:1} audio_features_mod = {k: v.copy() for k, v in audio_features.items()} ## Helper functions. 
def shift_ld(audio_features, ld_shift=0.0):
  """Shift loudness by a number of decibels."""
  audio_features['loudness_db'] += ld_shift
  return audio_features

def shift_f0(audio_features, pitch_shift=0.0):
  """Shift f0 by a number of octaves."""
  audio_features['f0_hz'] *= 2.0 ** (pitch_shift)
  audio_features['f0_hz'] = np.clip(audio_features['f0_hz'],
                                    0.0,
                                    librosa.midi_to_hz(110.0))
  return audio_features

mask_on = None

if ADJUST and DATASET_STATS is not None:
  # Detect sections that are "on".
  mask_on, note_on_value = detect_notes(audio_features['loudness_db'],
                                        audio_features['f0_confidence'],
                                        threshold)

  if np.any(mask_on):
    # Shift the pitch register.
    target_mean_pitch = DATASET_STATS['mean_pitch']
    pitch = ddsp.core.hz_to_midi(audio_features['f0_hz'])
    mean_pitch = np.mean(pitch[mask_on])
    p_diff = target_mean_pitch - mean_pitch
    p_diff_octave = p_diff / 12.0
    round_fn = np.floor if p_diff_octave > 1.5 else np.ceil
    p_diff_octave = round_fn(p_diff_octave)
    audio_features_mod = shift_f0(audio_features_mod, p_diff_octave)

    # Quantile shift the note_on parts.
    _, loudness_norm = fit_quantile_transform(
        audio_features['loudness_db'],
        mask_on,
        inv_quantile=DATASET_STATS['quantile_transform'])

    # Turn down the note_off parts.
    mask_off = np.logical_not(mask_on)
    loudness_norm[mask_off] -= quiet * (1.0 - note_on_value[mask_off][:, np.newaxis])
    loudness_norm = np.reshape(loudness_norm, audio_features['loudness_db'].shape)

    audio_features_mod['loudness_db'] = loudness_norm

    # Auto-tune.
    if autotune:
      f0_midi = np.array(ddsp.core.hz_to_midi(audio_features_mod['f0_hz']))
      tuning_factor = get_tuning_factor(f0_midi, audio_features_mod['f0_confidence'], mask_on)
      f0_midi_at = auto_tune(f0_midi, tuning_factor, mask_on, amount=autotune)
      audio_features_mod['f0_hz'] = ddsp.core.midi_to_hz(f0_midi_at)

  else:
    print('\nSkipping auto-adjust (no notes detected or ADJUST box empty).')

else:
  print('\nSkipping auto-adjust (box not checked or no dataset statistics found).')

# Manual Shifts.
audio_features_mod = shift_ld(audio_features_mod, loudness_shift) audio_features_mod = shift_f0(audio_features_mod, pitch_shift) # Plot Features. has_mask = int(mask_on is not None) n_plots = 3 if has_mask else 2 fig, axes = plt.subplots(nrows=n_plots, ncols=1, sharex=True, figsize=(2*n_plots, 8)) if has_mask: ax = axes[0] ax.plot(np.ones_like(mask_on[:TRIM]) * threshold, 'k:') ax.plot(note_on_value[:TRIM]) ax.plot(mask_on[:TRIM]) ax.set_ylabel('Note-on Mask') ax.set_xlabel('Time step [frame]') ax.legend(['Threshold', 'Likelihood','Mask']) ax = axes[0 + has_mask] ax.plot(audio_features['loudness_db'][:TRIM]) ax.plot(audio_features_mod['loudness_db'][:TRIM]) ax.set_ylabel('loudness_db') ax.legend(['Original','Adjusted']) ax = axes[1 + has_mask] ax.plot(librosa.hz_to_midi(audio_features['f0_hz'][:TRIM])) ax.plot(librosa.hz_to_midi(audio_features_mod['f0_hz'][:TRIM])) ax.set_ylabel('f0 [midi]') _ = ax.legend(['Original','Adjusted']) #@title #Resynthesize Audio af = audio_features if audio_features_mod is None else audio_features_mod # Run a batch of predictions. start_time = time.time() outputs = model(af, training=False) audio_gen = model.get_audio_from_outputs(outputs) print('Prediction took %.1f seconds' % (time.time() - start_time)) # Plot print('Original') play(audio) print('Resynthesis') play(audio_gen) specplot(audio) plt.title("Original") specplot(audio_gen) _ = plt.title("Resynthesis") ```
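The octave-based pitch shift used by `shift_f0` above can be sketched in isolation. This is a minimal standalone illustration (the function name here is ours, not part of the ddsp library), without the clipping applied in the cell above:

```python
import numpy as np

def octave_shift(f0_hz, octaves):
    """Shift a fundamental-frequency track by a (possibly negative) number of octaves.

    Each octave up doubles the frequency; each octave down halves it.
    """
    return np.asarray(f0_hz, dtype=float) * 2.0 ** octaves

print(octave_shift([220.0, 440.0], 1))   # → [440. 880.]
print(octave_shift([440.0], -1))         # → [220.]
```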
# Project 3: Implement SLAM

---

## Project Overview

In this project, you'll implement SLAM for a robot that moves and senses in a 2-dimensional grid world!

SLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real-time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem.

Using what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`.
> `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the world

You can implement helper functions as you see fit, but your function must return `mu`. The vector, `mu`, should have (x, y) coordinates interlaced, for example, if there were 2 poses and 2 landmarks, `mu` will look like the following, where `P` is the robot position and `L` the landmark position:

```
mu = matrix([[Px0],
             [Py0],
             [Px1],
             [Py1],
             [Lx0],
             [Ly0],
             [Lx1],
             [Ly1]])
```

You can see that `mu` holds the poses first `(x0, y0), (x1, y1), ...,` then the landmark locations at the end of the matrix; we consider an `nx1` matrix to be a vector.

## Generating an environment

In a real SLAM problem, you may be given a map that contains information about landmark locations, and in this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some number of time steps. The `make_data` function relies on a correct implementation of robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file.
The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes. --- ## Create the world Use the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds! `data` holds the sensors measurements and motion of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`. #### Helper functions You will be working with the `robot` class that may look familiar from the first notebook, In fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. It should look very similar to the robot move/sense cycle you've seen in the first notebook. ``` import numpy as np from helpers import make_data # your implementation of slam should work with the following inputs # feel free to change these input values and see how it responds! # world parameters num_landmarks = 5 # number of landmarks N = 20 # time steps world_size = 100.0 # size of world (square) # robot parameters measurement_range = 50.0 # range at which we can sense landmarks motion_noise = 2.0 # noise in robot motion measurement_noise = 2.0 # noise in the measurements distance = 20.0 # distance by which robot (intends to) move each iteratation # make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks data = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance) ``` ### A note on `make_data` The function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for: 1. Instantiating a robot (using the robot class) 2. 
Creating a grid world with landmarks in it

**This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.**

The `data` this returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` that is collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track a robot over time and determine the locations of the landmarks using SLAM. We only print out the true landmark locations for comparison, later.

In `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step:

```
measurement = data[i][0]
motion = data[i][1]
```

```
# print out some stats about the data
time_step = 0

print('Example measurements: \n', data[time_step][0])
print('\n')
print('Example motion: \n', data[time_step][1])
```

Try changing the value of `time_step`; you should see that the list of measurements varies based on what the robot sees in the world after it moves. As you know from the first notebook, the robot can only sense so far and with a certain amount of accuracy in the measure of distance between its location and the location of landmarks. The motion of the robot is always a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse this data in your implementation of slam.

## Initialize Constraints

One of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. In the second notebook, you saw an example of how omega and xi could hold all the values that define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector.
<img src='images/motion_constraint.png' width=50% height=50% />

In *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices.

<img src='images/constraints2D.png' width=50% height=50% />

You may also choose to create two of each omega and xi (one for x and one for y positions).

### TODO: Write a function that initializes omega and xi

Complete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point). The inputs `N` time steps, `num_landmarks`, and `world_size` should give you all the information you need to construct initial constraints of the correct size and starting values.
*Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!*

```
def initialize_constraints(N, num_landmarks, world_size):
    ''' This function takes in a number of time steps N, number of landmarks, and a world_size,
        and returns initialized constraint matrices, omega and xi.'''

    ## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable
    rows = 2 * N + 2 * num_landmarks
    cols = 2 * N + 2 * num_landmarks

    ## TODO: Define the constraint matrix, Omega, with two initial "strength" values
    ## for the initial x, y location of our robot
    omega = np.zeros((rows, cols))
    omega[0][0] = 1
    omega[1][1] = 1

    ## TODO: Define the constraint *vector*, xi
    ## you can assume that the robot starts out in the middle of the world with 100% confidence
    xi = np.zeros((rows, 1))
    xi[0][0] = world_size / 2
    xi[1][0] = world_size / 2

    return omega, xi
```

### Test as you go

It's good practice to test out your code, as you go. Since `slam` relies on creating and updating constraint matrices, `omega` and `xi`, to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters.

Below, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. We are using the [seaborn](https://seaborn.pydata.org/) library for visualization.

**Please change the test values of N, landmarks, and world_size and see the results**. Be careful not to use these values as input into your final `slam` function.

This code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code, accordingly.
The constraints should vary in size with the number of time steps and landmarks as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`. ``` # import data viz resources import matplotlib.pyplot as plt from pandas import DataFrame import seaborn as sns %matplotlib inline # define a small N and world_size (small for ease of visualization) N_test = 5 num_landmarks_test = 2 small_world = 10 # initialize the constraints initial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world) # define figure size plt.rcParams["figure.figsize"] = (10,7) # display omega sns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5); # define figure size plt.rcParams["figure.figsize"] = (1,7) # display xi sns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5); ``` --- ## SLAM inputs In addition to `data`, your slam function will also take in: * N - The number of time steps that a robot will be moving and sensing * num_landmarks - The number of landmarks in the world * world_size - The size (w/h) of your world * motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise` * measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise` #### A note on noise Recall that `omega` holds the relative "strengths" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* `1.0/noise` where `noise` is measurement or 
motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process, only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by their respective `noise`.

### TODO: Implement Graph SLAM

Follow the TODO's below to help you complete this slam implementation (these TODO's are in the recommended order), then test out your implementation!

#### Updating with motion and measurements

With a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\mu = \Omega^{-1}\xi$

**You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!**

```
## TODO: Complete the code to implement SLAM

## slam takes in 6 arguments and returns mu,
## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations
def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):

    ## TODO: Use your initialization to create constraint matrices, omega and xi
    omega, xi = initialize_constraints(N, num_landmarks, world_size)

    ## TODO: Iterate through each time step in the data
    ## get all the motion and measurement data as you iterate
    for i in range(len(data)):
        measurements = data[i][0]
        motion = data[i][1]

        ## TODO: update the constraint matrix/vector to account for all *measurements*
        ## this should be a series of additions that take into account the measurement noise
        for measure in measurements:
            id_landmark = measure[0]
            x = measure[1]
            y = measure[2]

            # Update x value
            omega[2 * i, 2 * i] += 1 / measurement_noise
            omega[2 * i, 2 * N + 2 * id_landmark] += -1 / measurement_noise
            omega[2 * N + 2 * id_landmark, 2 * i] += -1 / measurement_noise
            omega[2 * N + 2 * id_landmark, 2 * N + 2 * id_landmark] += 1 / measurement_noise
            xi[2 * i, 0] += -x / measurement_noise
            xi[2 * N + 2 * id_landmark, 0] += x / measurement_noise

            # Update y value
            omega[2 * i + 1, 2 * i + 1] += 1 / measurement_noise
            omega[2 * i + 1, 2 * N + 2 * id_landmark + 1] += -1 / measurement_noise
            omega[2 * N + 2 * id_landmark + 1, 2 * i + 1] += -1 / measurement_noise
            omega[2 * N + 2 * id_landmark + 1, 2 * N + 2 * id_landmark + 1] += 1 / measurement_noise
            xi[2 * i + 1, 0] += -y / measurement_noise
            xi[2 * N + 2 * id_landmark + 1, 0] += y / measurement_noise

        ## TODO: update the constraint matrix/vector to account for all *motion* and motion noise
        dx = motion[0]
        dy = motion[1]

        # Update dx value
        omega[2 * i, 2 * i] += 1 / motion_noise
        omega[2 * i, 2 * i + 2] += -1 / motion_noise
        omega[2 * i + 2, 2 * i] += -1 / motion_noise
        omega[2 * i + 2, 2 * i + 2] += 1 / motion_noise
        xi[2 * i, 0] += -dx / motion_noise
        xi[2 * i + 2, 0] += dx / motion_noise

        # Update dy value
        omega[2 * i + 1, 2 * i + 1] += 1 / motion_noise
        omega[2 * i + 1, 2 * i + 3] += -1 / motion_noise
        omega[2 * i + 3, 2 * i + 1] += -1 / motion_noise
        omega[2 * i + 3, 2 * i + 3] += 1 / motion_noise
        xi[2 * i + 1, 0] += -dy / motion_noise
        xi[2 * i + 3, 0] += dy / motion_noise

    ## TODO: After iterating through all the data
    ## Compute the best estimate of poses and landmark positions
    ## using the formula, omega_inverse * Xi
    omega_inverse = np.linalg.inv(omega)
    mu = np.dot(omega_inverse, xi)

    return mu # return `mu`
```

## Helper functions

To check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated pose and landmark locations that your function has produced.
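Before using those helpers, it can help to see the constraint-update machinery in isolation. Here is a tiny 1-D example with numbers of my own (not from the project data): a robot starts at x = 5, moves +3, and from its second pose senses a landmark 2 units to its right, all with noise 1. The same add/subtract pattern used in `slam` recovers all three positions exactly:

```python
import numpy as np

# variables: [x0, x1, L]; all noise weights are 1 for simplicity
omega = np.zeros((3, 3))
xi = np.zeros((3, 1))

# initial pose constraint: x0 = 5
omega[0, 0] += 1
xi[0, 0] += 5

# motion constraint: x1 - x0 = 3
omega[0, 0] += 1; omega[0, 1] += -1
omega[1, 0] += -1; omega[1, 1] += 1
xi[0, 0] += -3
xi[1, 0] += 3

# measurement constraint from pose x1: L - x1 = 2
omega[1, 1] += 1; omega[1, 2] += -1
omega[2, 1] += -1; omega[2, 2] += 1
xi[1, 0] += -2
xi[2, 0] += 2

# mu = omega^-1 * xi
mu = np.linalg.inv(omega) @ xi
print(mu.ravel())   # [ 5.  8. 10.] -> x0=5, x1=8, landmark=10
```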
First, given a result `mu` and the number of time steps, `N`, we define a function that extracts the poses and landmark locations and returns those as their own, separate lists. Then, we define a function that nicely prints out these lists; both of these we will call in the next step.

```
# a helper function that creates a list of poses and of landmarks for ease of printing
# this only works for the suggested constraint architecture of interlaced x,y poses
def get_poses_landmarks(mu, N):
    # create a list of poses
    poses = []
    for i in range(N):
        poses.append((mu[2*i].item(), mu[2*i+1].item()))

    # create a list of landmarks
    landmarks = []
    for i in range(num_landmarks):
        landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item()))

    # return completed lists
    return poses, landmarks


def print_all(poses, landmarks):
    print('\n')
    print('Estimated Poses:')
    for i in range(len(poses)):
        print('['+', '.join('%.3f'%p for p in poses[i])+']')
    print('\n')
    print('Estimated Landmarks:')
    for i in range(len(landmarks)):
        print('['+', '.join('%.3f'%l for l in landmarks[i])+']')
```

## Run SLAM

Once you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks!

### What to Expect

The `data` that is generated is random, but you did specify the number, `N`, of time steps that the robot was expected to move and the `num_landmarks` in the world (which your implementation of `slam` should see and estimate a position for). Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`.

With these values in mind, you should expect to see a result that displays two lists:

1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a world that is 100.0 in square size.
2.
**Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length.

#### Landmark Locations

If you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement).

```
# call your implementation of slam, passing in the necessary parameters
mu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)

# print out the resulting landmarks and poses
if(mu is not None):
    # get the lists of poses and landmarks
    # and print them out
    poses, landmarks = get_poses_landmarks(mu, N)
    print_all(poses, landmarks)
```

## Visualize the constructed world

Finally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the position of landmarks, created from only motion and measurement data!

**Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.**

```
# import the helper function
from helpers import display_world

# Display the final world!

# define figure size
plt.rcParams["figure.figsize"] = (20,20)

# check if poses has been created
if 'poses' in locals():
    # print out the last pose
    print('Last pose: ', poses[-1])
    # display the last position of the robot *and* the landmark positions
    display_world(int(world_size), poses[-1], landmarks)
```

### Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different?

You can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`.
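To quantify the gap the question asks about, a small helper can compute the straight-line distance between two poses (the function and the example numbers below are mine; plug in `poses[-1]` and the true final pose printed by `make_data`):

```python
import math

def pose_distance(estimated, true):
    """Euclidean distance between two (x, y) poses."""
    return math.hypot(estimated[0] - true[0], estimated[1] - true[1])

# made-up illustrative numbers; substitute your own estimated
# and true final poses from earlier in the notebook
print(pose_distance((37.164, 22.283), (36.0, 21.0)))
```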
Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower/higher noise parameters. **Answer**: (Write your answer here.) ## Testing To confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you, in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases, total); your output should be **close-to or exactly** identical to the given results. If there are minor discrepancies it could be a matter of floating point accuracy or in the calculation of the inverse matrix. ### Submit your project If you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit! ``` # Here is the data and estimated outputs for test case 1 test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, 
-2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, -18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]] ## Test Case 1 ## # Estimated Pose(s): # [50.000, 50.000] # [37.858, 33.921] # [25.905, 18.268] # [13.524, 2.224] # [27.912, 16.886] # [42.250, 30.994] # [55.992, 44.886] # [70.749, 59.867] # [85.371, 75.230] # [73.831, 92.354] # [53.406, 96.465] # [34.370, 100.134] # [48.346, 83.952] # [60.494, 68.338] # [73.648, 53.082] # [86.733, 38.197] # [79.983, 20.324] # [72.515, 2.837] # [54.993, 13.221] # [37.164, 22.283] # Estimated Landmarks: # [82.679, 13.435] # [70.417, 74.203] # [36.688, 61.431] # [18.705, 66.136] # [20.437, 16.983] ### Uncomment the following three lines for test case 1 and compare the output to the values above ### # mu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0) # poses, 
landmarks = get_poses_landmarks(mu_1, 20) # print_all(poses, landmarks) # Here is the data and estimated outputs for test case 2 test_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 
4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]] ## Test Case 2 ## # Estimated Pose(s): # [50.000, 50.000] # [69.035, 45.061] # [87.655, 38.971] # [76.084, 55.541] # [64.283, 71.684] # [52.396, 87.887] # [44.674, 68.948] # [37.532, 49.680] # [31.392, 30.893] # [24.796, 12.012] # [33.641, 26.440] # [43.858, 43.560] # [54.735, 60.659] # [65.884, 77.791] # [77.413, 94.554] # [96.740, 98.020] # [76.149, 99.586] # [70.211, 80.580] # [64.130, 61.270] # [58.183, 42.175] # Estimated Landmarks: # [76.777, 42.415] # [85.109, 76.850] # [13.687, 95.386] # [59.488, 39.149] # [69.283, 93.654] ### Uncomment the following three lines for test case 2 and compare to the values above ### # mu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0) # poses, landmarks = get_poses_landmarks(mu_2, 20) # print_all(poses, landmarks) ```
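Since minor discrepancies from the expected values can come from floating point accuracy or the matrix inversion, a tolerance-based comparison is more useful than an exact string match. A small helper of my own for comparing your printed pose lists against the expected ones:

```python
def poses_close(estimated, expected, tol=0.5):
    """True if the two pose lists match pairwise within tol."""
    if len(estimated) != len(expected):
        return False
    return all(
        abs(px - ex) <= tol and abs(py - ey) <= tol
        for (px, py), (ex, ey) in zip(estimated, expected)
    )

# e.g. compare a couple of your poses against the expected test-case values
print(poses_close([(50.0, 50.0), (37.9, 33.9)], [(50.0, 50.0), (37.858, 33.921)]))
```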
```
# code for loading the format for the notebook
import os

# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))

from formats import load_style
load_style(plot_style=False)

os.chdir(path)

# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'

import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

%watermark -a 'Ethen' -d -t -v -p numpy,scipy,pandas,sklearn,matplotlib,seaborn
```

# Causal Inference

A typical statement that people make in the real world follows a pattern like this:

> I took ibuprofen and my headache is gone, therefore the medicine worked.
Upon seeing this statement, we may be tempted to view it as a causal effect, where ibuprofen does in fact help with headache. The statement, however, does not tell us what would have happened if the person didn't take the medicine. Maybe the headache would have been cured without taking the medicine.

We'll take a moment and introduce some notation to formalize the discussion of causal inferencing. We denote $Y^a$ as the outcome that would have been observed if treatment was set to $A = a$. In the context of causal inferencing, there are two possible actions that can be applied to an individual: $1$, treatment; $0$, control. Hence $Y^1$ denotes the outcome if the treatment was applied, whereas $Y^0$ measures the outcome if the individual was under the control group.

Coming back to the statement above: the reason it isn't a proper causal effect is that it's only telling us $Y^1=1$. It doesn't tell us what would have happened had we not taken ibuprofen, $Y^0=?$. And we can only state that there is a causal effect if $Y^1 \neq Y^0$.

The two main messages that we're getting at in the section above are:

First, in the context of causal inferencing, a lot of times we're mainly interested in the relationship between means of different potential outcomes.

\begin{align}
E(Y^1 - Y^0)
\end{align}

Where the term, **potential outcome, refers to the outcome we would see under each possible treatment option**. More on this formula in the next section.

Second, our little statement above shows what is known as the **fundamental problem of causal inferencing**, meaning we can only observe one potential outcome for each person. However, with certain assumptions, we can estimate population level causal effects. In other words, it is possible for us to answer questions such as: what would the rate of headache remission be if everyone took ibuprofen when they had a headache versus if no one did?
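The fundamental problem can be made concrete with a tiny toy table (all numbers here are invented for illustration): we pretend, impossibly, to know both potential outcomes for everyone, so the true average causal effect is computable, and then show that the observed data only ever reveals one outcome per person.

```python
import numpy as np
import pandas as pd

# hypothetical population where we (impossibly) know both potential outcomes;
# Y = 1 means the headache remains
df = pd.DataFrame({
    'A':  [1, 1, 0, 0, 1, 0],   # treatment actually received
    'Y1': [0, 1, 0, 1, 0, 1],   # outcome if treated
    'Y0': [1, 1, 0, 1, 1, 1],   # outcome if untreated
})

# the true average causal effect uses both columns for every person
ate = (df['Y1'] - df['Y0']).mean()

# what we actually get to observe: Y1 when A=1, Y0 when A=0
df['Y'] = np.where(df['A'] == 1, df['Y1'], df['Y0'])
print('true average causal effect:', ate)   # negative -> treatment reduces headaches
print(df[['A', 'Y']])
```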
Thus the next question is, how do we use observed data to link observed outcomes to potential outcomes?

## The Definition of Causal Effect

In the previous section, we flashed the idea that with causal inferencing, we're interested in estimating $E(Y^1 - Y^0)$. This notation denotes the **average causal effect**. What this means is: imagine a hypothetical world where our entire population, every single person, got treatment $A=0$, versus some other hypothetical world where everyone received the other treatment, $A=1$. The most important thing here is that the two hypothetical worlds have the exact same people, it's the same population of people. But in one case, we do one thing to them, and in another case, we do another. And then, if we were able to observe both of these worlds simultaneously, we could collect the outcome data from everyone in the populations, and then compute the average difference. This is what we mean by an average causal effect. It's computed over the whole population and we're saying, what would the average outcome be if everybody got one treatment, versus if everybody got another treatment? Of course, in reality, we're not going to see both of these worlds. But this is what we're hoping to estimate.

In reality, what we get from an experiment is $E(Y|A=1)$. Here, we are saying what is the expected value of $Y$ given $A=1$. An equivalent way of reading this is the expected value of $Y$ restricted to the sub-population who actually had $A=1$. The main point that we're getting at is:

\begin{align}
E(Y^1 - Y^0) \neq E(Y|A=1) - E(Y|A=0)
\end{align}

The reason being the sub-population might differ from the whole population in important ways. e.g. People at higher risk for the flu might be more likely to get the flu shot. Then if we take the expected value of $Y$ among people who actually got the flu shot, we're taking the expected value of $Y$ among a higher risk population.
And this will most likely be different than the expected value of the potential outcome $Y^1$, because $Y^1$ is the outcome if everyone in the whole population got treatment, i.e. it's not restricted to a sub-population.

We could of course still compute $E(Y|A=1) - E(Y|A=0)$, but we need to keep in mind that the people who received treatment $A=0$ might differ in fundamental ways from people who got treatment $A=1$. So we haven't isolated a treatment effect, because these are different people, and they might have different characteristics in general. That's why the distinction between the two is very important: the causal effect, where we're manipulating treatment on the same group of people, versus what we actually observe, which is the difference in means among populations that are defined by treatment. In short, the take-home message from this paragraph is that $E(Y|A=1) - E(Y|A=0)$ is generally not a causal effect, because it is comparing two different populations of people.

## Assumptions of Estimating Causal Effect

Hopefully, by now, we know the fundamental problem of causal inferencing is that we can only observe one treatment and one outcome for each person at a given point in time. The next question is how do we then use observed data to link observed outcomes to potential outcomes? To do so, we'll need to make some assumptions. Our observed data typically consists of an outcome, $Y$, a treatment variable $A$, and then some set of covariates $X$ (additional information that we may wish to collect for individuals in the studies). The assumptions that we'll make are listed below.

- **Stable Unit Treatment Value Assumption (SUTVA):** No interference between units. Treatment assignment of one unit does not affect the outcome of another unit.
  The same sentence phrased a bit differently: when we assign somebody a treatment, how effective it is doesn't depend on what is happening with other people.
- **Positivity Assumption:** For every set of values for $X$, treatment assignment was not deterministic: $P(A=a|X=x) > 0$ for all $a$ and $x$. This assumption ensures we have some data at every level of $X$ for people who are treated and not treated. The reason we need this assumption is: if for a given value of $X$ everybody would/wouldn't be treated, then there's really no way for us to learn what would've happened if they were/weren't treated. In cases where people with certain diseases might be ineligible for a particular treatment, we wouldn't want to make inference about that population, so we would probably exclude them from the study.
- **Ignorability Assumption:** Given pre-treatment covariates $X$, treatment assignment is independent of the potential outcomes: $Y^1, Y^0 \perp A | X$ (here, $\perp$ denotes independence). So among people with the same values of $X$, we could essentially think of treatment as being randomly assigned. This is also referred to as the "no unmeasured confounders" assumption.
- **Consistency Assumption:** The potential outcome under treatment $A=a$, $Y^a$, is equivalent to the observed outcome $Y$ if the actual treatment received is $A=a$, i.e. $Y = Y^a \text{ if } A=a$.

Given the assumptions above, we can link our observed data to potential outcomes:

\begin{align}
E(Y | A=a, X=x) &= E(Y^a | A=a, X=x) \text{ due to consistency} \\
&= E(Y^a | X=x) \text{ due to ignorability}
\end{align}

To elaborate on the ignorability assumption: the assumption says that conditional on the set of covariates $X$, the treatment assignment mechanism doesn't matter, it's just random. So, in other words, conditioning on $A$ isn't providing us any additional information about the mean of the potential outcome here.
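Of these assumptions, positivity is the easiest to probe directly in data: cross-tabulate treatment against covariate strata and look for strata where one of the treatment groups is empty. A minimal sketch with invented toy data (the column names are mine):

```python
import pandas as pd

# toy data: age_group plays the role of the covariate X, A the treatment
df = pd.DataFrame({
    'age_group': ['young', 'young', 'young', 'old', 'old', 'old'],
    'A':         [0, 1, 0, 1, 1, 1],
})

# counts of treated (1) / untreated (0) subjects per stratum
counts = pd.crosstab(df['age_group'], df['A'])
print(counts)

# positivity requires both treated and untreated people in every stratum;
# here the 'old' stratum has no untreated subjects, so it's violated there
violated = (counts == 0).any(axis=1)
print(violated)
```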
After laying down the assumptions behind causal inferencing, we'll also introduce two important concepts: **confounders** and **observational studies**.

## Confounders

Confounders are defined as variables that affect both the treatment and the outcome. e.g. Imagine that older people are at higher risk of cardiovascular disease (the outcome), but are also more likely to receive statins (the treatment). In this case, age would be a confounder: age is affecting both the treatment decision, which here is whether or not to receive statins, and is also directly affecting the outcome, cardiovascular disease.

So, when it comes to confounder control, we are interested in first identifying a set of variables $X$ that will make the ignorability assumption hold. Only after finding such a set of variables will we have hope of estimating causal effects.

## Randomized Trials v.s. Observational Studies

In a randomized trial, the treatment assignment, $A$, would be randomly decided. Thus, if the randomized trial is actually randomized, the distribution of our covariates, $X$, will be the same in both groups, i.e. the covariates are said to be balanced. So if our outcomes between different treatment groups end up differing, it will not be because of differences in $X$.

You might be wondering, why not just always perform a randomized trial? Well, there are a couple of reasons:

- It's expensive. We have to enroll people into a trial, there might be lots of protocols that we need to follow, and it takes time and money to keep track of those people after they've enrolled.
- Sometimes it's unethical to randomize treatment. A typical example would be smoking.

For observational studies, though similar to a randomized trial, we're not actually intervening with the treatment assignment. We're only observing what happens in reality.
Additionally, we can always leverage retrospective data, where the data were already being collected for various other purposes not specific to the research that we're interested in. The caveat is that these data might be messy and of lower quality.

Observational studies are becoming more and more prevalent, but as they are not randomized trials, typically the distribution of the confounders that we are concerned about will differ between the treatment groups. The next section will describe matching, a technique that aims to address this issue.

## Matching

Matching is a method that attempts to control for confounding and make an observational study more like a randomized trial. The main idea is to match individuals in the treated group $A=1$ to similar individuals in the control group $A=0$ on the covariates $X$. This is similar to the notion of estimating the causal effect of the treatment on the treated.

e.g. Say there is only 1 covariate that we care about, age. Then, in a randomized trial, for any particular age, there should be about the same number of treated and untreated people. In the case where older people are more likely to get $A=1$, if we were to match treated people to control people of the same age, there will be about the same number of treated and controls at any age. Once the data are matched, we can treat them as if they came from a randomized trial.

The advantage of this approach is that it can help reveal lack of overlap in the covariate distribution. The caveat is that we can't exactly match on the full set of covariates, so what we'll do is try to make sure the distribution of covariates is balanced between the groups, also referred to as stochastic balance (the distribution of confounders being similar for treated and untreated subjects).

### Propensity Scores

We've stated that during the matching procedure, each test group member is paired with a similar member of the control group. Here we'll elaborate on what we mean by "similar".
Similarity is oftentimes computed using **propensity scores**, defined as the probability of receiving treatment, rather than control, given covariates $X$. We'll define $A=1$ for treatment and $A=0$ for control. The propensity score for subject $i$ is denoted as $\pi_i$.

\begin{align}
\pi_i = P(A=1 | X_i)
\end{align}

As an example, if a person had a propensity score of 0.3, that would mean that given their particular covariates, there was a 30% chance that they'll receive the treatment. We can calculate this score by fitting our favorite classification algorithm to our data (the input features are our covariates, and the labels are whether each person belongs to the treatment or control group).

With this knowledge in mind, let's get our hands dirty with some code; we'll discuss more as we go along.

## Implementation

We'll be using the [Right Heart Catheterization dataset](http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/rhc.html). The csv file can be downloaded from the following [link](http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/rhc.csv).

```
# we'll only be working with a subset of the variables in the raw dataset,
# feel free to experiment with more
AGE = 'age'
MEANBP1 = 'meanbp1'
CAT1 = 'cat1'
SEX = 'sex'
DEATH = 'death'  # outcome variable in our raw data
SWANG1 = 'swang1'  # treatment variable in our raw data
TREATMENT = 'treatment'

num_cols = [AGE, MEANBP1]
cat_cols = [CAT1, SEX, DEATH, SWANG1]

input_path = 'rhc.csv'
dtype = {col: 'category' for col in cat_cols}
df = pd.read_csv(input_path, usecols=num_cols + cat_cols, dtype=dtype)
print(df.shape)
df.head()
```

Usually, our treatment group will be smaller than the control group.
```
# replace this column with treatment yes or no
df[SWANG1].value_counts()

# replace these values with shorter names
df[CAT1].value_counts()

cat1_col_mapping = {
    'ARF': 'arf',
    'MOSF w/Sepsis': 'mosf_sepsis',
    'COPD': 'copd',
    'CHF': 'chf',
    'Coma': 'coma',
    'MOSF w/Malignancy': 'mosf',
    'Cirrhosis': 'cirrhosis',
    'Lung Cancer': 'lung_cancer',
    'Colon Cancer': 'colon_cancer'
}
df[CAT1] = df[CAT1].replace(cat1_col_mapping)

# convert features' values to numerical values, and store the
# numerical value to original value mapping
col_mappings = {}
for col in (DEATH, SWANG1, SEX):
    col_mapping = dict(enumerate(df[col].cat.categories))
    col_mappings[col] = col_mapping

print(col_mappings)

for col in (DEATH, SWANG1, SEX):
    df[col] = df[col].cat.codes

df = df.rename({SWANG1: TREATMENT}, axis=1)
df.head()

cat_cols = [CAT1]
df_one_hot = pd.get_dummies(df[cat_cols], drop_first=True)
df_cleaned = pd.concat([df[num_cols], df_one_hot, df[[SEX, TREATMENT, DEATH]]], axis=1)
df_cleaned.head()
```

Given all of these covariates and our column `treatment` that indicates whether the subject received the treatment or control, we wish to have a quantitative way of measuring whether our covariates are balanced between the two groups. To assess whether balance has been achieved, we can look at the standardized mean difference (smd), which is calculated as the difference in means between the two groups divided by the pooled standard deviation:

\begin{align}
smd = \frac{\bar{X}_t - \bar{X}_c}{\sqrt{(s^2_t + s^2_c) / 2}}
\end{align}

Where:

- $\bar{X}_t$, $\bar{X}_c$ denote the mean of that feature for the treatment and control group respectively. Note that people oftentimes report the absolute value of this number.
- $s_t$, $s_c$ denote the standard deviation of that feature for the treatment and control group respectively, so the denominator is essentially the pooled standard deviation.

We can calculate the standardized mean differences for every feature.
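The formula above can be written as a small standalone function; the helper and the toy feature values below are my own for illustration (the notebook later applies the same computation to the grouped table):

```python
import numpy as np

def smd(treated, control):
    """Absolute standardized mean difference between two samples."""
    # pooled standard deviation: sqrt of the average of the two variances
    pooled_std = np.sqrt((treated.std(ddof=1) ** 2 + control.std(ddof=1) ** 2) / 2)
    return abs(treated.mean() - control.mean()) / pooled_std

# toy feature values for one covariate (invented numbers)
age_treated = np.array([60.0, 62.0, 64.0, 66.0])
age_control = np.array([50.0, 52.0, 54.0, 56.0])
print(smd(age_treated, age_control))
```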
If our calculated smd is 1, that means there is a 1 standard deviation difference in means. The benefit of having the standard deviation in the denominator is that the number becomes insensitive to the scale of the feature. After computing this measurement for all of our features, there are commonly used rules of thumb for determining whether a feature is balanced (similar to the 0.05 threshold for p-values):

- Smaller than $0.1$. For a randomized trial, the smd of all the covariates should typically fall into this bucket.
- $0.1$ - $0.2$. Not necessarily balanced, but small enough that people are usually not too worried about them. Sometimes, even after performing matching, there might still be a few covariates whose smd falls in this range.
- Greater than $0.2$. Values above this threshold are considered seriously imbalanced.

```
features = df_cleaned.columns.tolist()
features.remove(TREATMENT)
features.remove(DEATH)

agg_operations = {TREATMENT: 'count'}
agg_operations.update({
    feature: ['mean', 'std'] for feature in features
})
table_one = df_cleaned.groupby(TREATMENT).agg(agg_operations)

# merge MultiIndex columns together into 1 level
# table_one.columns = ['_'.join(col) for col in table_one.columns.values]
table_one.head()

def compute_table_one_smd(table_one: pd.DataFrame, round_digits: int=4) -> pd.DataFrame:
    feature_smds = []
    for feature in features:
        feature_table_one = table_one[feature].values
        neg_mean = feature_table_one[0, 0]
        neg_std = feature_table_one[0, 1]
        pos_mean = feature_table_one[1, 0]
        pos_std = feature_table_one[1, 1]

        smd = (pos_mean - neg_mean) / np.sqrt((pos_std ** 2 + neg_std ** 2) / 2)
        smd = round(abs(smd), round_digits)
        feature_smds.append(smd)

    return pd.DataFrame({'features': features, 'smd': feature_smds})

table_one_smd = compute_table_one_smd(table_one)
table_one_smd
```

The next few code chunks will actually fit the propensity score model.
```
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# treatment will be our label for estimating the propensity score,
# and death is the outcome that we care about, so it is also removed
# from the step that estimates the propensity score
death = df_cleaned[DEATH]
treatment = df_cleaned[TREATMENT]
df_cleaned = df_cleaned.drop([DEATH, TREATMENT], axis=1)

column_transformer = ColumnTransformer(
    [('numerical', StandardScaler(), num_cols)],
    sparse_threshold=0,
    remainder='passthrough'
)
data = column_transformer.fit_transform(df_cleaned)
data.shape

logistic = LogisticRegression(solver='liblinear')
logistic.fit(data, treatment)
pscore = logistic.predict_proba(data)[:, 1]
pscore

roc_auc_score(treatment, pscore)
```

We won't be spending too much time tweaking the model here; checking an evaluation metric of the model serves as a quick sanity check. Once the propensity score is estimated, it is useful to look for overlap before jumping straight into the matching process. By overlap, we are referring to comparing the distribution of the propensity score for the subjects in the control and treatment groups.

```
mask = treatment == 1
pos_pscore = pscore[mask]
neg_pscore = pscore[~mask]
print('treatment count:', pos_pscore.shape)
print('control count:', neg_pscore.shape)
```

Looking at the plot below, we can see that our features, $X$, do in fact contain information about whether a subject received treatment. The distributional difference between the propensity scores of the two groups justifies the need for matching, since the groups are not directly comparable otherwise. Although there is a distributional difference in the density plot, there is overlap everywhere, so this is actually the kind of plot we would like to see if we're going to do propensity score matching. What we mean by overlap is that no matter where we look on the plot, even though there might be more control than treatment or vice versa, there will still be some subjects from either group.
The notion of overlap means that our positivity assumption is probably reasonable. Remember that positivity refers to the situation where every subject in the study has at least some chance of receiving either treatment. That appears to be the case here, hence this is a situation where we would feel comfortable proceeding with propensity score matching.

```
import matplotlib.pyplot as plt
import seaborn as sns

# change the default figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12

sns.distplot(neg_pscore, label='control')
sns.distplot(pos_pscore, label='treatment')
plt.xlim(0, 1)
plt.title('Propensity Score Distribution of Control vs Treatment')
plt.ylabel('Density')
plt.xlabel('Scores')
plt.legend()
plt.tight_layout()
plt.show()
```

Keep in mind that not every plot will look like this. If there is a major lack of overlap in some part of the propensity score distribution, our positivity assumption is essentially violated. In other words, we can't really estimate a causal effect in those areas of the distribution, since they contain subjects that have close to zero chance of being in the control/treatment group. When we encounter this scenario, we may wish to either check whether we're missing some covariates, or get rid of the individuals with extreme propensity scores and focus on the areas with strong overlap.

The next step is to perform matching. In general, the procedure looks like this:

> We compute the distance between the estimated propensity score of each treated subject and every control. For every treated subject, we find the control subject with the closest distance. These pairs are "matched" together and will be included in the final dataset that is used to estimate the causal effect.

But in practice there are many different variants of the step mentioned above, e.g.
First: We mentioned that when there is a lack of balance, we can get rid of individuals with extreme propensity scores. Examples of doing this include removing control subjects whose propensity score is less than the minimum in the treatment group, and removing treated subjects whose propensity score is greater than the maximum in the control group.

Second: Some people only consider a treatment and control subject to be a match if the difference between their propensity scores is less than a specified threshold, $\delta$ (this threshold is also referred to as the caliper). In other words, given a user in the treatment group, $u_t$, we find the set of candidate matches from the control group.

\begin{align}
C(u_t) = \{ u_c \in \text{control} : |\pi_{u_c} - \pi_{u_t}| \leq \delta \}
\end{align}

If $|C(u_t)| = 0$, $u_t$ is not matched, and is excluded from further consideration. Otherwise, we select the control user $u_c$ satisfying:

\begin{align}
\text{argmin}_{u_c \in C(u_t)} \big|\pi_{u_c} - \pi_{u_t}\big|
\end{align}

and retain the pair of users. Reducing the $\delta$ parameter improves the balance of the final dataset at the cost of reducing its size; we can experiment with different values and check whether we retain the majority of the treatment group.

Third: A single user in the control group can potentially be matched to multiple users in the treatment group. To account for this, we can weight each matched record by the inverse of its frequency, i.e. if a control group user occurred 4 times in the matched dataset, we assign that record a weight of 1/4. We may wish to check whether duplicates occur a lot in the final matched dataset. Some implementations provide a flag for whether this multiple-matched-control scenario is allowed, i.e. whether matching with replacement is allowed. If replacement is not allowed, then matches generally will be found in the same order as the data are sorted.
Thus, the match(es) for the first observation will be found first, the match(es) for the second observation second, etc. Matching without replacement will generally increase bias.

Here, what we'll do is: for every record in the treatment group, find its closest record in the control group, without controlling for a distance threshold or duplicates.

```
def get_similar(pos_pscore: np.ndarray, neg_pscore: np.ndarray, topn: int=5, n_jobs: int=1):
    from sklearn.neighbors import NearestNeighbors

    # the treated subjects are not part of the fitted (control) set,
    # so the nearest neighbor is a genuine match and is kept
    knn = NearestNeighbors(n_neighbors=topn, metric='euclidean', n_jobs=n_jobs)
    knn.fit(neg_pscore.reshape(-1, 1))
    sim_distances, sim_indices = knn.kneighbors(pos_pscore.reshape(-1, 1))
    return sim_distances, sim_indices

sim_distances, sim_indices = get_similar(pos_pscore, neg_pscore, topn=1)
sim_indices
```

We can still check the number of occurrences of each matched control record. As mentioned in the previous section, we could add this information as weights to our dataset, but we won't be doing that here.

```
_, counts = np.unique(sim_indices[:, 0], return_counts=True)
np.bincount(counts)
```

After applying the matching procedure, it's important to check and validate that the matched dataset is indeed indistinguishable in terms of the covariates that we used to balance the control and treatment groups.

```
df_cleaned[TREATMENT] = treatment
df_cleaned[DEATH] = death

df_pos = df_cleaned[mask]
df_neg = df_cleaned[~mask].iloc[sim_indices[:, 0]]
df_matched = pd.concat([df_pos, df_neg], axis=0)
df_matched.head()

table_one_matched = df_matched.groupby(TREATMENT).agg(agg_operations)
table_one_smd_matched = compute_table_one_smd(table_one_matched)
table_one_smd_matched
```

Upon completing propensity score matching and verifying that our covariates are now fairly balanced using the standardized mean difference (smd), we can carry out an outcome analysis using a paired t-test.
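Two of those knobs, the caliper $\delta$ and the inverse-frequency weighting of duplicated controls, can be sketched in a few lines of numpy. The propensity scores below are made-up values, chosen so that one treated subject stays unmatched and one control is matched twice; this is an illustration of the variants described above, not the matching actually performed in this notebook.

```python
import numpy as np

# made-up propensity scores
treated_ps = np.array([0.30, 0.32, 0.90])
control_ps = np.array([0.31, 0.55])

def caliper_match(treated_ps, control_ps, delta=0.05):
    """Match each treated subject to its nearest control within the caliper,
    with replacement; unmatched treated subjects are dropped."""
    matches = {}
    for i, p_t in enumerate(treated_ps):
        dist = np.abs(control_ps - p_t)
        j = int(np.argmin(dist))
        if dist[j] <= delta:        # the candidate set C(u_t) is non-empty
            matches[i] = j
    return matches

matches = caliper_match(treated_ps, control_ps)
print(matches)   # {0: 0, 1: 0} -- the 0.90 subject has no control within the caliper

# inverse-frequency weights: a control matched k times gets weight 1/k
idx, counts = np.unique(list(matches.values()), return_counts=True)
weights = dict(zip(idx.tolist(), (1.0 / counts).tolist()))
print(weights)   # {0: 0.5}
```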
For all the various knobs that we've described when introducing the matching process, we can experiment with the different options and see if our conclusions change.

```
from scipy import stats

num_matched_pairs = df_neg.shape[0]
print('number of matched pairs: ', num_matched_pairs)

# paired t-test
stats.ttest_rel(df_pos[DEATH].values, df_neg[DEATH].values)
```

This result tells us that, after using matching adjustment to ensure comparability between the treatment and control groups, receiving Right Heart Catheterization does have an effect on a patient's chance of dying.

# Reference

- [Blog: Comparative Statistics in Python using SciPy](http://benalexkeen.com/comparative-statistics-in-python-using-scipy/)
- [Coursera: A Crash Course in Causality - Inferring Causal Effects from Observational Data](https://www.coursera.org/learn/crash-course-in-causality/)
- [Github: pymatch - Matching techniques for observational studies](https://github.com/benmiroglio/pymatch)
- [Paper: B. Miroglio, D. Zeber, J. Kaye, R. Weiss - The Effect of Ad Blocking on User Engagement with the Web (2018)](https://dl.acm.org/citation.cfm?id=3178876.3186162)
``` NAME = "change the conv2d" BATCH_SIZE = 32 import os import cv2 import torch import numpy as np def load_data(img_size=112): data = [] index = -1 labels = {} for directory in os.listdir('./data/'): index += 1 labels[f'./data/{directory}/'] = [index,-1] print(len(labels)) for label in labels: for file in os.listdir(label): filepath = label + file img = cv2.imread(filepath,cv2.IMREAD_GRAYSCALE) img = cv2.resize(img,(img_size,img_size)) img = img / 255.0 data.append([ np.array(img), labels[label][0] ]) labels[label][1] += 1 for _ in range(12): np.random.shuffle(data) print(len(data)) np.save('./data.npy',data) return data import torch def other_loading_data_proccess(data): X = [] y = [] print('going through the data..') for d in data: X.append(d[0]) y.append(d[1]) print('splitting the data') VAL_SPLIT = 0.25 VAL_SPLIT = len(X)*VAL_SPLIT VAL_SPLIT = int(VAL_SPLIT) X_train = X[:-VAL_SPLIT] y_train = y[:-VAL_SPLIT] X_test = X[-VAL_SPLIT:] y_test = y[-VAL_SPLIT:] print('turning data to tensors') X_train = torch.from_numpy(np.array(X_train)) y_train = torch.from_numpy(np.array(y_train)) X_test = torch.from_numpy(np.array(X_test)) y_test = torch.from_numpy(np.array(y_test)) return [X_train,X_test,y_train,y_test] REBUILD_DATA = True if REBUILD_DATA: data = load_data() np.random.shuffle(data) X_train,X_test,y_train,y_test = other_loading_data_proccess(data) import torch import torch.nn as nn import torch.nn.functional as F # class Test_Model(nn.Module): # def __init__(self): # super().__init__() # self.conv1 = nn.Conv2d(1, 6, 5) # self.pool = nn.MaxPool2d(2, 2) # self.conv2 = nn.Conv2d(6, 16, 5) # self.fc1 = nn.Linear(16 * 25 * 25, 120) # self.fc2 = nn.Linear(120, 84) # self.fc3 = nn.Linear(84, 36) # def forward(self, x): # x = self.pool(F.relu(self.conv1(x))) # x = self.pool(F.relu(self.conv2(x))) # x = x.view(-1, 16 * 25 * 25) # x = F.relu(self.fc1(x)) # x = F.relu(self.fc2(x)) # x = self.fc3(x) # return x class Test_Model(nn.Module): def __init__(self): 
super().__init__() self.pool = nn.MaxPool2d(2, 2) self.conv1 = nn.Conv2d(1, 32, 5) self.conv3 = nn.Conv2d(32,64,5) self.conv2 = nn.Conv2d(64, 128, 5) self.fc1 = nn.Linear(128 * 10 * 10, 512) self.fc2 = nn.Linear(512, 256) self.fc4 = nn.Linear(256,128) self.fc3 = nn.Linear(128, 36) def forward(self, x,shape=False): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv3(x))) x = self.pool(F.relu(self.conv2(x))) if shape: print(x.shape) x = x.view(-1, 128 * 10 * 10) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc4(x)) x = self.fc3(x) return x device = torch.device('cuda') model = Test_Model().to(device) preds = model(X_test.reshape(-1,1,112,112).float().to(device),True) preds[0] optimizer = torch.optim.SGD(model.parameters(),lr=0.1) criterion = nn.CrossEntropyLoss() EPOCHS = 5 loss_logs = [] from tqdm import tqdm PROJECT_NAME = "Sign-Language-Recognition" def test(net,X,y): correct = 0 total = 0 net.eval() with torch.no_grad(): for i in range(len(X)): real_class = torch.argmax(y[i]).to(device) net_out = net(X[i].view(-1,1,112,112).to(device).float()) net_out = net_out[0] predictied_class = torch.argmax(net_out) if predictied_class == real_class: correct += 1 total += 1 return round(correct/total,3) import wandb len(os.listdir('./data/')) import random # index = random.randint(0,29) # print(index) # wandb.init(project=PROJECT_NAME,name=NAME) # for _ in tqdm(range(EPOCHS)): # for i in range(0,len(X_train),BATCH_SIZE): # X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device) # y_batch = y_train[i:i+BATCH_SIZE].to(device) # model.to(device) # preds = model(X_batch.float()) # loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long)) # optimizer.zero_grad() # loss.backward() # optimizer.step() # wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index])}) # wandb.finish() import matplotlib.pyplot as 
plt import pandas as pd df = pd.Series(loss_logs) df.plot.line(figsize=(12,6)) test(model,X_test,y_test) test(model,X_train,y_train) preds X_testing = X_train y_testing = y_train correct = 0 total = 0 model.eval() with torch.no_grad(): for i in range(len(X_testing)): real_class = torch.argmax(y_testing[i]).to(device) net_out = model(X_testing[i].view(-1,1,112,112).to(device).float()) net_out = net_out[0] predictied_class = torch.argmax(net_out) # print(predictied_class) if str(predictied_class) == str(real_class): correct += 1 total += 1 print(round(correct/total,3)) # for real,pred in zip(y_batch,preds): # print(real) # print(torch.argmax(pred)) # print('\n') # conv2d_output # conv2d_1_ouput # conv2d_2_ouput # output_fc1 # output_fc2 # output_fc4 # max_pool2d_keranl # max_pool2d # num_of_linear # activation # best num of epochs # best optimizer # best loss ## best lr class Test_Model(nn.Module): def __init__(self,conv2d_output=128,conv2d_1_ouput=32,conv2d_2_ouput=64,output_fc1=512,output_fc2=256,output_fc4=128,output=36,activation=F.relu,max_pool2d_keranl=2): super().__init__() print(conv2d_output) print(conv2d_1_ouput) print(conv2d_2_ouput) print(output_fc1) print(output_fc2) print(output_fc4) print(activation) self.conv2d_output = conv2d_output self.pool = nn.MaxPool2d(max_pool2d_keranl) self.conv1 = nn.Conv2d(1, conv2d_1_ouput, 5) self.conv3 = nn.Conv2d(conv2d_1_ouput,conv2d_2_ouput,5) self.conv2 = nn.Conv2d(conv2d_2_ouput, conv2d_output, 5) self.fc1 = nn.Linear(conv2d_output * 10 * 10, output_fc1) self.fc2 = nn.Linear(output_fc1, output_fc2) self.fc4 = nn.Linear(output_fc2,output_fc4) self.fc3 = nn.Linear(output_fc4, output) self.activation = activation def forward(self, x,shape=False): x = self.pool(self.activation(self.conv1(x))) x = self.pool(self.activation(self.conv3(x))) x = self.pool(self.activation(self.conv2(x))) if shape: print(x.shape) x = x.view(-1, self.conv2d_output * 10 * 10) x = self.activation(self.fc1(x)) x = self.activation(self.fc2(x)) x = 
self.activation(self.fc4(x)) x = self.fc3(x) return x # conv2d_output # conv2d_1_ouput # conv2d_2_ouput # output_fc1 # output_fc2 # output_fc4 # max_pool2d_keranl # max_pool2d # num_of_linear # best num of epochs # best optimizer # best loss ## best lr # batch size EPOCHS = 3 BATCH_SIZE = 32 def get_loss(criterion,y,model,X): preds = model(X.view(-1,1,112,112).to(device).float()) preds.to(device) loss = criterion(preds,torch.tensor(y,dtype=torch.long).to(device)) loss.backward() return loss.item() activations = [nn.ELU(),nn.LeakyReLU(),nn.PReLU(),nn.ReLU(),nn.ReLU6(),nn.RReLU(),nn.SELU(),nn.CELU(),nn.GELU(),nn.SiLU(),nn.Tanh()] for activation in activations: model = Test_Model(activation=activation) optimizer = torch.optim.SGD(model.parameters(),lr=0.1) criterion = nn.CrossEntropyLoss() index = random.randint(0,29) print(index) wandb.init(project=PROJECT_NAME,name=f'activation-{activation}') for _ in tqdm(range(EPOCHS)): for i in range(0,len(X_train),BATCH_SIZE): X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device) y_batch = y_train[i:i+BATCH_SIZE].to(device) model.to(device) preds = model(X_batch.float()) loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long)) optimizer.zero_grad() loss.backward() optimizer.step() wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':torch.argmax(y_batch[index]),'val_loss':get_loss(criterion,y_test,model,X_test)}) print(f'{torch.argmax(preds[index])} \n {y_batch[index]}') print(f'{torch.argmax(preds[1])} \n {y_batch[1]}') print(f'{torch.argmax(preds[2])} \n {y_batch[2]}') print(f'{torch.argmax(preds[3])} \n {y_batch[3]}') print(f'{torch.argmax(preds[4])} \n {y_batch[4]}') wandb.finish() ```
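A side note on the hard-coded `128 * 10 * 10` fed into `fc1`: for a 112×112 input, the spatial size after the three kernel-5 convolutions, each followed by a 2×2 max-pool, can be derived with the standard output-size formula. A small sketch (assuming stride 1 and no padding, as in the model above):

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Output side length of a convolution layer."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=2):
    """Output side length of a max-pool layer."""
    return size // kernel

size = 112
for _ in range(3):                  # conv1 -> pool, conv3 -> pool, conv2 -> pool
    size = pool_out(conv_out(size, kernel=5))

print(size)                 # 10: the spatial side length entering fc1
print(128 * size * size)    # 12800: matches nn.Linear(128 * 10 * 10, ...)
```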
# Class Central Survey: compare target group 'Willingness to pay' with the rest of the sample ``` import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from IPython.display import display sns.set(style="white") sns.set_context("talk") ``` ## Read the survey data ``` df = pd.read_csv('raw/2016-17-ClassCentral-Survey-data-noUserText.csv', decimal=',', encoding = "ISO-8859-1") ``` ## Create target group 'Willingness to pay' ``` df['How willing are you to pay for a certificate for a MOOC?'].value_counts() target_name = 'Willingness to pay' willing = (pd.to_numeric(df['How willing are you to pay for a certificate for a MOOC?'], errors='coerce') > 3) ``` ## Generic function to plot barchart for any categorical feature on any target/nontarget split ``` def binary_compare_categorical_barh(mask, feature, df=df, target_name='target', nontarget_name='Other', split_name='visitor', answer='answer'): """Split dataframe into two based on mask Draw horizontal barcharts for each category item for both masked and unmasked object""" target = df[mask] nontarget = df[~mask] target_size, nontarget_size = len(target), len(nontarget) res_target = target[feature].value_counts()/target_size*100 res_nontarget = nontarget[feature].value_counts()/nontarget_size*100 result = pd.DataFrame({target_name: res_target, nontarget_name: res_nontarget}) result[answer] = result.index res_df = pd.melt(result, id_vars=answer, var_name=split_name, value_name='percentage') display(res_df) sns.factorplot(x='percentage', y=answer, hue=split_name, data=res_df, kind='bar', orient='h', size=6, aspect=2) plt.title(feature) sns.despine(left=True, bottom=True) plt.show() return ``` ## Generic function to plot barchart for any multi-categorical feature on any target/nontarget split ``` def binary_compare_multi_select_categorical_barh(df, target, target_name, question, selectors, nontarget_name = 'Others'): """draw a barchart for Survey results on a question that allows to select multiple 
categories
    df: dataframe to use
    target: selection of rows based on column values
    question: the question you want to analyse
    selectors: list of df columns containing the selectors (values 0/1)"""
    size = {}
    target_df = df[target]
    nontarget_df = df[~target]
    size[target_name], size[nontarget_name] = len(target_df), len(nontarget_df)
    print(size)
    graph_targetdata = target_df.loc[:, selectors]
    graph_targetdata['target'] = target_name
    graph_nontargetdata = nontarget_df.loc[:, selectors]
    graph_nontargetdata['target'] = nontarget_name
    graph_data = pd.concat([graph_targetdata, graph_nontargetdata])
    melted = pd.melt(graph_data, id_vars='target', var_name='select', value_name='percentage')
    grouped = melted.groupby(['target', 'select'], as_index=False).sum()
    grouped.percentage = grouped.percentage/grouped['target'].map(size)*100  # make it a percentage of the group total
    grouped['select'] = grouped['select'].apply(lambda x: x.split(": ")[1])  # remove prefix from string
    display(grouped)
    sns.factorplot(x='percentage', y='select', hue='target', data=grouped,
                   kind='bar', orient='h', size=6, aspect=2)
    plt.title(question)  # note: sns.plt is not part of seaborn's public API; use matplotlib directly
    sns.despine(left=True, bottom=True)
    plt.show()
```

## Apply this plot on the target 'Willing to pay' for some categorical features

```
binary_compare_categorical_barh(mask=willing, target_name='Willing to pay',
                                feature='How familiar are you with MOOCs?')

binary_compare_categorical_barh(mask=willing, target_name='Willing to pay',
                                feature='Which region of the world are you in?')
```

Africa is the only region where there are far more people willing to pay for a certificate than people who won't. Where higher-quality education is available at a lower cost, people are less willing to pay for a MOOC certificate.
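The per-group percentages that these helper functions build via `value_counts` and `melt` can be reproduced in miniature with `pd.crosstab`; the survey answers below are fabricated, purely to show the normalization:

```python
import pandas as pd

# fabricated answers and a boolean target mask
df_toy = pd.DataFrame({
    'region': ['Africa', 'Africa', 'Europe', 'Europe', 'Europe', 'Asia'],
    'willing': [True, True, True, False, False, False],
})

# share of each answer within the target (True) and non-target (False) groups
pct = pd.crosstab(df_toy['region'], df_toy['willing'], normalize='columns') * 100
print(pct)
```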
```
binary_compare_categorical_barh(mask=willing, target_name='Willing to pay',
                                feature='How important is the ability to earn a certificate when you complete a MOOC?')
```

Those who find the ability to earn a certificate important are more willing to pay than the others. This was one of the first actions the platform providers took to increase their business: doing away with free certificates.

```
reasons = ['Reasons: Learning skills for current career',
           'Reasons: Learning skills for new career',
           'Reasons: School credit',
           'Reasons: Personal interest',
           'Reasons: Access to reference materials']
binary_compare_multi_select_categorical_barh(df, target=willing, target_name='Willing to pay',
                                             question='Which of the following are important reasons for you to take MOOCs?',
                                             selectors=reasons)
```

There is only a slight difference in the reasons to follow MOOCs between those who are willing to pay and those who aren't. When the reasons are career related, respondents are more willing to pay for a certificate.

```
decisions = ['Decide: Topic/Subject',
             'Decide: Instructor',
             'Decide: Institution/university',
             'Decide: Platform',
             'Decide: Ratings',
             'Decide: Others recommendations']
binary_compare_multi_select_categorical_barh(df, target=willing, target_name='Willing to pay',
                                             question='Which are the most important factors in deciding which MOOC to take?',
                                             selectors=decisions)
```

The institution is a more appealing reason to follow a MOOC for those who are willing to pay than for those who are not.
```
aspects = ['Aspects: Browsing discussion forums',
           'Aspects: Actively contributing to discussion forums',
           'Aspects: Connecting with other learners in the course environment',
           'Aspects: Connecting with learners outside the course environment',
           'Aspects: Taking the course with other people you know (friends, colleagues, etc.)']
binary_compare_multi_select_categorical_barh(df, target=willing, target_name='Willing to pay',
                                             question='Which of the following are important aspects of the MOOC experience to you?',
                                             selectors=aspects)
```

Connecting with other students is more important for those who are willing to pay. Is this an opportunity for the platforms to increase their revenue by improving forum features and quality?

```
benefits = ['Benefit: Have not taken MOOCs',
            'Benefit: Not Really',
            'Benefit: School credit towards a degree',
            'Benefit: Promotion at current organization',
            'Benefit: Higher performance evaluation at current job',
            'Benefit: Helped me get a new job in the same field',
            'Benefit: Helped me get a new job in a different field']
binary_compare_multi_select_categorical_barh(df, target=willing, target_name='Willing to pay',
                                             question='Have you received any tangible benefits from taking MOOCs?',
                                             selectors=benefits)
```

People willing to pay see more benefits in MOOCs.

```
pays = ['Pay: The topic/subject',
        'Pay: The institution/university offering the MOOC',
        'Pay: The instructor/professor',
        'Pay: The MOOC platform being used',
        'Pay: A multi-course certification that the MOOC is a part of']
binary_compare_multi_select_categorical_barh(df, target=willing, target_name='Willing to pay',
                                             question='Which of the following have a strong impact on your willingness to pay for a MOOC certificate?',
                                             selectors=pays)

binary_compare_categorical_barh(mask=willing, target_name='Willing to pay',
                                feature='# MOOCs Started')
```

The willingness to pay drops after starting about 7 courses.

```
binary_compare_categorical_barh(mask=willing, target_name='Willing to pay',
                                feature='# MOOCs Finished')
```

The more MOOCs people finish, the less willing they are to pay. Is this the reason why Coursera is switching to a subscription model?

```
binary_compare_categorical_barh(mask=willing, target_name='Willing to pay',
                                feature='When did you first start taking MOOCs?')
```

People who started taking MOOCs more recently are more willing to pay.

```
binary_compare_categorical_barh(mask=willing, target_name='Willing to pay',
                                feature='How much do you think employers value MOOC certificates?')
```

People who are more willing to pay for a certificate think employers value the certificates about twice as much as people who are less willing to pay for the certificate.

```
binary_compare_categorical_barh(mask=willing, target_name='Willing to pay',
                                feature='What is your level of formal education?')
```

There is very little correlation between willingness to pay for certificates and the education level of the respondent.

```
binary_compare_categorical_barh(mask=willing, target_name='Willing to pay',
                                feature='What is your age range?')
```

In the age range 36-45 there is significantly more willingness to pay; perhaps this is the age range with the biggest need to upgrade their skills. In the age range 56+ the willingness to pay drops.
Vulnerability of an Area to Serious Earthquakes

dis.004 http://sedac.ciesin.columbia.edu/data/set/ndh-earthquake-frequency-distribution

Downloaded to RW_Data/Rasters/gdeqk/
File: gdeqk.asc

Import libraries

```
# Libraries for downloading data from remote server (may be ftp)
import requests
from urllib.request import urlopen
from contextlib import closing
import shutil

# Library for uploading/downloading data to/from S3
import boto3

# Libraries for handling data
import rasterio as rio
import numpy as np
# from netCDF4 import Dataset
# import pandas as pd
# import scipy

# Libraries for various helper functions
# from datetime import datetime
import os
import threading
import sys
from glob import glob
```

s3 tools

```
s3_upload = boto3.client("s3")
s3_download = boto3.resource("s3")

s3_bucket = "wri-public-data"
s3_folder = "resourcewatch/raster/dis_004_vulnerability_to_earthquakes/"
s3_file = "dis_004_vulnerability_to_earthquakes.asc"
s3_key_orig = s3_folder + s3_file
s3_key_edit = s3_key_orig[0:-4] + "_edit.tif"

class ProgressPercentage(object):
    def __init__(self, filename):
        self._filename = filename
        self._size = float(os.path.getsize(filename))
        self._seen_so_far = 0
        self._lock = threading.Lock()

    def __call__(self, bytes_amount):
        # To simplify we'll assume this is hooked up
        # to a single filename.
with self._lock: self._seen_so_far += bytes_amount percentage = (self._seen_so_far / self._size) * 100 sys.stdout.write("\r%s %s / %s (%.2f%%)"%( self._filename, self._seen_so_far, self._size, percentage)) sys.stdout.flush() ``` Define local file locations ``` local_folder = "/Users/nathansuberi/Desktop/RW_Data/Rasters/gdeqk/" file_name = "gdeqk.asc" local_orig = local_folder + file_name orig_extension_length = 4 #4 for each char in .tif local_edit = local_orig[:-orig_extension_length] + "_edit.tif" ``` Use rasterio to reproject and compress ``` with rio.open(local_orig, 'r') as src: print(src.profile) data = src.read() print(data) # Note - this is the core of Vizz's netcdf2tif function with rio.open(local_orig, 'r') as src: # This assumes data is readable by rasterio # May need to open instead with netcdf4.Dataset, for example data = src.read()[0] rows = data.shape[0] columns = data.shape[1] print(rows) print(columns) # Latitude bounds south_lat = -90 north_lat = 90 # Longitude bounds west_lon = -180 east_lon = 180 transform = rio.transform.from_bounds(west_lon, south_lat, east_lon, north_lat, columns, rows) # Profile no_data_val = -9999 target_projection = 'EPSG:4326' target_data_type = np.int32 profile = { 'driver':'GTiff', 'height':rows, 'width':columns, 'count':1, 'dtype':target_data_type, 'crs':target_projection, 'transform':transform, 'compress':'lzw', 'nodata': no_data_val } with rio.open(local_edit, "w", **profile) as dst: dst.write(data.astype(profile["dtype"]), 1) ``` Upload orig and edit files to s3 ``` # Original s3_upload.upload_file(local_orig, s3_bucket, s3_key_orig, Callback=ProgressPercentage(local_orig)) # Edit s3_upload.upload_file(local_edit, s3_bucket, s3_key_edit, Callback=ProgressPercentage(local_edit)) ```
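For reference, `rio.transform.from_bounds` derives the affine transform, and hence the pixel resolution, from the bounding box and the grid shape. That arithmetic can be checked by hand; the 1440×720 grid below is a hypothetical size (the real one comes from `src.read()`), while the bounds are the global ones used above:

```python
# global bounds used in the notebook
west_lon, east_lon = -180, 180
south_lat, north_lat = -90, 90

# hypothetical raster dimensions
columns, rows = 1440, 720

# pixel size implied by the bounds and grid shape, in degrees per pixel
x_res = (east_lon - west_lon) / columns
y_res = (north_lat - south_lat) / rows

# the resulting affine transform maps pixel (col, row) to geographic
# coordinates, anchored at the top-left corner (west_lon, north_lat)
print(x_res, y_res)   # 0.25 0.25
```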
``` import sys import os sys.path.insert(0,'..') import torch import numpy as np import json import car_racing_simulator.VehicleModel as VehicleModel import car_racing_simulator.Track as Track import math from car_racing.network import Actor as Actor from car_racing.orca_env_function import getNFcollosionreward p1 = Actor(10, 2, std=0.1) p2 = Actor(10, 2, std=0.1) def Compete(player1, player2): p1.load_state_dict(torch.load("../car_racing/pretrained_models/" + player1 + ".pth")) p2.load_state_dict(torch.load("../car_racing/pretrained_models/" + player2 + ".pth")) config = json.load(open('../car_racing/config.json')) track1 = Track.Track(config) device = torch.device("cpu") vehicle_model = VehicleModel.VehicleModel(config["n_batch"], device, config) mat_action1 = [] mat_action2 = [] mat_state1 = [] mat_reward1 = [] mat_done = [] mat_state2 = [] global_coordinates1 = [] curvilinear_coordinates1 = [] global_coordinates2 = [] curvilinear_coordinates2 = [] init_size = 10000 curr_batch_size = init_size state_c1 = torch.zeros(curr_batch_size, config["n_state"]) # state[:, 6:12].view(6) state_c2 = torch.zeros(curr_batch_size, config["n_state"]) # state[:, 6:12].view(6) state_c1[:, 0] = torch.zeros((curr_batch_size))#torch.rand((curr_batch_size)) state_c2[:, 0] = torch.zeros((curr_batch_size))#torch.FloatTensor([2.0])#torch.rand((curr_batch_size)) a = torch.rand(curr_batch_size) a_linear = (a>0.5)*torch.ones(curr_batch_size) state_c1[:, 1] = a_linear*0.2 - 0.1 state_c2[:, 1] = -state_c1[:, 1] # state_c1[:, 1] = -0.1#torch.zeros((curr_batch_size))#torch.rand((curr_batch_size)) # state_c2[:, 1] = 0.1#torch.zeros((curr_batch_size))#torch.FloatTensor([2.0])#torch.rand((curr_batch_size)) done_c1 = torch.zeros((curr_batch_size)) <= -0.1 done_c2 = torch.zeros((curr_batch_size)) <= -0.1 prev_coll_c1 = torch.zeros((curr_batch_size)) <= -0.1 prev_coll_c2 = torch.zeros((curr_batch_size)) <= -0.1 counter1 = torch.zeros((curr_batch_size)) counter2 = torch.zeros((curr_batch_size)) 
over_mat = [] overtakings = torch.zeros((curr_batch_size)) prev_leading_player = torch.cat([torch.zeros(int(curr_batch_size/2)) <= 0.1,torch.zeros(int(curr_batch_size/2)) <= -0.1]) c1_out=0 c2_out=0 t=0 a_win=0 b_win=0 overtakings_p1 = 0 overtakings_p2 = 0 for i in range(2000): dist1 = p1(torch.cat([state_c1[:, 0:5], state_c2[:, 0:5]], dim=1)) action1 = dist1.sample() dist2 = p2(torch.cat([state_c2[:, 0:5], state_c1[:, 0:5]], dim=1)) action2 = dist2.sample() mat_state1.append(state_c1[0:5]) mat_action1.append(action1.detach()) prev_state_c1 = state_c1 prev_state_c2 = state_c2 state_c1 = vehicle_model.dynModelBlendBatch(state_c1.view(-1, 6), action1.view(-1, 2)).view(-1, 6) state_c2 = vehicle_model.dynModelBlendBatch(state_c2.view(-1, 6), action2.view(-1, 2)).view(-1, 6) state_c1 = (state_c1.transpose(0, 1) * (~done_c1) + prev_state_c1.transpose(0, 1) * (done_c1)).transpose(0, 1) state_c2 = (state_c2.transpose(0, 1) * (~done_c2) + prev_state_c2.transpose(0, 1) * (done_c2)).transpose(0, 1) reward1, reward2, done_c1, done_c2,state_c1, state_c2, n_c1, n_c2 = getNFcollosionreward(state_c1, state_c2, vehicle_model.getLocalBounds(state_c1[:, 0]), vehicle_model.getLocalBounds(state_c2[:, 0]), prev_state_c1, prev_state_c2) done = ((done_c1) * (done_c2)) remaining_xo = ~done # prev_coll_c1 = coll_c1[remaining_xo] # removing elements that died # prev_coll_c2 = coll_c2[remaining_xo] counter1 = counter1[remaining_xo] counter2 = counter2[remaining_xo] # check for collision c1_out = c1_out + n_c1 c2_out = c2_out + n_c2 #check for overtake state_c1[:,2] leading_player = torch.ones(state_c1.size(0))*((state_c1[:,0]-state_c2[:,0])>0)# True means 1 is leading false means other is leading overtakings = overtakings + torch.ones(leading_player.size(0))*(leading_player!=prev_leading_player) if torch.sum(torch.ones(leading_player.size(0))*(leading_player!=prev_leading_player))>0: temp=1 overtakings_p1_bool= 
(leading_player!=prev_leading_player)*(leading_player==(torch.zeros((leading_player.size(0)))<=0.1)) overtakings_p1 = overtakings_p1 + torch.sum(torch.ones(leading_player.size(0))*overtakings_p1_bool) overtakings_p2_bool= (leading_player!=prev_leading_player)*(leading_player==(torch.zeros((leading_player.size(0)))<=-0.1)) overtakings_p2 = overtakings_p2 + torch.sum(torch.ones(leading_player.size(0))*overtakings_p2_bool) prev_leading_player = leading_player[remaining_xo] out_state_c1 = state_c1[~remaining_xo] out_state_c2 = state_c2[~remaining_xo] state_c1 = state_c1[remaining_xo] state_c2 = state_c2[remaining_xo] curr_batch_size = state_c1.size(0) if curr_batch_size < remaining_xo.size(0): t=t+1 # if t==1: # print(i) a_win = a_win + torch.sum(torch.ones(out_state_c1.size(0))*(out_state_c1[:,0]>out_state_c2[:,0])) b_win = b_win + torch.sum(torch.ones(out_state_c1.size(0))*(out_state_c1[:,0]<out_state_c2[:,0])) over_mat.append(torch.sum(overtakings[~remaining_xo])) element_deducted = ~(done_c1 * done_c2) done_c1 = done_c1[element_deducted] done_c2 = done_c2[element_deducted] overtakings = overtakings[remaining_xo] # print(over_mat) if np.all(done.numpy()) == True or i==1999: # if ((done_c1) * (done_c2)): # print(torch.sum(torch.stack(over_mat)), c1_out,c2_out, a_win,b_win, overtakings_p1,overtakings_p2) print('Normalized score for races between ' + player1 + ' vs ' + player2,) print('Overtakes per lap', np.array(torch.sum(torch.stack(over_mat))/init_size)) print( player1 + ' Won:', np.array(a_win / init_size)) print( player2 + ' Won:', np.array(b_win / init_size)) print( player1 + ' Collisions:', np.array(c1_out/init_size)) print( player2 + ' Collisions:', np.array(c2_out/init_size)) print( player1 + ' Overtake:', np.array(overtakings_p1/init_size)) print( player2 + ' Overtake:', np.array(overtakings_p2/init_size)) # print("done", i) break ``` #### The function Compete, Play 10,000 matches between player 1 and player 2. 
<br /> Based on these matches, it prints the normalized number of wins, along with the normalized collisions and overtakes per lap. <br /> Below we play matches between CoPG vs GDA, TRCoPO vs TRGDA, and CoPG vs TRCoPO.

```
Compete('CoPG','GDA')
Compete('TRCoPO','TRGDA')
Compete('CoPG','TRCoPO')
```
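The overtake statistic accumulated above comes from detecting changes in which car is leading (the comparison of the two cars' track progress, `state_c1[:,0]` vs `state_c2[:,0]`). The same idea in a minimal, self-contained NumPy sketch with hypothetical inputs (not the batched PyTorch implementation):

```python
import numpy as np

def count_overtakes(progress_p1, progress_p2):
    """Count lead changes given per-timestep track progress of two cars."""
    leading = np.asarray(progress_p1) > np.asarray(progress_p2)  # True when player 1 leads
    # An overtake is any timestep where the leader flips relative to the previous step
    return int(np.sum(leading[1:] != leading[:-1]))

# Example: player 1 starts behind, passes, then is re-passed -> 2 overtakes
p1 = [0.0, 0.5, 1.2, 1.5, 1.6]
p2 = [0.2, 0.4, 1.0, 1.4, 1.8]
print(count_overtakes(p1, p2))  # 2
```

In the batched version above the same comparison runs over all 10,000 simultaneous races at once, and the separate `overtakings_p1`/`overtakings_p2` counters attribute each lead change to the player that takes the lead.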
<a href="https://www.bigdatauniversity.com"><img src="https://ibm.box.com/shared/static/cw2c7r3o20w9zn8gkecaeyjhgw3xdgbj.png" width="400" align="center"></a>

<h1><center>Multiple Linear Regression</center></h1>

<h4>About this Notebook</h4>
In this notebook, we learn how to use scikit-learn to implement multiple linear regression. We download a dataset related to the fuel consumption and carbon dioxide emissions of cars. Then we split our data into training and test sets, create a model using the training set, evaluate the model using the test set, and finally use the model to predict an unknown value.

<h1>Table of contents</h1>

<div class="alert alert-block alert-info" style="margin-top: 20px">
    <ol>
        <li><a href="#understanding_data">Understanding the Data</a></li>
        <li><a href="#reading_data">Reading the Data in</a></li>
        <li><a href="#multiple_regression_model">Multiple Regression Model</a></li>
        <li><a href="#prediction">Prediction</a></li>
        <li><a href="#practice">Practice</a></li>
    </ol>
</div>
<br>
<hr>

### Importing Needed packages

```
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
```

### Downloading Data

To download the data, we will use !wget to download it from IBM Object Storage.

```
!wget -O FuelConsumption.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv
```

__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data?
IBM is offering a unique opportunity for businesses, with 10 TB of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)

<h2 id="understanding_data">Understanding the Data</h2>

### `FuelConsumption.csv`:
We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64)

- **MODELYEAR** e.g. 2014
- **MAKE** e.g. Acura
- **MODEL** e.g. ILX
- **VEHICLE CLASS** e.g. SUV
- **ENGINE SIZE** e.g. 4.7
- **CYLINDERS** e.g. 6
- **TRANSMISSION** e.g. A6
- **FUELTYPE** e.g. z
- **FUEL CONSUMPTION in CITY (L/100 km)** e.g. 9.9
- **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9
- **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2
- **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0

<h2 id="reading_data">Reading the data in</h2>

```
df = pd.read_csv("FuelConsumption.csv")

# take a look at the dataset
df.head()
```

Let's select some features that we want to use for regression.

```
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
```

Let's plot Emission values with respect to Engine size:

```
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
```

#### Creating train and test dataset

Train/Test Split involves splitting the dataset into training and testing sets that are mutually exclusive. You then train with the training set and test with the testing set. This provides a more accurate evaluation of out-of-sample accuracy, because the testing set is not part of the data used to train the model. It is more realistic for real-world problems.
This means that we know the outcome of each data point in this dataset, making it great to test with! And since this data has not been used to train the model, the model has no knowledge of the outcome of these data points. So, in essence, it's truly out-of-sample testing.

```
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
```

#### Train data distribution

```
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
```

<h2 id="multiple_regression_model">Multiple Regression Model</h2>

In reality, there are multiple variables that predict CO2 emission. When more than one independent variable is present, the process is called multiple linear regression — for example, predicting CO2 emission using FUELCONSUMPTION_COMB, EngineSize and Cylinders of cars. The good thing here is that multiple linear regression is an extension of the simple linear regression model.

```
from sklearn import linear_model
regr = linear_model.LinearRegression()
x = np.asanyarray(train[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']])
y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit(x, y)

# The coefficients
print('Coefficients: ', regr.coef_)
```

As mentioned before, __Coefficient__ and __Intercept__ are the parameters of the fitted line. Given that it is a multiple linear regression with 3 parameters, and knowing that the parameters are the intercept and the coefficients of the hyperplane, sklearn can estimate them from our data. Scikit-learn uses the plain Ordinary Least Squares method to solve this problem.

#### Ordinary Least Squares (OLS)

OLS is a method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by minimizing the sum of the squares of the differences between the target dependent variable and the values predicted by the linear function.
In other words, it tries to minimize the sum of squared errors (SSE) or mean squared error (MSE) between the target variable ($y$) and our predicted output ($\hat{y}$) over all samples in the dataset.

OLS can find the best parameters using one of the following methods:

- Solving the model parameters analytically using closed-form equations
- Using an optimization algorithm (Gradient Descent, Stochastic Gradient Descent, Newton's Method, etc.)

<h2 id="prediction">Prediction</h2>

```
y_hat = regr.predict(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']])
x = np.asanyarray(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']])
y = np.asanyarray(test[['CO2EMISSIONS']])
print("Residual sum of squares: %.2f" % np.mean((y_hat - y) ** 2))

# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(x, y))
```

__Explained variance regression score:__
If $\hat{y}$ is the estimated target output, $y$ the corresponding (correct) target output, and $Var$ is variance (the square of the standard deviation), then the explained variance is estimated as follows:

$\texttt{explainedVariance}(y, \hat{y}) = 1 - \frac{Var\{ y - \hat{y}\}}{Var\{y\}}$

The best possible score is 1.0; lower values are worse.

<h2 id="practice">Practice</h2>

Try to use a multiple linear regression with the same dataset, but this time use __FUEL CONSUMPTION in CITY__ and __FUEL CONSUMPTION in HWY__ instead of FUELCONSUMPTION_COMB. Does it result in better accuracy?
``` # write your code here regr = linear_model.LinearRegression() x = np.asanyarray(train[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']]) y = np.asanyarray(train[['CO2EMISSIONS']]) regr.fit (x, y) # The coefficients print ('Coefficients: ', regr.coef_) y_hat= regr.predict(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']]) x = np.asanyarray(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']]) y = np.asanyarray(test[['CO2EMISSIONS']]) print("Residual sum of squares: %.2f" % np.mean((y_hat - y) ** 2)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % regr.score(x, y)) ``` Double-click __here__ for the solution. <!-- Your answer is below: regr = linear_model.LinearRegression() x = np.asanyarray(train[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']]) y = np.asanyarray(train[['CO2EMISSIONS']]) regr.fit (x, y) print ('Coefficients: ', regr.coef_) y_= regr.predict(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']]) x = np.asanyarray(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']]) y = np.asanyarray(test[['CO2EMISSIONS']]) print("Residual sum of squares: %.2f"% np.mean((y_ - y) ** 2)) print('Variance score: %.2f' % regr.score(x, y)) --> <h2>Want to learn more?</h2> IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="http://cocl.us/ML0101EN-SPSSModeler">SPSS Modeler</a> Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. 
With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://cocl.us/ML0101EN_DSX">Watson Studio</a> <h3>Thanks for completing this lesson!</h3> <h4>Author: <a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a></h4> <p><a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>, PhD is a Data Scientist in IBM with a track record of developing enterprise level applications that substantially increases clients’ ability to turn data into actionable knowledge. He is a researcher in data mining field and expert in developing advanced analytic methods like machine learning and statistical modelling on large datasets.</p> <hr> <p>Copyright &copy; 2018 <a href="https://cocl.us/DX0108EN_CC">Cognitive Class</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.</p>
Plots the diversity of the different VAE variants for the Random Explorations and IMGEP-HGS approaches. (Fig. 11, Supplementary Material). ``` # default print properties multiplier = 2 pixel_cm_ration = 36.5 width_full = int(13.95 * pixel_cm_ration) * multiplier width_half = int(13.95/2 * pixel_cm_ration) * multiplier height_default_1 = int(3.5 * pixel_cm_ration) * multiplier height_default_2 = int(4.5 * pixel_cm_ration) * multiplier # margins in pixel top_margin = 5 * multiplier left_margin = 45 * multiplier right_margin = 37 * multiplier bottom_margin = 50 * multiplier font_size = 10 * multiplier font_family='Times New Roman' line_width = 2 * multiplier # Define and load data import autodisc as ad import ipywidgets import plotly import numpy as np import collections plotly.offline.init_notebook_mode(connected=True) data_filters = collections.OrderedDict() data_filters['none'] = [] data_filters['non dead'] = ('classifier_dead.data', '==', False) data_filters['animals (non div)'] = (('classifier_diverging.data', '==', 0), 'and', ('classifier_animal.data', '==', True)) data_filters['non animals (non div)'] = ((('classifier_dead.data', '==', False), 'and', ('classifier_animal.data', '==', False)), 'and', ('classifier_diverging.data', '==', 0)) data_filters['animals (all)'] = ('classifier_animal.data', '==', True) data_filters['non animals (all)'] = (('classifier_dead.data', '==', False), 'and', ('classifier_animal.data', '==', False)) org_experiment_definitions = dict() org_experiment_definitions['main_paper'] = [ # RANDOM dict(id = '1', directory = '../experiments/experiment_000001', name = 'Random Init*', is_default = True), dict(id = '2', directory = '../experiments/experiment_000002', name = 'Random Mutate', is_default = True), # HANDSELECTED FEATURES dict(id = '101', directory = '../experiments/experiment_000101', name = 'HGS 1', is_default = True), dict(id = '103', directory = '../experiments/experiment_000103', name = 'HGS 2', is_default = True), dict(id = 
'102', directory = '../experiments/experiment_000102', name = 'HGS 3', is_default = True), dict(id = '104', directory = '../experiments/experiment_000104', name = 'HGS 4', is_default = True), dict(id = '105', directory = '../experiments/experiment_000105', name = 'HGS 5', is_default = True), dict(id = '107', directory = '../experiments/experiment_000107', name = 'HGS 6', is_default = True), dict(id = '106', directory = '../experiments/experiment_000106', name = 'HGS 7', is_default = True), dict(id = '108', directory = '../experiments/experiment_000108', name = 'HGS 8', is_default = True), dict(id = '109', directory = '../experiments/experiment_000109', name = 'HGS 9*', is_default = True), ] repetition_ids = list(range(10)) # define names and load the data experiment_name_format = '<name>' # <id>, <name> #global experiment_definitions experiment_definitions = [] experiment_statistics = [] current_experiment_list = 'main_paper' experiment_definitions = [] for org_exp_def in org_experiment_definitions[current_experiment_list]: new_exp_def = dict() new_exp_def['directory'] = org_exp_def['directory'] if 'is_default' in org_exp_def: new_exp_def['is_default'] = org_exp_def['is_default'] if 'name' in org_exp_def: new_exp_def['id'] = ad.gui.jupyter.misc.replace_str_from_dict(experiment_name_format, {'id': org_exp_def['id'], 'name': org_exp_def['name']}) else: new_exp_def['id'] = ad.gui.jupyter.misc.replace_str_from_dict(experiment_name_format, {'id': org_exp_def['id']}) experiment_definitions.append(new_exp_def) experiment_statistics = dict() for experiment_definition in experiment_definitions: experiment_statistics[experiment_definition['id']] = ad.gui.jupyter.misc.load_statistics(experiment_definition['directory']) # Parameters num_of_bins_per_dimension = 5 run_parameter_ranges = dict() run_parameter_ranges[('run_parameters', 'T')] = (1, 20) run_parameter_ranges[('run_parameters', 'R')] = (2, 20) run_parameter_ranges[('run_parameters', 'm')] = (0, 1) 
run_parameter_ranges[('run_parameters', 's')] = (0, 0.3) run_parameter_ranges[('run_parameters', 'b', 0)] = (0, 1) run_parameter_ranges[('run_parameters', 'b', 1)] = (0, 1) run_parameter_ranges[('run_parameters', 'b', 2)] = (0, 1) run_parameter_ranges[('parameter_initstate_space_representation','data','[0]')] = (-5, 5) run_parameter_ranges[('parameter_initstate_space_representation','data','[1]')] = (-5, 5) run_parameter_ranges[('parameter_initstate_space_representation','data','[2]')] = (-5, 5) run_parameter_ranges[('parameter_initstate_space_representation','data','[3]')] = (-5, 5) run_parameter_ranges[('parameter_initstate_space_representation','data','[4]')] = (-5, 5) run_parameter_ranges[('parameter_initstate_space_representation','data','[5]')] = (-5, 5) run_parameter_ranges[('parameter_initstate_space_representation','data','[6]')] = (-5, 5) run_parameter_ranges[('parameter_initstate_space_representation','data','[7]')] = (-5, 5) statistic_ranges = dict() statistic_ranges[('lenia_statistics','statistics.activation_mass[-1]')] = (0, 1) statistic_ranges[('lenia_statistics','statistics.activation_volume[-1]')] = (0, 1) statistic_ranges[('lenia_statistics','statistics.activation_density[-1]')] = (0, 1) statistic_ranges[('lenia_statistics','statistics.activation_mass_asymmetry[-1]')] = (-1, 1) statistic_ranges[('lenia_statistics','statistics.activation_mass_distribution[-1]')] = (0, 1) statistic_ranges[('statistic_space_representation','data','[0]')] = (-5, 5) statistic_ranges[('statistic_space_representation','data','[1]')] = (-5, 5) statistic_ranges[('statistic_space_representation','data','[2]')] = (-5, 5) statistic_ranges[('statistic_space_representation','data','[3]')] = (-5, 5) statistic_ranges[('statistic_space_representation','data','[4]')] = (-5, 5) statistic_ranges[('statistic_space_representation','data','[5]')] = (-5, 5) statistic_ranges[('statistic_space_representation','data','[6]')] = (-5, 5) 
statistic_ranges[('statistic_space_representation','data','[7]')] = (-5, 5) default_config = dict( plot_type = 'plotly_box', layout = dict( yaxis= dict( title='number of bins', showline = False, linewidth = 1, zeroline=False, ), xaxis= dict( showline = False, linewidth = 1, zeroline=False, ), font = dict( family=font_family, size=font_size, ), width = width_full, # in cm height = height_default_2, # in cm margin = dict( l=left_margin, #left margin in pixel r=right_margin, #right margin in pixel b=bottom_margin, #bottom margin in pixel t=top_margin, #top margin in pixel ), updatemenus=[], ), init_mode='all', default_trace=dict( boxmean=True, ), traces = [ dict(marker=dict(color='rgb(0,0,0)')), ] ) # General Functions to load data import autodisc as ad import warnings def measure_n_explored_bins(n_points_per_bin, n_bin_per_dim, n_dim): return len(n_points_per_bin) def calc_diversity(experiment_definitions, source_data, space_defintion, num_of_bins_per_dimension=5, ignore_out_of_range_values=False, data_filter=None): data_filter_inds = None if data_filter is not None and data_filter: # filter data according data_filter the given filter data_filter_inds = ad.gui.jupyter.misc.filter_experiments_data(source_data, data_filter) data_bin_descr_per_exp = dict() for exp_def in experiment_definitions: exp_id = exp_def['id'] rep_data_matricies = [] cur_bin_config = [] cur_matrix_data = [] for dim_name, dim_ranges in space_defintion.items(): # define the bin configuration for the current parameter cur_bin_config.append((dim_ranges[0], dim_ranges[1], num_of_bins_per_dimension)) cur_data_filter_inds = data_filter_inds[exp_id] if data_filter_inds is not None else None # get all repetition data for the current paramter try: cur_data = ad.gui.jupyter.misc.get_experiment_data(data=source_data, experiment_id=exp_id, data_source=dim_name, repetition_ids='all', data_filter_inds=cur_data_filter_inds) except Exception as err: if not isinstance(err, KeyError): raise Exception('Error during 
loading of data for Experiment {!r} (Datasource = {!r} )!'.format(exp_id, dim_name)) from err else: # could not load data warnings.warn('Could not load data for Experiment {!r} (Datasource = {!r} )!'.format(exp_id, dim_name)) cur_data = [] for rep_idx, cur_rep_data in enumerate(cur_data): cur_rep_data = np.array([cur_rep_data]).transpose() if rep_idx >= len(rep_data_matricies): rep_data_matricies.append(cur_rep_data) else: rep_data_matricies[rep_idx] = np.hstack((rep_data_matricies[rep_idx], cur_rep_data)) #print(rep_data_matricies[0].shape) cur_run_parameter_bin_descr_per_exp = [] for rep_idx, rep_matrix_data in enumerate(rep_data_matricies): rep_data = ad.helper.statistics.calc_space_distribution_bins(rep_matrix_data, cur_bin_config, ignore_out_of_range_values=ignore_out_of_range_values) #rep_data['dimensions'] = ['T', 'R', 'm', 's', 'b[0]', 'b[1]', 'b[2]'] cur_run_parameter_bin_descr_per_exp.append(rep_data) data_bin_descr_per_exp[exp_id] = cur_run_parameter_bin_descr_per_exp ######################## # calculate diversity measures based on the calculated space distribution bins data_diversity = dict() for exp_id in data_bin_descr_per_exp.keys(): data_diversity[exp_id] = dict() # n_explored_bins for exp_id, exp_data in data_bin_descr_per_exp.items(): cur_data = np.zeros(len(exp_data)) for rep_idx, rep_data in enumerate(exp_data): cur_data[rep_idx] = measure_n_explored_bins(rep_data['n_points'], num_of_bins_per_dimension, len(space_defintion)) data_diversity[exp_id]['n_explored_bins'] = cur_data return data_diversity, data_bin_descr_per_exp ``` # Diversity Curves - all entities ## Paramter Space ``` # Load data data_diversity_run_parameters, _ = calc_diversity( experiment_definitions, experiment_statistics, run_parameter_ranges, num_of_bins_per_dimension=num_of_bins_per_dimension, data_filter=data_filters['none']) # Plot Data import copy config = copy.deepcopy(default_config) fig = ad.gui.jupyter.plot_barbox_per_datasource(experiment_definitions=[exp_def['id'] for 
exp_def in experiment_definitions], repetition_ids=repetition_ids, data=data_diversity_run_parameters, data_source=['n_explored_bins'], config=config) ``` ## Statistic Space - All ``` # Load data data_diversity_statistic_space_all, _ = calc_diversity( experiment_definitions, experiment_statistics, statistic_ranges, num_of_bins_per_dimension=num_of_bins_per_dimension, data_filter=data_filters['none']) # Plot Data import copy config = copy.deepcopy(default_config) fig = ad.gui.jupyter.plot_barbox_per_datasource(experiment_definitions=[exp_def['id'] for exp_def in experiment_definitions], repetition_ids=repetition_ids, data=data_diversity_statistic_space_all, data_source=['n_explored_bins'], config=config) ``` ## Statistic Space - Animals ``` # Load data data_diversity_statistic_space_animals, _ = calc_diversity( experiment_definitions, experiment_statistics, statistic_ranges, num_of_bins_per_dimension=num_of_bins_per_dimension, data_filter=data_filters['animals (all)']) # Plot Data import copy config = copy.deepcopy(default_config) fig = ad.gui.jupyter.plot_barbox_per_datasource(experiment_definitions=[exp_def['id'] for exp_def in experiment_definitions], repetition_ids=repetition_ids, data=data_diversity_statistic_space_animals, data_source=['n_explored_bins'], config=config) ``` ## Statistic Space - Non Animals ``` # Load data data_diversity_statistic_space_nonanimals, _ = calc_diversity( experiment_definitions, experiment_statistics, statistic_ranges, num_of_bins_per_dimension=num_of_bins_per_dimension, data_filter=data_filters['non animals (all)']) # Plot Data import copy config = copy.deepcopy(default_config) fig = ad.gui.jupyter.plot_barbox_per_datasource(experiment_definitions=[exp_def['id'] for exp_def in experiment_definitions], repetition_ids=repetition_ids, data=data_diversity_statistic_space_nonanimals, data_source=['n_explored_bins'], config=config) ```
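The `n_explored_bins` measure computed by `calc_diversity` above counts how many cells of the discretized parameter or statistic space contain at least one explored point. A self-contained NumPy sketch of the same idea (illustrative only, not the `ad.helper.statistics.calc_space_distribution_bins` implementation):

```python
import numpy as np

def n_explored_bins(points, ranges, bins_per_dim=5):
    """Count occupied cells when each dimension in `ranges` is split into
    `bins_per_dim` equal-width bins. `points` has shape (n_points, n_dims)."""
    points = np.asarray(points, dtype=float)
    edges = [np.linspace(lo, hi, bins_per_dim + 1) for lo, hi in ranges]
    hist, _ = np.histogramdd(points, bins=edges)
    return int(np.count_nonzero(hist))

# Two points share one cell and a third lands in another -> 2 explored bins
pts = [[0.1, 0.1], [0.15, 0.12], [0.9, 0.9]]
print(n_explored_bins(pts, ranges=[(0, 1), (0, 1)], bins_per_dim=5))  # 2
```

With 5 bins per dimension and an 8-dimensional latent space there are 5**8 = 390625 cells, which is presumably why `measure_n_explored_bins` above only needs `len(n_points_per_bin)` — the helper apparently stores just the occupied bins.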
# Tutorial demonstrating the basic functionality of the `iwatlas` package In this tutorial we will learn how to: - Download the data netcdf4 file - Make a point velocity time-series prediction - Make a prediction of velocity over a spatial region --- ``` # These are the sub-modules in the iwatlas package that we will use from iwatlas import sshdriver from iwatlas import uvdriver from iwatlas import harmonics from iwatlas import stratification as strat from iwatlas import iwaves import xarray as xr import pandas as pd import numpy as np from scipy.interpolate import interp1d import matplotlib.pyplot as plt # Uncomment this option to allow for interactive plot windows (e.g. zooming) # %matplotlib notebook # Set where you want to download the 200 MB data file # basedir = '/home/jupyter-ubuntu/data/iwatlas' basedir = '../DATA' %%time # Download the data if it does not exist import urllib, os # Link to a 200 MB data file on cloudstor # publicurl = 'https://cloudstor.aarnet.edu.au/plus/s/vdksw5WKFOTO0nD/download' publicurl = 'https://research-repository.uwa.edu.au/files/93942498/NWS_2km_GLORYS_hex_2013_2014_InternalWave_Atlas.nc' atlasfile = '{}/NWS_2km_GLORYS_hex_2013_2014_InternalWave_Atlas.nc'.format(basedir) if os.path.exists(basedir): print('Folder exists.') else: print('Making folder {}'.format(basedir)) os.mkdir(basedir) if os.path.exists(atlasfile): print('File exists.') else: print('Downloading file...') urllib.request.urlretrieve (publicurl, atlasfile) print('Done. 
Saved to {}'.format(atlasfile)) atlasfile # Load the atlas file as an object ssh = sshdriver.load_ssh_clim(atlasfile) ssh # WA-IMOS locations (August 2019) sites = { 'NIN100':{'y':-21.84986667,'x':113.9064667}, 'NWSBAR':{'y':-20.76128333,'x':114.7586167}, 'NWSROW':{'y':-17.75801667,'x':119.9061}, 'NWSBRW':{'y':-14.23543333,'x':123.1623833}, 'NWSLYN':{'y':-9.939416667,'x':130.3490833}, 'PIL200':{'x': 115.9154, 'y':-19.435333} , 'KIM200':{'x':121.243217 , 'y':-15.534517} , 'KIM400':{'x': 121.114967, 'y':-15.22125} , 'ITFTIS':{'x': 127.5577, 'y':-9.819217} , 'BB250':{'x':123.34613 , 'y':-13.75897} , 'Prelude':{'x':123.3506, 'y':-13.7641} , } ``` # Example 1: baroclinic velocity prediciton at a point ``` # Site of interest xpt = sites['Prelude']['x'] ypt = sites['Prelude']['y'] # Prediction time timeout = pd.date_range('2021-08-01','2021-09-01',freq='1H').values %%time # Compute the velocity as a function of z # this requires calculating vertical mode function for every time step so may take a minute or two uz, vz, zout = uvdriver.predict_uv_z(ssh, np.array([xpt]), np.array([ypt]), timeout) # Output arrays are size (Nz, Nxy, Nt) uz.shape, zout.shape # Plot the surface velocity usurf = uz[0,0,...] vsurf = vz[0,0,...] 
plt.figure(figsize=(12,6)) plt.plot(timeout, usurf,lw=0.5) plt.plot(timeout, vsurf,lw=0.5) plt.ylabel('Velocity [m/s]') plt.legend(('u','v')) plt.ylim(-1,1) plt.xlim(timeout[0],timeout[-1]) plt.grid(b=True,ls=':') # z-t contour plot of velocity plt.figure(figsize=(12,6)) plt.contourf(timeout, -zout[:,0,0].squeeze(), vz.squeeze(), np.linspace(-0.6,0.6,21), cmap='RdBu') plt.ylabel('Depth [m]') plt.xlim(timeout[0],timeout[-1]) plt.colorbar() # Output the surface velocity to a csv file df = pd.DataFrame({'u [m/s]':usurf,'v [m/s]':vsurf},index=timeout) outfile = '{}/IWATLAS_velocity_example.csv'.format(basedir) df.to_csv(outfile) print('Velocity prediction written to:\n\t{}'.format(outfile)) df ``` # Example 2: surface baroclinic velocity over a region ``` dx = 0.02 # 2 km x = np.arange(xpt-0.25, xpt+0.25+dx,dx) y = np.arange(ypt-0.25, ypt+0.25+dx,dx) X, Y = np.meshgrid(x,y) # Prediction time timeout = np.datetime64('2021-08-12 09:00:00') %%time # Compute the velocity as a function of z # this requires calculating vertical mode function for every time step so may take a minute or two uz, vz, zout = uvdriver.predict_uv_z(ssh, X.ravel(), Y.ravel(), np.array([timeout])) # uz has shape Nz, Nxy, Nt # Extract the surface velocity and reshape us = uz[0,...,0].reshape(X.shape) vs = vz[0,...,0].reshape(X.shape) # Make a contour plot of speed with vectors overlaid speed = np.abs(us + 1j*vs) plt.figure(figsize=(8,6)) plt.contourf(X, Y, speed, np.arange(0,1.1, 0.1), cmap='Reds') plt.colorbar() plt.quiver(X, Y, us, vs) # Overlay the depth contours ssh.contourf(ssh._ds['dv'],np.arange(100,400,10), colors='k', linewidths=0.2, filled=False, colorbar=False, xlims=[x[0],x[-1]], ylims=[y[0],y[-1]]) plt.plot(xpt, ypt,'rd') plt.gca().set_aspect('equal') plt.title(timeout) ```
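The grid-prediction pattern used in Example 2 — `ravel` the meshgrid into flat point arrays for one vectorized call, then `reshape` the flat output back onto the grid — can be sketched generically with a stand-in predictor (the real call being `uvdriver.predict_uv_z`):

```python
import numpy as np

def predict_flat(x, y):
    """Stand-in for a vectorized point predictor such as uvdriver.predict_uv_z."""
    return np.sin(x) * np.cos(y)

x = np.arange(0.0, 1.0, 0.25)
y = np.arange(0.0, 1.0, 0.25)
X, Y = np.meshgrid(x, y)

flat = predict_flat(X.ravel(), Y.ravel())  # one call on shape (Nxy,) inputs
grid = flat.reshape(X.shape)               # back to (Ny, Nx) for contourf/quiver

assert grid.shape == X.shape
```

This keeps a single call to the predictor regardless of grid shape, which matters when the predictor is vectorized over its point arguments, as `predict_uv_z` is over `x` and `y` above.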
``` :dep smartcore = { version = "0.2.0", features=["nalgebra-bindings", "datasets"]} :dep nalgebra = "0.23.0" use nalgebra::{DMatrix, DVector, Scalar}; use std::error::Error; use std::io::prelude::*; use std::io::BufReader; use std::fs::File; use std::str::FromStr; fn parse_csv<N, R>(input: R) -> Result<DMatrix<N>, Box<dyn Error>> where N: FromStr + Scalar, N::Err: Error, R: BufRead { // initialize an empty vector to fill with numbers let mut data = Vec::new(); // initialize the number of rows to zero; we'll increment this // every time we encounter a newline in the input let mut rows = 0; // for each line in the input, for line in input.lines() { // increment the number of rows rows += 1; // iterate over the items in the row, separated by commas for datum in line?.split_terminator(",") { // trim the whitespace from the item, parse it, and push it to // the data array data.push(N::from_str(datum.trim())?); } } // The number of items divided by the number of rows equals the // number of columns. let cols = data.len() / rows; // Construct a `DMatrix` from the data in the vector. 
    Ok(DMatrix::from_row_slice(rows, cols, &data[..]))
}

let file = File::open("../data/boston.csv")?;
let bos: DMatrix<f64> = parse_csv(BufReader::new(file))?;
bos.shape()

println!("{}", bos.rows(0, 5));

let x = bos.columns(0, 13).into_owned();
let y = bos.column(13).into_owned();
(x.shape(), y.shape())

println!("{}", x.rows(0, 5));
println!("{}", y.rows(0, 5));

use smartcore::model_selection::train_test_split;
let (x_train, x_test, y_train, y_test) = train_test_split(&x, &y.transpose(), 0.2, true);
(x_train.shape(), y_train.shape(), x_test.shape(), y_test.shape())

let a = x_train.clone().insert_column(13, 1.0).into_owned();
let b = y_train.clone().transpose();
(a.shape(), b.shape())

println!("{}", a.rows(0, 5));

// A.T.dot(A)
let a_t_a = a.transpose() * &a;
// np.linalg.inv(A.T.dot(A))
let a_t_a_inv = a_t_a.try_inverse().unwrap();
// np.linalg.inv(A.T.dot(A)).dot(A.T).dot(b)
let x_hat = a_t_a_inv * &a.transpose() * &b;
let coeff = x_hat.rows(0, 13).into_owned();
let intercept = x_hat[(13, 0)];
println!("coeff: {}, intercept: {}", coeff, intercept);

let y_hat_inv = (x_test.clone() * &coeff).add_scalar(intercept);

use smartcore::metrics::mean_absolute_error;
mean_absolute_error(&y_test, &y_hat_inv.transpose())

println!("y_hat: {}, y_true: {}", y_hat_inv.transpose().columns(0, 5), y_test.columns(0, 5));

// Q, R = np.linalg.qr(A)
let qr = a.clone().qr();
let (q, r) = (qr.q().transpose().to_owned(), qr.r().to_owned());
// np.linalg.inv(R).dot(Q.T).dot(b)
let r_inv = r.try_inverse().unwrap().to_owned();
let x_hat = r_inv * &q * &b;
let coeff = x_hat.rows(0, 13).into_owned();
let intercept = x_hat[(13, 0)];
println!("coeff: {}, intercept: {}", coeff, intercept);

let y_hat_qr = (x_test.clone() * &coeff).add_scalar(intercept);
mean_absolute_error(&y_test, &y_hat_qr.transpose())

use smartcore::linear::linear_regression::LinearRegression;
let lr = LinearRegression::fit(&x_train.clone(), &y_train.clone(), Default::default()).unwrap();
let lr_y_hat = lr.predict(&x_test).unwrap();
mean_absolute_error(&y_test, &lr_y_hat)

use smartcore::ensemble::random_forest_regressor::RandomForestRegressor;
let rf_y_hat = RandomForestRegressor::fit(&x_train, &y_train, Default::default())
    .and_then(|rf| rf.predict(&x_test)).unwrap();
mean_absolute_error(&y_test, &rf_y_hat)
```
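The `np.linalg` comments in the cells above point at the NumPy equivalents of each step. As a cross-check (not part of the original notebook; synthetic data, names are mine), here are the same two least-squares routes in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 13))            # 13 features, like the Boston data
true_coeff = rng.normal(size=13)
y = X @ true_coeff + 5.0                  # known intercept of 5, no noise

A = np.hstack([X, np.ones((100, 1))])     # append a ones column for the intercept
# Normal equations: x_hat = inv(A^T A) A^T y
x_hat = np.linalg.inv(A.T @ A) @ A.T @ y
coeff, intercept = x_hat[:13], x_hat[13]

# QR route: A = QR  =>  x_hat = inv(R) Q^T y (better conditioned than inv(A^T A))
Q, R = np.linalg.qr(A)
x_hat_qr = np.linalg.inv(R) @ Q.T @ y
```

Both routes recover the same coefficient vector; the QR factorization avoids explicitly forming the often ill-conditioned `A.T @ A`.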
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_01_ai_gym.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# T81-558: Applications of Deep Neural Networks

**Module 12: Reinforcement Learning**

* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).

# Module 12 Video Material

* **Part 12.1: Introduction to the OpenAI Gym** [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)
* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)
* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)
* Part 12.4: Atari Games with Keras Neural Networks [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)
* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_apply_rl.ipynb)

# Part 12.1: Introduction to the OpenAI Gym

[OpenAI Gym](https://gym.openai.com/) aims to provide an easy-to-setup general-intelligence benchmark with a wide variety of different environments. The goal is to standardize how environments are defined in AI research publications so that published research becomes more easily reproducible.
The project claims to provide the user with a simple interface. As of June 2017, developers can only use Gym with Python. OpenAI Gym is pip-installed onto your local machine. There are a few significant limitations to be aware of:

* OpenAI Gym Atari only **directly** supports Linux and Macintosh
* OpenAI Gym Atari can be used with Windows; however, it requires a particular [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30)
* OpenAI Gym cannot directly render animated games in Google CoLab.

Because OpenAI Gym requires a graphics display, the only way to display Gym in Google CoLab is an embedded video. The presentation of OpenAI Gym game animations in Google CoLab is discussed later in this module.

### OpenAI Gym Leaderboard

The OpenAI Gym does have a leaderboard, similar to Kaggle; however, it is much more informal. The user's local machine performs all scoring. As a result, the OpenAI Gym's leaderboard is strictly an "honor system." The leaderboard is maintained in the following GitHub repository:

* [OpenAI Gym Leaderboard](https://github.com/openai/gym/wiki/Leaderboard)

If you submit a score, you are required to provide a writeup with sufficient instructions to reproduce your result. A video of your results is suggested but not required.

### Looking at Gym Environments

The centerpiece of Gym is the environment, which defines the "game" in which your reinforcement algorithm will compete. An environment does not need to be a game; however, it describes the following game-like features:

* **action space**: The actions we can take on the environment, at each step/episode, to alter the environment.
* **observation space**: The current state of the portion of the environment that we can observe. Usually, we can see the entire environment.

Before we begin to look at Gym, it is essential to understand some of the terminology used by this library.
* **Agent** - The machine learning program or model that controls the actions.
* **Step** - One round of issuing actions that affect the observation space.
* **Episode** - A collection of steps that terminates when the agent fails to meet the environment's objective, or the episode reaches the maximum number of allowed steps.
* **Render** - Gym can render one frame for display after each episode.
* **Reward** - A positive reinforcement that can occur at the end of each episode, after the agent acts.
* **Nondeterministic** - For some environments, randomness is a factor in deciding what effects actions have on reward and changes to the observation space.

It is important to note that many of the gym environments specify that they are not nondeterministic even though they make use of random numbers to process actions. It is generally agreed upon (based on the gym GitHub issue tracker) that the nondeterministic property means that an environment will still behave randomly even when given a consistent seed value. The seed method of an environment can be used by the program to seed the random number generator for the environment.

The Gym library allows us to query some of these attributes from environments. I created the following function to query gym environments.

```
import gym

def query_environment(name):
    env = gym.make(name)
    spec = gym.spec(name)
    print(f"Action Space: {env.action_space}")
    print(f"Observation Space: {env.observation_space}")
    print(f"Max Episode Steps: {spec.max_episode_steps}")
    print(f"Nondeterministic: {spec.nondeterministic}")
    print(f"Reward Range: {env.reward_range}")
    print(f"Reward Threshold: {spec.reward_threshold}")
```

We will begin by looking at the MountainCar-v0 environment, which challenges an underpowered car to escape the valley between two mountains. The following code describes the Mountain Car environment.
```
query_environment("MountainCar-v0")
```

There are three distinct actions that can be taken: accelerate forward, decelerate, or accelerate backward. The observation space contains two continuous (floating-point) values, as evidenced by the `Box` object. The observation space is simply the position and velocity of the car. The car has 200 steps to escape for each episode. You would have to look at the code to know, but the mountain car receives no incremental reward. The only reward for the car is given when it escapes the valley.

```
query_environment("CartPole-v1")
```

The CartPole-v1 environment challenges the agent to move a cart while keeping a pole balanced. The environment has an observation space of 4 continuous numbers:

* Cart Position
* Cart Velocity
* Pole Angle
* Pole Velocity At Tip

To achieve this goal, the agent can take the following actions:

* Push cart to the left
* Push cart to the right

There is also a continuous variant of the mountain car. This version does not simply have the motor on or off. For the continuous car, the action space is a single floating-point number that specifies how much forward or backward force is being applied.

```
query_environment("MountainCarContinuous-v0")
```

Note: ignore the warning above; it is a relatively inconsequential bug in OpenAI Gym.

Atari games, like Breakout, can use an observation space that is either equal to the size of the Atari screen (210x160) or even use the RAM memory of the Atari (128 bytes) to determine the state of the game. Yes, that's bytes, not kilobytes!

```
query_environment("Breakout-v0")
query_environment("Breakout-ram-v0")
```

### Render OpenAI Gym Environments from CoLab

It is possible to visualize the game your agent is playing, even on CoLab. This section provides information on how to generate a video in CoLab that shows you an episode of the game your agent is playing.
This video process is based on suggestions found [here](https://colab.research.google.com/drive/1flu31ulJlgiRL1dnN2ir8wGh9p7Zij2t). Begin by installing **pyvirtualdisplay** and **python-opengl**.

```
!pip install gym pyvirtualdisplay > /dev/null 2>&1
!apt-get install -y xvfb python-opengl ffmpeg > /dev/null 2>&1
```

Next, we install needed requirements to display an Atari game.

```
!apt-get update > /dev/null 2>&1
!apt-get install cmake > /dev/null 2>&1
!pip install --upgrade setuptools 2>&1
!pip install ez_setup > /dev/null 2>&1
!pip install gym[atari] > /dev/null 2>&1
```

Next we define functions used to show the video by adding it to the CoLab notebook.

```
import gym
from gym.wrappers import Monitor
import glob
import io
import base64
from IPython.display import HTML
from pyvirtualdisplay import Display
from IPython import display as ipythondisplay

display = Display(visible=0, size=(1400, 900))
display.start()

"""
Utility functions to enable video recording of gym environment
and displaying it.
To enable video, just do "env = wrap_env(env)"
"""

def show_video():
    mp4list = glob.glob('video/*.mp4')
    if len(mp4list) > 0:
        mp4 = mp4list[0]
        video = io.open(mp4, 'r+b').read()
        encoded = base64.b64encode(video)
        ipythondisplay.display(HTML(data='''<video alt="test" autoplay
            loop controls style="height: 400px;">
            <source src="data:video/mp4;base64,{0}" type="video/mp4" />
            </video>'''.format(encoded.decode('ascii'))))
    else:
        print("Could not find video")

def wrap_env(env):
    env = Monitor(env, './video', force=True)
    return env
```

Now we are ready to play the game. We use a simple random agent.

```
#env = wrap_env(gym.make("MountainCar-v0"))
env = wrap_env(gym.make("Atlantis-v0"))

observation = env.reset()

while True:
    env.render()
    # your agent goes here
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        break

env.close()
show_video()
```
# binary url encoding

Can we conveniently convert codes from string to bitmap and back? Yes sir: https://docs.python.org/2/library/binascii.html#binascii.hexlify

```
import pickle
import binascii

test_bools = [True, False, False, True]
bytearray(test_bools)
binascii.hexlify(bytearray(test_bools))
list(binascii.unhexlify(binascii.hexlify(bytearray(test_bools))))
```

Hm, no, that's not really the compact representation I had in mind... oh, this is better: https://docs.python.org/3/library/base64.html

```
bin([True, False])

def bool_list_to_binary_int(bool_list):
    return sum(int(v)*2**i for i,v in enumerate(bool_list[::-1]))

def bool_list_to_binary_repr(bool_list):
    return bin(bool_list_to_binary_int(bool_list))

bool_list_to_binary_repr(test_bools).encode('base64')
str(bool_list_to_binary_int(test_bools)).encode('base64')
bytes(bool_list_to_binary_repr(test_bools).encode('ascii'))

import base64
base64.urlsafe_b64encode(str(bool_list_to_binary_int(test_bools)))

import BitVector
bv = BitVector.BitVector(bitlist=test_bools)
print(bv)

import io
import codecs
```

Removed the io tests again.

```
bv = BitVector.BitVector(bitlist=test_bools * 2)
print(bv)
with open('testhy.txt', 'wb') as fh:
    bv.write_to_file(fh)
with open('testhy.txt', 'rb') as fh:
    brap = fh.read()
proep = codecs.encode(brap, 'base64')
brap, proep

bv_lang = BitVector.BitVector(bitlist=test_bools * 8)
print(bv_lang)
with open('testlang.txt', 'wb') as fh:
    bv_lang.write_to_file(fh)
with open('testlang.txt', 'rb') as fh:
    brap = fh.read()
proep = codecs.encode(brap, 'base64')
brap, proep

bv_langer = BitVector.BitVector(bitlist=test_bools * 16)
with open('testlanger.txt', 'wb') as fh:
    bv_langer.write_to_file(fh)
with open('testlanger.txt', 'rb') as fh:
    brap = fh.read()
proep = codecs.encode(brap, 'base64')
brap, proep

bv_langst = BitVector.BitVector(bitlist=test_bools * 32)
with open('testlangst.txt', 'wb') as fh:
    bv_langst.write_to_file(fh)
with open('testlangst.txt', 'rb') as fh:
    brap = fh.read()
proep = codecs.encode(brap, 'base64')
brap, proep

bv_lung = BitVector.BitVector(bitlist=[1, 1, 1, 0] * 64 * 3)
with open('testlung.txt', 'wb') as fh:
    bv_lung.write_to_file(fh)
with open('testlung.txt', 'rb') as fh:
    brap = fh.read()
proep = codecs.encode(brap, 'base64')
brap, proep
```

The `=`'s at the end appear because base64 stores 3x8=24 bits in 4 characters, i.e. 6 bits per character. So if your data isn't a multiple of that, you get an `=` at the end, probably something like an empty extra character. Indeed, see here, it's padding: https://stackoverflow.com/a/36571117/1199693. You can therefore leave them out in a URL; you just have to add them back when the encoded string's length isn't a multiple of 4.

```
codecs.encode([True, False], 'base64')

byte_file_like = io.BytesIO()
dinkie = [True, False, False, True, True, False, False, True]
byte_file_like.write(bytes(codecs.encode(str(dinkie), encoding='utf-8')))
byte_file_like.seek(0)
byte_file_like.read()
```
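The strip-and-restore padding idea can be wrapped up in two small helpers. This is a sketch of my own (the function names are not from any library): pack the boolean list into bytes, base64url-encode without padding, and re-add the `=` padding on decode so the length is a multiple of 4 again.

```python
import base64

def bools_to_urlsafe(bool_list):
    # Pack the booleans big-endian into an int, then into bytes (8 bools per
    # byte), base64url-encode, and strip the '=' padding for use in a URL.
    as_int = sum(int(v) << i for i, v in enumerate(reversed(bool_list)))
    raw = as_int.to_bytes((len(bool_list) + 7) // 8, 'big')
    return base64.urlsafe_b64encode(raw).rstrip(b'=').decode('ascii')

def urlsafe_to_bytes(s):
    # Base64 input must be a multiple of 4 characters: restore stripped '='.
    return base64.urlsafe_b64decode(s + '=' * (-len(s) % 4))
```

For example, `bools_to_urlsafe([True, False, False, True])` packs to the byte `0x09` and encodes as `'CQ'` (padding stripped), and `urlsafe_to_bytes('CQ')` recovers `b'\x09'`.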
```
#export
import os, k1lib, torch.nn as nn, torch, dill, traceback
from k1lib.callbacks import Cbs
from time import time as _time
__all__ = ["CancelRunException", "CancelEpochException", "CancelBatchException", "Learner"]

#export
class CancelRunException(Exception):
    """Used in core training loop, to skip the run entirely"""
    pass
class CancelEpochException(Exception):
    """Used in core training loop, to skip to next epoch"""
    pass
class CancelBatchException(Exception):
    """Used in core training loop, to skip to next batch"""
    pass

#export
class Learner:
    def __init__(self):
        self._model = None; self._data = None; self._opt = None
        self._cbs = None; self.fileName = None
        self.css = "*"; self.exceptionRaised = None # slowly pops
        self.cbs = k1lib.Callbacks().withBasics().withQOL().withAdvanced()
    @property
    def model(self):
        """Set this to change the model to run"""
        return self._model
    @model.setter
    def model(self, model): self._model = model
    @property
    def data(self):
        """Set this to change the data (list of 2 dataloader) to run against."""
        return self._data
    @data.setter
    def data(self, data): self._data = data
    @property
    def opt(self):
        """Set this to change the optimizer. If you're making your own
        optimizers, beware to follow the PyTorch's style guide as there are
        callbacks that modifies optimizer internals while training like
        :class:`k1lib.schedule.ParamScheduler`."""
        return self._opt
    @opt.setter
    def opt(self, opt): self._opt = opt
    @property
    def cbs(self):
        """The :class:`~k1lib.callbacks.callbacks.Callbacks` object.
        Initialized to include all the common callbacks. You can set a new one
        if you want to."""
        return self._cbs
    @cbs.setter
    def cbs(self, cbs): cbs.l = self; self._cbs = cbs
    @property
    def css(self) -> str:
        """The css selector string. Set this to select other parts of the
        network. After setting, you can access the selector like this:
        :code:`l.selector`

        See also: :class:`~k1lib.selector.ModuleSelector`"""
        return self._css
    @css.setter
    def css(self, css:str):
        self._css = css
        if self.model != None: self.selector = k1lib.selector.select(self.model, self.css)
    @property
    def lossF(self):
        """Set this to specify a loss function."""
        raise NotImplementedError("lossF actually doesn't really exist. Used to exist as a core part of Learner, but then has been converted to Cbs.LossF")
    @lossF.setter
    def lossF(self, lossF):
        if hasattr(self.cbs, "LossF"): self.cbs.LossF.lossF = lossF
        else: self.cbs.add(Cbs.LossF(lossF))
    def __getattr__(self, attr):
        if attr == "cbs": raise AttributeError()
        return getattr(self.cbs, attr)
    def __getstate__(self):
        answer = dict(self.__dict__); del answer["selector"]; return answer
    def __setstate__(self, state):
        self.__dict__.update(state)
        self.css = self.css; self.cbs.l = self
    def evaluate(self): pass # supposed to be overriden, to provide functionality here
    @property
    def _warnings(self):
        warnings = "Warning: no model yet. Set using `l.model = ...`\n" if self.model == None else ""
        lossClasses = tuple([*k1lib.Callback.lossCls])
        lossFnCbs = [True for cb in self.cbs if isinstance(cb, lossClasses)]
        warnings += "Warning: no loss function callback detected (or you set `lossF` already but then erased all callbacks)! Set using `l.lossF = ...` or `l.cbs.add(Cbs.LossF(...))`\n" if len(lossFnCbs) == 0 else ""
        warnings += "Warning: no data yet. Set using `l.data = ...`\n" if self.data == None else ""
        warnings += "Warning: no optimizer yet. Set using `l.opt = ...`\n" if self.opt == None else ""
        if warnings != "": warnings += "\n\n"
        return warnings
    def __dir__(self):
        answer = list(super().__dir__())
        answer.extend(self.cbs.cbsDict.keys()); return answer
    def __repr__(self):
        return f"""{self._warnings}l.model:\n{k1lib.tab(k1lib.limitLines(str(self.model)))}
l.opt:\n{k1lib.tab(k1lib.limitLines(str(self.opt)))}
l.cbs:\n{k1lib.tab(k1lib.limitLines(self.cbs.__repr__()))}
Use...
- l.model = ...: to specify a nn.Module object
- l.data = ...: to specify data object
- l.opt = ...: to specify an optimizer
- l.lossF = ...: to specify a loss function
- l.css = ...: to select modules using CSS. "#root" for root model
- l.cbs = ...: to use a custom `Callbacks` object
- l.selector: to get the modules selected by `l.css`
- l.run(epochs): to run the network
- l.Loss: to get a specific callback, this case "Loss"\n\n"""

#export
@k1lib.patch(Learner)
def save(self, fileName:str=None):
    """Saves this :class:`Learner` to file. See also: :meth:`load`

    :param fileName: if empty, then will save as "learner-0.pth", with 0
        changeable to avoid conflicts. If resave this exact :class:`Learner`,
        then use the old name generated before"""
    self.fileName = fileName or self.fileName
    if self.fileName == None:
        files = [file for file in os.listdir() if file.startswith("learner") and file.endswith(".pth")]
        files = set([int(file.split(".pth")[0].split("learner-")[1]) for file in files])
        count = 0
        while count in files: count += 1
        self.fileName = f"l-{count}.pth"
    torch.save(self, self.fileName, pickle_module=dill)
    print(f"Saved to {self.fileName}")
@k1lib.patch(Learner, static=True)
def load(fileName:str=None):
    """Loads a :class:`Learner` from a file. See also: :meth:`save`

    :param fileName: if empty, then will prompt for file name"""
    f = fileName or input("Enter learner file name to load:")
    print(f"Loaded from {f}"); return torch.load(f, pickle_module=dill)

#export
@k1lib.patch(Learner)
def _run1Batch(self):
    self.cbs("startBatch")
    try:
        self.cbs("startPass", "inPass", "endPass")
        self.cbs("startLoss", "inLoss", "endLoss")
        if not self.cbs("startBackward"): self.lossG.backward()
        if not self.cbs("startStep"): self.opt.step()
        if not self.cbs("startZeroGrad"): self.opt.zero_grad(set_to_none=True)
    except k1lib.CancelBatchException as ex:
        self.cbs("cancelBatch"); print(f"Batch cancelled: {ex}.")
    except (k1lib.CancelEpochException, k1lib.CancelRunException) as ex:
        # makes sure cancelBatch and endBatch gets called, for potential
        # cleanups, then reraise the exception
        self.cbs("cancelBatch", "endBatch"); raise ex
    self.cbs("endBatch")

#export
class DI: # data interceptor, just to record data loading times
    def __init__(self, l:Learner, data): self.l = l; self.data = data
    def __len__(self): return len(self.data)
    def __iter__(self):
        try:
            data = iter(self.data); timings = self.l.cbs.timings
            while True:
                beginTime = _time(); d = next(data)
                timings.loadData += _time() - beginTime; yield d
        except StopIteration: pass

#export
@k1lib.patch(Learner)
def _run1Epoch(self):
    self.cbs("startEpoch")
    try:
        train, valid = self.data; train = DI(self, train); valid = DI(self, valid)
        try: self.batches = len(train) + len(valid)
        except: pass
        self.model.train()
        for self.batch, (self.xb, self.yb, *self.metab) in enumerate(train): self._run1Batch()
        trainLen = self.batch + 1
        if not self.cbs("startValidBatches"):
            self.model.eval()
            for self.batch, (self.xb, self.yb, *self.metab) in enumerate(valid):
                self.batch += trainLen; self._run1Batch()
        if self.batches is None: self.batches = self.batch + 1
    except k1lib.CancelEpochException as ex:
        self.cbs("cancelEpoch"); print(f"Epoch cancelled: {ex}.")
    except k1lib.CancelRunException as ex:
        self.cbs("cancelEpoch", "endEpoch"); raise ex
    self.cbs("endEpoch")

#export
@k1lib.patch(Learner)
def run(self, epochs:int, batches:int=None):
    """Main run function.

    :param epochs: number of epochs to run. 1 epoch is the length of the dataset
    :param batches: if set, then cancels the epoch after reaching the specified batch"""
    if self._warnings != "":
        if not input(f"""You still have these warnings:\n\n{self._warnings}
Do you want to continue? (y/n) """).lower().startswith("y"):
            print("Run ended"); return
    self.epochs = epochs; self.batches = None
    self.css = self.css # update module selector
    with self.cbs.context():
        if batches != None: self.cbs.add(Cbs.BatchLimit(batches))
        self.cbs("startRun")
        try:
            for self.epoch in range(epochs): self._run1Epoch()
        except k1lib.CancelRunException as ex:
            self.cbs("cancelRun"); print(f"Run cancelled: {ex}.")
        self.cbs("endRun"); return self

#export
@k1lib.patch(Learner)
def __call__(self, xb, yb=None):
    """Executes just a small batch. Convenience method to query how the
    network is doing.

    :param xb: x batch
    :param yb: y batch. If specified, return (y, loss), else return y alone"""
    oldData = self.data; self.data = [[(xb, (yb or torch.tensor(0)))], []]
    with self.cbs.suspendEval(), self.cbs.context():
        ex = lambda _: k1lib.raiseEx(k1lib.CancelBatchException)
        self.cbs.add(k1lib.Callback().withCheckpoint("startLoss" if yb is None else "startBackward", ex))
        self.run(1, 1)
    self.data = oldData; return self.y if yb is None else (self.y, self.loss)

#export
@k1lib.patch(Learner)
def evaluate(self):
    """Function to visualize quickly how the network is doing. Undefined by
    default, just placed here as a convention, so you have to do something
    like this::

        l = k1lib.Learner()
        def evaluate(self):
            xbs, ybs, ys = self.Recorder.record(1, 3)
            plt.plot(torch.vstack(xbs), torch.vstack(ys))
        l.evaluate = partial(evaluate(l))
    """
    raise NotImplementedError("You have to define evaluate() by yourself")

#export
from k1lib.cli import *
@k1lib.patch(Learner, static=True)
def sample() -> Learner:
    """Creates an example learner, just for simple testing stuff anywhere. The
    network tries to learn the function y=x. Only bare minimum callbacks are
    included."""
    l = Learner(); l.data = k1lib.kdata.FunctionData.main(lambda x: x)
    class Model(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.lin1 = k1lib.knn.LinBlock(1, 3)
            self.lin2 = nn.Linear(3, 1)
        def forward(self, x):
            return ((x[:, None] + 2) | self.lin1 | self.lin2).squeeze()
    l.model = Model()
    l.cbs = k1lib.Callbacks().add(Cbs.CoreNormal()).add(Cbs.Loss()).add(Cbs.ProgressBar())
    l.lossF = lambda y, yb: ((y - yb) ** 2).sum()
    l.opt = torch.optim.Adam(l.model.parameters(), lr=3e-3); return l

l = Learner.sample(); l.run(50); tolerance = 0.5
print(pred := l(torch.tensor([3.0])).item())
assert (3-tolerance) < pred < (3+tolerance)
print(l.Loss.train[-3:])#; l.Loss.plot()[2000:]

!../export.py _learner
```
# Stratified LD Score Regression

This notebook implements the pipeline of [S-LDSC](https://github.com/bulik/ldsc/wiki) for LD score and functional enrichment analysis. It is written by Anmol Singh (singh.anmol@columbia.edu), with input from Dr. Gao Wang.

**FIXME: the initial draft is complete but pending Gao's review and documentation with minimal working example**

The pipeline is developed to integrate GWAS summary statistics data, annotation data, and LD reference panel data to compute functional enrichment for each of the epigenomic annotations that the user provides using the S-LDSC model. We will first start off with an introduction, instructions to set up, and the minimal working examples. Then the workflow code that can be run using SoS on any data will be at the end.

## A brief review on Stratified LD score regression

Here I briefly review LD Score Regression and what it is used for. For more in-depth information on LD Score Regression please read the following three papers:

1. "LD Score regression distinguishes confounding from polygenicity in genome-wide association studies" by Sullivan et al (2015)
2. "Partitioning heritability by functional annotation using genome-wide association summary statistics" by Finucane et al (2015)
3. "Linkage disequilibrium–dependent architecture of human complex traits shows action of negative selection" by Gazal et al (2017)

As stated in Sullivan et al (2015), confounding factors and polygenic effects can cause inflated test statistics, and other methods cannot distinguish between inflation from confounding bias and a true signal. LD Score Regression (LDSC) is a technique that aims to identify the impact of confounding factors and polygenic effects using information from GWAS summary statistics. This approach uses regression to measure the relationship between Linkage Disequilibrium (LD) scores and the test statistics of SNPs from the GWAS summary statistics.
Variants in LD with a "causal" variant show an elevation in test statistics in association analysis proportional to their LD (measured by $r^2$) with the causal variant within a certain window size (could be 1 cM, 1 kb, etc.). In contrast, inflation from confounders such as population stratification that occur purely from genetic drift will not correlate with LD. For a polygenic trait, SNPs with a high LD score will have more significant $\chi^2$ statistics on average than SNPs with a low LD score. Thus, if we regress the $\chi^2$ statistics from GWAS against LD Score, the intercept minus one is an estimator of the mean contribution of confounding bias to the inflation in the test statistics. The regression model is known as LD Score regression.

### LDSC model

Under a polygenic assumption, in which effect sizes for variants are drawn independently from distributions with variance proportional to $1/(p(1-p))$ where $p$ is the minor allele frequency (MAF), the expected $\chi^2$ statistic of variant $j$ is:

$$E[\chi^2|l_j] = Nh^2l_j/M + Na + 1 \quad (1)$$

where $N$ is the sample size; $M$ is the number of SNPs, such that $h^2/M$ is the average heritability explained per SNP; $a$ measures the contribution of confounding biases, such as cryptic relatedness and population stratification; and $l_j = \sum_k r^2_{jk}$ is the LD Score of variant $j$, which measures the amount of genetic variation tagged by $j$. A full derivation of this equation is provided in the Supplementary Note of Sullivan et al (2015). An alternative derivation is provided in the Supplementary Note of Zhu and Stephens (2017) AoAS.

From this we can see that LD Score regression can be used to compute SNP-based heritability for a phenotype or trait from GWAS summary statistics, and does not require genotype information like other methods such as REML do.
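To make equation (1) concrete, here is a small simulation, not part of the original pipeline and with entirely synthetic numbers, that generates $\chi^2$ statistics under the model with no confounding ($a = 0$) and recovers $h^2$ from the regression slope:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, h2 = 50_000, 10_000, 0.4       # sample size, number of SNPs, true heritability
l = rng.uniform(1, 200, size=M)      # LD scores l_j for each SNP
# Equation (1) with a = 0, plus a little noise
chi2 = N * h2 * l / M + 1 + rng.normal(0, 0.5, size=M)

# Regress chi^2 on LD score: slope estimates N*h2/M, intercept estimates 1 + N*a
slope, intercept = np.polyfit(l, chi2, 1)
h2_hat = slope * M / N
```

With no confounding the fitted intercept sits near 1; in real data, an intercept above 1 is what LDSC attributes to confounding bias ($Na$).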
### Stratified LDSC

Heritability is the proportion of phenotypic variation (VP) that is due to variation in genetic values (VG), and thus can tell us how much of the difference in observed phenotypes in a sample is due to differences in genetics in the sample. It can also be extended to analyze partitioned heritability for a phenotype/trait split over categories. Partitioned heritability, or Stratified LD Score Regression (S-LDSC), adds more power to our analysis by leveraging LD Score information as well as SNPs that haven't reached genome-wide significance to partition heritability for a trait over categories, which many other methods do not do.

S-LDSC relies on the fact that the $\chi^2$ association statistic for a given SNP includes the effects of all SNPs tagged by this SNP, meaning that in a region of high LD in the genome the given SNP from the GWAS represents the effects of a group of SNPs in that region. S-LDSC determines that a category of SNPs is enriched for heritability if SNPs with high LD to that category have more significant $\chi^2$ statistics than SNPs with low LD to that category. Here, enrichment of a category is defined as the proportion of SNP heritability in the category divided by the proportion of SNPs in that category.

More precisely, under a polygenic model, the expected $\chi^2$ statistic of SNP $j$ is

$$E[\chi^2_j] = N\sum_C \tau_C \ell(j,C) + Na + 1 \quad (2)$$

where $N$ is sample size, $C$ indexes categories, $\ell(j, C) = \sum_{k\in C} r^2_{jk}$ is the LD score of SNP $j$ with respect to category $C$, and $a$ is a term that measures the contribution of confounding biases. If the categories are disjoint, $\tau_C$ is the per-SNP heritability in category $C$; if the categories overlap, then the per-SNP heritability of SNP $j$ is $\sum_{C:j\in C} \tau_C$. Equation 2 allows us to estimate $\tau_C$ via a (computationally simple) multiple regression of $\chi^2$ against $\ell(j, C)$, for either a quantitative or case-control study.
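Equation (2) is likewise just a multiple regression. The following sketch, with synthetic LD scores and $\tau_C$ values chosen for illustration rather than taken from real data, recovers the per-category coefficients with an ordinary least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 50_000, 10_000
# Per-category LD scores for two categories; column 0 plays the "base" role
ld = np.column_stack([rng.uniform(1, 100, M),   # l(j, base)
                      rng.uniform(0, 50, M)])   # l(j, C1)
tau = np.array([2e-5, 8e-5])                    # per-SNP heritability coefficients
# Equation (2) with a = 0, plus noise
chi2 = N * ld @ tau + 1 + rng.normal(0, 0.5, M)

# Multiple regression of chi^2 against the per-category LD scores
X = np.column_stack([ld, np.ones(M)])           # last column is the intercept
sol, *_ = np.linalg.lstsq(X, chi2, rcond=None)
tau_hat = sol[:2] / N                           # divide out the sample size
```

In the real software the regression is weighted and the coefficients are turned into enrichments (share of $h^2$ in a category over its share of SNPs); this unweighted fit only illustrates the estimating equation.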
To see how these methods have been applied to real-world data, as well as for a further discussion of the methods and comparisons to other methods, please read the three papers listed at the top of the document.

## Command Interface

```
!sos run LDSC_Code.ipynb -h
```

## Make Annotation File

```
[make_annot]
# Make Annotated Bed File
# path to bed file
parameter: bed = str
#path to bim file
parameter: bim = str
#name of output annotation file
parameter: annot = str
bash: expand = True
    make_annot.py --bed-file {bed} --bimfile {bim} --annot-file {annot}
```

## Munge Summary Statistics (Option 1: No Signed Summary Statistic)

```
#This option is for when the summary statistic file does not contain a signed summary statistic (Z or Beta).
#In this case, the program will calculate Z for you based on A1 being the risk allele
[munge_sumstats_no_sign]
#path to summary statistic file
parameter: sumst = str
#path to Hapmap3 SNPs file, keep all columns (SNP, A1, and A2) for the munge_sumstats program
parameter: alleles = "w_hm3.snplist"
#path to output file
parameter: output = str
bash: expand = True
    munge_sumstats.py --sumstats {sumst} --merge-alleles {alleles} --out {output} --a1-inc
```

## Munge Summary Statistics (Option 2: Signed Summary Statistic)

```
# This option is for when the summary statistic file does contain a signed summary statistic (Z or Beta)
[munge_sumstats_sign]
#path to summary statistic file
parameter: sumst = str
#path to Hapmap3 SNPs file, keep all columns (SNP, A1, and A2) for the munge_sumstats program
parameter: alleles = "w_hm3.snplist"
#path to output file
parameter: output = str
bash: expand = True
    munge_sumstats.py --sumstats {sumst} --merge-alleles {alleles} --out {output}
```

## Calculate LD Scores

**Make sure to delete SNP, CHR, and BP columns from annotation files if they are present, otherwise this code will not work.
Before deleting, if these columns are present, make sure that the annotation file is sorted.**

```
#Calculate LD Scores
#Make sure to delete SNP, CHR, and BP columns from annotation files if they are present otherwise this code will not work. Before deleting, if these columns are present, make sure that the annotation file is sorted.
[calc_ld_score]
#Path to bim file
parameter: bim = str
#Path to annotation File. Make sure to remove the SNP, CHR, and BP columns from the annotation file if present before running.
parameter: annot_file = str
#name of output file
parameter: output = str
#path to Hapmap3 SNPs file, remove the A1 and A2 columns for the Calculate LD Scores program
parameter: snplist = "w_hm3.snplist"
bash: expand = True
    ldsc.py --bfile {bim} --l2 --ld-wind-cm 1 --annot {annot_file} --thin-annot --out {output} --print-snps {snplist}
```

## Calculate Functional Enrichment using Annotations

```
#Calculate Enrichment Scores for Functional Annotations
[calc_enrichment]
#Path to Summary statistics File
parameter: sumstats = str
#Path to Reference LD Scores Files (Base Annotation + Annotation you want to analyze, format like minimal working example)
parameter: ref_ld = str
#Path to LD Weight Files (Format like minimal working example)
parameter: w_ld = str
#path to frequency files (Format like minimal working example)
parameter: frq_file = str
#Output name
parameter: output = str
bash: expand = True
    ldsc.py --h2 {sumstats} --ref-ld-chr {ref_ld} --w-ld-chr {w_ld} --overlap-annot --frqfile-chr {frq_file} --out {output}
```
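A side note on the munging step above: when the input lacks a signed statistic, the munging program derives a Z score from each SNP's p-value, signed by the direction of effect on A1. A stdlib-only sketch of that conversion (the function name is mine, not part of ldsc, and two-sided p-values are assumed):

```python
import math
from statistics import NormalDist

def z_from_p(p, beta):
    """Convert a two-sided p-value to a signed Z score.
    Magnitude from the inverse normal CDF; sign from the effect on A1."""
    return math.copysign(NormalDist().inv_cdf(1 - p / 2), beta)
```

For example, `z_from_p(0.05, 1.0)` is about 1.96, and a negative effect estimate flips the sign.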
```
import nltk
import numpy as np
from sklearn.preprocessing import normalize
from nltk.corpus import genesis as gen
from nltk import pos_tag

np.random.seed(10000)

# build a word <-> index mapping over the alphabetic tokens of the Genesis corpus
word_corpus = {}
word_corpus_rev = {}
num = 0
for word in nltk.corpus.genesis.words():
    if word not in word_corpus and word.isalpha():
        word_corpus_rev[num] = word
        word_corpus[word] = num
        num += 1
len(word_corpus_rev)

def generate_random(a):
    # randomise 10 entries of the row/column before l1 normalisation
    index = np.random.permutation(len(a))[0:10]
    a[index] = np.random.random(10)
    return a

# Penn Treebank tag set (including the PRP$ and WP$ possessive tags)
states = ['CC', 'CD', 'DT', 'EX', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD',
          'NN', 'NNS', 'NNP', 'NNPS', 'PDT', 'POS', 'PRP', 'PRP$', 'RB', 'RBR',
          'RBS', 'RP', 'TO', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ',
          'WDT', 'WP', 'WP$', 'WRB']

transition = np.zeros((len(states), len(states)))
emission = np.zeros((len(word_corpus), len(states)))
initial = np.zeros((len(states), 1))

transition = normalize(np.apply_along_axis(generate_random, 0, transition), norm='l1')
emission = normalize(np.apply_along_axis(generate_random, 1, emission), norm='l1')
initial = normalize(np.apply_along_axis(generate_random, 0, initial), norm='l1')

# transition = normalize(np.random.random((len(states), len(states))), norm='l1') * 100
# emission = normalize(np.random.random((len(word_corpus), len(states))), norm='l1') * 100
# initial = normalize(np.random.random((len(states), 1)), norm='l1')

print(emission.T.shape)

transitionc = {}
for i, condition in enumerate(states):
    dic = {}
    for j, state in enumerate(states):
        dic[state] = transition[i][j]
    transitionc[condition] = nltk.probability.DictionaryProbDist(dic)
transitionc = nltk.probability.DictionaryConditionalProbDist(transitionc)

emissionc = {}
for i, condition in enumerate(list(word_corpus)):
    dic = {}
    for j, state in enumerate(states):
        dic[state] = emission[i][j]
    emissionc[condition] = nltk.probability.DictionaryProbDist(dic)
emissionc = nltk.probability.DictionaryConditionalProbDist(emissionc)

initialc = {}
for i, condition in enumerate(states):
    initialc[condition] = initial[i]
initialc = nltk.probability.DictionaryProbDist(initialc)
emissionc

model = nltk.tag.hmm.HiddenMarkovModelTagger(list(word_corpus), states, transitionc, emissionc, initialc)

# a = nltk.pos_tag(nltk.corpus.genesis.sents())
obs_nltk = []
obs_blake = []
obs_vidur = []
lens = []
test = []
for sentence in gen.sents():
    sen = []
    sent = []
    for word in sentence:
        if word.isalpha():
            # test.append(word)
            sen.append((word, pos_tag([word])[0][1]))
            obs_blake.append(word_corpus[word])
            sent.append(word_corpus[word])
    lens.append(len(sen))
    obs_nltk.append(sen)
    obs_vidur.append(sent)
obs_nltk

print(len(set(test)), len(set(list(word_corpus))))
print(set(test) - set(word_corpus))

import time

# drop empty sentences so the custom HMM trainer only sees valid sequences
index = []
obs_vidur = np.array(obs_vidur, dtype=object)
for i, item in enumerate(lens):
    if item < 1:
        print(i)
        index.append(i)
lens = np.delete(lens, index)
obs_vidur = np.delete(obs_vidur, index)

a = []
for sentence in gen.sents():
    sen = []
    for word in sentence:
        if word.isalpha():
            sen.append((word, pos_tag([word])[0][1]))
    a.append(sen)

trainer = nltk.tag.hmm.HiddenMarkovModelTrainer()
start = time.perf_counter()
trainer.train_unsupervised(unlabeled_sequences=obs_nltk, model=model, max_iterations=1)
print(time.perf_counter() - start)

%load_ext autoreload
%autoreload 2
from HMM import *
model2 = hmm(len(states), 25513)
model2.custom(initial, transition, emission.T)
start = time.perf_counter()
for i in range(2):
    model2.fit(obs_vidur, lens)
time.perf_counter() - start

obs_blake
```
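The cell above leans on sklearn's `normalize(..., norm='l1')` to make the transition and emission matrices stochastic. A minimal numpy sketch of the same row-wise l1 normalisation, for intuition only (the notebook's sklearn call remains the real dependency):

```python
import numpy as np

def l1_normalize_rows(m):
    """Scale each row so its absolute values sum to 1 (row-stochastic),
    mirroring sklearn.preprocessing.normalize(m, norm='l1') along axis=1."""
    sums = np.abs(m).sum(axis=1, keepdims=True)
    sums[sums == 0] = 1.0  # leave all-zero rows untouched instead of dividing by zero
    return m / sums

probs = l1_normalize_rows(np.array([[1.0, 3.0], [2.0, 2.0]]))  # rows sum to 1
```

Each row of `probs` is now a valid categorical distribution, which is exactly the property the HMM's transition and emission matrices need.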
# Generating Multiple Benchmark Flow Traffic Sets

In this example, we write a script which generates multiple benchmark traffic sets in a loop and saves them in `.pickle` format. We will assume we are generating traffic for a `TrafPy` fat tree topology, although of course you can generate traffic for any arbitrary topology defined outside of `TrafPy` (see the documentation and other examples). We will generate the rack distribution sensitivity benchmark data set for a range of loads (the example configuration below sweeps loads 0.1 to 0.3; extend `LOADS` to cover up to 0.5).

```
import trafpy.generator as tpg
from trafpy.benchmarker import BenchmarkImporter

import numpy as np
import time
import os
from collections import defaultdict  # used for initialising arbitrary-depth nested dicts
from sqlitedict import SqliteDict
import json
from pathlib import Path
import gzip
import pickle
```

## 1. Define Generation Configuration

If you were writing this in a script rather than a Jupyter Notebook, you may want to put the next cell in e.g. a `config.py` file and import it into a separate script for conciseness.
```
# -------------------------------------------------------------------------
# general configuration
# -------------------------------------------------------------------------

# define benchmark version
BENCHMARK_VERSION = 'v001'

# define minimum number of demands to generate (may generate more to meet jensen_shannon_distance_threshold and/or min_last_demand_arrival_time)
MIN_NUM_DEMANDS = None
MAX_NUM_DEMANDS = 5000

# define maximum allowed Jensen-Shannon distance for flow size and interarrival time distributions (lower value -> distributions must be more similar -> higher number of demands will be generated) (must be between 0 and 1)
JENSEN_SHANNON_DISTANCE_THRESHOLD = 0.3

# define minimum time of last demand's arrival (helps define minimum simulation time)
MIN_LAST_DEMAND_ARRIVAL_TIME = None

# define network load fractions
LOADS = [round(load, 3) for load in np.arange(0.1, 0.4, 0.1).tolist()]  # ensure no python floating point arithmetic errors

# define whether or not TrafPy should auto-correct invalid node distribution(s)
AUTO_NODE_DIST_CORRECTION = True

# slot size (if None, won't generate slots_dict database)
# SLOT_SIZE = None
SLOT_SIZE = 1000.0  # 50.0 1000.0 10.0

# -------------------------------------------------------------------------
# benchmark-specific configuration
# -------------------------------------------------------------------------

BENCHMARKS = ['rack_sensitivity_0',
              'rack_sensitivity_02',
              'rack_sensitivity_04',
              'rack_sensitivity_06',
              'rack_sensitivity_08']

# define network topology for each benchmark
net = tpg.gen_fat_tree(k=4,
                       L=2,
                       n=8,
                       num_channels=1,
                       server_to_rack_channel_capacity=1250,  # 1250
                       rack_to_edge_channel_capacity=1000,
                       edge_to_agg_channel_capacity=1000,
                       agg_to_core_channel_capacity=2000)
NETS = {benchmark: net for benchmark in BENCHMARKS}

# define network capacity for each benchmark
NETWORK_CAPACITIES = {benchmark: net.graph['max_nw_capacity'] for benchmark in BENCHMARKS}
NETWORK_EP_LINK_CAPACITIES = {benchmark:
net.graph['ep_link_capacity'] for benchmark in BENCHMARKS} # define network racks for each benchmark RACKS_DICTS = {benchmark: net.graph['rack_to_ep_dict'] for benchmark in BENCHMARKS} ``` ## 2. Write a Function to Generate the Benchmark Traffic This function should use the above configuration variables to generate traffic for each of our benchmarks as required. ``` def gen_benchmark_demands(path_to_save=None, load_prev_dists=True, overwrite=False): ''' If slot size is not None, will also generate an sqlite database for the slots_dict dictionary. This is useful if later during simulations want to have pre-computed slots_dict rather than computing & storing them in memory. ''' if path_to_save[-1] == '/' or path_to_save[-1] == '\\': path_to_save = path_to_save[:-1] # init benchmark importer importer = BenchmarkImporter(BENCHMARK_VERSION, load_prev_dists=load_prev_dists) # load distributions for each benchmark benchmark_dists = {benchmark: {} for benchmark in BENCHMARKS} nested_dict = lambda: defaultdict(nested_dict) benchmark_demands = nested_dict() # begin generating data for each benchmark num_loads = len(LOADS) start_loops = time.time() print('\n~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*') print('Benchmarks to Generate: {}'.format(BENCHMARKS)) print('Loads to generate: {}'.format(LOADS)) for benchmark in BENCHMARKS: print('~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*') print('Generating demands for benchmark \'{}\'...'.format(benchmark)) # get racks and endpoints racks_dict = RACKS_DICTS[benchmark] if racks_dict is not None: eps_racks_list = [eps for eps in racks_dict.values()] eps = [] for rack in eps_racks_list: for ep in rack: eps.append(ep) else: eps = NETS[benchmark].graph['endpoints'] start_benchmark = time.time() load_counter = 1 benchmark_dists[benchmark] = importer.get_benchmark_dists(benchmark, eps, racks_dict=racks_dict) for load in LOADS: start_load = time.time() network_load_config = {'network_rate_capacity': 
NETWORK_CAPACITIES[benchmark], 'ep_link_capacity': NETWORK_EP_LINK_CAPACITIES[benchmark], 'target_load_fraction': load, 'disable_timeouts': True} print('Generating demand data for benchmark {} load {}...'.format(benchmark, load)) if benchmark_dists[benchmark]['num_ops_dist'] is not None: # job-centric use_multiprocessing = True else: # flow-centric use_multiprocessing = False demand_data = tpg.create_demand_data(min_num_demands=MIN_NUM_DEMANDS, max_num_demands=MAX_NUM_DEMANDS, eps=eps, node_dist=benchmark_dists[benchmark]['node_dist'], flow_size_dist=benchmark_dists[benchmark]['flow_size_dist'], interarrival_time_dist=benchmark_dists[benchmark]['interarrival_time_dist'], num_ops_dist=benchmark_dists[benchmark]['num_ops_dist'], c=3, jensen_shannon_distance_threshold=JENSEN_SHANNON_DISTANCE_THRESHOLD, network_load_config=network_load_config, min_last_demand_arrival_time=MIN_LAST_DEMAND_ARRIVAL_TIME, auto_node_dist_correction=AUTO_NODE_DIST_CORRECTION, use_multiprocessing=use_multiprocessing, print_data=False) file_path = path_to_save + '/benchmark_{}_load_{}'.format(benchmark, load) tpg.pickle_data(path_to_save=file_path, data=demand_data, overwrite=overwrite) # reset benchmark demands dict to save memory benchmark_demands = nested_dict() if SLOT_SIZE is not None: # generate slots dict and save as database print('Creating slots_dict database with slot_size {}...'.format(SLOT_SIZE)) s = time.time() demand = tpg.Demand(demand_data, eps=eps) with SqliteDict(file_path+'_slotsize_{}_slots_dict.sqlite'.format(SLOT_SIZE)) as slots_dict: for key, val in demand.get_slots_dict(slot_size=SLOT_SIZE, include_empty_slots=True, print_info=True).items(): if type(key) is not str: slots_dict[json.dumps(key)] = val else: slots_dict[key] = val slots_dict.commit() slots_dict.close() e = time.time() print('Created slots_dict database in {} s'.format(e-s)) else: pass end_load = time.time() print('Generated \'{}\' demands for load {} of {} in {} seconds.'.format(benchmark, load_counter, 
num_loads, end_load-start_load)) load_counter += 1 end_benchmark = time.time() print('Generated demands for benchmark \'{}\' in {} seconds.'.format(benchmark, end_benchmark-start_benchmark)) end_loops = time.time() print('Generated all benchmarks in {} seconds.'.format(end_loops-start_loops)) return benchmark_demands ``` ## 3. Generate the Benchmark Traffic We will generate each of our traffic sets 2x to enable us to run 2 repeat experiments for each set ``` for _set in range(2): path_to_save = '../data/generate_multiple_benchmark_traffic_sets/set_{}_benchmark_data'.format(_set) Path(path_to_save).mkdir(exist_ok=True, parents=True) benchmark_demands = gen_benchmark_demands(path_to_save=path_to_save, load_prev_dists=False, overwrite=False) ```
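`JENSEN_SHANNON_DISTANCE_THRESHOLD` in the configuration above bounds how dissimilar the generated and target distributions may be. Here is a self-contained numpy sketch of the Jensen-Shannon distance (log base 2, so it lies in [0, 1]); this mirrors the standard definition, not TrafPy's internal implementation:

```python
import numpy as np

def jensen_shannon_distance(p, q):
    """Square root of the Jensen-Shannon divergence between two discrete
    distributions: 0 for identical distributions, 1 for disjoint support."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)  # mixture distribution

    def kl(a, b):
        # Kullback-Leibler divergence in bits, skipping zero-probability bins
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))
```

A lower threshold therefore forces the generator to keep drawing demands until the empirical flow size and interarrival time distributions sit closer to the targets.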
Getting started with predictsignauxfaibles - Training a logistic regression
===

In this notebook, we'll focus on using predictsignauxfaibles to train a logistic regression, in such a way that much of the code here can be reused to quickly test other models.

In `predictsignauxfaibles`, our models are declared and specified in `models/<MODEL_NAME>/model_conf.py`. Our processing pipeline works as follows:
- fetch input variables for the train, test and prediction sets (where applicable)
- pre-process our data to produce model features
- feed this pre-processed data into a model, producing evaluation metrics and predictions
- log training/testing/prediction statistics

Here we will assume that you wish to train a model that uses the same pre-processing steps as in `models/default/model_conf.py`.

```
from pathlib import Path
import importlib.util
import logging
logging.getLogger().setLevel(logging.INFO)

from sklearn.base import BaseEstimator
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import fbeta_score, balanced_accuracy_score
from sklearn_pandas import DataFrameMapper

import pandas as pd

import predictsignauxfaibles.models
from predictsignauxfaibles.data import SFDataset
from predictsignauxfaibles.config import OUTPUT_FOLDER, IGNORE_NA
from predictsignauxfaibles.pipelines import run_pipeline
from predictsignauxfaibles.utils import load_conf
```

Loading a preprocessing & model configuration
---

The following code fetches the configuration module for the `default` model, so that we can easily access, use and adapt its preprocessing steps, train set and test set.

```
conf = load_conf("default")
```

We can then inspect and modify the current configuration:
- `conf.VARIABLES` contains the list of variables to be fetched
- `conf.FEATURES` contains the list of features to be produced from those
variables during pre-processing steps
- `conf.TRANSFO_PIPELINE` contains the pre-processing pipeline, which is a list of `predictsignauxfaibles.Preprocessor` objects. Each preprocessor is defined by a function, a set of inputs and a set of outputs
- `conf.MODEL_PIPELINE` contains a `sklearn.pipeline` with `fit` and `predict` methods

```
conf.VARIABLES
conf.TRANSFO_PIPELINE
conf.FEATURES
conf.MODEL_PIPELINE

train = conf.TRAIN_DATASET
train.sample_size = 1e4

test = conf.TEST_DATASET
test.sample_size = 1e4
```

Fetching data
---

At this point, we have allocated datasets but we have not yet fetched any data into them:

```
conf.TRAIN_DATASET
```

### Option 1: Load data from MongoDB (requires an authorized connection to our database):

```
savepath = None  # change it to a filepath if you wish to save train and test data locally

train.fetch_data().raise_if_empty()
test.fetch_data().raise_if_empty()
logging.info("Successfully loaded Features data from MongoDB")

if savepath is not None:
    train.data.to_csv(f"{savepath}_train.csv")
    test.data.to_csv(f"{savepath}_test.csv")
    logging.info(f"Saved Features extract to {savepath}")
```

### Option 2: Load data from a local file, for instance a csv:

```
train_filepath = "/path/to/train_dataset.csv"
test_filepath = "/path/to/test_dataset.csv"

train.data = pd.read_csv(train_filepath)
logging.info(f"Successfully loaded train data from {train_filepath}")

test.data = pd.read_csv(test_filepath)
logging.info(f"Successfully loaded test data from {test_filepath}")
```

### Option 3: Perform your train/test split a posteriori from a single saved extract from Features:

```
features_filepath = "/path/to/features_extract.csv"

df = SFDataset(
    date_min="2018-01-01",
    date_max="2018-12-31",
    fields=conf.VARIABLES,
    sample_size=2e4,
)
df.data = pd.read_csv(features_filepath)
logging.info(f"Successfully loaded unsplit features data from {features_filepath}")

X_train, X_test, _, _ = train_test_split(
    df.data, df.data["outcome"], test_size=0.33, random_state=42
)
train = SFDataset()
train.data = X_train

test = SFDataset()
test.data = X_test
```

Pre-processing our data
---

To remove any bias in evaluation, our test set should not contain any SIRET that belongs to the same SIREN as any SIRET in the train set:

```
train_siren_set = train.data["siren"].unique().tolist()
test.remove_siren(train_siren_set)
```

We then run the transformation (= pre-processing) pipeline on both sets:

```
train.replace_missing_data().remove_na(ignore=IGNORE_NA)
train.data = run_pipeline(train.data, conf.TRANSFO_PIPELINE)

test.replace_missing_data().remove_na(ignore=IGNORE_NA)
test.data = run_pipeline(test.data, conf.TRANSFO_PIPELINE)
```

Training our model
---

To train any model on our data, you can create and modify your own modeling pipeline:

```
model_pp = conf.MODEL_PIPELINE
fit = model_pp.fit(train.data, train.data["outcome"])
params = fit.get_params()
```

Model evaluation
---

```
def evaluate(
    model, dataset, beta
):  # To be turned into a SFModel method when refactoring models
    """
    Returns evaluation metrics of model evaluated on dataset

    Args:
        model: a sklearn-like model with a predict method
        dataset: a SFDataset whose data attribute holds the evaluation data
    """
    balanced_accuracy = balanced_accuracy_score(
        dataset.data["outcome"], model.predict(dataset.data)
    )
    fbeta = fbeta_score(dataset.data["outcome"], model.predict(dataset.data), beta=beta)
    return {"balanced_accuracy": balanced_accuracy, "fbeta": fbeta}

eval_metrics = evaluate(fit, test, conf.EVAL_BETA)
eval_metrics
```
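To unpack the `fbeta` metric returned by `evaluate`, here is a hand-rolled numpy version of the F-beta score for binary labels. This is a sketch of the standard definition (which is what `sklearn.metrics.fbeta_score` computes); in real code, keep using the sklearn function:

```python
import numpy as np

def fbeta(y_true, y_pred, beta):
    """F-beta score: weighted harmonic mean of precision and recall,
    where recall is weighted beta times as heavily as precision."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
```

With `beta > 1` (as a failure-detection model might use for `conf.EVAL_BETA`), missing a positive (a false negative) is penalised more heavily than raising a false alarm.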
# miRNA-Seq Pipeline

## Be sure to install paramiko and scp with pip before using this notebook

## 1. Configure AWS key pair, data location on S3 and the project information

This cell only contains information that you, the user, should input.

#### String Fields

**s3_input_files_address**: This is an S3 path to the directory containing all of your input fastq files (not to the fastq files themselves). All fastq files must be in the same S3 bucket.

**s3_output_files_address**: This is an S3 path to where you would like the outputs from your project to be uploaded. This will only be the root directory; please see the README for information about exactly how outputs are structured.

**design_file**: This is a path to your design file for this project. Please see the README for the format specification for design files.

**your_cluster_name**: This is the name given to your cluster when it was created using ParallelCluster.

**private_key**: The path to the private key needed to access your cluster.

**project_name**: Name of your project. There should be no whitespace.

**workflow**: The workflow you want to run for this project. For the miRNA-Seq pipeline the only supported workflow is "bowtie2".

**genome**: The name of the reference you want to use for your project. Currently "human" and "mouse" are supported.

#### analysis_steps

This is a set of strings that contains the steps you would like to run. The order of the steps does not matter.

Possible bowtie2 steps: "fastqc", "trim", "cut_adapt", "cut_adapt_fastqc", "align_and_count", "merge_counts", "multiqc"

```
import os
import sys
sys.path.append("../../src/cirrus_ngs")
from util import PipelineManager
from util import DesignFileLoader
from util import ConfigParser

## S3 input and output addresses.
# Notice: DO NOT put a forward slash at the end of your addresses.
s3_input_files_address = "s3://path/to/fastq"
s3_output_files_address = "s3://path/to/output"

## Path to the design file
design_file = "../../data/cirrus-ngs/smallrnaseq_design_example.txt"

## ParallelCluster name
your_cluster_name = "clustername"

## The private key pair for accessing cluster.
private_key = "/path/to/your_aws_key.pem"

## Project information
# Recommended: specify year, month, date, user name and pipeline name (no empty spaces)
project_name = "test_project"

## Workflow information: only "bowtie2" now
workflow = "bowtie2"

## Genome information: currently available genomes: human, mouse
genome = "mouse"

## "fastqc", "trim", "cut_adapt", "align_and_count", "merge_counts", "multiqc"
analysis_steps = {"fastqc",
                  "trim",
                  "cut_adapt",
                  "cut_adapt_fastqc",
                  "align_and_count",
                  "multiqc"}

print("Variables set.")
```

## 2. Create ParallelCluster

The following cell connects to your cluster. Run it before step 3.

```
sys.path.append("../../src/cirrus_ngs")
from awsCluster import ClusterManager, ConnectionManager

## Create a new cluster
master_ip_address = ClusterManager.create_aws_cluster(cluster_name=your_cluster_name)
ssh_client = ConnectionManager.connect_master(hostname=master_ip_address,
                                              username="ec2-user",
                                              private_key_file=private_key)
```

## 3. Run the pipeline

This cell actually executes your pipeline. Make sure that steps 1 and 2 have been completed before running.

```
## DO NOT EDIT BELOW

## print the analysis information
reference_list, tool_list = ConfigParser.parse(os.getcwd())
ConfigParser.print_software_info("SmallRNASeq", workflow, genome, reference_list, tool_list)
print(analysis_steps)

sample_list, group_list, pair_list = DesignFileLoader.load_design_file(design_file)

PipelineManager.execute("SmallRNASeq", ssh_client, project_name, workflow,
                        analysis_steps, s3_input_files_address,
                        sample_list, group_list, s3_output_files_address,
                        genome,  # reference genome name, as set in step 1
                        "NA", pair_list)
```

## 4. Check status of pipeline

This allows you to check the status of your pipeline.
You can specify a step or set the step variable to "all". If you specify a step, it should be one that is in your analysis_steps set. You can toggle how verbose the status checking is by setting the verbose flag (at the end of the second line) to False.

```
step = "all"

PipelineManager.check_status(ssh_client, step, "SmallRNASeq", workflow,
                             project_name, analysis_steps, verbose=True)
```

## 5. Display MultiQC report

### Note: Run the cells below after the multiqc step is done

```
# Download the multiqc html file to local
notebook_dir = os.getcwd().split("notebooks")[0] + "data/"
!aws s3 cp $s3_output_files_address/$project_name/$workflow/multiqc_report.html $notebook_dir

from IPython.display import IFrame
IFrame(os.path.relpath("{}multiqc_report.html".format(notebook_dir)), width="100%", height=1000)
```
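The configuration cell warns that S3 addresses must not end in a forward slash. A tiny, hypothetical guard (not part of cirrus_ngs) that you could run on your inputs before step 3:

```python
def normalize_s3_address(address):
    """Strip trailing forward slashes from an S3 address, per the
    'DO NOT put a forward slash at the end of your addresses' note."""
    if not address.startswith("s3://"):
        raise ValueError("not an S3 address: {}".format(address))
    return address.rstrip("/")

clean_address = normalize_s3_address("s3://path/to/fastq/")  # -> 's3://path/to/fastq'
```

Applying this to both `s3_input_files_address` and `s3_output_files_address` makes the notebook tolerant of copy-pasted paths with trailing slashes.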
<a href="https://colab.research.google.com/github/martin-fabbri/colab-notebooks/blob/master/tree-based-models/decision_tree_01.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Decision Tree

http://www.r2d3.us/visual-intro-to-machine-learning-part-1/

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from graphviz import Source

from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.tree import export_graphviz
from sklearn.tree import DecisionTreeClassifier
```

The algorithm works by starting at the top of the tree (the root node), traversing down its branches and asking a series of questions. In the end it reaches the bottom of the tree (a leaf node) that contains the final outcome. For example, if somebody has poor credit but a high income, then the bank will say yes, it will issue the loan. Our task now is to learn how to generate the tree that creates these decision rules for us.

Thankfully, the core method for learning a decision tree can be viewed as a recursive algorithm. A decision tree can be "learned" by splitting the dataset into subsets based on the values of the input features/attributes. This process is repeated on each derived subset in a recursive manner called recursive partitioning:

1. Start at the tree's root node
2. Select the best rule/feature that splits the data into two subsets (child nodes) for the current node
3. Repeat step 2 on each of the derived subsets until the tree can't be split any further. As we'll later see, we can set restrictions to decide when the tree should stop growing.

There are a few additional details that we need to make more concrete, including how to pick the rule/feature to split on and, because it is a recursive algorithm, when to stop the recursion, in other words, when not to split a node any further.
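The three recursive-partitioning steps above can be sketched in a few dozen lines. This toy implementation (Gini impurity, midpoint thresholds, a `max_depth` stopping rule) is for intuition only; the scikit-learn `DecisionTreeClassifier` used later in this notebook is the real tool:

```python
import numpy as np

def gini_impurity(y):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Try every midpoint between adjacent sorted unique values of each
    feature; return (gain, feature_index, threshold) of the best split."""
    best = None
    parent = gini_impurity(y)
    n = len(y)
    for j in range(X.shape[1]):
        values = np.unique(X[:, j])
        for t in (values[:-1] + values[1:]) / 2:
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            child = (len(left) * gini_impurity(left) + len(right) * gini_impurity(right)) / n
            gain = parent - child  # impurity drop = information gain
            if best is None or gain > best[0]:
                best = (gain, j, t)
    return best

def grow(X, y, depth=0, max_depth=3):
    """Recursive partitioning: stop on pure nodes or at max_depth,
    otherwise split on the best feature/threshold and recurse.
    (A toy: no handling of ties or constant features.)"""
    if gini_impurity(y) == 0 or depth == max_depth:
        return {"leaf": int(np.bincount(y).argmax())}
    _, j, t = best_split(X, y)
    mask = X[:, j] <= t
    return {"feature": j, "threshold": float(t),
            "left": grow(X[mask], y[mask], depth + 1, max_depth),
            "right": grow(X[~mask], y[~mask], depth + 1, max_depth)}

# trivially separable one-feature data: one split at the class boundary
tree = grow(np.array([[1.0], [2.0], [8.0], [9.0]]), np.array([0, 0, 1, 1]))
```

On this data the tree is a single split at the midpoint threshold 5.0 with pure leaves on either side, which is exactly the "select the best rule, then recurse" behaviour described above.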
### Splitting criteria for classification Trees The first question is what is the best rule/feature to split on and how do we measure that? One way to determine this is by choosing the one that maximizes the **Information Gain (IG)** at each split. $$IG(D_{p}, a) = I(D_{p}) - p_{left} I(D_{left}) - p_{right} I(D_{right})$$ - $IG$: Information Gain - $a$: feature to perform the split - $I$: Some impurity measure that we'll look at in the subsequent section - $D_{p}$: training subset of the parent node - $D_{left}$, $D_{right}$ :training subset of the left/right child node - $p_{left}$, $p_{right}$: proportion of parent node samples that ended up in the left/right child node after the split. $\frac{N_{left}}{N_p}$ or $\frac{N_{right}}{N_p}$. Where: - $N_p$: number of samples in the parent node - $N_{left}$: number of samples in the left child node - $N_{right}$: number of samples in the right child node ## Impurity The two most common impurity measure are **entropy** and **gini** index. ### Entropy Entropy is defined as: $$I_E(t) = - \sum_{i =1}^{C} p(i \mid t) \;log_2 \,p(i \mid t)$$ for all non-empty classes, $p(i \mid t) \neq 0$, where: - $p(i \mid t)$ is the proportion (or frequency or probability) of the samples that belong to class $i$ for a particular node $t$ - $C$ is the number of unique class labels The entropy is therefore 0 if all samples at a node belong to the same class, and the entropy is maximal if we have an uniform class distribution. For example, in a binary class setting, the entropy is 0 if $p(i =1 \mid t) =1$ or $p(i =0 \mid t) =1$. And if the classes are distributed uniformly with $p(i =1 \mid t) = 0.5$ and $p(i =0 \mid t) =0.5$ the entropy is 1, which we can visualize by plotting the entropy for binary class setting below. 
``` def entropy(p): return -p * np.log2(p) -(1-p) * np.log2(1-p) plt.rcParams['figure.figsize'] = 8, 6 plt.rcParams['font.size'] = 12 x = np.arange(0.0, 1.0, 0.01) ent = [entropy(p) if p != 0 else None for p in x] plt.plot(x, ent) plt.axhline(y = 1.0, linewidth = 1, color = 'k', linestyle = '--') plt.ylim([ 0, 1.1 ]) plt.xlabel('p(i=1)') plt.ylabel('Entropy'); print(f'Entropy P(0.0000001): {entropy(0.0000001):.6f}') print(f'Entropy P(0.5): {entropy(0.5)}') print(f'Entropy P(0.9999999): {entropy(0.9999999):.6f}') ``` The entropy is 0 if all samples of a node belong to the same class, and the entropy is maximal if we have a uniform class distribution. In other words, the entropy of a node (consist of single class) is zero because the probability is 1 and log (1) = 0. Entropy reaches maximum value when all classes in the node have equal probability. 1. Entropy of a group in which all examples belong to the same class: $$entropy = -1\;log_2 \,1\,=0$$ 1. Entropy of a group with 50% in either class: $$entropy = -0.5\;log_2\,0.5\,-0.5\;log_2\,0.5=1$$ This is a good set for training. So, basically, the entropy attempts to maximize the mutual information (by constructing a equal probability node) in the decision tree. ### Gini Index Gini Index is defined as: $$ \begin{align*} I_G(t) &= \sum_{i =1}^{C} p(i \mid t) \big(1-p(i \mid t)\big) \nonumber \\ &= \sum_{i =1}^{C} p(i \mid t) - p(i \mid t)^2 \nonumber \\ &= \sum_{i =1}^{C} p(i \mid t) - \sum_{i =1}^{C} p(i \mid t)^2 \nonumber \\ &= 1 - \sum_{i =1}^{C} p(i \mid t)^2 \end{align*} $$ Compared to Entropy, the maximum value of the Gini index is 0.5, which occurs when the classes are perfectly balanced in a node. On the other hand, the minimum value of the Gini index is 0 and occurs when there is only one class represented in a node (A node with a lower Gini index is said to be more "pure"). 
```
def gini(p):
    return p * (1 - p) + (1 - p) * (1 - (1 - p))

gi = gini(x)

# plot
for i, lab in zip([ent, gi], ['Entropy', 'Gini Index']):
    plt.plot(x, i, label=lab)

plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.15),
           ncol=3, fancybox=True, shadow=False)
plt.axhline(y=0.5, linewidth=1, color='k', linestyle='--')
plt.axhline(y=1.0, linewidth=1, color='k', linestyle='--')
plt.ylim([0, 1.1])
plt.xlabel('p(i=1)')
plt.ylabel('Impurity')
plt.tight_layout()
```

As we can see from the plot, there is not much difference between the two (they both increase and decrease over a similar range). In practice, the Gini index and entropy typically yield very similar results, and it is often not worth spending much time evaluating decision tree models using different impurity criteria. As for which one to use, consider the Gini index: this way we don't need to compute the log, which can make it a bit computationally faster.

Decision trees can also be used on regression tasks. Instead of using the Gini index or entropy as the impurity function, we use criteria such as the MSE (mean squared error):

$$I_{MSE}(t) = \frac{1}{N_t} \sum_i^{N_t}(y_i - \bar{y})^2$$

Where $\bar{y}$ is the average of the response at node $t$, and $N_t$ is the number of observations that reached node $t$. This simply says: we compute the differences between all $N_t$ observations' responses and the average response, square them and take the average.

### Sample Scenario

Here we'll calculate the entropy score by hand to hopefully make things a bit more concrete. Using the bank loan example again, suppose at a particular node there are 80 observations, of whom 40 were classified as Yes (the bank will issue the loan) and 40 were classified as No. We can first calculate the entropy before making a split:

$$I_E(D_{p}) = - \left( \frac{40}{80} log_2(\frac{40}{80}) + \frac{40}{80} log_2(\frac{40}{80}) \right) = 1$$

Suppose we try splitting on Income and the child nodes turn out to be:
- Left (Income = high): 30 Yes and 10 No
- Right (Income = low): 10 Yes and 30 No

$$I_E(D_{left}) = - \left( \frac{30}{40} log_2(\frac{30}{40}) + \frac{10}{40} log_2(\frac{10}{40}) \right) = 0.81$$

$$I_E(D_{right}) = - \left( \frac{10}{40} log_2(\frac{10}{40}) + \frac{30}{40} log_2(\frac{30}{40}) \right) = 0.81$$

$$IG(D_{p}, Income) = 1 - \frac{40}{80} (0.81) - \frac{40}{80} (0.81) = 0.19$$

Next we repeat the same process and evaluate the split based on splitting by Credit:

- Left (Credit = excellent): 20 Yes and 0 No
- Right (Credit = poor): 20 Yes and 40 No

$$I_E(D_{left}) = - \left( \frac{20}{20} log_2(\frac{20}{20}) + \frac{0}{20} log_2(\frac{0}{20}) \right) = 0$$

$$I_E(D_{right}) = - \left( \frac{20}{60} log_2(\frac{20}{60}) + \frac{40}{60} log_2(\frac{40}{60}) \right) = 0.92$$

$$IG(D_{p}, Credit) = 1 - \frac{20}{80} (0) - \frac{60}{80} (0.92) = 0.31$$

In this case, the algorithm will choose Credit as the feature to split upon. If we had more features, the decision tree algorithm would simply try every possible split and choose the one that maximizes the information gain. If a feature is a continuous variable, we can simply take the unique values of that feature in sorted order, then try all possible split values (thresholds) by using the cutoff point (average) between every two adjacent values (e.g. unique values of 1, 2, 3 result in trying splits at 1.5 and 2.5). Or, to speed up computation, we can bin the unique values into buckets and split on the buckets.
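The arithmetic in this scenario can be verified in a few lines; the class counts below come straight from the worked example:

```python
import numpy as np

def entropy_bits(probs):
    """Entropy in bits; zero-probability terms contribute nothing."""
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]
    return -np.sum(probs * np.log2(probs))

parent = entropy_bits([40 / 80, 40 / 80])  # = 1.0

# split on Income: left has 30 Yes / 10 No, right has 10 Yes / 30 No
ig_income = parent - (40 / 80) * entropy_bits([30 / 40, 10 / 40]) \
                   - (40 / 80) * entropy_bits([10 / 40, 30 / 40])

# split on Credit: left has 20 Yes / 0 No, right has 20 Yes / 40 No
ig_credit = parent - (20 / 80) * entropy_bits([20 / 20, 0 / 20]) \
                   - (60 / 80) * entropy_bits([20 / 60, 40 / 60])

print(round(ig_income, 2), round(ig_credit, 2))  # 0.19 0.31, matching the text
```

Since 0.31 > 0.19, the Credit split wins, exactly as derived by hand.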
## Visualizing a Decision Tree ``` # load a sample dataset iris = load_iris() X = iris.data y = iris.target clf = DecisionTreeClassifier(criterion = 'entropy', min_samples_split = 10, max_depth = 3) clf.fit(X, y) y_pred = clf.predict(X) print('classification distribution: ', np.bincount(y_pred)) print('accuracy score: ', accuracy_score(y, y_pred)) # visualize the decision tree # export it as .dot file, other common parameters include # `rounded` (boolean to round the score on each node) export_graphviz(clf, feature_names = iris.feature_names, filled = True, class_names = iris.target_names, out_file = 'tree.dot') # read it in and visualize it, or if we wish to # convert the .dot file into other formats, we can do: # import os # os.system('dot -Tpng tree.dot -o tree.jpeg') with open('tree.dot') as f: dot_graph = f.read() Source(dot_graph) ```
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # set the graphs to show in the jupyter notebook %matplotlib inline # set seaborn graphs to a better style sns.set(style="ticks") path = 'Online_Retail.csv' online_rt = pd.read_csv(path, encoding = 'latin1') online_rt.head() # group by the Country countries = online_rt.groupby('Country').sum() # sort the value and get the first 10 after UK countries = countries.sort_values(by = 'Quantity',ascending = False)[1:11] # create the plot countries['Quantity'].plot(kind='bar') # Set the title and labels plt.xlabel('Countries') plt.ylabel('Quantity') plt.title('10 Countries with most orders') # show the plot plt.show() online_rt = online_rt[online_rt.Quantity > 0] online_rt.head() # groupby CustomerID customers = online_rt.groupby(['CustomerID','Country']).sum() # there is an outlier with negative price customers = customers[customers.UnitPrice > 0] # get the value of the index and put in the column Country customers['Country'] = customers.index.get_level_values(1) # top three countries top_countries = ['Netherlands', 'EIRE', 'Germany'] # filter the dataframe to just select ones in the top_countries customers = customers[customers['Country'].isin(top_countries)] ################# # Graph Section # ################# # creates the FaceGrid g = sns.FacetGrid(customers, col="Country") # map over a make a scatterplot g.map(plt.scatter, "Quantity", "UnitPrice", alpha=1) # adds legend g.add_legend() #This takes our initial dataframe groups it primarily by 'CustomerID' and secondarily by 'Country'. #It sums all the (non-indexical) columns that have numerical values under each group. 
customers = online_rt.groupby(['CustomerID','Country']).sum().head()

# Here's what it looks like:
customers

sales_volume = online_rt.groupby('Country').Quantity.sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4]  # we are excluding UK
top3

online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()

grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])

# select the columns as a list (selecting with a bare tuple is no longer supported by pandas)
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity

# get the values of the index and put them in the column Country
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()

# create the FacetGrid
g = sns.FacetGrid(plottable, col="Country")

# map over it and make a scatterplot
g.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)

# add the legend
g.add_legend();
```
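One subtlety worth noting: `AvgPrice` is total `Revenue` divided by total `Quantity`, i.e. a quantity-weighted average of the unit prices, which can differ a lot from a plain mean of `UnitPrice`. A small sketch with toy data (hypothetical values, same column names as above):

```
import pandas as pd

# two orders from the same customer: 10 units at 1.00 and 1 unit at 10.00
df = pd.DataFrame({
    'CustomerID': [1, 1],
    'Quantity':   [10, 1],
    'UnitPrice':  [1.0, 10.0],
})
df['Revenue'] = df.Quantity * df.UnitPrice

g = df.groupby('CustomerID')[['Quantity', 'Revenue']].sum()
g['AvgPrice'] = g.Revenue / g.Quantity             # weighted: 20 / 11, about 1.82

naive = df.groupby('CustomerID').UnitPrice.mean()  # unweighted: 5.5
```

The weighted figure reflects what the customer actually paid per unit across all orders, which is usually the more meaningful quantity for this kind of scatterplot.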
___
<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
___
# Logistic Regression Project - Solutions

In this project we will be working with a fake advertising data set, indicating whether or not a particular internet user clicked on an advertisement on a company website. We will try to create a model that will predict whether or not they will click on an ad based on the features of that user.

This data set contains the following features:

* 'Daily Time Spent on Site': consumer time on site in minutes
* 'Age': customer age in years
* 'Area Income': Avg. income of the geographical area of the consumer
* 'Daily Internet Usage': Avg. minutes a day the consumer is on the internet
* 'Ad Topic Line': Headline of the advertisement
* 'City': City of the consumer
* 'Male': Whether or not the consumer was male
* 'Country': Country of the consumer
* 'Timestamp': Time at which the consumer clicked on the ad or closed the window
* 'Clicked on Ad': 0 or 1, indicating whether the consumer clicked on the ad

## Import Libraries

**Import a few libraries you think you'll need (or just import them as you go along!)**

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```

## Get the Data

**Read in the advertising.csv file and set it to a data frame called ad_data.**

```
ad_data = pd.read_csv('advertising.csv')
```

**Check the head of ad_data**

```
ad_data.head()
```

**Use info() and describe() on ad_data**

```
ad_data.info()
ad_data.describe()
```

## Exploratory Data Analysis

Let's use seaborn to explore the data! Try recreating the plots shown below!

**Create a histogram of the Age**

```
sns.set_style('whitegrid')
ad_data['Age'].hist(bins=30)
plt.xlabel('Age')
```

**Create a jointplot showing Area Income versus Age.**

```
sns.jointplot(x='Age',y='Area Income',data=ad_data)
```

**Create a jointplot showing the kde distributions of Daily Time spent on site vs.
Age.**

```
sns.jointplot(x='Age',y='Daily Time Spent on Site',data=ad_data,color='red',kind='kde');
```

**Create a jointplot of 'Daily Time Spent on Site' vs. 'Daily Internet Usage'**

```
sns.jointplot(x='Daily Time Spent on Site',y='Daily Internet Usage',data=ad_data,color='green')
```

**Finally, create a pairplot with the hue defined by the 'Clicked on Ad' column feature.**

```
sns.pairplot(ad_data,hue='Clicked on Ad',palette='bwr')
```

# Logistic Regression

Now it's time to do a train test split, and train our model! You'll have the freedom here to choose the columns that you want to train on!

**Split the data into a training set and a testing set using train_test_split**

```
from sklearn.model_selection import train_test_split

X = ad_data[['Daily Time Spent on Site', 'Age', 'Area Income','Daily Internet Usage', 'Male']]
y = ad_data['Clicked on Ad']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
```

**Train and fit a logistic regression model on the training set.**

```
from sklearn.linear_model import LogisticRegression

logmodel = LogisticRegression()
logmodel.fit(X_train,y_train)
```

## Predictions and Evaluations

**Now predict values for the testing data.**

```
predictions = logmodel.predict(X_test)
```

**Create a classification report for the model.**

```
from sklearn.metrics import classification_report

print(classification_report(y_test,predictions))
```
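`classification_report` prints precision, recall, and f1-score per class. For reference, here is a small pure-Python sketch (our own helper, not part of scikit-learn) showing how precision and recall are computed for one class:

```
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall of the `positive` class, given parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual positives, how many were found
    return precision, recall

y_true = [1, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1]
print(precision_recall(y_true, y_pred))  # (0.666..., 0.666...)
```

The f1-score in the report is simply the harmonic mean of these two numbers.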
# Time-Series Visualization using bokeh/plotly

For this assignment, Bokeh has been used as the package for creating plots.

```
# import packages
import pandas as pd
import numpy as np
import datetime
from bokeh.plotting import figure, show, output_file, output_notebook
from bokeh.palettes import Spectral11, colorblind, Inferno, BuGn, brewer
from bokeh.models import HoverTool, value, LabelSet, Legend, ColumnDataSource, LinearColorMapper, BasicTicker, PrintfTickFormatter, ColorBar

# load the data
data = pd.read_csv('Monthly_Property_Crime_2005_to_2015.csv', parse_dates=['Date'])
data.head()

data.Date.min(), data.Date.max()
data.Category.value_counts()

data['Year'] = data.Date.apply(lambda x: x.year)
data['Month'] = data.Date.apply(lambda x: x.month)
data.head()

burglary = data[data.Category == 'BURGLARY'].sort_values(['Date'])
stolen_property = data[data.Category == 'STOLEN PROPERTY'].sort_values(['Date'])
vehicle_theft = data[data.Category == 'VEHICLE THEFT'].sort_values(['Date'])
vandalism = data[data.Category == 'VANDALISM'].sort_values(['Date'])
larceny = data[data.Category == 'LARCENY/THEFT'].sort_values(['Date'])
arson = data[data.Category == 'ARSON'].sort_values(['Date'])
arson.head()

output_notebook()
```

### Bar Chart

This chart is used to analyse the average number of crimes per month. Every month averages between 600 and 800 crimes, with February having the fewest.
```
temp_df = data.groupby(['Month']).mean().reset_index()
temp_df.head()

TOOLS = "hover,save,pan,box_zoom,reset,wheel_zoom,tap"

p = figure(plot_height=350, # plot_width=1000,
           title="Average Number of Crimes by Month",
           tools=TOOLS, toolbar_location='above')

p.vbar(x=temp_df.Month, top=temp_df.IncidntNum, width=0.9)

p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xgrid.grid_line_color = None
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.xaxis.axis_label = 'Month'
p.yaxis.axis_label = 'Average Crimes'

p.select_one(HoverTool).tooltips = [
    ('month', '@x'),
    ('Number of crimes', '@top'),
]

output_file("barchart.html", title="barchart")
show(p)
```

### Line Chart

This plot shows the trend in the number of crimes over the years. It can be seen that the crime rate decreased from 2005 to 2010, with 2010 having the lowest crime rate. From then on, it has kept increasing steadily, with 2015 having the highest number of crimes.

```
temp_df = data.groupby(['Year']).sum().reset_index()
temp_df.head()

TOOLS = 'save,pan,box_zoom,reset,wheel_zoom,hover'

p = figure(title="Year-wise total number of crimes", y_axis_type="linear",
           plot_height = 400, tools = TOOLS, plot_width = 800)
p.xaxis.axis_label = 'Year'
p.yaxis.axis_label = 'Total Crimes'

p.circle(2010, temp_df.IncidntNum.min(), size = 10, color = 'red')
p.line(temp_df.Year, temp_df.IncidntNum, line_color="purple", line_width = 3)

p.select_one(HoverTool).tooltips = [
    ('year', '@x'),
    ('Number of crimes', '@y'),
]

output_file("line_chart.html", title="Line Chart")
show(p)
```

### Stacked bar chart

This chart explores the distribution of crimes among the various categories over the years. In particular, larceny/theft is the most frequently occurring crime, while stolen property occurs the least. 2005 saw a high number of vehicle thefts, which reduced quite a bit subsequently.
```
wide = data.pivot(index='Date', columns='Category', values='IncidntNum')
wide.reset_index(inplace=True)
wide['Year'] = wide.Date.apply(lambda x: x.year)
wide['Month'] = wide.Date.apply(lambda x: x.month)

temp_df = wide.groupby(['Year']).sum().reset_index()
temp_df.head()

cats = ['ARSON','BURGLARY','LARCENY/THEFT','STOLEN PROPERTY','VANDALISM','VEHICLE THEFT']
temp_df.drop(['Month'], axis = 1, inplace=True)
temp_df.head()

TOOLS = "save,pan,box_zoom,reset,wheel_zoom,tap"

source = ColumnDataSource(data=temp_df)

p = figure(plot_width=800, title="Category wise count of crimes by year",
           toolbar_location='above', tools=TOOLS)

colors = brewer['Dark2'][6]
p.vbar_stack(cats, x='Year', width=0.9, color=colors, source=source,
             legend=[value(x) for x in cats])

p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xgrid.grid_line_color = None
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.xaxis.axis_label = 'Year'
p.yaxis.axis_label = 'Total Crimes'
p.legend.location = "top_left"
p.legend.orientation = "horizontal"

output_file("stacked_bar.html", title="Stacked Bar Chart")
show(p)
```

### Heat Map

The heat map shows the total number of crimes by month and year, with darker colours representing a higher number of crimes. Months in 2015 have the highest totals, while the lowest occur in some months of 2009 and 2010.
```
temp_df = data.groupby(['Year', 'Month']).sum().reset_index()
# temp_df['Month_Category'] = pd.concat([temp_df['Month'], temp_df['Category']], axis = 1)
temp_df.head()

TOOLS = "hover,save,pan,box_zoom,reset,wheel_zoom,tap"

hm = figure(title="Month-Year wise crimes", tools=TOOLS, toolbar_location='above')

source = ColumnDataSource(temp_df)
colors = brewer['BuGn'][9]
colors = colors[::-1]
mapper = LinearColorMapper(palette=colors,
                           low=temp_df.IncidntNum.min(),
                           high=temp_df.IncidntNum.max())

hm.rect(x="Year", y="Month", width=2, height=1, source = source,
        fill_color={ 'field': 'IncidntNum', 'transform': mapper },
        line_color=None)

color_bar = ColorBar(color_mapper=mapper, major_label_text_font_size="10pt",
                     ticker=BasicTicker(desired_num_ticks=len(colors)),
                     formatter=PrintfTickFormatter(),
                     label_standoff=6, border_line_color=None, location=(0, 0))
hm.add_layout(color_bar, 'right')

hm.xaxis.axis_label = 'Year'
hm.yaxis.axis_label = 'Month'

hm.select_one(HoverTool).tooltips = [
    ('Year', '@Year'), ('Month', '@Month'),
    ('Number of Crimes', '@IncidntNum')
]

output_file("heatmap.html", title="Heat Map")
show(hm) # open a browser
```

### Multiline Plot

This plot shows the distribution of crimes across categories over the years. It shows information similar to the stacked bar chart, except that here it is easier to note that arson and stolen property account for almost the same number of crimes every year. Similarly, vehicle theft (except for 2005), vandalism and burglary have very similar patterns. Only larceny/theft is increasing with every passing year, and its count is much higher than any of the other types of crimes.
``` TOOLS = 'crosshair,save,pan,box_zoom,reset,wheel_zoom' p = figure(title="Category-wise crimes through Time", y_axis_type="linear",x_axis_type='datetime', tools = TOOLS) p.line(burglary['Date'], burglary.IncidntNum, legend="burglary", line_color="purple", line_width = 3) p.line(stolen_property['Date'], stolen_property.IncidntNum, legend="stolen_property", line_color="blue", line_width = 3) p.line(vehicle_theft['Date'], vehicle_theft.IncidntNum, legend="vehicle_theft", line_color = 'coral', line_width = 3) p.line(larceny['Date'], larceny.IncidntNum, legend="larceny", line_color='green', line_width = 3) p.line(vandalism['Date'], vandalism.IncidntNum, legend="vandalism", line_color="gold", line_width = 3) p.line(arson['Date'], arson.IncidntNum, legend="arson", line_color="magenta",line_width = 3) p.legend.location = "top_left" p.xaxis.axis_label = 'Year' p.yaxis.axis_label = 'Count' output_file("multiline_plot.html", title="Multi Line Plot") show(p) # open a browser ```
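All of the charts above are derived from the same long-format table; the stacked bar chart first reshapes it to one column per category with `pivot`. A minimal sketch of that reshaping on toy data (hypothetical counts, same column names as above):

```
import pandas as pd

# toy long-format crime counts (hypothetical values)
long_df = pd.DataFrame({
    'Date':       pd.to_datetime(['2005-01-01', '2005-01-01', '2006-01-01', '2006-01-01']),
    'Category':   ['ARSON', 'BURGLARY', 'ARSON', 'BURGLARY'],
    'IncidntNum': [3, 10, 4, 12],
})

# one column per category, one row per date
wide = long_df.pivot(index='Date', columns='Category', values='IncidntNum').reset_index()
wide['Year'] = wide.Date.dt.year

# yearly totals per category, in the shape vbar_stack expects
totals = wide.groupby('Year')[['ARSON', 'BURGLARY']].sum()
```

Keeping the reshape separate from the plotting code makes it easy to swap Bokeh for another plotting library without touching the data preparation.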
``` # Code to get landmarks from mediapipe and then create a bounding box around nails to segment them # for further analysis # Use distance transform to further correct the landmarks from mediapipe # See if you can get tip indices from drawing a convex hull around hand?? # NOTE: Current code works on already segmented masks - this preprocessing step is required! # import required libraries import cv2 as cv # opencv import numpy as np # for numerical calculations import synapseclient # synapse login etc import synapseutils # to download files using folder structure import pandas as pd # data frames from matplotlib import pyplot as plt # plotting import mediapipe as mp # for detecting hand landmarks import seaborn as sns # for violin plot import os # for listing files in a folder import timeit # to track program running time import shutil mp_drawing = mp.solutions.drawing_utils mp_hands = mp.solutions.hands # login into Synapse syn = synapseclient.login() ## hand landmark detection using mediapipe hands = mp_hands.Hands(static_image_mode=True, max_num_hands=1, # use one hand at a time min_detection_confidence=0.5) def getTIPLandmarks(multi_hand_landmarks, img_shape): # input is results.multi_hand_landmarks # will focus on all fingers except thumb - i.e index, middle, ring and pinky for hand_landmarks in multi_hand_landmarks: index_finger_tip = [round(hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].x* img_shape[1]), round(hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y* img_shape[0])] middle_finger_tip = [round(hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_TIP].x* img_shape[1]), round(hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_TIP].y* img_shape[0])] ring_finger_tip = [round(hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_TIP].x* img_shape[1]), round(hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_TIP].y* img_shape[0])] pinky_tip = 
[round(hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_TIP].x* img_shape[1]), round(hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_TIP].y* img_shape[0])] tips_dict = {'index': index_finger_tip, 'middle': middle_finger_tip, 'ring': ring_finger_tip, 'pinky': pinky_tip} return(tips_dict) def getDIPLandmarks(multi_hand_landmarks, img_shape): # input is results.multi_hand_landmarks # will focus on all fingers except thumb - i.e index, middle, ring and pinky for hand_landmarks in multi_hand_landmarks: index_finger_dip = [round(hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_DIP].x* img_shape[1]), round(hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_DIP].y* img_shape[0])] middle_finger_dip = [round(hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_DIP].x* img_shape[1]), round(hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_DIP].y* img_shape[0])] ring_finger_dip = [round(hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_DIP].x* img_shape[1]), round(hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_DIP].y* img_shape[0])] pinky_dip = [round(hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_DIP].x* img_shape[1]), round(hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_DIP].y* img_shape[0])] dips_dict = {'index': index_finger_dip, 'middle': middle_finger_dip, 'ring': ring_finger_dip, 'pinky': pinky_dip} return(dips_dict) def getTipsFromHull(tips_landmarks, hull): # input is TIPLandmarks and hull points # For each landmark we find the nearest hull point, i.e matching the hull points # with the tips, so that we can associate which sections of the contour belong # to which finger # index finger index_dist = list(map(lambda x: np.linalg.norm(x-tips_landmarks['index']), hull[0])) index_hull_point = hull[0][index_dist.index(min(index_dist))][0] # middle finger middle_dist = list(map(lambda x: np.linalg.norm(x-tips_landmarks['middle']), hull[0])) middle_hull_point = 
hull[0][middle_dist.index(min(middle_dist))][0] # ring finger ring_dist = list(map(lambda x: np.linalg.norm(x-tips_landmarks['ring']), hull[0])) ring_hull_point = hull[0][ring_dist.index(min(ring_dist))][0] # pinky pinky_dist = list(map(lambda x: np.linalg.norm(x-tips_landmarks['pinky']), hull[0])) pinky_hull_point = hull[0][pinky_dist.index(min(pinky_dist))][0] tips_hulls_dict = {'index': index_hull_point, 'middle': middle_hull_point, 'ring': ring_hull_point, 'pinky': pinky_hull_point} return(tips_hulls_dict) def getClosestPixelInHull(pixel_in, contour_section): # Given a pixel_in and contour_section, output the pixel in the contour_section that is closese to pixel_in index_dist = list(map(lambda x: np.linalg.norm(x-pixel_in), contour_section[0])) index_point = contour_section[0][index_dist.index(min(index_dist))][0] return(index_point) def locatePixelInList(pixel,input_list): # Given a contour find the index of the input pixel (mostly one from the convex hull) temp_list = list([(input_list[0][x][0] == pixel).all() for x in range(len(input_list[0]))]) # gives a list of true/false pixel_index = temp_list.index(max(temp_list)) # pick the true return(pixel_index) def getHand(img, hand= 'left'): """ Split the given two-hand image into a single hand image. 
    :param img: input RGB image
    :param hand: 'left' - left half of the picture
                 'right' - right half of the image
    """
    rows, cols, channels = img.shape
    new_cols = round(cols/2)
    if hand == 'left':
        img_cropped = img[:, 0:new_cols-1, :]
    elif hand == 'right':
        img_cropped = img[:, new_cols:cols-1, :]
    else:
        print('Returning input image')
        img_cropped = img
    return img_cropped

def getBinaryImage(im, min_foreground_pixel = 10, max_foreground_pixel = 255):
    # Thresholds the given image (segmented mask with black background) to give a black-and-white
    # binary image, with the background being black pixels and the foreground (hand) being white pixels
    ### Convert to Gray Scale
    imgray = cv.cvtColor(im, cv.COLOR_BGR2GRAY)
    ### Binary Threshold
    ret, thresh = cv.threshold(imgray, min_foreground_pixel, max_foreground_pixel, cv.THRESH_BINARY)
    return(thresh)

def getContoursAndHull(black_white_im):
    # Given a binary (black and white) image with a segmented hand mask, output the contours, hierarchy
    # and convex hull points
    # returns only the largest contour as this is the one we want for the hand!
### Find contours contours, hierarchy = cv.findContours(black_white_im, cv.RETR_TREE, cv.CHAIN_APPROX_NONE) ### contour lengths contour_lengths = [len(cc) for cc in contours] largest_contour_index = contour_lengths.index(max(contour_lengths)) # subset contours and hierarchy to the largest contour contours = [contours[largest_contour_index]] hierarchy = np.array([[hierarchy[0][largest_contour_index]]]) ### Convex Hull # create hull array for convex hull points hull = [] # calculate points for each contour for i in range(len(contours)): # creating convex hull object for each contour hull.append(cv.convexHull(contours[i], False)) return({'contours':contours, 'hierarchy':hierarchy, 'hull': hull}) def drawConvexHull(black_white_im, contours, hierarchy, hull, draw_Contour = True): # given a black white image, contours and hull, output a image with contours and hull drawn # create an empty black image # drawing = np.zeros((black_white_im.shape[0], black_white_im.shape[1], 3), np.uint8) drawing = black_white_im # draw contours and hull points color_contours = (0, 255, 0) # green - color for contours color = (255, 0, 0) # blue - color for convex hull # draw ith contour if draw_Contour: cv.drawContours(drawing, contours, -1, color_contours, 1, 8, hierarchy) # draw ith convex hull object cv.drawContours(drawing, hull, -1, color, 1, 8) return(drawing) def cropImageFromContour(img, cnt): # Crop the image(img) based on the input of a closed contour(cnt), set of points # adapted from https://www.life2coding.com/cropping-polygon-or-non-rectangular-region-from-image-using-opencv-python/ mask = np.zeros(img.shape[0:2], dtype=np.uint8) # draw the contours on the mask cv.drawContours(mask, [cnt], -1, (255, 255, 255), -1, cv.LINE_AA) res = cv.bitwise_and(img,img,mask = mask) rect = cv.boundingRect(cnt) # returns (x,y,w,h) of the rect cropped = res[rect[1]: rect[1] + rect[3], rect[0]: rect[0] + rect[2]] # ## To get white background cropped image # ## create the white background of the 
same size of original image # wbg = np.ones_like(img, np.uint8)*255 # cv.bitwise_not(wbg,wbg, mask=mask) # # overlap the resulted cropped image on the white background # dst = wbg+res # dst_cropped = dst[rect[1]: rect[1] + rect[3], rect[0]: rect[0] + rect[2]] return(cropped) def cropWarpedRect(img, rect): #### BEST # from https://stackoverflow.com/questions/11627362/how-to-straighten-a-rotated-rectangle-area-of-an-image-using-opencv-in-python/48553593#48553593 # Get center, size, and angle from rect center, size, theta = rect box = cv.boxPoints(rect) box = np.int0(box) width = int(rect[1][0]) height = int(rect[1][1]) src_pts = box.astype("float32") dst_pts = np.array([[0, height-1], [0, 0], [width-1, 0], [width-1, height-1]], dtype="float32") M = cv.getPerspectiveTransform(src_pts, dst_pts) warped = cv.warpPerspective(img, M, (width, height)) if (theta > 45): warped = cv.rotate(warped, cv.ROTATE_90_CLOCKWISE) return(warped) def getContourSubsectionPercentage(contour_in, pixel_in, percent = 5): # get subsection of contour that is 5%(default) left and right to the input pixel input_pixel_location = locatePixelInList(pixel_in, contour_in) nPixel_contour = len(contour_in[0]) section_left = int(max(0, input_pixel_location - round(nPixel_contour*percent/100))) section_right = int(min(nPixel_contour-1, input_pixel_location + round(nPixel_contour*percent/100))) print(section_left) print(section_right) subContour = [np.array(contour_in[0][section_left:section_right])] return(subContour) def getContourSubsection(contour_in, pixel_on_contour, ref_pixel): # Given a pixel on the contour, split the contour into two sections left of the pixel # and right of the pixel. 
# Then find the closest points in both sections (left and right) to the ref_pixel, and # then give subset the contour between these two points # The idea is when we give a contour of the hand, and a point on it corresponding to the # finger tip, we get two sections of the contour, to the left of the finger enf then right # We then find the points closest to these input_pixel_location = locatePixelInList(pixel_on_contour, contour_in) nPixel_contour = len(contour_in[0]) # roll/shift the array so that input_pixel is the middle of array contour_rolled = [np.array(np.roll(contour_in[0],2*(round(nPixel_contour/2)-input_pixel_location)))] section_left = [np.array(contour_rolled[0][0:round(nPixel_contour/2)])] section_right = [np.array(contour_rolled[0][round(nPixel_contour/2):nPixel_contour])] closest_pixel_left = getClosestPixelInHull(ref_pixel, section_left) closest_pixel_right = getClosestPixelInHull(ref_pixel, section_right) closest_pixel_left_location = locatePixelInList(closest_pixel_left, contour_rolled) closest_pixel_right_location = locatePixelInList(closest_pixel_right, contour_rolled) subContour = [np.array(contour_rolled[0][(closest_pixel_left_location-1):closest_pixel_right_location])] subContour = [np.array(np.roll(subContour[0],-2*(round(nPixel_contour/2)-input_pixel_location)))] # subContour = [np.array([[pixel_on_contour], [closest_pixel_left], [closest_pixel_right]])] # return({'left': closest_pixel_left, 'right': closest_pixel_right}) return(subContour) # In the working folder (..../), have the following folder structure # ..../hand_segmentation_psorcast Jun 16 2021 08_51 Dan/ # ..../testHulls # ..../testHulls/left # ..../testHulls/right # ..../segmentedNails # ..../segmentedNails/left # ..../segmentedNails/left_unrotated # ..../segmentedNails/right # ..../segmentedNails/right_unrotated # Function to create required directory def create_directories(): directories = [ "testHulls/left", "testHulls/right", "segmentedNails/left", 
"segmentedNails/left_unrotated", "segmentedNails/right", "segmentedNails/right_unrotated", 'hand_segmentation_psorcast Jun 16 2021 08_51 Dan/' ] dir_paths = [] for directory in directories: dir_path = os.path.join(os.getcwd(), directory) dir_paths.append(dir_path) if not os.path.exists(directory): os.makedirs(dir_path) else: shutil.rmtree(dir_path) os.makedirs(dir_path) return(dir_paths) create_directories() ## Getting hands images from Synapse hand_masks_synapse_id = 'syn25999658' # Folder containing all merged hand images and masks img_all_directory = 'hand_segmentation_psorcast Jun 16 2021 08_51 Dan/' # Local path for storing images from Synapse # Hand mask images form slack (check June 16 - Chat with Dan Webster, Aryton Tediarjo and Meghasyam Tummalacherla) # Also in https://www.synapse.org/#!Synapse:syn25999658 # Download the curated hand image files from Synapse hands_all_files = synapseutils.syncFromSynapse(syn = syn, entity= hand_masks_synapse_id, path= img_all_directory) all_files = os.listdir(img_all_directory) fuse_files = list(filter(lambda x: 'fuse' in x, all_files)) orig_files = list(map(lambda x: x[:-11], fuse_files)) # remove ___fuse.png target_directory = 'testHulls' # directory with mediapipe results target_nails_directory = 'segmentedNails' # directory with segmented nails ## Getting full - resolution hand images from synapse full_res_images_synapse_id = 'syn25837496' # Folder containing full resolution images img_full_res_directory = 'hand_images_full_res' # local path to download the files into # Download the curated hand image files from Synapse hands_all_files = synapseutils.syncFromSynapse(syn = syn, entity= full_res_images_synapse_id, path= img_full_res_directory) all_files = os.listdir(img_all_directory) ## Target Synapse directory for segmented nails nails_syn_target_folder = 'syn25999657' def customSynapseUpload(file_path, file_name): # provenance provenance_set = synapseclient.Activity( name = file_name, used = 'syn25999658', executed = 
'https://github.com/itismeghasyam/psorcastValidationAnalysis/blob/master/feature_extraction/nail_segmentation_mediapipe_pipeline.ipynb') test_entity = synapseclient.File( file_path, parent=nails_syn_target_folder) test_entity = syn.store(test_entity, activity = provenance_set) #### LEFT HAND left_fail_index = 0 left_pass_index = 0 left_nails_rects = {} # save image name and rect dictionaries, i.e image vs nail bounding boxes for index in range(len(fuse_files)): # update path for current image current_image_path = img_all_directory + '/' + fuse_files[index] # current_orig_path = img_all_directory + '/' + orig_files[index] current_orig_path = img_full_res_directory + '/' + orig_files[index] # take full res image in print(current_image_path) # update target path for the contoured image for the current image current_left_target_path = target_directory + '/' + 'left/' + fuse_files[index] # update target path for segmented nails from the current image (ROTATED) current_left_nails_target_path = target_nails_directory + '/' + 'left' # update target path for segmented nails from the current image (UN-ROTATED) current_left_nails_target_path_unrotated = target_nails_directory + '/' + 'left_unrotated' # read images img = cv.imread(current_image_path) orig_img = cv.imread(current_orig_path) # Masks of left and right img (fuse files) left_img_fuse = getHand(img, 'left') # Actual photos left_orig_img = getHand(orig_img, 'left') # clones of actual images left_orig_img_clone = left_orig_img.copy() # resize image [for faster calculation, and mediapipe usually takes in small images of size # 300 x 300][https://github.com/google/mediapipe/blob/master/mediapipe/calculators/image/scale_image_calculator.cc] img_shape = left_orig_img.shape resize_factor = round(300/max(img_shape),3) # resize max(length, width) to 300 pixels left_orig_img_resize = cv.resize(left_orig_img, None, fx = resize_factor , fy = resize_factor, interpolation = cv.INTER_AREA) color_contours = (0, 255, 0) # green - 
color for contours ### left hand bw_img_fuse = getBinaryImage(left_img_fuse) # resize the left hand mask to match that of the original image img_shape = left_orig_img.shape[0:2] bw_img = cv.resize(bw_img_fuse, (img_shape[1], img_shape[0]), interpolation = cv.INTER_AREA) ### contours_hull = getContoursAndHull(bw_img) ### dw_img = drawConvexHull(left_orig_img, contours_hull['contours'], contours_hull['hierarchy'], contours_hull['hull']) # apply mediapipe to get hand landmarks results = hands.process(cv.cvtColor(left_orig_img_resize, cv.COLOR_BGR2RGB)) # to draw all landmarks from mediapipe if not results.multi_hand_landmarks: cv.imwrite(current_left_target_path, dw_img) left_nails_rects[orig_files[index]] = {} left_fail_index = left_fail_index + 1 else: dw_img = drawConvexHull(left_orig_img, contours_hull['contours'], contours_hull['hierarchy'], contours_hull['hull'], draw_Contour=True) for hand_landmarks in results.multi_hand_landmarks: mp_drawing.draw_landmarks(dw_img, hand_landmarks, mp_hands.HAND_CONNECTIONS) # get tip landmarks from results tips_landmarks = getTIPLandmarks(results.multi_hand_landmarks, left_orig_img.shape) # get DIP landmarks from results dips_landmarks = getDIPLandmarks(results.multi_hand_landmarks, left_orig_img.shape) # get points closest to tips from hull tips_hull = getTipsFromHull(tips_landmarks, contours_hull['hull']) # get subcontours for each finger subContours = {} # get minimum bounding rectangle(min area) for each subContour subContourRects = {} subContourBoxes = {} for finger_key in tips_hull: subContours[finger_key] = getContourSubsection(contours_hull['contours'], tips_hull[finger_key], dips_landmarks[finger_key]) # subContours[finger_key] = getContourSubsectionPercentage(contours_hull['contours'], tips_hull[finger_key]) rect = cv.minAreaRect(subContours[finger_key][0]) box = cv.boxPoints(rect) box = np.int0(box) subContourRects[finger_key] = rect subContourBoxes[finger_key] = [box] # draw SubContours finger for finger_key in 
subContours: cv.drawContours(dw_img, subContours[finger_key], -1, (255,0,0), 1, 8, contours_hull['hierarchy']) cv.drawContours(dw_img, subContourBoxes[finger_key],-1,(0,0,255)) # rotated via rects cropped_finger = cropWarpedRect(left_orig_img_clone, subContourRects[finger_key]) curr_file_name = 'left_' + finger_key + '_' + fuse_files[index] curr_path = current_left_nails_target_path + '/' + curr_file_name if min(cropped_finger.shape) >0 : cv.imwrite(curr_path, cropped_finger) # upload this corrected cropped finger to synapse via customSynpaseUpload customSynapseUpload(curr_path, curr_file_name) # unrotated fingers cropped_finger_unrotated = cropImageFromContour(left_orig_img_clone, subContourBoxes[finger_key][0]) curr_file_name = 'left_' + finger_key + '_' + fuse_files[index] curr_path = current_left_nails_target_path_unrotated + '/' + curr_file_name if min(cropped_finger_unrotated.shape) >0 : cv.imwrite(curr_path, cropped_finger_unrotated) cv.imwrite(current_left_target_path, dw_img) left_nails_rects[orig_files[index]] = subContourRects left_pass_index = left_pass_index + 1 print('left fail percent') print(100*left_fail_index/(left_fail_index+left_pass_index)) #### RIGHT HAND right_fail_index = 0 right_pass_index = 0 right_nails_rects = {} # save image name and rect dictionaries, i.e image vs nail bounding boxes for index in range(len(fuse_files)): # update path for current image current_image_path = img_all_directory + '/' + fuse_files[index] # current_orig_path = img_all_directory + '/' + orig_files[index] current_orig_path = img_full_res_directory + '/' + orig_files[index] # take full res image in print(current_image_path) # update target path for the contoured image for the current image current_right_target_path = target_directory + '/' + 'right/' + fuse_files[index] # update target path for segmented nails from the current image (ROTATED) current_right_nails_target_path = target_nails_directory + '/' + 'right' # update target path for segmented nails from 
the current image (UN-ROTATED) current_right_nails_target_path_unrotated = target_nails_directory + '/' + 'right_unrotated' # read images img = cv.imread(current_image_path) orig_img = cv.imread(current_orig_path) # Masks of left and right img (fuse files) right_img_fuse = getHand(img, 'right') # Actual photos right_orig_img = getHand(orig_img, 'right') # clones of actual images right_orig_img_clone = right_orig_img.copy() # resize image [for faster calculation, and mediapipe usually takes in small images of size # 300 x 300][https://github.com/google/mediapipe/blob/master/mediapipe/calculators/image/scale_image_calculator.cc] img_shape = right_orig_img.shape resize_factor = round(300/max(img_shape),3) # resize max(length, width) to 300 pixels right_orig_img_resize = cv.resize(right_orig_img, None, fx = resize_factor , fy = resize_factor, interpolation = cv.INTER_AREA) color_contours = (0, 255, 0) # green - color for contours ### right hand bw_img_fuse = getBinaryImage(right_img_fuse) # resize the right hand mask to match that of the original image img_shape = right_orig_img.shape[0:2] bw_img = cv.resize(bw_img_fuse, (img_shape[1], img_shape[0]), interpolation = cv.INTER_AREA) ### contours_hull = getContoursAndHull(bw_img) ### dw_img = drawConvexHull(right_orig_img, contours_hull['contours'], contours_hull['hierarchy'], contours_hull['hull']) # apply mediapipe to get hand landmarks results = hands.process(cv.cvtColor(right_orig_img_resize, cv.COLOR_BGR2RGB)) # to draw all landmarks from mediapipe if not results.multi_hand_landmarks: cv.imwrite(current_right_target_path, dw_img) right_nails_rects[orig_files[index]] = {} right_fail_index = right_fail_index + 1 else: dw_img = drawConvexHull(right_orig_img, contours_hull['contours'], contours_hull['hierarchy'], contours_hull['hull'], draw_Contour=True) for hand_landmarks in results.multi_hand_landmarks: mp_drawing.draw_landmarks(dw_img, hand_landmarks, mp_hands.HAND_CONNECTIONS) # get tip landmarks from results 
tips_landmarks = getTIPLandmarks(results.multi_hand_landmarks, right_orig_img.shape) # get DIP landmarks from results dips_landmarks = getDIPLandmarks(results.multi_hand_landmarks, right_orig_img.shape) # get points closest to tips from hull tips_hull = getTipsFromHull(tips_landmarks, contours_hull['hull']) # get subcontours for each finger subContours = {} # get minimum bounding rectangle(min area) for each subContour subContourRects = {} subContourBoxes = {} for finger_key in tips_hull: subContours[finger_key] = getContourSubsection(contours_hull['contours'], tips_hull[finger_key], dips_landmarks[finger_key]) # subContours[finger_key] = getContourSubsectionPercentage(contours_hull['contours'], tips_hull[finger_key]) rect = cv.minAreaRect(subContours[finger_key][0]) box = cv.boxPoints(rect) box = np.int0(box) subContourRects[finger_key] = rect subContourBoxes[finger_key] = [box] # draw SubContours finger for finger_key in subContours: cv.drawContours(dw_img, subContours[finger_key], -1, (255,0,0), 1, 8, contours_hull['hierarchy']) cv.drawContours(dw_img, subContourBoxes[finger_key],-1,(0,0,255)) # rotated via rects cropped_finger = cropWarpedRect(right_orig_img_clone, subContourRects[finger_key]) curr_file_name = 'right_' + finger_key + '_' + fuse_files[index] curr_path = current_right_nails_target_path + '/' + curr_file_name if min(cropped_finger.shape) >0 : cv.imwrite(curr_path, cropped_finger) # upload this corrected cropped finger to synapse via customSynpaseUpload customSynapseUpload(curr_path, curr_file_name) # unrotated fingers cropped_finger_unrotated = cropImageFromContour(right_orig_img_clone, subContourBoxes[finger_key][0]) curr_file_name = 'right_' + finger_key + '_' + fuse_files[index] curr_path = current_right_nails_target_path_unrotated + '/' + curr_file_name if min(cropped_finger_unrotated.shape) >0 : cv.imwrite(curr_path, cropped_finger_unrotated) cv.imwrite(current_right_target_path, dw_img) right_nails_rects[orig_files[index]] = subContourRects 
right_pass_index = right_pass_index + 1 print('right fail percent') print(100*right_fail_index/(right_fail_index+right_pass_index)) # Minimum bounding rects of nails into a dataframe (left hand) aa_left = pd.DataFrame.from_dict(left_nails_rects,orient = 'index') aa_left.columns = 'left_' + aa_left.columns aa_left['image'] = aa_left.index aa_left.index = range(len(aa_left.index)) # Minimum bounding rects of nails into a dataframe (right hand) aa_right = pd.DataFrame.from_dict(right_nails_rects,orient = 'index') aa_right.columns = 'right_' + aa_right.columns aa_right['image'] = aa_right.index aa_right.index = range(len(aa_right.index)) # Merge rects from left and right hands aa = pd.merge(aa_left, aa_right, on = 'image', how = 'outer') aa.head() ### Upload nail bounding rects to Synapse ## Write to csv aa.to_csv('nail_bounding_rects.csv') # Synapse target folder syn_target_folder = 'syn22342373' # Upload results to Synapse # provenance provenance_set = synapseclient.Activity( name='nail_bounding_rects.csv', description='Minimum bounding rectangle for nails (except thumb). Each rectangle is of the form (center, size, theta)', used = 'syn25999658', executed = 'https://github.com/itismeghasyam/psorcastValidationAnalysis/blob/master/feature_extraction/nail_segmentation_mediapipe_pipeline.ipynb') test_entity = synapseclient.File( 'nail_bounding_rects.csv', description='Minimum bounding rectangle for nails (except thumb). Each rectangle is of the form (center, size, theta)', parent=syn_target_folder) test_entity = syn.store(test_entity, activity = provenance_set) ```
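The rotated rectangles stored above come from `cv.minAreaRect`, which returns a `(center, (width, height), angle)` triple, and `cv.boxPoints`, which converts that triple into four corner points. As a hedged illustration of the underlying geometry (this is our own sketch, not the OpenCV implementation, and corner ordering may differ from `cv.boxPoints`):

```python
import math

def box_points(rect):
    """Corner points of a rotated rectangle given as
    (center, (width, height), angle_in_degrees) -- the same
    convention used by cv.minAreaRect."""
    (cx, cy), (w, h), angle = rect
    theta = math.radians(angle)
    c, s = math.cos(theta), math.sin(theta)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)]:
        # rotate the half-extent offset, then translate to the center
        corners.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return corners

# axis-aligned 2x4 rectangle centered at the origin
print(box_points(((0, 0), (2, 4), 0)))
```

This is the same geometry that `cropWarpedRect` has to undo when it extracts an upright crop of each nail.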
github_jupyter
# Keras Syntax Basics Our main goal is to predict the price of the gemstone based on the features. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns df = pd.read_csv('../Data/fake_reg.csv') df.head() ``` # Explore the data ``` df.info() df.describe().T sns.pairplot(df); ``` # Test/Train Split ``` # Features X = df[['feature1','feature2']].values # Label y = df['price'].values from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) X_train.shape, X_test.shape, y_train.shape, y_test.shape ``` # Normalizing/Scaling the Data ``` from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() scaled_X_train = scaler.fit_transform(X_train) scaled_X_test = scaler.transform(X_test) scaled_X_train[:5] scaled_X_train.min(), scaled_X_train.max() ``` ------- # TensorFlow 2.0 Syntax ``` from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense ``` ### There are two ways to create a Keras-based model. ## Approach 1) Model - as a list of layers ``` model = Sequential([Dense(4, activation='relu'), # input layer Dense(2, activation='relu'), # hidden layer Dense(1)]) #output layer ``` ## Approach 2) Model - adding in layers one by one This is the preferred approach. ``` model = Sequential() model.add(Dense(2)) model.add(Dense(2)) model.add(Dense(2)) ``` ---------- Let's go ahead and build a simple model and then compile it by defining our solver: ``` model = Sequential() model.add(Dense(4, activation='relu')) model.add(Dense(4, activation='relu')) model.add(Dense(4, activation='relu')) # Final output node for prediction model.add(Dense(1)) model.compile(optimizer='rmsprop', loss='mse') ``` # Choosing an optimizer and loss Keep in mind what kind of problem you are trying to solve: Note: For the optimizer, we can use whatever optimizer we like. 
----- For a **multi-class classification** problem model.compile(optimizer='rmsprop', loss=`'categorical_crossentropy'`, metrics=['accuracy']) ---- For a **binary classification** problem model.compile(optimizer='rmsprop', loss=`'binary_crossentropy'`, metrics=['accuracy']) ----- For a mean squared error **regression** problem model.compile(optimizer='rmsprop', loss=`'mse'`) ----- # Training Below are some common definitions that are necessary to know and understand to correctly utilize Keras: * Sample: one element of a dataset. * Example: one image is a sample in a convolutional network * Example: one audio file is a sample for a speech recognition model * Batch: a set of N samples. The samples in a batch are processed independently, in parallel. If training, a batch results in only one update to the model. A batch generally approximates the distribution of the input data better than a single input. The larger the batch, the better the approximation; however, it is also true that the batch will take longer to process and will still result in only one update. For inference (evaluate/predict), it is recommended to pick a batch size that is as large as you can afford without going out of memory (since larger batches will usually result in faster evaluation/prediction). * Epoch: an arbitrary cutoff, generally defined as "one pass over the entire dataset", used to separate training into distinct phases, which is useful for logging and periodic evaluation. * When using validation_data or validation_split with the fit method of Keras models, evaluation will be run at the end of every epoch. * Within Keras, there is the ability to add callbacks specifically designed to be run at the end of an epoch. Examples of these are learning rate changes and model checkpointing (saving). ``` model.fit(scaled_X_train,y_train,epochs=250) ``` ## Evaluation Let's evaluate our performance on our training set and our test set. We can compare these two performances to check for overfitting. 
``` loss_df = pd.DataFrame(model.history.history) loss_df.head() loss_df.plot() ``` ------ # Compare final evaluation (MSE) on training set and test set. These should hopefully be fairly close to each other. ``` model.metrics_names training_score = model.evaluate(scaled_X_train,y_train,verbose=0) test_score = model.evaluate(scaled_X_test,y_test,verbose=0) training_score, test_score ``` # Further Evaluations ``` predictions = model.predict(scaled_X_test) predictions[:5] # reshape predictions as a pandas series predictions = pd.Series(predictions.reshape(300,)) predictions # Creating dataframe (combination of True Y Value vs predictions) pred_df = pd.DataFrame(y_test, columns=['Test True Y']) pred_df = pd.concat([pred_df, predictions], axis=1) pred_df.head() pred_df.columns=['Test True Y', 'Model Predictions'] pred_df.head() ``` ### Visualize the data ``` sns.scatterplot(data=pred_df, x='Test True Y', y='Model Predictions'); ``` In the ideal scenario, the predictions and the true y values overlap, and we see a perfectly straight line. -------- ``` pred_df['Error'] = pred_df['Test True Y'] - pred_df['Model Predictions'] sns.displot(pred_df['Error'], bins=50, kde=True); ``` ------- # MAE, MSE, RMSE ``` from sklearn.metrics import mean_absolute_error, mean_squared_error MAE = mean_absolute_error(pred_df['Test True Y'], pred_df['Model Predictions']) MAE MSE = mean_squared_error(pred_df['Test True Y'], pred_df['Model Predictions']) MSE # The MSE value and the value we got from the evaluate() method are essentially the same thing, the difference is just due to precision test_score RMSE = np.sqrt(MSE) RMSE ``` ## How do we know if this MAE is good or bad? We can compare this to the mean value of the actual distribution of the dataset itself. ``` df.describe().T ``` We can see the average price is 498.67, the minimum is 223.34, and the maximum is 774.40\$. Based on our model prediction, the MAE (on average, how far we are off) is 4.03$ (less than 1%), which is pretty good. 
``` 100* 4.03 / 498.67 ``` ---------- # Predicting on brand new data What if we just saw a brand new gemstone from the ground? What should we price it at? This is the **exact** same procedure as predicting on new test data! ``` # [[Feature1, Feature2]] new_gem = [[998, 1000]] ``` As our model is trained on scaled data, we need to scale our new data too. ``` new_gem = scaler.transform(new_gem) model.predict(new_gem) ``` Our model predicts that the new gem we picked up is worth around 419\$. ----- # Saving and Loading a Model ``` from tensorflow.keras.models import load_model model.save('models/my_gem_model.h5') ``` Load the saved model and make some predictions. ``` loaded_model = load_model('models/my_gem_model.h5') loaded_model.predict(new_gem) ```
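As a sanity check, the MAE/MSE/RMSE values above can be reproduced without scikit-learn. A minimal plain-Python sketch of the three metrics (illustrative data, not the gemstone dataset):

```python
import math

def mae(y_true, y_pred):
    # mean absolute error: average magnitude of the residuals
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    # mean squared error: average squared residual
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # root mean squared error: back in the units of the target
    return math.sqrt(mse(y_true, y_pred))

y_true = [500.0, 450.0, 520.0]
y_pred = [498.0, 453.0, 519.0]
print(mae(y_true, y_pred))   # 2.0
print(rmse(y_true, y_pred))  # ~2.16
```

Running these on `pred_df['Test True Y']` and `pred_df['Model Predictions']` should match the scikit-learn results up to floating-point precision.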
## Browsing elephant modules and functions ``` import requests requests.get('http://localhost:5000/api').json() requests.get('http://localhost:5000/api/statistics').json() ``` ## POST requests ### Format The main payload - kwargs for an elephant function - must be provided in the `data` key of the JSON POST request. ```python response = requests.post('http://localhost:5000/api/<module>/<function>', json=dict( data=dict(...), # required units=dict(time='ms', amplitude='mV', rate='Hz'), # optional t_stop=120.57, # optional; in the specified units; default is None t_start=2.3, # optional; in the specified units; default is 0 )) ``` If no units are provided, the default units are used: * `ms` for time * `mV` for the amplitude of an analog signal * `Hz` for rates ## Examples ### Inter-spike interval ``` response = requests.post('http://localhost:5000/api/statistics/isi', json=dict( data=dict(spiketrain=[1,2,3,4,11]) )) response.json() ``` The resulting array `[1.0, 1.0, 1.0, 7.0]` contains the inter-spike intervals in the (default) units of `ms`. #### Customize units, t_start, and t_stop, shared within the request If you want to explicitly specify the units: ``` response = requests.post('http://localhost:5000/api/statistics/isi', json=dict( data=dict(spiketrain=[1,2,3,4,11]), units=dict(time='s') # specify that all units are in seconds )) response.json() ``` There is no difference, because currently elephant-server does NOT output the units used - it assumes that the clients know what the units were. This behaviour can be easily changed. 
Doing so requires filling in all the necessary kwargs in the JSON POST request: ``` response = requests.post('http://localhost:5000/api/statistics/isi', json=dict( data=dict( spiketrain=dict(times=[1,2,3,4,11], units='s', t_stop=27) ), units=dict(time='ms') )) response.json() # ISI in ms ``` ### CV2, LV NEST users are typically interested in the coefficient of variation and local variation analyses; the request is: ``` response = requests.post('http://localhost:5000/api/statistics/cv2', json=dict( data=dict(v=[1,2,3,4,11]) )) response.json() ``` ## Invalid requests If you forget to specify `t_stop` in the previous example: ``` bad_response = requests.post('http://localhost:5000/api/statistics/isi', json=dict( data=dict( spiketrain=dict(times=[1,2,3,4,11], units='s') ), units=dict(time='ms') )) print(bad_response) print() print(bad_response.text) ``` Note, however, that you cannot decode the error response as JSON: ``` bad_response.json() ```
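Since the error body in the last example is not valid JSON, a small client-side helper that builds the payload and decodes responses defensively can be convenient. This is a sketch following the request format described above; the helper names (`build_payload`, `decode`) are our own, not part of elephant-server:

```python
def build_payload(data, units=None, t_start=None, t_stop=None):
    """Assemble the JSON body expected by the server:
    `data` is required; units/t_start/t_stop are optional."""
    payload = {'data': data}
    if units is not None:
        payload['units'] = units
    if t_start is not None:
        payload['t_start'] = t_start
    if t_stop is not None:
        payload['t_stop'] = t_stop
    return payload

def decode(response):
    """Return the decoded JSON, or the raw text if the body
    is not valid JSON (as happens with error responses)."""
    try:
        return response.json()
    except ValueError:
        return response.text

payload = build_payload({'spiketrain': [1, 2, 3, 4, 11]}, units={'time': 's'})
# requests.post('http://localhost:5000/api/statistics/isi', json=payload)
```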
# NLP Example bằng tiếng Việt using StackNetClassifier ## Credit: Code and Notebooks from https://github.com/ngxbac/aivivn_phanloaisacthaibinhluan https://www.aivivn.com/ ``` import pandas as pd import numpy as np from scipy.sparse import hstack, csr_matrix, vstack from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer from sklearn.decomposition import TruncatedSVD from sklearn.model_selection import StratifiedKFold from sklearn.metrics import f1_score from sklearn.ensemble import * from sklearn.linear_model import * from tqdm import * import wordcloud import matplotlib.pyplot as plt import gc import lightgbm as lgb %matplotlib inline # Load data train_df = pd.read_csv("./data/train.csv") test_df = pd.read_csv("./data/test.csv") train_df.head() test_df.head() df = pd.concat([train_df, test_df], axis=0) # del train_df, test_df # gc.collect() import emoji def extract_emojis(str): return [c for c in str if c in emoji.UNICODE_EMOJI] good_df = train_df[train_df['label'] == 0] good_comment = good_df['comment'].values good_emoji = [] for c in good_comment: good_emoji += extract_emojis(c) good_emoji = np.unique(np.asarray(good_emoji)) bad_df = train_df[train_df['label'] == 1] bad_comment = bad_df['comment'].values bad_emoji = [] for c in bad_comment: bad_emoji += extract_emojis(c) bad_emoji = np.unique(np.asarray(bad_emoji)) good_emoji # Just remove "sad, bad" emoji :D good_emoji_fix = [ '↖', '↗', '☀', '☺', '♀', '♥', '✌', '✨', '❣', '❤', '⭐', '🆗', '🌝', '🌟', '🌧', '🌷', '🌸', '🌺', '🌼', '🍓', '🎈', '🎉', '🐅', '🐾', '👉', '👌', '👍', '👏', '💋', '💌', '💐', '💓', '💕', '💖', '💗', '💙', '💚', '💛', '💜', '💞', '💟', '💥', '💪', '💮', '💯', '💰', '📑', '🖤', '😀', '😁', '😂', '😃', '😄', '😅', '😆', '😇', '😉', '😊', '😋', '😌', '😍', '😎', '😑', '😓', '😔', '😖', '😗', '😘', '😙', '😚', '😛', '😜', '😝', '😞', '😟', '😡', '😯', '😰', '😱', '😲', '😳', '😻', '🙂', '🙃', '🙄', '🙆', '🙌', '🤑', '🤔', '🤗', ] bad_emoji # Just remove "good" emoji :D bad_emoji_fix = [ '☹', '✋', '❌', '❓', '👎', '👶', '💀', '😐', '😑', '😒', '😓', '😔', '😞', 
'😟', '😠', '😡', '😢', '😣', '😤', '😥', '😧', '😩', '😪', '😫', '😬', '😭', '😳', '😵', '😶', '🙁', '🙄', '🤔', ] def count_good_bad_emoji(row): comment = row['comment'] n_good_emoji = 0 n_bad_emoji = 0 for c in comment: if c in good_emoji_fix: n_good_emoji += 1 if c in bad_emoji_fix: n_bad_emoji += 1 row['n_good_emoji'] = n_good_emoji row['n_bad_emoji'] = n_bad_emoji return row # Some features df['comment'] = df['comment'].astype(str).fillna(' ') df['comment'] = df['comment'].str.lower() df['num_words'] = df['comment'].apply(lambda s: len(s.split())) df['num_unique_words'] = df['comment'].apply(lambda s: len(set(w for w in s.split()))) df['words_vs_unique'] = df['num_unique_words'] / df['num_words'] * 100 df = df.apply(count_good_bad_emoji, axis=1) df['good_bad_emoji_ratio'] = df['n_good_emoji'] / df['n_bad_emoji'] df['good_bad_emoji_ratio'] = df['good_bad_emoji_ratio'].replace(np.nan, 0) df['good_bad_emoji_ratio'] = df['good_bad_emoji_ratio'].replace(np.inf, 99) df['good_bad_emoji_diff'] = df['n_good_emoji'] - df['n_bad_emoji'] df['good_bad_emoji_sum'] = df['n_good_emoji'] + df['n_bad_emoji'] train_df = df[~df['label'].isnull()] test_df = df[df['label'].isnull()] train_comments = train_df['comment'].fillna("none").values test_comments = test_df['comment'].fillna("none").values y_train = train_df['label'].values train_df.head() ``` Tạo feature TFIDF đơn giản ``` tfidf = TfidfVectorizer( min_df = 5, max_df = 0.8, max_features=10000, sublinear_tf=True ) X_train_tfidf = tfidf.fit_transform(train_comments) X_test_tfidf = tfidf.transform(test_comments) EXCLUED_COLS = ['id', 'comment', 'label'] static_cols = [c for c in train_df.columns if not c in EXCLUED_COLS] X_train_static = train_df[static_cols].values X_test_static = test_df[static_cols].values X_train = hstack([X_train_tfidf, csr_matrix(X_train_static)]).tocsr() X_test = hstack([X_test_tfidf, csr_matrix(X_test_static)]).tocsr() # X_train = X_train_tfidf # X_test = X_test_tfidf X_train.shape, X_test.shape, y_train.shape ``` # 
Stacking method ``` models=[ ######## First level ######## [ RandomForestClassifier (n_estimators=100, criterion="entropy", max_depth=5, max_features=0.5, random_state=1), ExtraTreesClassifier (n_estimators=100, criterion="entropy", max_depth=5, max_features=0.5, random_state=1), GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=5, max_features=0.5, random_state=1), LogisticRegression(random_state=1) ], ######## Second level ######## [ RandomForestClassifier (n_estimators=200, criterion="entropy", max_depth=5, max_features=0.5, random_state=1) ] ] from pystacknet.pystacknet import StackNetClassifier model = StackNetClassifier( models, metric="f1", folds=5, restacking=False, use_retraining=True, use_proba=True, random_state=12345, n_jobs=1, verbose=1 ) model.fit(X_train, y_train) preds=model.predict_proba(X_test) pred_cls = np.argmax(preds, axis=1) # submission = pd.read_csv("./data/sample_submission.csv") # submission['label'] = pred_cls # submission.head() # submission.to_csv("stack_demo.csv", index=False) ``` # Ensemble method ``` from sklearn.model_selection import cross_val_predict models = [ RandomForestClassifier (n_estimators=100, criterion="entropy", max_depth=5, max_features=0.5, random_state=1), ExtraTreesClassifier (n_estimators=100, criterion="entropy", max_depth=5, max_features=0.5, random_state=1), GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=5, max_features=0.5, random_state=1), LogisticRegression(random_state=1) ] def cross_val_and_predict(clf, X, y, X_test, nfolds): kf = StratifiedKFold(n_splits=nfolds, shuffle=True, random_state=42) oof_preds = np.zeros((X.shape[0], 2)) sub_preds = np.zeros((X_test.shape[0], 2)) for fold, (train_idx, valid_idx) in enumerate(kf.split(X, y)): X_train, y_train = X[train_idx], y[train_idx] X_valid, y_valid = X[valid_idx], y[valid_idx] clf.fit(X_train, y_train) oof_preds[valid_idx] = clf.predict_proba(X_valid) sub_preds += clf.predict_proba(X_test) / kf.n_splits 
return oof_preds, sub_preds sub_preds = [] for clf in models: oof_pred, sub_pred = cross_val_and_predict(clf, X_train, y_train, X_test, nfolds=5) oof_pred_cls = oof_pred.argmax(axis=1) oof_f1 = f1_score(y_pred=oof_pred_cls, y_true=y_train) print(clf.__class__) print(f"F1 CV: {oof_f1}") sub_preds.append(sub_pred) sub_preds = np.asarray(sub_preds) sub_preds = sub_preds.mean(axis=0) sub_pred_cls = sub_preds.argmax(axis=1) # submission_ensemble = submission.copy() # submission_ensemble['label'] = sub_pred_cls # submission_ensemble.to_csv("ensemble.csv", index=False) import pandas as pd import numpy as np from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.decomposition import TruncatedSVD from sklearn.model_selection import StratifiedKFold from sklearn.metrics import f1_score import wordcloud import matplotlib.pyplot as plt import gc import lightgbm as lgb %matplotlib inline # Load data train_df = pd.read_csv("./data/train.csv") test_df = pd.read_csv("./data/test.csv") train_df.head() test_df.head() train_comments = train_df['comment'].fillna("none").values test_comments = test_df['comment'].fillna("none").values y_train = train_df['label'].values # Wordcloud of training set cloud = np.array(train_comments).flatten() plt.figure(figsize=(20,10)) word_cloud = wordcloud.WordCloud( max_words=200,background_color ="black", width=2000,height=1000,mode="RGB" ).generate(str(cloud)) plt.axis("off") plt.imshow(word_cloud) # Wordcloud of test set cloud = np.array(test_comments).flatten() plt.figure(figsize=(20,10)) word_cloud = wordcloud.WordCloud( max_words=100,background_color ="black", width=2000,height=1000,mode="RGB" ).generate(str(cloud)) plt.axis("off") plt.imshow(word_cloud) tfidf = TfidfVectorizer( min_df=5, max_df= 0.8, max_features=10000, sublinear_tf=True ) X_train = tfidf.fit_transform(train_comments) X_test = tfidf.transform(test_comments) X_train.shape, X_test.shape, y_train.shape def lgb_f1_score(y_hat, data): y_true = data.get_label() y_hat = 
np.round(y_hat) # scikits f1 doesn't like probabilities return 'f1', f1_score(y_true, y_hat), True print("Starting LightGBM. Train shape: {}, test shape: {}".format(X_train.shape, X_test.shape)) # Cross validation model folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=69) # Create arrays and dataframes to store results oof_preds = np.zeros(X_train.shape[0]) sub_preds = np.zeros(X_test.shape[0]) # k-fold for n_fold, (train_idx, valid_idx) in enumerate(folds.split(X_train, y_train)): print("Fold %s" % (n_fold)) train_x, train_y = X_train[train_idx], y_train[train_idx] valid_x, valid_y = X_train[valid_idx], y_train[valid_idx] # set data structure lgb_train = lgb.Dataset(train_x, label=train_y, free_raw_data=False) lgb_test = lgb.Dataset(valid_x, label=valid_y, free_raw_data=False) params = { 'objective' :'binary', 'learning_rate' : 0.01, 'num_leaves' : 76, 'feature_fraction': 0.64, 'bagging_fraction': 0.8, 'bagging_freq':1, 'boosting_type' : 'gbdt', } reg = lgb.train( params, lgb_train, valid_sets=[lgb_train, lgb_test], valid_names=['train', 'valid'], num_boost_round=10000, verbose_eval=100, early_stopping_rounds=100, feval=lgb_f1_score ) oof_preds[valid_idx] = reg.predict(valid_x, num_iteration=reg.best_iteration) sub_preds += reg.predict(X_test, num_iteration=reg.best_iteration) / folds.n_splits del reg, train_x, train_y, valid_x, valid_y gc.collect() threshold = 0.5 preds = (sub_preds > threshold).astype(np.uint8) ```
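The fixed `threshold = 0.5` above is not necessarily the F1-optimal cut-off; the out-of-fold predictions can be used to pick a better one. A hedged plain-Python sketch of the idea (in practice, scikit-learn's `f1_score` applied to `oof_preds` over a grid of thresholds does the same job):

```python
def f1(y_true, y_pred):
    # binary F1 = 2*TP / (2*TP + FP + FN)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def best_threshold(y_true, probs, steps=99):
    # scan candidate cut-offs on the out-of-fold probabilities
    best_t, best_score = 0.5, -1.0
    for i in range(1, steps + 1):
        t = i / (steps + 1)
        score = f1(y_true, [int(p > t) for p in probs])
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score

y = [0, 0, 1, 1, 1]
probs = [0.1, 0.4, 0.35, 0.8, 0.9]
t, score = best_threshold(y, probs)
```

The threshold selected on OOF predictions can then replace the hard-coded `0.5` when binarizing `sub_preds`.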
# Fun with histograms > 1d and 2d histograms - toc: true - badges: true - comments: false - categories: [jupyter] <img src="python_figures/histograms.png" alt="least squares" width="600"> ## Introduction This code produces the figure above. I tried to showcase a few things one can do with 1d and 2d histograms. ## The code ``` import matplotlib.pyplot as plt import numpy as np import matplotlib.gridspec as gridspec import scipy.special from scipy.optimize import curve_fit ``` make graph look pretty ``` # http://wiki.scipy.org/Cookbook/Matplotlib/LaTeX_Examples # this is a latex constant, don't change it. pts_per_inch = 72.27 # write "\the\textwidth" (or "\showthe\columnwidth" for a 2 collumn text) text_width_in_pts = 450.0 # inside a figure environment in latex, the result will be on the # dvi/pdf next to the figure. See url above. text_width_in_inches = text_width_in_pts / pts_per_inch # make rectangles with a nice proportion golden_ratio = 0.618 # figure.png or figure.eps will be intentionally larger, because it is prettier inverse_latex_scale = 2 # when compiling latex code, use # \includegraphics[scale=(1/inverse_latex_scale)]{figure} # we want the figure to occupy 2/3 (for example) of the text width fig_proportion = (3.0 / 3.0) csize = inverse_latex_scale * fig_proportion * text_width_in_inches # always 1.0 on the first argument fig_size = (1.0 * csize, 0.5 * csize) # find out the fontsize of your latex text, and put it here text_size = inverse_latex_scale * 12 label_size = inverse_latex_scale * 10 tick_size = inverse_latex_scale * 8 # learn how to configure: # http://matplotlib.sourceforge.net/users/customizing.html params = {'backend': 'ps', 'axes.labelsize': 16, 'legend.fontsize': tick_size, 'legend.handlelength': 2.5, 'legend.borderaxespad': 0, 'axes.labelsize': label_size, 'xtick.labelsize': tick_size, 'ytick.labelsize': tick_size, 'font.family': 'serif', 'font.size': text_size, # 'font.serif': ['Computer Modern Roman'], 'ps.usedistiller': 'xpdf', 
'text.usetex': True, 'figure.figsize': fig_size, } plt.rcParams.update(params) plt.ioff() fig = plt.figure(1, figsize=fig_size) # figsize accepts only inches. ``` Panels on the left of the figure ``` gs = gridspec.GridSpec(2, 2, width_ratios=[1, 0.2], height_ratios=[0.2, 1]) gs.update(left=0.05, right=0.50, top=0.95, bottom=0.10, hspace=0.02, wspace=0.02) sigma = 1.0 # standard deviation (spread) mu = 0.0 # mean (center) of the distribution x = np.random.normal(loc=mu, scale=sigma, size=5000) k = 2.0 # shape theta = 1.0 # scale y = np.random.gamma(shape=k, scale=theta, size=5000) # bottom left panel ax10 = plt.subplot(gs[1, 0]) counts, xedges, yedges, image = ax10.hist2d(x, y, bins=40, cmap="YlOrRd", density=True) dx = xedges[1] - xedges[0] dy = yedges[1] - yedges[0] xvec = xedges[:-1] + dx / 2 yvec = yedges[:-1] + dy / 2 ax10.set_xlabel(r"$x$") ax10.set_ylabel(r"$y$", rotation="horizontal") ax10.text(-2, 8, r"$p(x,y)$") ax10.set_xlim([xedges.min(), xedges.max()]) ax10.set_ylim([yedges.min(), yedges.max()]) # top left panel ax00 = plt.subplot(gs[0, 0]) gaussian = (1.0 / np.sqrt(2.0 * np.pi * sigma ** 2)) * \ np.exp(-((xvec - mu) ** 2) / (2.0 * sigma ** 2)) xdist = counts.sum(axis=1) * dy ax00.bar(xvec, xdist, width=dx, fill=False, edgecolor='black', alpha=0.8) ax00.plot(xvec, gaussian, color='black') ax00.set_xlim([xedges.min(), xedges.max()]) ax00.set_xticklabels([]) ax00.set_yticks([]) ax00.set_xlabel("Normal distribution", fontsize=16) ax00.xaxis.set_label_position("top") ax00.set_ylabel(r"$p(x)$", rotation="horizontal", labelpad=20) # bottom right panel ax11 = plt.subplot(gs[1, 1]) gamma_dist = yvec ** (k - 1.0) * np.exp(-yvec / theta) / \ (theta ** k * scipy.special.gamma(k)) ydist = counts.sum(axis=0) * dx ax11.barh(yvec, ydist, height=dy, fill=False, edgecolor='black', alpha=0.8) ax11.plot(gamma_dist, yvec, color='black') ax11.set_ylim([yedges.min(), yedges.max()]) ax11.set_xticks([]) ax11.set_yticklabels([]) ax11.set_ylabel("Gamma distribution", 
fontsize=16) ax11.yaxis.set_label_position("right") ax11.set_xlabel(r"$p(y)$") ax11.xaxis.set_label_position("top") ``` Panels on the right of the figure ``` gs2 = gridspec.GridSpec(2, 1, width_ratios=[1], height_ratios=[1, 1]) gs2.update(left=0.60, right=0.98, top=0.95, bottom=0.10, hspace=0.02, wspace=0.05) x = np.random.normal(loc=0, scale=1, size=1000) y = np.random.gamma(shape=2, size=1000) bx10 = plt.subplot(gs2[1, 0]) bx00 = plt.subplot(gs2[0, 0]) N = 100 a = np.random.gamma(shape=5, size=N) my_bins = np.arange(0,15,1.5) n1, bins1, patches1 = bx00.hist(a, bins=my_bins, density=True, histtype='stepfilled', alpha=0.2, hatch='/') bx00.set_xlim([0, 15]) bx00.set_ylim([0, 0.28]) bx00.set_xticklabels([]) bx00.set_xlabel(r"\texttt{plt.hist}") bx00.xaxis.set_label_position("top") # the following way is equivalent to plt.hist, but it gives # the user more flexibility when plotting and analysing the results n2, bins2 = np.histogram(a, bins=my_bins, density=True) wid = bins2[1] - bins2[0] red, = bx10.plot(bins2[:-1]+wid/2, n2, marker='o', color='red') bx10.bar(bins2[:-1], n2, width=wid, fill=False, edgecolor='black', linewidth=3, alpha=0.8, align="edge") bx10.set_xlim([0, 15]) bx10.set_ylim([0, 0.28]) bx10.set_xlabel(r"\texttt{np.histogram};\quad \texttt{plt.bar}") ``` best fit ``` xdata = my_bins[:-1] + wid/2 ydata = n2 def func(x, p1, p2): return x ** (p1 - 1.0) * np.exp(-x / p2) / (p2 ** p1 * scipy.special.gamma(p1)) popt, pcov = curve_fit(func, xdata, ydata, p0=(1.5, 1.5)) # p0 = initial guess p1, p2 = popt SStot = ((ydata - ydata.mean()) ** 2).sum() SSres = ((ydata - func(xdata, p1, p2)) ** 2).sum() Rsquared = 1 - SSres / SStot h = np.linspace(0,15,101) bx00.plot(h, func(h, p1, p2), color='blue', linewidth=2) # dummy plot, just so we can have a legend on the bottom panel blue, = ax10.plot([100],[100], color='blue', linewidth=2, label="Best fit") bx10.legend([red,blue],[r'Data',r'Best fit, $r^2=${:.2f}'.format(Rsquared)], loc='upper right', frameon=False, 
handlelength=4, markerfirst=False, numpoints=3) fig.savefig("./python_figures/histograms.png",dpi=300) fig ```
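The coefficient of determination used in the legend is $r^2 = 1 - SS_{res}/SS_{tot}$, where $SS_{res}$ is the sum of *squared* residuals of the fit and $SS_{tot}$ the total sum of squares around the mean. A small self-contained sketch of the computation:

```python
def r_squared(y, y_fit):
    # SS_tot: total variation of the data around its mean
    mean = sum(y) / len(y)
    ss_tot = sum((v - mean) ** 2 for v in y)
    # SS_res: sum of squared residuals of the fit
    ss_res = sum((v - f) ** 2 for v, f in zip(y, y_fit))
    return 1.0 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
print(r_squared(y, y))          # 1.0 for a perfect fit
print(r_squared(y, [2.5] * 4))  # 0.0 when the fit is just the mean
```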
<h1 align="center">Poisson-Boltzmann Equation</h1> <div align="right">By David A. Miranda, PhD<br>2021</div> <h2>1. Import the libraries</h2> ``` import numpy as np import matplotlib.pyplot as plt import scipy.constants as cte from scipy.integrate import odeint ``` ## 2. Poisson-Boltzmann equation The Poisson equation can be used to study the electrical properties of electric charges suspended in solution. In that case, the density of the suspended electric charges can be modeled with a Maxwell-Boltzmann distribution. The Poisson equation obtained by considering positive and negative charges that obey a Maxwell-Boltzmann distribution is known as the Poisson-Boltzmann equation [(Grimnes and Martinsen, 2018, pp 30-33)](https://bibliotecavirtual.uis.edu.co:2191/science/article/pii/B9780123740045000027/pdfft): $$\nabla^2\phi = \frac{2zq_e \eta_0}{\epsilon} sinh \left( \frac{zq_e}{k_BT} \phi \right)$$ In terms of the Debye length $1/\kappa$, and after the change of variable $\psi = \dfrac{zq_e}{k_BT} \phi$, the Poisson-Boltzmann equation takes the following form: $$\nabla^2\psi = \kappa^2 sinh \left( \psi \right)\qquad(1)$$ Where $\kappa = \left( \dfrac{2z^2q_e^2 \eta_0}{\varepsilon k_BT} \right)^{1/2}$ ``` q_e = cte.e # Electron charge z = 1 # Valence n0 = 0.1 # Concentration in mol/litre eps = 80.5 * cte.epsilon_0 # Electric permittivity of water times that of vacuum kT = cte.Boltzmann * (20 + 273.15) # Boltzmann constant times the temperature kappa = np.sqrt(( 2 * (z**2) * n0 * (q_e**2) ) / (eps*kT)) ``` ## 2.1. Analytical solution in one dimension, 1D The analytical solution of equation (1) for the one-dimensional case is given by: $$ \psi(x) = 4 tanh^{-1} \left( \gamma e^{-\kappa x} \right)$$ Where $\gamma = tanh\left( \frac{zq_e\psi_0}{4k_BT} \right)$. See further details in [(Ohshima 2013, pp. 345)](https://link.springer.com/referencework/10.1007/978-3-642-20665-8). 
``` phi_0 = 0.2 # V x = np.linspace(1e-9, 5000, 1000) def poisson_boltzmann_1D(x, n0, phi_0, T): kT = cte.Boltzmann * (T + 273.15) gamma = np.tanh(z * q_e * phi_0 / (4*kT)) kappa = np.sqrt(( 2 * (z**2) * n0 * (q_e**2) ) / (eps*kT)) return 4*np.arctanh(gamma * np.exp(-kappa * x)) plt.figure(1, dpi=240) plt.figure(2, dpi=240) for T in [-77, 0, 37.5, 100]: psi_pb = poisson_boltzmann_1D(x, n0, phi_0, T) plt.figure(1) plt.plot(x, psi_pb, label='T = %0.1f ºC' % T) plt.xlabel('x [m]') plt.ylabel(r'$\psi(x)$') plt.figure(2) plt.plot(x, kT*psi_pb/(z*q_e), label='T = %0.1f ºC' % T) plt.xlabel('x [m]') plt.ylabel(r'$\phi(x)$ [V]') title = r'$\phi_0 = %0.3f$ [V], $\eta_0 = %0.2f$ [mol/l]' % (phi_0, n0) for fig in [1, 2]: plt.figure(fig) plt.title(title) _ = plt.legend() ``` ## 2.2. Second-order differential equation as a system of first-order differential equations To solve equation (1) numerically, the odeint method implemented in scipy can be used. This method solves systems of first-order differential equations. Accordingly, the first step is to transform equation (1) into a first-order system; to this end $\vec{\xi}$ is defined, and equation (1) becomes: $$\vec{\nabla} \cdot \vec{\xi} = \kappa^2 sinh(\psi)$$ $$\vec{\xi} = \vec{\nabla} \psi$$ ### 2.2.1. The one-dimensional case, 1D For the 1D case, the above system of equations is written as follows: $$\frac{d \xi(x)}{dx} = \kappa^2 sinh[\psi(x)]$$ $$\xi(x) = \frac{d\psi(x)}{dx}$$ ``` def dXi_dx(U, x, kappa=kappa): # U is an array such that ѱ = U[0] and ξ = U[1] # This function returns the first derivatives [ѱ', ξ'] psi = U[0] xi = U[1] dxi_dx = kappa**2 * np.sinh(psi) dpsi_dx = xi return [ dpsi_dx, dxi_dx ] ``` ## 3. Self-check questions + Interpret the plot obtained from the analytical 1D solution. + For the same concentration and temperature, what is the effect of varying the electrode potential? 
+ For the same electrode potential and temperature, what is the effect of varying the concentration?
+ For the same potential and concentration, what is the effect of varying the temperature?

End!
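As a sketch of how the first-order system above can be handed to odeint, the snippet below uses an illustrative (non-physical) $\kappa$ instead of the value computed earlier, and takes as initial slope the known first integral of the decaying 1D solution, $d\psi/dx = -2\kappa \sinh(\psi/2)$, so the result can be checked against the analytical solution of section 2.1:

```python
import numpy as np
from scipy.integrate import odeint

kappa = 1.0   # illustrative value, not the physical one computed above
psi_0 = 1.0   # dimensionless potential at the electrode

def dXi_dx(U, x, kappa=kappa):
    # U = [ψ, ξ]; returns [ψ', ξ'] as in section 2.2.1
    psi, xi = U
    return [xi, kappa**2 * np.sinh(psi)]

# Slope of the decaying solution at x = 0: dψ/dx = -2 κ sinh(ψ/2)
xi_0 = -2 * kappa * np.sinh(psi_0 / 2)

x = np.linspace(0.0, 2.0, 201)
sol = odeint(dXi_dx, [psi_0, xi_0], x)
psi_num = sol[:, 0]

# Compare with the analytical solution ψ(x) = 4 artanh(γ e^{-κx})
gamma = np.tanh(psi_0 / 4)
psi_ana = 4 * np.arctanh(gamma * np.exp(-kappa * x))
print(np.max(np.abs(psi_num - psi_ana)))
```

The numerical and analytical curves should agree closely over a few Debye lengths; integrating much farther forward eventually excites the growing $e^{+\kappa x}$ mode, which is why boundary-value or shooting methods are preferred for long domains.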
# TensorFlow

TensorFlow is an open-source software library by Google which is extensively used for numerical computation. It uses data flow graphs, which can be shared and executed on many different platforms. It is widely used for building deep learning models, a subset of machine learning. A tensor is nothing but a multidimensional array, so when we say TensorFlow, it is literally a flow of multidimensional arrays (tensors) through the computation graph.

With Anaconda installed, installing TensorFlow has become very simple. Irrespective of the platform you are using, you can install it by typing the following commands:

```
source activate universe
conda install -c conda-forge tensorflow
```

We can check that TensorFlow was installed successfully by simply running the following hello-world program.

```
import warnings
warnings.filterwarnings('ignore')

import tensorflow as tf
hello = tf.constant("Hello World")
hello.numpy()
```

## Variables, Constants, and Placeholders

Variables, constants, and placeholders are the fundamental elements of TensorFlow, but there is often confusion between the three. Let us look at each element one by one and learn the differences between them.

### Variables

Variables are containers used to store values, and they are used as input to several other operations in the computation graph. We can create TensorFlow variables using the tf.Variable() function. In the example below, we define a variable with values from a random normal distribution and name it weights.

```
weights = tf.Variable(tf.random.normal([3, 2], stddev=0.1), name="weights")
weights
```

In TensorFlow 1.x, after defining a variable we also had to explicitly create an initialization operation using the tf.global_variables_initializer() method, which allocated resources for the variable; in TensorFlow 2.x (used here), variables are initialized as soon as they are created.

### Constants

Constants, unlike variables, are immutable: once they are assigned a value, it cannot be changed.
We can create a constant using the tf.constant() function.

```
x = tf.constant(13)
x.numpy()
```

### Placeholders

Think of a placeholder as a variable for which you only define the type and dimensions but do not assign a value; the values are fed in at runtime. Placeholders have an optional argument called shape, which specifies the dimensions of the data. If shape is set to None, data of any size can be fed at runtime. Placeholders were defined using the tf.placeholder() function in TensorFlow 1.x; they no longer exist in 2.x.

```
# not in TF 2.0
# x = tf.placeholder("float", shape=None)
```

To put it simply, we use tf.Variable to store data and tf.placeholder to feed in external data.

## Computation Graph

Everything in TensorFlow is represented as a computation graph consisting of nodes and edges, where nodes are mathematical operations (addition, multiplication, etc.) and edges are tensors. Having a computation graph is very efficient for optimizing resources, and it also enables distributed computing.

Say we have a node B whose input depends on the output of node A; this type of dependency is called a direct dependency.

```
A = tf.multiply(8,5)
B = tf.multiply(A,1)
```

When node B doesn't depend on node A for its input, it is called an indirect dependency.

```
A = tf.multiply(8,5)
B = tf.multiply(4,3)
```

So if we understand these dependencies, we can distribute the independent computations across the available resources and reduce the computation time.

In TensorFlow 1.x, importing TensorFlow automatically created a default graph, and all nodes we created were associated with that default graph.

## Sessions

A computation graph only defines the computation; to execute it, TensorFlow 1.x used sessions:

```
sess = tf.Session()
```

We could create a session for our computation graph using the tf.Session() method, which allocated the memory for storing the current values of the variables.
After creating the session, we could execute the graph using the sess.run() method. In TensorFlow 1.x, nothing ran until a session was started. Look at the code below:

```
import tensorflow as tf
a = tf.multiply(2,3)
a.numpy()
```

In TensorFlow 1.x, printing `a` here would show a Tensor object instead of 6, because, as already said, importing TensorFlow created a default computation graph, all nodes got attached to that graph, and executing the graph required a session, as in the commented-out code below. In TensorFlow 2.x, eager execution evaluates the operation immediately, so `a.numpy()` returns 6 directly.

```
import tensorflow as tf
a = tf.multiply(2,3)
a.numpy()

# not in TF 2.0
# create a tensorflow session for executing the graph
# with tf.Session() as sess:
#     # run the session
#     print(sess.run(a))
```
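To see why tracking dependencies matters, here is a toy pure-Python evaluator for the two-node examples above. This is not TensorFlow's internal machinery, just a sketch of how knowing each node's inputs lets a runtime compute independent nodes in parallel:

```python
# Toy dependency graph: node name -> (function, names of input nodes).
# A and C have no inputs and are independent of each other, so a
# scheduler could run them concurrently; B directly depends on A.
graph = {
    "A": (lambda: 8 * 5, ()),        # no inputs
    "B": (lambda a: a * 1, ("A",)),  # direct dependency on A
    "C": (lambda: 4 * 3, ()),        # independent of A and B
}

def evaluate(graph, node, cache=None):
    """Evaluate a node, resolving and caching its dependencies first."""
    cache = {} if cache is None else cache
    if node not in cache:
        fn, deps = graph[node]
        cache[node] = fn(*(evaluate(graph, d, cache) for d in deps))
    return cache[node]

print(evaluate(graph, "B"))  # 40, computed after A
print(evaluate(graph, "C"))  # 12, needs nothing from A or B
```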
# Esselunga online shop scraping Product names, prices per unit and prices per kg/liter, nutritional information. Purpose: help calculating price and convenience of meals and diets, for example knowing which are the cheapest products per gram of protein (€/grams of protein) or per calories (€/cal). ## Initial data scraping The initial scraping has been done manually, just going to every category of the online shop and launching a JS script to get to the bottom of the infinite scroll. This triggers all the xhr requests. Then, the JSON responses for every category have been saved to `*.har` files by right-clicking and selecting _"Save All As HAR"_. ### JS autoscroll code from https://www.quora.com/Is-there-a-way-to-automatically-load-a-page-with-infinite-scroll-until-it-gets-to-the-bottom-of-the-feed-Is-there-a-software-or-a-setting-that-helps-to-do-this ```var lastScrollHeight = 0; function autoScroll() { var sh = document.documentElement.scrollHeight; if (sh != lastScrollHeight) { lastScrollHeight = sh; document.documentElement.scrollTop = sh; } } window.setInterval(autoScroll, 100);``` ## Parse responses ``` import re, os, json def label_to_price_per_kg(label): regex = r'Euro (\d+,\d+) \/ ([a-z]+)' match = re.match(regex, label) if not match: return None groups = list(match.groups()) price_per_kg = float(groups[0].replace(',', '.')) # Adjust price per kg if necessary if groups[1] == 'g' or groups[1] == 'ml': price_per_kg *= 1000 elif groups[1] == 'hg': price_per_kg *= 10 elif groups[1] == 'pz': price_per_kg = None return price_per_kg def parse_har(har_filename): # Parse har as json with open('har/{}'.format(har_filename)) as f: js = json.loads(f.read()) # Iterate over responses and get all the entities entities = [] for entry in js['log']['entries']: response = json.loads(entry['response']['content']['text']) entities.extend(response['entities']) for entity in entities: # Strip description entity['description'] = entity['description'].strip() # Calculate price per 
kg by parsing the label entity['price_per_kg'] = label_to_price_per_kg(entity['label']) # Add qty entity['qty'] = ' '.join([entity['unitValue'], entity['unitText']]) # Add category entity['category'] = har_filename[:-4] # Only keep products with price per kg entities = list(filter(lambda entity: entity['price_per_kg'], entities)) # Keep only interesting keys keys_to_keep = ['description', 'category', 'price', 'qty', 'price_per_kg'] # Create dictionary with product id as key entities = { entity['id'] : { k: entity[k] for k in keys_to_keep } for entity in entities } return entities ``` ## Build database from HARs ``` import pandas as pd entities = {} for har_filename in next(os.walk('har'))[2]: entities.update(parse_har(har_filename)) ``` ## Scrape nutrition facts from IDs The `nutrition_facts` dictionary has the same keys as `entities`. If the value is `repeat`, it means the request has failed for some reason and needs to be repeated. Otherwise, if the response has status code 200 OK, the value would be the text of the response. Initially, the dictionary is created by copying `entities` and setting all the keys to `repeat`. This is done so that the data can be scraped in multiple passes without losing intermediate results, because after about half of the request I noticed my IP address has been 'banned' and every request ended in timeout. The code below opens and reads the dictionary in the JSON file and tries to make requests to the Esselunga online shop website to scrape nutritional information. To do so, first make a valid request from the browser in order to obtain valid cookies and headers for the request. 
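The multi-pass retry scheme described above can be sketched as follows. Here `flaky_fetch` is a hypothetical stand-in for the real HTTP request (the actual URL, cookies, and headers are omitted); any entry whose request fails keeps its `'repeat'` marker so a later pass picks it up again:

```python
def scrape_pass(nutrition_facts, fetch):
    """One pass over the dictionary: retry every entry still marked 'repeat'.

    `fetch` should return the response text on success and raise on
    failure (timeout, ban, ...); failed entries keep the 'repeat'
    marker, so intermediate results are never lost between passes.
    """
    for _id, value in nutrition_facts.items():
        if value != "repeat":
            continue  # already scraped in a previous pass
        try:
            nutrition_facts[_id] = fetch(_id)
        except Exception:
            pass  # leave as 'repeat' for the next pass

# Hypothetical stub: pretend the first call for each id times out.
calls = set()
def flaky_fetch(_id):
    if _id not in calls:
        calls.add(_id)
        raise TimeoutError
    return "<table>Informazioni nutrizionali ...</table>"

facts = {"1": "repeat", "2": "repeat"}
scrape_pass(facts, flaky_fetch)  # first pass: every request fails
scrape_pass(facts, flaky_fetch)  # second pass: everything succeeds
```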
## Parse nutrition facts ``` def delete_nutrition_facts_by_id(_id): nutrition_facts = json.loads(open('nutrition_facts/nutrition_facts.json', 'r').read()) nutrition_facts.pop(str(_id)) with open('nutrition_facts/nutrition_facts.json', 'w') as f: f.write(json.dumps(nutrition_facts)) def parse_table(_id): return pd.read_html(nutrition_facts[_id])[0] def parse_nutrition_facts(nutrition_facts): # Find tables containing nutritional info. missing = 0 for _id in nutrition_facts.keys(): found = False for idx, el in enumerate(nutrition_facts[_id]['informations']): if 'value' in el and 'Informazioni nutrizionali' in el['value']: nutrition_facts[_id] = el['value'] found = True break if not found: nutrition_facts[_id] = None missing += 1 # Fill missing keys with None. for _id in entities.keys(): if str(_id) not in nutrition_facts: nutrition_facts[_id] = None missing += 1 # Now 'entities' and 'nutrition_facts' have the same keys. print('{} total entities. {} are missing nutrition facts.'.format(len(nutrition_facts), missing)) # Convert str keys to int keys nutrition_facts = { int(_id) : fact for _id, fact in nutrition_facts.items() } # Parse raw HTML table and extract data for _id, html in nutrition_facts.items(): if html is None: continue # Parse table table = pd.read_html(html)[0] # Identify columns in table: labels ('nutrizionali') and values per 100g cols_nutrizionali = [col for col in table.columns if 'nutrizionali' in col] cols_100 = [col for col in table.columns if '100' in col] # If either column is not found, forget the nutrition facts for this item if not cols_nutrizionali: nutrition_facts[_id] = None continue if not cols_100: nutrition_facts[_id] = None continue col_nutrizionali = cols_nutrizionali[0] # Get calories separately because the corresponding label is not always consistent # col_100 will be the one where kcal are found regexes = [r'(\d+),?\d*? 
*kcal', r'kcal *(\d+)'] matches = pd.DataFrame() it = iter([(col, regex) for col in cols_100 for regex in regexes]) col_regex = None while matches.empty: try: col_regex = next(it) except StopIteration: break matches = table[col_regex[0]].astype(str).str.extract(col_regex[1], flags=re.IGNORECASE).dropna() col_100 = col_regex[0] kcal = int(matches[0].tolist()[0]) d = {} d['kcal'] = kcal keywords = ['carboidrati', 'zuccheri', 'proteine', 'grassi', 'saturi', 'monoinsaturi', 'polinsaturi', 'fibre', 'sale'] translate = {'carboidrati' : 'carbs', 'zuccheri': 'sugars', 'proteine': 'protein', 'grassi': 'fat', 'saturi': 'saturated', 'monoinsaturi': 'monounsat.', 'polinsaturi': 'polyunsat.', 'fibre': 'fiber', 'sale': 'salt'} # For each keyword, find it in 'col_nutrizionali' and get the corresponding value in 'col_100' for keyword in keywords: found = list(table[table[col_nutrizionali].str.contains(keyword, case=False).fillna(False)][col_100]) if not found: continue found = found[0] if isinstance(found, float) or isinstance(found, int): d[translate[keyword]] = float(found) continue if isinstance(found, str): found = found.replace(',', '.').strip() numeric_const_pattern = '[-+]? (?: (?: \d* \. \d+ ) | (?: \d+ \.? ) )(?: [Ee] [+-]? \d+ ) ?' 
            rx = re.compile(numeric_const_pattern, re.VERBOSE)
            parsed = rx.findall(found)
            if not parsed:
                d[translate[keyword]] = float(0)
            else:
                d[translate[keyword]] = float(parsed[0])

        nutrition_facts[_id] = d

    return nutrition_facts

nutrition_facts = json.loads(open('nutrition_facts/nutrition_facts.json', 'r').read())
nutrition_facts = parse_nutrition_facts(nutrition_facts)
```

## Merge with database

```
# First make sure keys are exactly the same
assert set(entities.keys()) == set(nutrition_facts.keys())

# Merge things
for _id in entities.keys():
    d = nutrition_facts[_id]
    if d is not None:
        entities[_id].update(d)
```

## Sanity check

```
import math

tols = [5.0, 10.0, 20.0, 50.0, 100.0]
count = [0, 0, 0, 0, 0]
diffs = []
for _id, entity in entities.items():
    kcal_actual = entity.get('kcal', 0.0)
    # Note: the nutrition keys were translated to English when parsing,
    # so use 'carbs'/'protein'/'fat'/'fiber', not the Italian labels
    kcal_calculated = 4 * entity.get('carbs', 0.0) + 4 * entity.get('protein', 0.0) \
                      + 9 * entity.get('fat', 0.0) + 1.5 * entity.get('fiber', 0.0)
    if not math.isnan(kcal_calculated):
        diffs.append(abs(kcal_actual - kcal_calculated))
        for i, tol in enumerate(tols):
            if not math.isclose(kcal_actual, kcal_calculated, abs_tol=tol):
                count[i] += 1

import matplotlib.pyplot as plt
plt.plot(tols, count)
plt.xlabel('abs_tol')
plt.ylabel('count')
plt.show()
```

Most products are within 20 calories of the calculated ones. Most of the products which are **not** are _weird candy with polyols_. Some of them are blatant errors though. Let's remove them.
``` diffs.sort() plt.plot(diffs[-50:]) plt.show() to_remove = [5416230, 5416233, 5399643, 5427633, 5758337, 5417410, 5759062, 5417420, 5410182, 5724116, 5717056, 5409164, 5410135, 5405325, 5381793] for _id in to_remove: entities.pop(_id) ``` ## Add useful data ``` for entity in entities.values(): entity.update( { '€/kcal' : (entity['price_per_kg'] / 10.0) / entity['kcal'] if 'kcal' in entity and entity['kcal'] != 0.0 else 0.0, '€/g carb' : (entity['price_per_kg'] / 10.0) / entity['carbs'] if 'carbs' in entity and entity['carbs'] != 0.0 else 0.0, '€/g protein' : (entity['price_per_kg'] / 10.0) / entity['protein'] if 'protein' in entity and entity['protein'] != 0.0 else 0.0, '€/g fat' : (entity['price_per_kg'] / 10.0) / entity['fat'] if 'fat' in entity and entity['fat'] != 0.0 else 0.0} ) ``` ## Save to csv ``` df = pd.DataFrame(entities) df = df.transpose() df.to_csv('csv/esselunga.csv') ```
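The division by 10.0 above works because `price_per_kg / 10` is the price per 100 g, which is the same basis as the nutrition table. A quick check with a made-up product (the numbers are illustrative, not from the dataset):

```python
# price_per_kg / 10 gives the price per 100 g, the same basis as the
# nutrition table, so dividing by the nutrient amount per 100 g yields
# euros per kcal or per gram of nutrient.
entity = {"price_per_kg": 12.0, "kcal": 400, "protein": 25.0}  # made-up product

price_per_100g = entity["price_per_kg"] / 10.0          # 1.20 euros per 100 g
eur_per_kcal = price_per_100g / entity["kcal"]          # 0.003 euros per kcal
eur_per_g_protein = price_per_100g / entity["protein"]  # 0.048 euros per g of protein
print(eur_per_kcal, eur_per_g_protein)
```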
# Preprocessing data

```
import os
```

[git submodule](https://stackoverflow.com/questions/36236484/maintaining-a-git-repo-inside-another-git-repo)

<br>

## MusicNet

Download the dataset (11 GiB)
* [Deep Complex Networks: MusicNet](https://github.com/ChihebTrabelsi/deep_complex_networks)
  - [official page](https://homes.cs.washington.edu/~thickstn/musicnet.html)

```
musicnet_path = "../musicnet/data"

file_in = os.path.join(musicnet_path, "musicnet.h5")
if not os.path.exists(file_in):
    os.makedirs(musicnet_path, exist_ok=False)
    !wget https://homes.cs.washington.edu/~thickstn/media/musicnet.h5 -P {musicnet_path}
```

Extract test and validation subsets as specified in [Trabelsi et al. (2019)](https://arxiv.org/abs/1705.09792).

```
datasets = {
    "test": [
        "id_2303",
        "id_2382",
        "id_1819",
    ],
    "valid": [
        "id_2131",
        "id_2384",
        "id_1792",
        "id_2514",
        "id_2567",
        "id_1876",
    ],
}
```

Populate the train set with the remaining keys.

```
import h5py
from itertools import chain

with h5py.File(file_in, "r") as h5_in:
    remaining_keys = set(h5_in.keys()) - set(chain(*datasets.values()))

datasets.update({
    "train": list(remaining_keys)
})
```

Run the resampler on the keys of each dataset (this takes a while). The code is loosely based on [Trabelsi et al. (2019)](https://github.com/ChihebTrabelsi/deep_complex_networks/blob/master/musicnet/scripts/resample.py) but has been customized for HDF5
* dependencies: [resampy](https://github.com/bmcfee/resampy)

```
from cplxpaper.musicnet.utils import resample_h5

for dataset, keys in datasets.items():
    file_out = os.path.join(musicnet_path, f"musicnet_11khz_{dataset}.h5")
    resample_h5(file_in, file_out, 44100, 11000, keys=sorted(keys))
```

The `utils.musicnet` module also implements a `torch.Dataset` which interfaces with the HDF5 files
* dependencies: [ncls](https://github.com/biocore-ntnu/ncls) -- written in cython/c and offers a tremendous speed-up compared to the pythonic `IntervalTree`.

<br>

## TIMIT

* [TIMIT](https://catalog.ldc.upenn.edu/LDC93S1) paywalled?!
- [on Kaggle](https://www.kaggle.com/mfekadu/darpa-timit-acousticphonetic-continuous-speech) ``` # !pip install kaggle ``` From [Kaggle-api](https://github.com/Kaggle/kaggle-api#api-credentials) > At the 'Account' tab of your user profile (`https://www.kaggle.com/<username>/account`) select 'Create API Token'. This will download the token `kaggle.json`. Download TIMIT dataset without any hassle with `kaggle`! ``` timit_path = "../timit/data" if not os.path.exists(timit_path): os.makedirs(timit_path, exist_ok=False) timit_uri = "mfekadu/darpa-timit-acousticphonetic-continuous-speech" !kaggle datasets download -p {timit_path} --unzip {timit_uri} ``` Run preprocessing scripts ``` # preprocess the data # how do? ``` <br> ``` assert False ``` <br>
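For intuition about what the 44100 → 11000 Hz resampling above does, here is a naive linear-interpolation sketch. This is not what `resample_h5` actually does (it uses resampy, which band-limits the signal before decimating to avoid aliasing); the sketch only shows the length and time-axis bookkeeping:

```python
import numpy as np

def naive_resample(x, sr_orig, sr_new):
    """Naive sample-rate conversion by linear interpolation.

    A real resampler (e.g. resampy, as used by resample_h5) low-pass
    filters before decimating; this toy version does not, so it would
    alias on real audio. It is only meant to illustrate the mapping
    between the two time grids.
    """
    n_out = int(round(len(x) * sr_new / sr_orig))
    t_orig = np.arange(len(x)) / sr_orig
    t_new = np.arange(n_out) / sr_new
    return np.interp(t_new, t_orig, x)

one_second = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
y = naive_resample(one_second, 44100, 11000)
print(len(y))  # 11000 samples for 1 second of audio
```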
# Objective * 20181230: * Predict stock price in next day using simple moving average * Given prices for the last N days, we do prediction for day N+1 * 20190121 - Diff from StockPricePrediction_v3_mov_avg.ipynb: * Here we use last value to do prediction ``` %matplotlib inline import math import matplotlib import numpy as np import pandas as pd import seaborn as sns import time from datetime import date, datetime, time, timedelta from matplotlib import pyplot as plt from pylab import rcParams from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error from tqdm import tqdm_notebook np.warnings.filterwarnings('ignore') #### Input params ################## stk_path = "./data/VTI.csv" test_size = 0.2 # proportion of dataset to be used as test set cv_size = 0.2 # proportion of dataset to be used as cross-validation set Nmax = 2 # for feature at day t, we use lags from t-1, t-2, ..., t-N as features # Nmax is the maximum N we are going to test fontsize = 14 ticklabelsize = 14 #################################### ``` # Common functions ``` def get_preds_mov_avg(df, target_col, N, pred_min, offset): """ Given a dataframe, get prediction at timestep t using values from t-1, t-2, ..., t-N. Using simple moving average. Inputs df : dataframe with the values you want to predict. Can be of any length. target_col : name of the column you want to predict e.g. 'adj_close' N : get prediction at timestep t using values from t-1, t-2, ..., t-N pred_min : all predictions should be >= pred_min offset : for df we only do predictions for df[offset:]. e.g. offset can be size of training set Outputs pred_list : list. The predictions for target_col. np.array of length len(df)-offset. 
""" pred_list = df[target_col].rolling(window = N, min_periods=1).mean() # len(pred_list) = len(df) # Add one timestep to the predictions pred_list = np.concatenate((np.array([np.nan]), np.array(pred_list[:-1]))) # If the values are < pred_min, set it to be pred_min pred_list = np.array(pred_list) pred_list[pred_list < pred_min] = pred_min return pred_list[offset:] def get_mape(y_true, y_pred): """ Compute mean absolute percentage error (MAPE) """ y_true, y_pred = np.array(y_true), np.array(y_pred) return np.mean(np.abs((y_true - y_pred) / y_true)) * 100 ``` # Load data ``` df = pd.read_csv(stk_path, sep = ",") # Convert Date column to datetime df.loc[:, 'Date'] = pd.to_datetime(df['Date'],format='%Y-%m-%d') # Change all column headings to be lower case, and remove spacing df.columns = [str(x).lower().replace(' ', '_') for x in df.columns] # # Get month of each sample # df['month'] = df['date'].dt.month # Sort by datetime df.sort_values(by='date', inplace=True, ascending=True) df.head(10) df['date'].min(), df['date'].max() # Plot adjusted close over time rcParams['figure.figsize'] = 10, 8 # width 10, height 8 ax = df.plot(x='date', y='adj_close', style='b-', grid=True) ax.set_xlabel("date") ax.set_ylabel("USD") ``` # Split into train, dev and test set ``` # Get sizes of each of the datasets num_cv = int(cv_size*len(df)) num_test = int(test_size*len(df)) num_train = len(df) - num_cv - num_test print("num_train = " + str(num_train)) print("num_cv = " + str(num_cv)) print("num_test = " + str(num_test)) # Split into train, cv, and test train = df[:num_train].copy() cv = df[num_train:num_train+num_cv].copy() train_cv = df[:num_train+num_cv].copy() test = df[num_train+num_cv:].copy() print("train.shape = " + str(train.shape)) print("cv.shape = " + str(cv.shape)) print("train_cv.shape = " + str(train_cv.shape)) print("test.shape = " + str(test.shape)) test['date'].min(), test['date'].max() ``` # EDA ``` # Plot adjusted close over time rcParams['figure.figsize'] = 10, 8 # 
width 10, height 8 matplotlib.rcParams.update({'font.size': 14}) ax = train.plot(x='date', y='adj_close', style='b-', grid=True) ax = cv.plot(x='date', y='adj_close', style='y-', grid=True, ax=ax) ax = test.plot(x='date', y='adj_close', style='g-', grid=True, ax=ax) ax.legend(['train', 'validation', 'test']) ax.set_xlabel("date") ax.set_ylabel("USD") ``` # Predict using Moving Average ``` RMSE = [] mape = [] for N in range(1, Nmax+1): # N is no. of samples to use to predict the next value est_list = get_preds_mov_avg(train_cv, 'adj_close', N, 0, num_train) cv.loc[:, 'est' + '_N' + str(N)] = est_list RMSE.append(math.sqrt(mean_squared_error(est_list, cv['adj_close']))) mape.append(get_mape(cv['adj_close'], est_list)) print('RMSE = ' + str(RMSE)) print('MAPE = ' + str(mape)) df.head() # Plot RMSE versus N plt.figure(figsize=(12, 8), dpi=80) plt.plot(range(1, Nmax+1), RMSE, 'x-') plt.grid() plt.xlabel('N') plt.ylabel('RMSE') # Plot MAPE versus N. Note for MAPE smaller better. plt.figure(figsize=(12, 8), dpi=80) plt.plot(range(1, Nmax+1), mape, 'x-') plt.grid() plt.xlabel('N') plt.ylabel('MAPE') # Set optimum N N_opt = 1 ``` # Plot Predictions on dev set ``` # Plot adjusted close over time rcParams['figure.figsize'] = 10, 8 # width 10, height 8 matplotlib.rcParams.update({'font.size': 14}) ax = train.plot(x='date', y='adj_close', style='b-', grid=True) ax = cv.plot(x='date', y='adj_close', style='y-', grid=True, ax=ax) ax = test.plot(x='date', y='adj_close', style='g-', grid=True, ax=ax) ax = cv.plot(x='date', y='est_N1', style='r-', grid=True, ax=ax) ax = cv.plot(x='date', y='est_N2', style='m-', grid=True, ax=ax) ax.legend(['train', 'validation', 'test', 'predictions with N=1', 'predictions with N=2']) ax.set_xlabel("date") ax.set_ylabel("USD") # Plot adjusted close over time rcParams['figure.figsize'] = 10, 8 # width 10, height 8 ax = train.plot(x='date', y='adj_close', style='bx-', grid=True) ax = cv.plot(x='date', y='adj_close', style='yx-', grid=True, ax=ax) ax = 
test.plot(x='date', y='adj_close', style='gx-', grid=True, ax=ax) ax = cv.plot(x='date', y='est_N1', style='rx-', grid=True, ax=ax) ax = cv.plot(x='date', y='est_N2', style='mx-', grid=True, ax=ax) ax.legend(['train', 'validation', 'test', 'predictions with N=1', 'predictions with N=2']) ax.set_xlabel("date") ax.set_ylabel("USD") ax.set_xlim([date(2017, 11, 1), date(2017, 12, 30)]) ax.set_ylim([127, 137]) ax.set_title('Zoom in to dev set') ``` # Final Model ``` est_list = get_preds_mov_avg(df, 'adj_close', N_opt, 0, num_train+num_cv) test.loc[:, 'est' + '_N' + str(N_opt)] = est_list print("RMSE = %0.3f" % math.sqrt(mean_squared_error(est_list, test['adj_close']))) print("MAPE = %0.3f%%" % get_mape(test['adj_close'], est_list)) test.head() # Plot adjusted close over time rcParams['figure.figsize'] = 10, 8 # width 10, height 8 ax = train.plot(x='date', y='adj_close', style='b-', grid=True) ax = cv.plot(x='date', y='adj_close', style='y-', grid=True, ax=ax) ax = test.plot(x='date', y='adj_close', style='g-', grid=True, ax=ax) ax = test.plot(x='date', y='est_N1', style='r-', grid=True, ax=ax) ax.legend(['train', 'validation', 'test', 'predictions with N_opt=1']) ax.set_xlabel("date") ax.set_ylabel("USD") matplotlib.rcParams.update({'font.size': 14}) # Plot adjusted close over time rcParams['figure.figsize'] = 10, 8 # width 10, height 8 ax = train.plot(x='date', y='adj_close', style='bx-', grid=True) ax = cv.plot(x='date', y='adj_close', style='yx-', grid=True, ax=ax) ax = test.plot(x='date', y='adj_close', style='gx-', grid=True, ax=ax) ax = test.plot(x='date', y='est_N1', style='rx-', grid=True, ax=ax) ax.legend(['train', 'validation', 'test', 'predictions with N_opt=1'], loc='upper left') ax.set_xlabel("date") ax.set_ylabel("USD") ax.set_xlim([date(2018, 4, 23), date(2018, 11, 23)]) ax.set_ylim([130, 155]) ax.set_title('Zoom in to test set') # Plot adjusted close over time, only for test set rcParams['figure.figsize'] = 10, 8 # width 10, height 8 
matplotlib.rcParams.update({'font.size': 14}) ax = test.plot(x='date', y='adj_close', style='gx-', grid=True) ax = test.plot(x='date', y='est_N1', style='rx-', grid=True, ax=ax) ax.legend(['test', 'predictions using last value'], loc='upper left') ax.set_xlabel("date") ax.set_ylabel("USD") ax.set_xlim([date(2018, 4, 23), date(2018, 11, 23)]) ax.set_ylim([130, 155]) # Save as csv test_last_value = test test_last_value.to_csv("./out/test_last_value.csv") ``` # Findings * On the test set, the RMSE is 1.127 and MAPE is 0.565% using last value prediction ``` # Compare various methods results_dict = {'method': ['Last Value', 'Moving Average', 'Linear Regression', 'XGBoost', 'LSTM'], 'RMSE': [1.127, 1.27, 1.42, 1.162, 1.164], 'MAPE(%)': [0.565, 0.64, 0.707, 0.58, 0.583]} results = pd.DataFrame(results_dict) results # Read all dataframes for the different methods test_last_value = pd.read_csv("./out/test_last_value.csv", index_col=0) test_last_value.loc[:, 'date'] = pd.to_datetime(test_last_value['date'],format='%Y-%m-%d') test_mov_avg = pd.read_csv("./out/test_mov_avg.csv", index_col=0) test_mov_avg.loc[:, 'date'] = pd.to_datetime(test_mov_avg['date'],format='%Y-%m-%d') test_lin_reg = pd.read_csv("./out/test_lin_reg.csv", index_col=0) test_lin_reg.loc[:, 'date'] = pd.to_datetime(test_lin_reg['date'],format='%Y-%m-%d') test_xgboost = pd.read_csv("./out/test_xgboost.csv", index_col=0) test_xgboost.loc[:, 'date'] = pd.to_datetime(test_xgboost['date'],format='%Y-%m-%d') test_lstm = pd.read_csv("./out/test_lstm.csv", index_col=0) test_lstm.loc[:, 'date'] = pd.to_datetime(test_lstm['date'],format='%Y-%m-%d') # Plot all methods together to compare rcParams['figure.figsize'] = 10, 8 # width 10, height 8 matplotlib.rcParams.update({'font.size': 14}) ax = test.plot(x='date', y='adj_close', style='g-', grid=True) ax = test_last_value.plot(x='date', y='est_N1', style='r-', grid=True, ax=ax) ax = test_mov_avg.plot(x='date', y='est_N2', style='b-', grid=True, ax=ax) ax = 
test_lin_reg.plot(x='date', y='est_N5', style='m-', grid=True, ax=ax) ax = test_xgboost.plot(x='date', y='est', style='y-', grid=True, ax=ax) ax = test_lstm.plot(x='date', y='est', style='c-', grid=True, ax=ax) ax.legend(['test', 'predictions using last value', 'predictions using moving average', 'predictions using linear regression', 'predictions using XGBoost', 'predictions using LSTM'], loc='lower left') ax.set_xlabel("date") ax.set_ylabel("USD") # ax.set_xlim([date(2018, 4, 23), date(2018, 11, 23)]) ax.set_xlim([date(2018, 7, 1), date(2018, 11, 23)]) ax.set_ylim([131, 151.5]) ```
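The shifting logic of `get_preds_mov_avg` can be checked on toy prices (restated compactly here so the cell is self-contained; with N=1 the moving average reduces to the last-value method used in this notebook):

```python
import numpy as np
import pandas as pd

def preds_mov_avg(series, N, offset):
    """Rolling mean of the previous N values, shifted one step ahead,
    so the prediction for day t uses only days t-1, ..., t-N."""
    rolled = series.rolling(window=N, min_periods=1).mean()
    preds = np.concatenate(([np.nan], rolled.to_numpy()[:-1]))
    return preds[offset:]

prices = pd.Series([10.0, 11.0, 12.0, 13.0])
print(preds_mov_avg(prices, N=1, offset=1))  # [10. 11. 12.] - yesterday's price
print(preds_mov_avg(prices, N=2, offset=1))  # [10. 10.5 11.5]
```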
# Seminar 6
## Text Processing Methods

Examples of automatic text processing tasks:
- text classification
  - sentiment analysis
  - spam filtering
  - by topic or genre
- machine translation
- speech recognition
- information extraction
  - named entities
  - facts and events
- text clustering
- optical character recognition
- spell checking
- question answering systems
- text summarization
- text generation

Some of the classic methods for working with texts:
- tokenization
- lemmatization / stemming
- stop-word removal
- vector representations of texts (bag of words and TF-IDF)

_Further reading:_
- Jurafsky, Martin: Speech and Language Processing (2nd or 3rd Edition)

## Tokenization

To tokenize means to split a text into words, or *tokens*. The most naive way to tokenize a text is to split it with `split`. But `split` misses a lot: for example, it doesn't even separate punctuation from words. Beyond that, there are many less trivial problems, so it is better to use ready-made tokenizers.

```
!pip install nltk

from nltk.tokenize import word_tokenize
import numpy as np
import pandas as pd

example = 'Но не каждый хочет что-то исправлять:('

example.split()

# the cell below may raise an error - you will need to download the 'punkt' package
import nltk
nltk.download()
# or
nltk.download('punkt')

word_tokenize(example)
```

nltk actually has quite a few tokenizers:

```
from nltk import tokenize

dir(tokenize)[:16]
```

They can return the start and end indices of each token:

```
wh_tok = tokenize.WhitespaceTokenizer()
list(wh_tok.span_tokenize(example))
```

(in case you wondered why the module even includes a tokenizer that works like `.split()` :)

Some tokenizers behave in peculiar ways:

```
tokenize.TreebankWordTokenizer().tokenize("don't stop me")
```

This can be useful for some tasks.
And some are not meant for natural-language text at all:

```
tokenize.SExprTokenizer().tokenize("(a (b c)) d e (f)")

from nltk.tokenize import TweetTokenizer
tw = TweetTokenizer()
tw.tokenize(example)
```

_Further reading:_
- http://mlexplained.com/2019/11/06/a-deep-dive-into-the-wonderful-world-of-preprocessing-in-nlp/
- https://blog.floydhub.com/tokenization-nlp/

## Stop Words and Punctuation

*Stop words* are words that occur frequently in almost any text and say nothing interesting about a particular document; in other words, they act as noise. So they are usually removed. Punctuation is removed for the same reason.

```
nltk.download("stopwords")
from nltk.corpus import stopwords

print(stopwords.words('russian'))

from string import punctuation
punctuation

noise = stopwords.words('russian') + list(punctuation)
noise
```

## Lemmatization

Lemmatization is the reduction of different forms of a word to its base form, the *lemma*. For example, the tokens «пью», «пил», «пьет» all map to «пить».

Why is this good?
* First, we want each *word* to be a separate feature, not each of its individual forms.
* Second, some stop words are listed only in their base form, and without lemmatization we would exclude only that form.

Good lemmatizers for Russian are `mystem` and `pymorphy`.

### [Mystem](https://tech.yandex.ru/mystem/)

How to work with it:
* you can download `mystem` and run it [from the terminal with various parameters](https://tech.yandex.ru/mystem/doc/)
* [pymystem3](https://pythonhosted.org/pymystem3/pymystem3.html) is a Python wrapper; it is slower, but convenient

```
!pip install pymystem3

from pymystem3 import Mystem
mystem_analyzer = Mystem()
```

We initialized `Mystem` with its default parameters.
The full set of parameters is:
* mystem_bin: path to `mystem`, if you have several
* grammar_info: whether grammatical information is needed, or only lemmas (needed by default)
* disambiguation: whether homonymy should be resolved (on by default)
* entire_input: whether to keep everything in the output (whitespace, for example) or drop it (everything is kept by default)

`Mystem` methods take a string; the tokenizer is built in. You could, of course, analyze word by word, but then it would not be able to take context into account.

You can simply lemmatize a text:

```
print(mystem_analyzer.lemmatize(example))
```

### [Pymorphy](http://pymorphy2.readthedocs.io/en/latest/)

This is a Python module, quite fast and with lots of features.

```
!pip install pymorphy2
!pip install pymorphy2-dicts
!pip install DAWG-Python

from pymorphy2 import MorphAnalyzer
pymorphy2_analyzer = MorphAnalyzer()
```

`pymorphy2` works with individual words. If you give it a whole sentence, it simply fails to lemmatize it, since it does not understand it.

```
tokenized_example = tw.tokenize(example)
tokenized_example[3]

ana = pymorphy2_analyzer.parse(tokenized_example[3])
ana

ana[0].normal_form
```

### mystem vs. pymorphy

1) *Speed.* `mystem` is incredibly slow under Windows on large texts.

2) *Homonymy resolution.* `Mystem` can resolve homonymy from context (though it does not always succeed), while `pymorphy2` takes a single word as input and so cannot disambiguate from context at all.

```
homonym1 = 'За время обучения я прослушал больше сорока курсов.'
homonym2 = 'Сорока своровала блестящее украшение со стола.'
mystem_analyzer = Mystem()  # reinitialize the object with default parameters

print(mystem_analyzer.analyze(homonym1)[-5])
print(mystem_analyzer.analyze(homonym2)[0])

mystem_analyzer.lemmatize(homonym2)
```

## Stemming

Unlike lemmatization, stemming simply strips affixes (endings and suffixes) from words, which does not necessarily produce forms that exist in the language in question.

```
text = "In my younger and more vulnerable years my father gave me some advice that I've been turning over in my mind ever since.\n\"Whenever you feel like criticizing any one,\" he told me, \"just remember that all the people in this world haven't had the advantages that you've had.\""
print(text)

text_tokenized = [w for w in word_tokenize(text) if w.isalpha()]
text_tokenized

from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer('english')
text_stemmed = [stemmer.stem(w) for w in text_tokenized]
print(' '.join(text_stemmed))

nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
text_lemmatized = [lemmatizer.lemmatize(w) for w in text_tokenized]
print(' '.join(text_lemmatized))
```

_Further reading:_
- https://en.wikipedia.org/wiki/Stemming
- https://en.wikipedia.org/wiki/Lemmatisation
- https://www.datacamp.com/community/tutorials/stemming-lemmatization-python

## Bag-of-words and TF-IDF

So how do we actually apply standard machine learning methods to texts? We need a feature matrix! In the bag-of-words model, a text is represented as a set of independent words. Each word can then be assigned a weight, which maps the text to a set of weights. The simplest choice of weight is the word's frequency in the text.
```
texts = ['I like my cat.',
         'My cat is the most perfect cat.',
         'is this cat or is this bread?']

texts_tokenized = [' '.join([w for w in word_tokenize(t) if w.isalpha()]) for t in texts]
texts_tokenized

from sklearn.feature_extraction.text import CountVectorizer

cnt_vec = CountVectorizer()
X = cnt_vec.fit_transform(texts_tokenized)
cnt_vec.get_feature_names()
X
X.toarray()
```

The texts:

- I like my cat.
- My cat is the most perfect cat.
- is this cat or is this bread?

```
pd.DataFrame(X.toarray(), columns=cnt_vec.get_feature_names())
```

Note that a word which is frequent in one text but almost absent from the others gets a large weight for that text, exactly like words that are frequent in every text. To tell these two kinds of words apart, we can use the statistical measure TF-IDF, which captures how important a word is to a specific text.

For each word in a text $d$, compute its relative frequency within that text (Term Frequency):

$$ \text{TF}(t, d) = \frac{C(t | d)}{\sum\limits_{k \in d}C(k | d)}, $$

where $C(t | d)$ is the number of occurrences of the word $t$ in the text $d$.

Also, for each word in the text $d$, compute its inverse frequency over the corpus of texts $D$ (Inverse Document Frequency):

$$ \text{IDF}(t, D) = \log\left(\frac{|D|}{|\{d_i \in D \mid t \in d_i\}|}\right) $$

The logarithm is taken to shrink the scale of the weights, since corpora often contain very many texts.

Each word $t$ in a text $d$ can now be assigned the weight

$$ \text{TF-IDF}(t, d, D) = \text{TF}(t, d) \times \text{IDF}(t, D) $$

The formula is easy to interpret: the more often a word occurs in a given text, and the rarer it is in the others, the more important it is for that text. Note that other definitions of TF and IDF can be used as well (https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Definition).
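The formulas above can be checked by hand on the same three toy texts. Below is a minimal sketch of the plain, unsmoothed textbook definition; note that sklearn's `TfidfVectorizer` uses a smoothed IDF and L2 normalization, so its numbers will differ:

```python
import math

docs = [["i", "like", "my", "cat"],
        ["my", "cat", "is", "the", "most", "perfect", "cat"],
        ["is", "this", "cat", "or", "is", "this", "bread"]]

def tf(term, doc):
    # relative frequency of the term inside one document
    return doc.count(term) / len(doc)

def idf(term, docs):
    # log of (corpus size / number of documents containing the term)
    n_containing = sum(1 for d in docs if term in d)
    return math.log(len(docs) / n_containing)

def tf_idf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

# "cat" occurs in every document, so its IDF is log(3/3) = 0, and so is its TF-IDF
print(tf_idf("cat", docs[1], docs))              # 0.0
# "bread" occurs only in the third document, so it gets a positive weight
print(round(tf_idf("bread", docs[2], docs), 4))  # 0.1569
```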
Here $D$ is simply the set of texts in our dataset.

```
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vec = TfidfVectorizer()
X = tfidf_vec.fit_transform(texts_tokenized)
tfidf_vec.get_feature_names()
X
X.toarray()
```

The texts:

- I like my cat.
- My cat is the most perfect cat.
- is this cat or is this bread?

```
pd.DataFrame(X.toarray(), columns=tfidf_vec.get_feature_names())
```

What changed compared to `CountVectorizer`? Interpret the result.

_Further reading:_
- https://en.wikipedia.org/wiki/Tf%E2%80%93idf
- https://programminghistorian.org/en/lessons/analyzing-documents-with-tfidf

## n-grams

```
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import CountVectorizer
```

What n-grams are:

```
from nltk import ngrams

sent = 'Если б мне платили каждый раз'.split()  # a Russian song line used as sample text

list(ngrams(sent, 1))  # unigrams
list(ngrams(sent, 2))  # bigrams
list(ngrams(sent, 3))  # trigrams
list(ngrams(sent, 5))  # ... pentagrams?
```

## Task: sentiment classification of tweets

We have a dataset of tweets, each labeled with its emotional polarity: positive or negative. The task is to predict that polarity. Sentiment classification is used in recommender systems to tell whether people liked a cafe, a movie, etc.
[Dataset source](http://study.mokoron.com/)

```
import numpy as np
import pandas as pd

from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# read the data and build a combined dataset
positive = pd.read_csv('positive.csv')
positive['label'] = ['positive'] * len(positive)

negative = pd.read_csv('negative.csv')
negative['label'] = ['negative'] * len(negative)

df = positive.append(negative)
df.sample(10)
df.shape

x_train, x_test, y_train, y_test = train_test_split(df['text'], df['label'], random_state=13)
```

The simplest way to extract features from text data is to use vectorizers: `CountVectorizer` and `TfidfVectorizer`.

`CountVectorizer` does a simple thing:

* for each document (each string it receives) it builds a vector of dimension `n`, where `n` is the number of words or n-grams in the whole corpus
* it fills the i-th element with the number of occurrences of the corresponding word in that document

```
vec = CountVectorizer(ngram_range=(1, 1))
bow = vec.fit_transform(x_train)  # bow = bag of words
bow
```

`ngram_range` controls which n-grams are used as features:

- `ngram_range=(1, 1)`: unigrams
- `ngram_range=(3, 3)`: trigrams
- `ngram_range=(1, 3)`: unigrams, bigrams, and trigrams

`vec.vocabulary_` holds the vocabulary, a mapping from words to indices:

```
list(vec.vocabulary_.items())[:10]

clf = LogisticRegression(random_state=13)
clf.fit(bow, y_train)

pred = clf.predict(vec.transform(x_test))
print(classification_report(pred, y_test))
```

Let's try the same with unigrams and bigrams:

```
vec = CountVectorizer(ngram_range=(1, 2))
bow = vec.fit_transform(x_train)

clf = LogisticRegression(random_state=13)
clf.fit(bow, y_train)

pred = clf.predict(vec.transform(x_test))
print(classification_report(pred, y_test))
```

And now with TF-IDF:

```
vec = TfidfVectorizer(ngram_range=(1, 1))  # vectorizer with n-grams
bow = vec.fit_transform(x_train)  # the fit needs to be computed first
clf = LogisticRegression(random_state=13)
clf.fit(bow, y_train)  # fit a logistic regression

pred = clf.predict(vec.transform(x_test))
print(classification_report(pred, y_test))

clf.predict_proba(vec.transform(x_test))
pred
```

## On the importance of data analysis

Sometimes, though, punctuation is not noise. The key is to start from the task. What happens if we do not strip punctuation at all?

```
vec = TfidfVectorizer(ngram_range=(1, 1), tokenizer=word_tokenize)
bow = vec.fit_transform(x_train)

clf = LogisticRegression(random_state=13)
clf.fit(bow, y_train)

pred = clf.predict(vec.transform(x_test))
print(classification_report(pred, y_test))

bow.shape
```

We kept the punctuation, and suddenly all the metrics shot up towards 1. How did that happen? The punctuation contained some highly informative tokens (can you guess which ones?). Find the features with the largest coefficients:

```
idx = np.where(np.abs(clf.coef_) > 10)[1]
idx

clf.coef_[0][idx]

a = vec.get_feature_names()
for i in idx:
    print(a[i])
```

Let's look at the tweets containing one of these tokens:

```
cool_token = "d"
tweets_with_cool_token = [tweet for tweet in x_train if cool_token in tweet]

np.random.seed(42)
for tweet in np.random.choice(tweets_with_cool_token, size=10, replace=False):
    print(tweet)
```

Let's see how one of these super-informative tokens classifies on its own, with no machine learning at all:

```
cool_token = "d"
pred = ['positive' if cool_token in tweet else 'negative' for tweet in x_test]
print(classification_report(pred, y_test))
```

## Character n-grams

Now let's use, say, character unigrams as features:

```
vec = CountVectorizer(analyzer='char', ngram_range=(1, 1))
bow = vec.fit_transform(x_train)

clf = LogisticRegression(random_state=13)
clf.fit(bow, y_train)

pred = clf.predict(vec.transform(x_test))
print(classification_report(pred, y_test))
```

By now it should be clear why this gives very good quality on this particular dataset.
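The coefficient inspection above (`np.where(np.abs(clf.coef_) > 10)`) amounts to ranking features by absolute weight. The same idea as a standalone sketch, with made-up feature names and weights:

```python
def top_features(names, weights, k=3):
    # sort feature (name, weight) pairs by absolute weight, largest first
    ranked = sorted(zip(names, weights), key=lambda nw: abs(nw[1]), reverse=True)
    return ranked[:k]

# hypothetical vocabulary and logistic-regression coefficients
names = [")", "cat", "(", "the"]
weights = [12.5, 0.4, -11.0, 0.1]
print(top_features(names, weights))  # [(')', 12.5), ('(', -11.0), ('cat', 0.4)]
```

Sorting by absolute value matters because a large negative coefficient is just as informative as a large positive one.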
One way or another, classifying on characters is possible: for some tasks (language identification, for example) character n-gram features can contribute substantially to model quality. Another nice property of character features is that tokenization and lemmatization are not needed, so this approach works for languages that have no ready-made analyzers.

_Further reading:_
- https://web.stanford.edu/~jurafsky/slp3/3.pdf
- https://books.google.com/ngrams
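For reference, extracting character n-grams is just sliding a window over the raw string (a toy version of what `CountVectorizer(analyzer='char')` does internally, minus its preprocessing):

```python
def char_ngrams(text, n):
    # slide a window of length n over the string
    return [text[i:i + n] for i in range(len(text) - n + 1)]

print(char_ngrams("bread", 2))  # ['br', 're', 'ea', 'ad']
print(char_ngrams("bread", 3))  # ['bre', 'rea', 'ead']
```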
# Kaggle Histopathology

This notebook is a fast.ai implementation for the Kaggle challenge [Histopathologic Cancer Detection](https://www.kaggle.com/c/histopathologic-cancer-detection)

## Setup and Data Viz

```
#hide
#magic commands
%matplotlib inline
%reload_ext autoreload
%autoreload 2

#collapse-hide
#imports
from utils import read, subplot, get_learner, auc_score
import sys
sys.setrecursionlimit(100000)
import glob
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc
from torchvision import transforms
from fastai.vision import *
from fastai.metrics import *

#collapse-show
dataPath = Path('data/')
train = dataPath/'train'
test = dataPath/'test'

data = pd.read_csv(dataPath/'train_labels.csv')
data.head()
data.info()
```

There are two columns in the data: one holds the filename of an image in the train directory, and the second holds the corresponding label for that image.

```
data['label'].value_counts()
```

We're dealing with an imbalanced dataset here: there are roughly 1.5x as many negative images as positive ones. We'll need to correct that before we start training.
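One common correction for this kind of imbalance is to weight the loss by inverse class frequency. A minimal, framework-free sketch of the idea (the counts below are made up for illustration, not the competition's actual label counts):

```python
def class_weights(counts):
    # inverse-frequency weights, rescaled so the majority class gets weight 1.0
    total = sum(counts.values())
    raw = {c: total / n for c, n in counts.items()}
    smallest = min(raw.values())
    return {c: w / smallest for c, w in raw.items()}

# hypothetical label counts with twice as many negatives as positives
print(class_weights({0: 200, 1: 100}))  # {0: 1.0, 1: 2.0}
```

The resulting weights can be passed to a weighted loss so that errors on the minority class cost proportionally more; oversampling the minority class is the other common option.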
Let's look at some sample images

```
pos = data[data['label'] == 1]
neg = data[data['label'] == 0]
subplot(train, shuffle(pos), shuffle(neg))
```

## Data Split and DataBlock API

```
p_val = 0.2
train_files = data['id'].values
train_labels = np.asarray(data['label'].values)

_, _, _, val_idx = train_test_split(train_files,
                                    range(len(train_files)),
                                    test_size=p_val,
                                    stratify=train_labels,
                                    random_state=42)

train_model_dict = {'names': train_files, 'label': train_labels}

test_files = []
test_names = list(test.rglob('*.tif'))
for idx, name in enumerate(test_names):
    test_files.append(str(name).split('/')[-1])

train_df = pd.DataFrame(data=train_model_dict)
test_df = pd.DataFrame(np.asarray(test_files), columns=['name'])

tfms = get_transforms()
dataBunch = (ImageList.from_df(path=train, df=train_df, suffix='.tif')
             .split_by_idx(val_idx)
             .label_from_df(cols='label')
             .add_test(ImageList.from_df(path=test, df=test_df))
             .transform(tfms=tfms, size=90)  # default fast.ai transforms
             .databunch(bs=64)
             # dataset stats obtained from https://www.kaggle.com/qitvision/a-complete-ml-pipeline-fast-ai
             .normalize([tensor([0.702447, 0.546243, 0.696453]),
                         tensor([0.238893, 0.282094, 0.216251])]))

assert dataBunch.c == 2
assert dataBunch.classes == [0, 1]
dataBunch.show_batch(2)
```

## fast.ai Model with Pretrained CNNs

```
archs = [models.densenet169, models.resnet50, models.resnet18]
learners = [None] * len(archs)
assert len(archs) == len(learners)

for idx, arch in enumerate(archs):
    learners[idx] = get_learner(dataBunch, arch, metrics=['accuracy', 'auc_score'])

# densenet = deepcopy(learners[0])
# resnet50 = deepcopy(learners[1])
# resnet18 = deepcopy(learners[2])
```

## DenseNet169

```
densenet = learners[0]
densenet.fit_one_cycle(1, max_lr=8e-4, wd=5e-7)
densenet.fit_one_cycle(2)
densenet.recorder.plot_losses()

interpret = ClassificationInterpretation.from_learner(densenet)
interpret.plot_confusion_matrix('Confusion Matrix')

#save model before
further experimentation
densenet.save('densenet169_m1')

densenet.load('densenet169_m1')
densenet.unfreeze()
densenet.lr_find(wd=1e-4)
densenet.recorder.plot()

densenet.callback_fns.append(partial(SaveModelCallback, monitor='accuracy'))
densenet.fit_one_cycle(10, max_lr=slice(5e-6, 5e-5))

interpret = ClassificationInterpretation.from_learner(densenet)
interpret.plot_confusion_matrix()
```

This looks like the right moment to stop training. The validation and training losses are converging, but if we continued for more cycles there's a good chance the validation loss would start to rise (it was already increasing on the final iteration), and that is when the model would start overfitting, which we definitely don't want. Let's save this model, then move on to evaluating it and visualizing the GradCAM heatmaps.

```
densenet.save('densenet169' + '_m2')
```

## ROC Curve and AUC Score

```
# Load the best performing model
densenet.load('densenet169_m2')

preds, y = densenet.get_preds()
roc_auc_score = auc_score(preds, y)
print('The ROC AUC Score is {0}'.format(roc_auc_score))

acc = accuracy(preds, y)
print('The accuracy is {0}%.'.format(acc * 100))
```

## Submit submissions.csv to Kaggle

```
clean_fname = np.vectorize(lambda fname: str(fname).split('/')[-1].split('.')[0])
fname_cleaned = clean_fname(dataBunch.test_ds.items)
fname_cleaned = fname_cleaned.astype(str)

pt, yt = densenet.get_preds(ds_type=DatasetType.Test)

sub = pd.read_csv(dataPath/'sample_submission.csv')
submission_df = sub.set_index('id')
submission_df.loc[fname_cleaned, 'label'] = to_np(pt[:, 1])
submission_df.to_csv('submission.csv')

!kaggle competitions submit histopathologic-cancer-detection -f submission.csv -m "fast.ai densenet169"
```
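`auc_score` above comes from the notebook's own `utils` module. For intuition: ROC AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one, which can be computed directly for small arrays (a pairwise sketch, not the fast sort-based algorithm real libraries use):

```python
def roc_auc(scores, labels):
    # fraction of (positive, negative) pairs ranked correctly; ties count half
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0 (perfect ranking)
print(roc_auc([0.9, 0.2, 0.3, 0.1], [1, 1, 0, 0]))  # 0.75
```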
```
import os
if os.getcwd().endswith('visualization'):
    os.chdir('..')

import gmaps
import numpy as np
import pandas as pd
import joblib  # sklearn.externals.joblib is deprecated; use the standalone package

from model import load_clean_data_frame

API_KEY = 'YOUR_GOOGLE_MAPS_API_KEY'  # redacted: keep real keys out of notebooks
gmaps.configure(api_key=API_KEY)

model = joblib.load('model/output/neural_network_basic_fs.joblib')

raw_crime_data = load_clean_data_frame()
locations = raw_crime_data[['longitude', 'latitude']].sample(frac=1)

min_longitude = np.min(locations['longitude'])
max_longitude = np.max(locations['longitude'])
min_latitude = np.min(locations['latitude'])
max_latitude = np.max(locations['latitude'])

homicide_data = raw_crime_data.loc[raw_crime_data['type'] == 22.0]
homicide_data = homicide_data[['iucr', 'type', 'location', 'fbi_code', 'hour',
                               'property_crime', 'weekday', 'domestic']]
homicide_data = homicide_data.sample(frac=1)
homicide_data.head()

# Modeling a homicide occurring in an alley
vis_iucr = 102.0
vis_type = 22.0
vis_location = 11.0
vis_fbi_code = 23.0
vis_hour = 2.0
vis_property_crime = 0.0
vis_weekday = 0.0
vis_domestic = 0.0

columns = ['iucr', 'type', 'location', 'fbi_code', 'hour',
           'property_crime', 'weekday', 'domestic', 'latitude', 'longitude']
predicted_columns = ['latitude', 'longitude', 'arrest']
predicted_data = pd.DataFrame()

count = 1
for index, row in locations.iterrows():
    point = pd.DataFrame([[vis_iucr, vis_type, vis_location, vis_fbi_code,
                           vis_hour, vis_property_crime, vis_weekday,
                           vis_domestic, row['latitude'], row['longitude']]],
                         columns=columns)
    predicted = (model.predict_proba(point)[:, 1] >= 0.302).astype(float)
    predicted_row = pd.DataFrame([[row['latitude'], row['longitude'], predicted[0]]],
                                 columns=predicted_columns)
    predicted_data = predicted_data.append(predicted_row)
    count += 1
    if count > 3000:
        break

homicide_arrests = predicted_data.loc[predicted_data['arrest'] == 1.0]
homicide_no_arrests = predicted_data.loc[predicted_data['arrest'] == 0.0]
print(len(homicide_arrests))
print(len(homicide_no_arrests))

figure_layout = {'width': '800px', 'height': '800px'}
fig = gmaps.figure(layout=figure_layout, center=[41.836944, -87.684722], zoom_level=11)

heatmap_layer = gmaps.heatmap_layer(
    homicide_no_arrests[['latitude', 'longitude']],
    weights=homicide_no_arrests['arrest'] + 1,
    max_intensity=1.0,
    point_radius=10.0,
    dissipating=True,
    opacity=1,
    gradient=[(255, 0, 0, 0), (255, 0, 0, 1)]
)
fig.add_layer(heatmap_layer)

heatmap_layer = gmaps.heatmap_layer(
    homicide_arrests[['latitude', 'longitude']],
    weights=homicide_arrests['arrest'],
    max_intensity=1.0,
    point_radius=8.0,
    dissipating=True,
    opacity=0.5,
    gradient=[(0, 0, 255, 0), (0, 0, 255, 1)]
)
fig.add_layer(heatmap_layer)
fig

homicide_truth = raw_crime_data.loc[raw_crime_data['type'] == 22.0]
homicide_truth = homicide_truth.sample(n=3000)
homicide_arrests_truth = homicide_truth.loc[homicide_truth['arrest'] == 1.0]
homicide_arrests_truth = homicide_arrests_truth.loc[homicide_arrests_truth['location'] == 11.0]
homicide_no_arrests_truth = homicide_truth.loc[homicide_truth['arrest'] == 0.0]
homicide_no_arrests_truth = homicide_no_arrests_truth.loc[homicide_no_arrests_truth['location'] == 11.0]

print(len(homicide_arrests_truth))
print(len(homicide_no_arrests_truth))

figure_layout = {'width': '800px', 'height': '800px'}
fig = gmaps.figure(layout=figure_layout, center=[41.836944, -87.684722], zoom_level=11)

heatmap_layer = gmaps.heatmap_layer(
    homicide_no_arrests_truth[['latitude', 'longitude']],
    weights=homicide_no_arrests_truth['arrest'] + 1,
    max_intensity=1.0,
    point_radius=15.0,
    dissipating=True,
    opacity=1,
    gradient=[(255, 0, 0, 0), (255, 0, 0, 1)]
)
fig.add_layer(heatmap_layer)

heatmap_layer = gmaps.heatmap_layer(
    homicide_arrests_truth[['latitude', 'longitude']],
    weights=homicide_arrests_truth['arrest'],
    max_intensity=1.0,
    point_radius=15.0,
    dissipating=True,
    opacity=0.5,
    gradient=[(0, 0, 255, 0), (0, 0, 255, 1)]
)
fig.add_layer(heatmap_layer)
fig

homicide_data =
raw_crime_data.loc[raw_crime_data['type'] == 12.0]
homicide_data = homicide_data[['iucr', 'type', 'location', 'fbi_code', 'hour',
                               'property_crime', 'weekday', 'domestic']]
homicide_data = homicide_data.sample(frac=1)
homicide_data.head()

# Modeling an assault occurring on a sidewalk
vis_iucr = 74.0
vis_type = 12.0
vis_location = 3.0
vis_fbi_code = 16.0
vis_hour = 17.0
vis_property_crime = 0.0
vis_weekday = 4.0
vis_domestic = 0.0

columns = ['iucr', 'type', 'location', 'fbi_code', 'hour',
           'property_crime', 'weekday', 'domestic', 'latitude', 'longitude']
predicted_columns = ['latitude', 'longitude', 'arrest']
predicted_data = pd.DataFrame()

count = 1
for index, row in locations.iterrows():
    point = pd.DataFrame([[vis_iucr, vis_type, vis_location, vis_fbi_code,
                           vis_hour, vis_property_crime, vis_weekday,
                           vis_domestic, row['latitude'], row['longitude']]],
                         columns=columns)
    predicted = (model.predict_proba(point)[:, 1] >= 0.302).astype(float)
    predicted_row = pd.DataFrame([[row['latitude'], row['longitude'], predicted[0]]],
                                 columns=predicted_columns)
    predicted_data = predicted_data.append(predicted_row)
    count += 1
    if count > 3000:
        break

assault_arrests = predicted_data.loc[predicted_data['arrest'] == 1.0]
assault_no_arrests = predicted_data.loc[predicted_data['arrest'] == 0.0]

print(len(assault_arrests))
print(len(assault_no_arrests))

figure_layout = {'width': '800px', 'height': '800px'}
fig = gmaps.figure(layout=figure_layout, center=[41.836944, -87.684722], zoom_level=11)

heatmap_layer = gmaps.heatmap_layer(
    assault_no_arrests[['latitude', 'longitude']],
    weights=assault_no_arrests['arrest'] + 1,
    max_intensity=1.0,
    point_radius=8.0,
    dissipating=True,
    opacity=0.5,
    gradient=[(255, 0, 0, 0), (255, 0, 0, 1)]
)
fig.add_layer(heatmap_layer)

heatmap_layer = gmaps.heatmap_layer(
    assault_arrests[['latitude', 'longitude']],
    weights=assault_arrests['arrest'],
    max_intensity=1.0,
    point_radius=10.0,
    dissipating=True,
    opacity=1,
    gradient=[(0, 0, 255, 0), (0, 0, 255, 1)]
)
fig.add_layer(heatmap_layer)
fig

assault_truth = raw_crime_data.loc[raw_crime_data['type'] == 12.0]
assault_truth = assault_truth.sample(n=3000)
assault_arrests_truth = assault_truth.loc[assault_truth['arrest'] == 1.0]
assault_arrests_truth = assault_arrests_truth.loc[assault_arrests_truth['location'] == 3.0]
assault_no_arrests_truth = assault_truth.loc[assault_truth['arrest'] == 0.0]
assault_no_arrests_truth = assault_no_arrests_truth.loc[assault_no_arrests_truth['location'] == 3.0]

print(len(assault_arrests_truth))
print(len(assault_no_arrests_truth))

figure_layout = {'width': '800px', 'height': '800px'}
fig = gmaps.figure(layout=figure_layout, center=[41.836944, -87.684722], zoom_level=11)

heatmap_layer = gmaps.heatmap_layer(
    assault_no_arrests_truth[['latitude', 'longitude']],
    weights=assault_no_arrests_truth['arrest'] + 1,
    max_intensity=1.0,
    point_radius=15.0,
    dissipating=True,
    opacity=0.5,
    gradient=[(255, 0, 0, 0), (255, 0, 0, 1)]
)
fig.add_layer(heatmap_layer)

heatmap_layer = gmaps.heatmap_layer(
    assault_arrests_truth[['latitude', 'longitude']],
    weights=assault_arrests_truth['arrest'],
    max_intensity=1.0,
    point_radius=15.0,
    dissipating=True,
    opacity=1,
    gradient=[(0, 0, 255, 0), (0, 0, 255, 1)]
)
fig.add_layer(heatmap_layer)
fig
```
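The `>= 0.302` comparison used throughout converts the model's class-1 probabilities into hard labels at a custom decision threshold instead of the default 0.5 used by `predict`. The mechanism in isolation, with made-up probabilities:

```python
def apply_threshold(probas, threshold=0.302):
    # turn P(class == 1) scores into hard 0/1 labels at a chosen cutoff
    return [1.0 if p >= threshold else 0.0 for p in probas]

print(apply_threshold([0.10, 0.30, 0.31, 0.90]))  # [0.0, 0.0, 1.0, 1.0]
```

Lowering the threshold below 0.5 trades precision for recall on the positive (arrest) class; 0.302 was presumably tuned for that trade-off.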
```
class Animal(object):
    def __init__(self, age):
        self.age = age
        self.name = None

    def get_age(self):
        return self.age

    def get_name(self):
        return self.name

    def set_age(self, newage):
        self.age = newage

    def set_name(self, newname=""):
        self.name = newname

    def __str__(self):
        return "animal:" + str(self.name) + ":" + str(self.age)


class Person(Animal):
    def __init__(self, name, age):
        Animal.__init__(self, age)
        self.set_name(name)
        self.friends = []

    def get_friends(self):
        return self.friends

    def speak(self):
        print("hello")

    def add_friend(self, fname):
        if fname not in self.friends:
            self.friends.append(fname)

    def age_diff(self, other):
        diff = self.age - other.age
        print(abs(diff), "year difference")

    def salute_friends(self):
        for friend in self.friends:
            print("Hello %s, you're my friend" % friend.name)

    def __str__(self):
        return "person:" + str(self.name) + ":" + str(self.age)


class Rectangle:
    def __init__(self, length, width):
        self.length = length
        self.width = width

    def area(self):
        return self.width * self.length

    def perimeter(self):
        return (2 * self.length) + (2 * self.width)


class Square(Rectangle):
    def __init__(self, length):
        super().__init__(length, length)


s = Square(3)
s.area()
```

#### Animal examples

```
a = Animal(10)
a.get_age()

a.set_age(20)
a.get_age()

# error
# a.speak()
```

#### Person examples

```
Diego = Person(name="Diego", age=24)
print(Diego)

Rodrigo = Person(name="Rodrigo", age=27)
print(Rodrigo)

Diego.add_friend(Rodrigo)
Rodrigo.add_friend(Diego)

Diego.get_friends()
Rodrigo.get_friends()
Rodrigo.salute_friends()

Mariela = Person(name="Mariela", age=None)
Rodrigo.add_friend(Mariela)
Rodrigo.get_friends()
Rodrigo.salute_friends()
```

#### Rectangle example

```
class Rectangle:
    def __init__(self, length, width):
        self.length = length
        self.width = width

    def area(self):
        return self.width * self.length

    def perimeter(self):
        return (2 * self.length) + (2 * self.width)


class Square(Rectangle):
    def __init__(self, length):
        super().__init__(length, length)


s = Square(2)
s.area()

class
Cube(Square):
    def surface_area(self):
        face_area = super().area()
        return face_area * 6

    def volume(self):
        face_area = super().area()
        return face_area * self.length


c = Cube(2)
c.surface_area()
```

#### Spell class and child classes

```
class Spell(object):
    def __init__(self, incantation, name):
        self.name = name
        self.incantation = incantation

    def __str__(self):
        return self.name + ' ' + self.incantation + '\n' + self.getDescription()

    def getDescription(self):
        return 'No description'

    def execute(self):
        print(self.incantation)


class Accio(Spell):
    def __init__(self):
        Spell.__init__(self, 'Accio', 'Summoning Charm')


class Confundo(Spell):
    def __init__(self):
        Spell.__init__(self, 'Confundo', 'Confundus Charm')

    def getDescription(self):
        return 'Causes the victim to become confused and befuddled.'


def studySpell(spell):
    print(spell)


spell = Accio()
spell.execute()
studySpell(spell)
studySpell(Confundo())
```
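The Spell hierarchy illustrates dynamic dispatch: `__str__` in the base class calls `getDescription`, and Python resolves that call on the subclass first, so `Confundo` gets its own description without redefining `__str__`. A stripped-down sketch of the same pattern (the class and method names here are illustrative, not from the notebook):

```python
class Base:
    def description(self):
        return "No description"

    def summary(self):
        # the base class calls description(); subclass overrides win at runtime
        return "spell: " + self.description()


class Child(Base):
    def description(self):
        return "Causes confusion."


print(Base().summary())   # spell: No description
print(Child().summary())  # spell: Causes confusion.
```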
```
%reset -f
import numpy as np

from landlab import RasterModelGrid
from landlab.components import OverlandFlow, FlowAccumulator, SpatialPrecipitationDistribution
from landlab.plot.imshow import imshow_grid, imshow_grid_at_node
from landlab.io.esri_ascii import read_esri_ascii

from matplotlib import animation
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
colors = [(0, 0.2, 1, i) for i in np.linspace(0, 1, 3)]
cmap = mcolors.LinearSegmentedColormap.from_list('mycmap', colors, N=10)

from PIL import Image, ImageDraw

# Edits Franz
# ----------------------
import os
from landlab.io.esri_ascii import write_esri_ascii
import shutil

# Initial conditions
run_time = 500   # duration of run (s)
h_init = 0.1     # initial thin layer of water (m)
n = 0.01         # roughness coefficient (s/m^(1/3))
g = 9.8          # gravity (m/s^2)
alpha = 0.7      # time-step factor (nondimensional; from Bates et al., 2010)
u = 0.4          # constant velocity (m/s, de Almeida et al., 2012)

run_time_slices = np.arange(0, run_time + 1, 50)  # show every 50 s
elapsed_time = 1.0  # elapsed time starts at 1 second; this prevents errors when setting our boundary conditions

# Edits Franz
# ----------------------
# Time scale for rainfall events. To be adjusted. I am cheating with units...
nb_years = 5

# Factor to scale rainfall values from the SpatialPrecipitationDistribution module
# (it may generate very small amounts of rainfall, which renders poorly...)
rainfall_scaling = 1000.
# Folder for runoff results
dir_runoff = './runoff_results'
# Folder for rainfall data
dir_rainfall = './rainfall_series'

# Overwrite or create directory for runoff results
if os.path.exists(dir_runoff):
    shutil.rmtree(dir_runoff)
os.makedirs(dir_runoff)

# Overwrite or create directory for rainfall data
if os.path.exists(dir_rainfall):
    shutil.rmtree(dir_rainfall)
os.makedirs(dir_rainfall)

# Define grid
# Here we use an arbitrary, very small, "real" catchment
fname = '../data/hugo_site.asc'
rmg, z = read_esri_ascii(fname, name='topographic__elevation')
rmg.status_at_node[rmg.nodes_at_right_edge] = rmg.BC_NODE_IS_FIXED_VALUE
rmg.status_at_node[np.isclose(z, -9999.)] = rmg.BC_NODE_IS_CLOSED

# Define outlet
rmg_outlet_node = 2051  # node
outlet_node_to_sample = 2050
outlet_link_to_sample = rmg.links_at_node[outlet_node_to_sample][3]

# Plot topography and outlet
plt.figure()
imshow_grid_at_node(rmg, z, colorbar_label='Elevation (m)')
plt.plot(rmg.node_x[outlet_node_to_sample], rmg.node_y[outlet_node_to_sample], "yo")
plt.show()

### Edits Franz
# ----------------------
# Important note: SpatialPrecipitationDistribution generates rainfall events per year over
# an input grid. I guess the number of events also depends on the size of the catchment/grid.
# So we need to set nb_years as input for number_of_years and scale the storm events over run_time.
# Let's consider rainfall instantaneous.
# Create time series of rainfall events (output is in mm/h)
rain = SpatialPrecipitationDistribution(rmg, number_of_years=nb_years)
np.random.seed(26)  # arbitrary, to get a cool-looking storm out every time

# Containers for rainfall durations
storm_t_all = []
interstorm_t_all = []

# Get the storm simulator to provide storms
# Variables required to generate rainfall datasets
i = 0
max_rainfall = []
for (storm_t, interstorm_t) in rain.yield_storms():  # storm lengths in hrs
    i += 1
    rmg.at_node['rainfall__flux'] *= 0.001  # because the rainfall comes out in mm/h
    rmg.at_node['rainfall__flux'] *= rainfall_scaling  # to make the storm heavier and more interesting!

    # Save rainfall data to an ascii file
    write_esri_ascii('./' + dir_rainfall + '/rainfall_' + str(i) + '.asc', rmg,
                     'rainfall__flux', clobber=True)

    # Save duration of storm and inter-storm periods
    storm_t_all.append(storm_t)
    interstorm_t_all.append(interstorm_t)

    # Store max rainfall value
    max_rainfall.append(max(rmg.at_node['rainfall__flux']))

storm_ids = np.array(range(len(storm_t_all))) + 1

# Get timing of storms (initially in hours, to be rescaled over run_time
# -> cheating to get some results...)
days_storms = (np.array(interstorm_t_all) / 24)
scaled_days_storms = (days_storms * (run_time / (nb_years * 365))).round()
scaled_days_storms = scaled_days_storms.cumsum()

# Set first storm at time = 1
scaled_days_storms = scaled_days_storms - scaled_days_storms[0] + 1
print(scaled_days_storms)

# print(scaled_days_storms.sum())
# print(scaled_days_storms)
# print((np.array(storm_t_all).sum() + np.array(interstorm_t_all).sum()) / 24 / nb_years)
# print(len(interstorm_t_all))
# print(storm_id)

# # Set initial water depth values
# rmg.at_node["surface_water__depth"] = np.zeros(rmg.number_of_nodes)
# # Pointer to water depth
# h = rmg.at_node['surface_water__depth']
# bools = (rmg.node_y > 100) * (rmg.node_y < 450) * (rmg.node_x < 400) * (rmg.node_x > 350)
# h[bools] = 2

## Set initial discharge
rmg.at_node["surface_water__discharge"] = np.zeros(rmg.number_of_nodes)

# Edits Franz
# ----------------------
# Set initial water depth values and rainfall flux values
rmg.at_node["surface_water__depth"] = np.zeros(rmg.number_of_nodes)
rmg.at_node.pop('rainfall__flux')
# print(rmg.at_node.keys())

# Read first rainfall dataset
q_rain = read_esri_ascii('./rainfall_series/rainfall_1.asc', grid=rmg, name='rainfall__flux')

# Update id for rainfall
rainfall_id = 1

# Update surface water depth with rainfall data
rmg.at_node['surface_water__depth'].fill(1.e-12)  # a veneer of water stabilises the model
rmg.at_node['surface_water__depth'] += rmg.at_node['rainfall__flux']
# storm_t_all[0]

fig1 = plt.figure()
imshow_grid(rmg, 'topographic__elevation', colorbar_label='Elevation (m)')
imshow_grid(rmg, 'surface_water__depth', cmap=cmap, colorbar_label='Water depth (m)')
plt.title(f'Time = 0')
plt.show()
fig1.savefig(dir_runoff + f"/runoff_0.jpeg")
# print(rmg.at_node.keys())

# Call the overland flow component
of = OverlandFlow(rmg, steep_slopes=True)
of.run_one_step()

# Edits Franz
# ----------------------
# Look at the hydrograph at the outlet
hydrograph_time = []
discharge_at_outlet = []
height_at_outlet = []

# Run model
# for t
in run_time_slices:

# Tweak time steps
time_steps = np.append(scaled_days_storms, [500])[1:]

for t in time_steps:
    # Run until the next time to plot
    while elapsed_time < t:
        # First, we calculate our time step.
        dt = of.calc_time_step()

        # Now, we can generate overland flow.
        of.overland_flow()

        # Increase elapsed time
        elapsed_time += dt

        # Append time, discharge and water depth to their lists, to save the data and for plotting.
        hydrograph_time.append(elapsed_time)
        q = rmg.at_link["surface_water__discharge"]
        discharge_at_outlet.append(np.abs(q[outlet_link_to_sample]) * rmg.dx)
        ht = rmg.at_node['surface_water__depth']
        height_at_outlet.append(np.abs(ht[outlet_node_to_sample]))

    # Update rainfall dataset id
    rainfall_id += 1
    rainfall_name = 'rainfall__flux_' + str(rainfall_id)
    # print(rainfall_id)

    # Avoid last time step
    if rainfall_id <= len(scaled_days_storms):
        # Read rainfall data
        # rmg.at_node.pop('rainfall__flux')
        q_rain = read_esri_ascii('./rainfall_series/rainfall_' + str(rainfall_id) + '.asc',
                                 grid=rmg, name=rainfall_name)

        # Add rainfall event to water depth
        rmg.at_node['surface_water__depth'] += rmg.at_node[rainfall_name]

    # Plot water depth for the current time step
    fig = plt.figure()
    imshow_grid(rmg, 'topographic__elevation', colorbar_label='Elevation (m)')
    imshow_grid(rmg, 'surface_water__depth', limits=(0, np.array(max_rainfall).max()),
                cmap=cmap, colorbar_label='Water depth (m)')
    plt.title(f'Time = {round(elapsed_time, 1)} s')
    plt.show()
    fig.savefig(dir_runoff + f"/runoff_{round(elapsed_time, 1)}.jpeg")

## Plot hydrographs: discharge and water depth
fig = plt.figure(2)
plt.plot(hydrograph_time, discharge_at_outlet, "b-", label="outlet")
plt.ylabel("Discharge (cms)")
plt.xlabel("Time (seconds)")
plt.legend(loc="upper right")
fig.savefig(dir_runoff + f"/runoff_discharge.jpeg")

fig = plt.figure(3)
plt.plot(hydrograph_time, height_at_outlet, "b-", label="outlet")
plt.ylabel("Water depth (m)")
plt.xlabel("Time (seconds)")
plt.legend(loc="upper right")
fig.savefig(dir_runoff + f"/runoff_waterdepth.jpeg")
```
The news story in 2021 that captured the complete attention of the financial press was the [Gamestop / WallStreetBets / RoaringKitty episode](https://www.cbsnews.com/news/gamestop-reddit-wallstreetbets-short-squeeze-2021-01-28/) of late January. A group of presumably small, retail traders banded together on Reddit's [r/wallstreetbets](https://www.reddit.com/r/wallstreetbets/) forum to drive the price of `$GME`, `$AMC` and other "meme stocks" to unimaginable heights, wreaking havoc with the crowd of hedge funds who had shorted the stocks.

In the wake of that headline-grabbing incident, many a hedge fund has begun to treat social media buzz - especially on "meme stocks" - as a risk factor to consider when taking large positions, especially short ones. The smartest funds are going beyond simply hand-wringing and are starting to monitor social media forums like [r/wallstreetbets](https://www.reddit.com/r/wallstreetbets/) to identify potential risks in their portfolios.

Below, I'm going to walk through an example of collecting `r/wallstreetbets` activity on a handful of example stocks using Reddit's semi-unofficial [PushShift API](https://github.com/pushshift/api) and related packages. In a following post, I'll walk through a simple example of sentiment analysis using [VADER](https://github.com/cjhutto/vaderSentiment) and other assorted python packages.

If you'd like to experiment with the below code without tedious copy-pasting, I've made it available at the Google Colab link below.

<a style="text-align: center;" href="https://colab.research.google.com/drive/1yf4fje7IFhI0skAXddTUx0nsPbCgJgTc?usp=sharing"><img src="images/colab.png" title="ipynb on Colab" /></a>

## Setup and Download

We will access the PushShift API through a python package named [psaw](https://github.com/dmarx/psaw) (an acronym for "PushShift API Wrapper"), so first we'll need to pip install that.
If you don't already have the fantastically useful `jsonlines` package installed, it'd be a good idea to install that too.

```
!pip install psaw
!pip install jsonlines
```

The imports are dead-simple. I'll import pandas as well since that's my swiss army knife of choice. I'm also going to define a data root path to where on my system I want to store the data.

```
import os
DATA_ROOT = '../data/'

import pandas as pd
from datetime import datetime
from psaw import PushshiftAPI

api = PushshiftAPI()
```

## PushShift and `psaw` Overview

I'll start with a quick example of how to use the psaw wrapper. You'll want to refer to the [psaw](https://github.com/dmarx/psaw) and [PushShift](https://github.com/pushshift/api) GitHub pages for more complete documentation.

First, we will use the `search_submissions` API method, which searches submissions (the initial post in a new thread) for the given ticker. We need to pass in unix-type integer timestamps rather than human-readable ones, so here we're using pandas to do that.

You'll also notice the `filter` parameter, which allows you to return only a subset of the many fields available. If you want to see the full list of available fields, read the docs or run the below code snippet.

```
gen = api.search_submissions(q='GME',limit=1)
list(gen)[0].d_.keys()
```

```
start_epoch = int(pd.to_datetime('2021-01-01').timestamp())
end_epoch = int(pd.to_datetime('2021-01-02').timestamp())

gen = api.search_submissions(q='GME', # this is the keyword (ticker symbol) for which we're searching
                             after=start_epoch, before=end_epoch, # these are the unix-based timestamps to search between
                             subreddit=['wallstreetbets','stocks'], # one or more subreddits to include in the search
                             filter=['id','url','author', 'title', 'score', 'subreddit','selftext','num_comments'], # list of fields to return
                             limit = 2 # limit on the number of records returned
                            )
```

You'll notice that this ran awfully quickly.
In part, that's because it has returned a lazy _generator_ object which doesn't (yet) contain the data we want. One simple way to make the generator object actually pull the data is to wrap it in a `list()` call. Below is an example of what that returns.

_Side note: if you don't catch the resulting list the first time you run this, you'll notice that it won't work a second time. The generator has been "consumed" and emptied of objects. So we will catch the returned value in a variable called `lst` and view that..._

```
lst = list(gen)
lst
```

Each element of the returned list is a `submission` object which, as far as I can tell, simply provides easier access to the fields.

```
print("id:",lst[0].id) # this is Reddit's unique ID for this post
print("url:",lst[0].url)
print("author:",lst[0].author)
print("title:",lst[0].title)
print("score:",lst[0].score) # upvote/downvote-based score, doesn't seem 100% reliable
print("subreddit:",lst[0].subreddit)
print("num_comments:",lst[0].num_comments) # number of comments in the thread (which we can get later if we choose)
print("selftext:",lst[0].selftext) # This is the body of the post
```

Perhaps a more familiar way to interact with each item of this list is as a `dict`. Luckily, the API includes an easy way to get all of the available info as a dict without any effort - like this:

```
lst[0].d_
```

That's much better! However, you'll notice that the returned values for `created` and `created_utc` aren't particularly user-friendly. They're in the same UNIX-style epoch integer format we had to specify in the query. A quick way to add a human-readable version is a function like the below. You'll notice the human-readable timestamp added onto the end.
```
def convert_date(timestamp):
    return datetime.fromtimestamp(timestamp).strftime('%Y-%m-%dT%H:%M:%S')

lst[0].d_['datetime_utc'] = convert_date( lst[0].d_['created_utc'] )
lst[0].d_
```

Depending on the ticker, you may find A LOT of posts (if you don't assign a `limit` value, of course). One handy capability of the API is to filter based on fields, so we can search only for submissions with at least N comments. Notice that we need to express the greater-than as a string (`">100"`), which isn't totally obvious from the documentation.

```
gen = api.search_submissions(q='GME',
                             after=start_epoch, before=end_epoch, # these are the unix-based timestamps to search between
                             subreddit=['wallstreetbets','stocks'],
                             filter=['id','url','author', 'title', 'score','subreddit','selftext','num_comments'], # list of fields to return
                             num_comments=">100",
                             limit = 2 # limit on the number of records returned
                            )
lst = list(gen)
item = lst[0]
item.d_
```

### Sidebar: Getting Comments

For our purposes, just the `submissions` offer ample amounts of material to analyze, so I'm generally ignoring the `comments` underneath them, other than tracking the `num_comments` value. However, if you wanted to pull the comments for a given submission, you could do it like below. Note a few things:
1. Pass in the `id` property of the submission item as `link_id`. This is also not totally clearly documented IMO.
2. The filter values are a little different because the fields available on a comment are not exactly the same as on a submission. The main changes to note are that `url` -> `permalink` and `selftext` -> `body`. Otherwise, they seem similar.
```
comments_lst = list(api.search_comments(link_id=item.id,
                                        filter=['id','parent_id','permalink','author', 'title', 'subreddit','body','num_comments','score'],
                                        limit=5))
pd.DataFrame(comments_lst)
```

## Building a Downloader

With a basic understanding of the API and `psaw` wrapper, we can construct a simple downloader which downloads all submissions (with greater than n comments) for a one week time window on any stock ticker. Then, since we will probably want to avoid needing to call the API repeatedly for the same data, we will save it as a jsonlines file. If you're not familiar with `jsonlines`, it's well worth checking out.

Note that, by default, jsonlines will append to the end of an existing file if one exists, or will create a file if one doesn't. Keep this in mind if running the same code on the same date/ticker repeatedly. It's probably easiest to assume the `jl` files have duplicates in them and to simply dedupe when reading back from disk.

```
import jsonlines
from tqdm.notebook import tqdm
import time
import random

def get_submissions(symbol, end_date):
    end_date = pd.to_datetime(end_date) # ensure it's a datetime object, not a string
    end_epoch = int(end_date.timestamp())
    start_epoch = int((end_date-pd.offsets.Week(1)).timestamp())

    gen = api.search_submissions(q=f'${symbol}',
                                 after=start_epoch, before=end_epoch,
                                 subreddit=['wallstreetbets','stocks'],
                                 num_comments = ">10",
                                 filter=['id','url','author', 'title', 'subreddit', 'num_comments','score','selftext']
                                )
    path = os.path.join(DATA_ROOT,f'{symbol}.jl')
    with jsonlines.open(path, mode='a') as writer:
        for item in gen:
            item.d_['date_utc'] = convert_date(item.d_['created_utc'])
            writer.write(item.d_)
    return

get_submissions('GME','2021-07-19')
```

If we had a list of tickers that we wanted to get across a longer date range, we could use some nested for loops like below to iterate through symbols and weeks.
Running the below should take 15-20 minutes to complete so feel free to narrow the scope of tickers or dates if needed.

```
import traceback

symbols = ['GME']#,'AMC','SPCE','TSLA']

for symbol in tqdm(symbols):
    print(symbol)
    for date in tqdm(pd.date_range('2021-01-01','2021-10-31', freq='W')):
        try:
            get_submissions(symbol,date)
        except:
            traceback.print_exc()
        time.sleep(5)
```

## Try it Out!

Enough reading, already! The above code is available on colab at the link below. Feel free to try it out yourself.

<a style="text-align: center;" href="https://colab.research.google.com/drive/1yf4fje7IFhI0skAXddTUx0nsPbCgJgTc?usp=sharing"><img src="images/colab.png" title="ipynb on Colab" /></a>

You can modify the notebook however you'd like without risk of breaking it. I really hope that those interested will "fork" from my notebook (all you'll need is a Google Drive to save a copy of the file...) and extend it to answer your own questions through data.

## Summary

In this first post, we've made it through the heavy lifting of downloading data from the API and storing it in a usable format on disk. In the next segment, we will do some basic analysis on how spikes in Reddit traffic may signal risk of increased volatility in a given stock.

## One last thing...

If you've found this post useful or enlightening, please consider subscribing to the email list to be notified of future posts (email addresses will only be used for this purpose...). To subscribe, scroll to the top of this page and look at the right sidebar. You can also follow me on twitter ([__@data2alpha__](https://twitter.com/data2alpha)) and forward to a friend or colleague who may find this topic interesting.
# Estimating Tour Mode Choice

This notebook illustrates how to re-estimate ActivitySim's tour mode choice model. The steps in the process are:

- Run ActivitySim in estimation mode to read household travel survey files, run the ActivitySim submodels, and write estimation data bundles (EDB) that contain the model utility specifications, coefficients, chooser data, and alternatives data for each submodel.
- Read and transform the relevant EDB into the format required by the model estimation package [larch](https://larch.newman.me) and then re-estimate the model coefficients. No changes to the model specification will be made.
- Update the ActivitySim model coefficients and re-run the model in simulation mode.

The basic estimation workflow is shown below and explained in the next steps.

![estimation workflow](https://github.com/RSGInc/activitysim/raw/develop/docs/images/estimation_example.jpg)

# Load libraries

```
import larch  # !conda install larch #for estimation
import larch.util.activitysim
import pandas as pd
import numpy as np
import yaml
import larch.util.excel
import os
```

# Required Inputs

In addition to a working ActivitySim model setup, estimation mode requires an ActivitySim format household travel survey. An ActivitySim format household travel survey is very similar to ActivitySim's simulation model tables:

- households
- persons
- tours
- joint_tour_participants
- trips (not yet implemented)

Examples of the ActivitySim format household travel survey are included in the [example_estimation data folders](https://github.com/RSGInc/activitysim/tree/develop/activitysim/examples/example_estimation). The user is responsible for formatting their household travel survey into the appropriate format. After creating an ActivitySim format household travel survey, the `scripts/infer.py` script is run to append additional calculated fields.
An example of an additional calculated field is the `household:joint_tour_frequency`, which is calculated based on the `tours` and `joint_tour_participants` tables.

The input survey files are below.

### Survey households

```
pd.read_csv("../data_sf/survey_data/override_households.csv")
```

### Survey persons

```
pd.read_csv("../data_sf/survey_data/override_persons.csv")
```

### Survey tours

```
pd.read_csv("../data_sf/survey_data/override_tours.csv")
```

### Survey joint tour participants

```
pd.read_csv("../data_sf/survey_data/survey_joint_tour_participants.csv")
```

# Example Setup if Needed

To avoid duplication of inputs, especially model settings and expressions, the `example_estimation` depends on the `example`. The following commands create an example setup for use. The location of these example setups (i.e. the folders) is important because the paths are referenced in this notebook. The commands below download the skims.omx for the SF county example from the [activitysim resources repository](https://github.com/RSGInc/activitysim_resources).
```
!activitysim create -e example_estimation_sf -d test
```

# Run the Estimation Example

The next step is to run the model with an `estimation.yaml` settings file with the following settings in order to output the EDB for all submodels:

```
enable: True

bundles:
  - school_location
  - workplace_location
  - auto_ownership
  - free_parking
  - cdap
  - mandatory_tour_frequency
  - mandatory_tour_scheduling
  - joint_tour_frequency
  - joint_tour_composition
  - joint_tour_participation
  - joint_tour_destination
  - joint_tour_scheduling
  - non_mandatory_tour_frequency
  - non_mandatory_tour_destination
  - non_mandatory_tour_scheduling
  - tour_mode_choice
  - atwork_subtour_frequency
  - atwork_subtour_destination
  - atwork_subtour_scheduling
  - atwork_subtour_mode_choice

survey_tables:
  households:
    file_name: survey_data/override_households.csv
    index_col: household_id
  persons:
    file_name: survey_data/override_persons.csv
    index_col: person_id
  tours:
    file_name: survey_data/override_tours.csv
  joint_tour_participants:
    file_name: survey_data/override_joint_tour_participants.csv
```

This enables the estimation mode functionality, identifies which submodels to run and output estimation data bundles (EDBs) for, and specifies the input survey tables, which include the override settings for each model choice. With this setup, the model will output an EDB with the following tables for this submodel:

- model settings - tour_mode_choice_model_settings.yaml
- coefficients - tour_mode_choice_coefficients.csv
- coefficients template by tour purpose - tour_mode_choice_coefficients_template.csv
- utilities specification - tour_mode_choice_SPEC.csv
- chooser data - tour_mode_choice_values_combined.csv

The following code runs the software in estimation mode, inheriting the settings from the simulation setup and using the San Francisco county data setup. It produces the EDB for all submodels but runs all the model steps identified in the inherited settings file.
```
%cd test
!activitysim run -c configs_estimation/configs -c configs -o output -d data_sf
```

# Read EDB

The next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.

```
edb_directory = "output/estimation_data_bundle/tour_mode_choice/"

def read_csv(filename, **kwargs):
    return pd.read_csv(os.path.join(edb_directory, filename), **kwargs)

coefficients = read_csv(
    "tour_mode_choice_coefficients.csv",
    index_col='coefficient_name',
)

coef_template = read_csv(
    "tour_mode_choice_coefficients_template.csv",
    index_col='coefficient_name',
)

spec = read_csv("tour_mode_choice_SPEC.csv")

values = read_csv("tour_mode_choice_values_combined.csv")
```

### Model settings

```
settings = yaml.load(
    open(os.path.join(edb_directory, "tour_mode_choice_model_settings.yaml"),"r"),
    Loader=yaml.SafeLoader,
)
settings
```

### Coefficients

```
coefficients
```

### Coef_template - coefficients by tour purpose

```
coef_template
```

### Utility specifications

```
# Remove apostrophes from Label names
spec['Label'] = spec['Label'].str.replace("'","")
spec

# Check for double-parameters
ss = spec.query("Label!='#'").iloc[:,3:].stack().str.split("*")
st = ss.apply(lambda x: len(x))>1
assert len(ss[st]) == 0
```

### Chooser and alternative values

```
# Remove apostrophes from column names
values.columns = values.columns.str.replace("'","")
values.fillna(0, inplace=True)
values
```

# Data Processing and Estimation Setup

The next step is to transform the EDB into the format required by larch for model re-estimation.
```
from larch import P,X
```

### Alternatives

```
alt_names = list(spec.columns[3:])
alt_codes = np.arange(1,len(alt_names)+1)
alt_names_to_codes = dict(zip(alt_names, alt_codes))
alt_codes_to_names = dict(zip(alt_codes, alt_names))
alt_names_to_codes
```

### Remove choosers with invalid observed choice

```
values = values[values.override_choice.isin(alt_names)]
```

### Nesting structure

```
tree = larch.util.activitysim.construct_nesting_tree(alt_names, settings['NESTS'])
tree

tree.elemental_names()
```

### List tour purposes

```
purposes = list(coef_template.columns)
purposes
```

### Setup purpose-specific models

```
m = {purpose:larch.Model(graph=tree) for purpose in purposes}

for alt_code, alt_name in tree.elemental_names().items():
    # Read in base utility function for this alt_name
    u = larch.util.activitysim.linear_utility_from_spec(
        spec, x_col='Label', p_col=alt_name,
        ignore_x=('#',),
    )
    for purpose in purposes:
        # Modify utility function based on template for purpose
        u_purp = sum(
            (P(coef_template[purpose].get(i.param,i.param)) * i.data * i.scale)
            for i in u
        )
        m[purpose].utility_co[alt_code] = u_purp
```

### Set parameter values

```
for model in m.values():
    larch.util.activitysim.explicit_value_parameters(model)
larch.util.activitysim.apply_coefficients(coefficients, m)
```

### Survey choice

```
values['override_choice_code'] = values.override_choice.map(alt_names_to_codes)
```

### Availability

```
av = True # all alternatives are available

d = larch.DataFrames(
    co=values.set_index('tour_id'),
    av=av,
    alt_codes=alt_codes,
    alt_names=alt_names,
)

for purpose, model in m.items():
    model.dataservice = d.selector_co(f"tour_type=='{purpose}'")
    model.choice_co_code = 'override_choice_code'

from larch.model.model_group import ModelGroup
mg = ModelGroup(m.values())
```

# Estimate

With the model setup for estimation, the next step is to estimate the model coefficients.
Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood-maximizing solution. Larch has two built-in estimation methods: BHHH and SLSQP. BHHH is the default and typically runs faster, but does not follow constraints on parameters. SLSQP is safer, but slower, and may need additional iterations.

```
mg.estimate(method='SLSQP', options={'maxiter':1000})
#mg.estimate(method='BHHH', options={'maxiter':1000})
```

### Estimated coefficients

```
mg.parameter_summary()
```

# Output Estimation Results

```
est_names = [j for j in coefficients.index if j in mg.pf.index]
coefficients.loc[est_names, 'value'] = mg.pf.loc[est_names, 'value']
os.makedirs(os.path.join(edb_directory,'estimated'), exist_ok=True)
```

### Write the re-estimated coefficients file

```
coefficients.reset_index().to_csv(
    os.path.join(
        edb_directory,
        'estimated',
        "tour_mode_choice_coefficients_revised.csv",
    ),
    index=False,
)
```

### Write the model estimation report, including coefficient t-statistics and log likelihood

```
for purpose, model in m.items():
    model.to_xlsx(
        os.path.join(
            edb_directory,
            'estimated',
            f"tour_mode_choice_{purpose}_model_estimation.xlsx",
        ),
        data_statistics=False
    )
```

# Next Steps

The final step is to either manually or automatically copy the `tour_mode_choice_coefficients_revised.csv` file to the configs folder, rename it to `tour_mode_choice_coeffs.csv`, and run ActivitySim in simulation mode.

```
pd.read_csv(os.path.join(edb_directory,'estimated',"tour_mode_choice_coefficients_revised.csv"))
```
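The manual copy-and-rename just described can also be scripted. Below is a minimal sketch; the `install_coefficients` helper is an illustrative assumption (not part of ActivitySim), demonstrated here against a throwaway directory rather than a real model setup:

```python
import os
import shutil
import tempfile

def install_coefficients(edb_directory, configs_dir):
    """Copy the re-estimated coefficients into the configs folder under
    the file name the simulation-mode model expects."""
    src = os.path.join(edb_directory, "estimated",
                       "tour_mode_choice_coefficients_revised.csv")
    dst = os.path.join(configs_dir, "tour_mode_choice_coeffs.csv")
    shutil.copyfile(src, dst)
    return dst

# Demonstrate against a throwaway directory layout.
root = tempfile.mkdtemp()
edb = os.path.join(root, "output", "estimation_data_bundle", "tour_mode_choice")
cfg = os.path.join(root, "configs")
os.makedirs(os.path.join(edb, "estimated"))
os.makedirs(cfg)
with open(os.path.join(edb, "estimated",
                       "tour_mode_choice_coefficients_revised.csv"), "w") as f:
    f.write("coefficient_name,value\n")

dst = install_coefficients(edb, cfg)
print(os.path.basename(dst))  # tour_mode_choice_coeffs.csv
```

Running this at the end of the estimation notebook (with the real `edb_directory` and configs path) removes one manual step from the workflow.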
<a href="https://colab.research.google.com/github/open-mmlab/mmaction2/blob/master/demo/mmaction2_tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# MMAction2 Tutorial

Welcome to MMAction2! This is the official colab tutorial for using MMAction2. In this tutorial, you will learn

- Perform inference with a MMAction2 recognizer.
- Train a new recognizer with a new dataset.

Let's start!

## Install MMAction2

```
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version

# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchtext==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html

# install mmcv-full thus we could use CUDA operators
!pip install mmcv-full==1.3.9 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.8.0/index.html

# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2

!pip install -e .

# Install some optional requirements
!pip install -r requirements/optional.txt

# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())

# Check MMAction2 installation
import mmaction
print(mmaction.__version__)

# Check MMCV installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print(get_compiling_cuda_version())
print(get_compiler_version())
```

## Perform inference with a MMAction2 recognizer

MMAction2 already provides high level APIs to do inference and training.
```
!mkdir checkpoints
!wget -c https://download.openmmlab.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
      -O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth

from mmaction.apis import inference_recognizer, init_recognizer

# Choose to use a config and initialize the recognizer
config = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')

# Use the recognizer to do inference
video = 'demo/demo.mp4'
label = 'tools/data/kinetics/label_map_k400.txt'
results = inference_recognizer(model, video, label)

# Let's show the results
for result in results:
    print(f'{result[0]}: ', result[1])
```

## Train a recognizer on customized dataset

To train a new recognizer, there are usually three things to do:
1. Support a new dataset
2. Modify the config
3. Train a new recognizer

### Support a new dataset

In this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usages can be found in the [doc](/docs/tutorials/new_dataset.md).

Firstly, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the train dataset and 10 videos with their labels as the test dataset.
```
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://download.openmmlab.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null

# Check the directory structure of the tiny data

# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny

# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
```

According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with the filepath and label, which are split with a whitespace.

### Modify the config

In the next step, we need to modify the config for the training. To accelerate the process, we finetune a recognizer using a pre-trained recognizer.

```
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
```

Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
```
from mmcv.runner import set_random_seed

# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'

cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'

cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'

cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'

# The flag is used to determine whether it is omnisource training
cfg.setdefault('omnisource', False)

# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'

# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'

# The original learning rate (LR) is set for 8-GPU training.
# We divide it by 8 since we only use one GPU.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 10

# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 5
# We can set the log print interval to reduce the number of times logs are printed
cfg.log_config.interval = 5

# Set seed so that the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)

# Save the best
cfg.evaluation.save_best='auto'

# We can initialize the logger for training and have a look
# at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
```

### Train a new recognizer

Finally, let's initialize the dataset and recognizer, then train a new recognizer!

```
import os.path as osp

from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model

import mmcv

# Build the dataset
datasets = [build_dataset(cfg.data.train)]

# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))

# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
```

### Understand the log

From the log, we can have a basic understanding of the training process and know how well the recognizer is trained.

Firstly, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except the `fc.bias` and `fc.weight`.

Second, since the dataset we are using is small, we loaded a TSN model and finetune it for action recognition. The original TSN is trained on the original Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN for classification has a different weight shape and is not used.
Third, after training, the recognizer is evaluated by the default evaluation. The results show that the recognizer achieves 100% top1 accuracy and 100% top5 accuracy on the val dataset. Not bad!

## Test the trained recognizer

After finetuning the recognizer, let's check the prediction results!

```
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel

# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
        dataset,
        videos_per_gpu=1,
        workers_per_gpu=cfg.data.workers_per_gpu,
        dist=False,
        shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)

eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
    print(f'{name}: {val:.04f}')
```
## Adafruit Mini GPS PA1010D GPS

https://www.adafruit.com/product/4415

```
import json

import pandas as pd
pd.set_option('max_colwidth', 200)

from meerkat import pa1010d, parser

pd.__version__ # tested with 1.1.3
```

#### Initialize Driver Class with Default I2C Address

```
gps = pa1010d.PA1010D(bus_n=1, bus_addr=0x10)
gps.csv_writer.metadata
```

#### NMEA Sentence Support

The GPS module outputs several NMEA sentences; to collect all the data, a continuous looping read is used until enough bytes are collected to reconstruct all the sentences. Like a UART connection, the module cycles through transmitting each supported NMEA sentence type. Therefore, by default all supported sentence types are returned. The supported NMEA sentences are: 'GGA', 'GSA', 'GSV', 'RMC', 'VTG'

Refer to http://aprs.gids.nl/nmea/ and datasheet page 16 for details.

```
gps.get()
```

Using the `nmea_sentences` keyword, specific NMEA sentences can be returned. Note that all sentences are still being transmitted on the I2C bus and being parsed by the driver.
```
gps.get(nmea_sentences=['GGA', 'GSA'])
gps.get(nmea_sentences=['GSV', 'RMC', 'VTG'])
gps.get(nmea_sentences=['RMC'])
```

#### CSV Writer Output

```
gps.metadata.header
gps.write(description="test_1", n=4, nmea_sentences=['GGA', 'GSA'], delay=1)
gps.csv_writer.path

m, df = parser.csv_resource(gps.csv_writer.path)
m
df.dtypes
df

def nmea_type(sentence):
    """Get NMEA sentence type"""
    s = sentence.split(",")
    return s[0][1:]

df.nmea_sentence.apply(lambda x: nmea_type(sentence=x))

def nmea_parse(sentence):
    """Parse NMEA sentence, removing start of sentence '$'
    and checksum delimiter '*'
    """
    start_seq = sentence[0]
    sentence = sentence[1:]
    sentence, checksum = sentence.split("*")
    return sentence.split(",") + [checksum]

df.nmea_sentence.apply(nmea_parse)
df["nmea_type"] = df.nmea_sentence.apply(lambda x: nmea_type(sentence=x))
df
```

#### GGA sentence

Parse using a DataFrame and list comprehension for the same result, but this loses the row index values:

```
df_gga = pd.DataFrame([nmea_parse(gs) for gs in df.loc[df.nmea_type == "GNGGA", "nmea_sentence"].values])
df_gga.columns = parser.GGA_columns
df_gga
```

Parse using .loc and apply to preserve the source indexes:

```
df_gga = df.loc[df.nmea_type == "GNGGA"].apply(lambda x: nmea_parse(x.nmea_sentence), axis=1, result_type='expand')
df_gga.columns = parser.GGA_columns
df_gga
```

#### GSA sentence

```
df_gsa = df.loc[df.nmea_type == "GPGSA"].apply(lambda x: nmea_parse(x.nmea_sentence), axis=1, result_type='expand')
df_gsa.columns = parser.GSA_columns
df_gsa
```

Merge metadata onto NMEA data:

```
df_meta = df.loc[df.nmea_type == "GNGGA", ["std_time_ms", "description", "sample_n", "datetime64_ns"]]
df_meta
df_gga

df_final = pd.concat([df_meta, df_gga], axis=1)
df_final
df_final.dtypes

def gps_dd(coord):
    """Convert ddmm.mmmm location to dd.dddd"""
    x = coord.split(".")
    head = x[0]
    minute_decimal = x[1]
    degree = head[0:-2]
    minute = head[-2:]
    return float(degree) + float(minute + "." + minute_decimal) / 60

df_final.latitude.apply(gps_dd)
```

#### JSON Writer Output

```
gps.json_writer.metadata_interval = 3
data = gps.publish(description="test_2", n=7, nmea_sentences=['GGA', 'RMC'], delay=2)

for n, d in enumerate(data):
    #print(d)
    #print("-"*40)
    print('line {}'.format(n))
    print('-------')
    print(d)
    print("="*40)
    print()

# default writer format is CSV, switch to JSON
gps.writer_output = 'json'

# writer method with description and sample number
gps.write(description='test_3', n=15, nmea_sentences=['GGA', 'GSA'])

with open(gps.json_writer.path, 'r') as f:
    for n in range(4):
        #print(f.readline().strip())
        print('line {}'.format(n))
        print('-------')
        print(f.readline())
        print("="*40)
        print()

data = []
h = ""
with open(gps.json_writer.path, 'r') as f:
    for _ in range(15):
        s = f.readline().strip()
        js = json.loads(s)
        if "metadata" in js.keys():
            h = [js['time_format']] + js['metadata']['header']
        data.append([js['std_time_ms'], js['description'], js['sample_n'], js['nmea_sentence']])

h
pd.DataFrame(data, columns=h)
```
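The `gps_dd` helper defined above converts NMEA `ddmm.mmmm` coordinates to decimal degrees. A standalone copy of the same arithmetic, runnable without the driver, as a quick sanity check:

```python
# Standalone version of the ddmm.mmmm -> decimal-degrees conversion used above.
# NMEA latitude "4916.45" means 49 degrees, 16.45 minutes.
def gps_dd(coord):
    """Convert a ddmm.mmmm string to decimal degrees."""
    head, minute_decimal = coord.split(".")
    degree, minute = head[:-2], head[-2:]
    return float(degree) + float(minute + "." + minute_decimal) / 60

print(gps_dd("4916.45"))  # 49 + 16.45/60, roughly 49.274167
```

Note this sketch handles only positive (N/E) coordinates; the hemisphere fields of the sentence would flip the sign for S/W.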
# Predicting Stock Prices with Prophet
>"In this post we will be using Facebook's Prophet to forecast time series data. The data we will be using is the historical daily S&P 500 adjusted close price. We will first create a 3 year forecast using ytd data and then simulate historical monthly forecasts dating back to 1980. Finally we will create various trading strategies to attempt to beat the tried and true method of buying and holding."

- toc: true
- badges: true
- comments: true
- categories: jupyter, prophet, stock, python
- image: images/undraw_finance.svg

## Imports

```
import pandas as pd
import numpy as np
from fbprophet import Prophet
import matplotlib.pyplot as plt
from functools import reduce
%matplotlib inline

import warnings
warnings.filterwarnings('ignore')

plt.style.use('seaborn-deep')
pd.options.display.float_format = "{:,.2f}".format
```

For this project we will be importing the standard libraries for data analysis with Python. We will also import Prophet and `reduce` from functools, which will be used to help simulate our forecasts.

## The Data

```
stock_price = pd.read_csv('^GSPC.csv', parse_dates=['Date'])
stock_price.info()
stock_price.describe()
```

The data we are using is the historical S&P 500 prices dating back to 1980. You can find the data [here](https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC).

## Data Preparation

```
stock_price = stock_price[['Date','Adj Close']]
stock_price.columns = ['ds', 'y']
stock_price.head(10)
```

For Prophet to work, we need to change the names of the 'Date' and 'Adj Close' columns to 'ds' and 'y'. The term 'y' is typically used for the target column (what you are trying to predict) in most machine learning projects.

## Prophet

```
stock_price.set_index('ds').y.plot(figsize=(12,6), grid=True);
```

Before we use Prophet to create a forecast, let's visualize our data. It's always a good idea to create a few visualizations to gain a better understanding of the data you are working with.
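The ds/y renaming above is the whole of Prophet's input contract. A toy illustration on synthetic rows (not the actual ^GSPC file):

```python
# Prophet expects a two-column frame: 'ds' (datestamp) and 'y' (target).
# The values below are made up for illustration only.
import pandas as pd

raw = pd.DataFrame({
    "Date": pd.to_datetime(["1980-01-02", "1980-01-03"]),
    "Adj Close": [105.76, 105.22],
})
prophet_input = raw.rename(columns={"Date": "ds", "Adj Close": "y"})
print(list(prophet_input.columns))  # ['ds', 'y']
```

`rename` is equivalent to the positional `stock_price.columns = ['ds', 'y']` used above, but it is robust to column order.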
```
model = Prophet()
model.fit(stock_price)
```

To activate the Prophet model we simply call `Prophet()` and assign it to a variable called `model`. Next, we fit our stock data to the model by calling the `fit` method.

```
future = model.make_future_dataframe(1095, freq='d')

future_boolean = future['ds'].map(lambda x : True if x.weekday() in range(0, 5) else False)
future = future[future_boolean]

future.tail()
```

To create a forecast with our model we need to create some future dates. Prophet provides us with a helper function called `make_future_dataframe`. We pass in the number of future periods and the frequency. Above we created a forecast for the next 1095 days, or 3 years.

Since stocks can only be traded on weekdays, we need to remove the weekends from our forecast dataframe. To do so we create a boolean expression that returns False whenever a day's weekday number is not in 0-4 (0 = Monday, ..., 6 = Sunday). We then pass the boolean expression to our dataframe, which keeps only the True rows. We now have a forecast dataframe comprised of the next 3 years of weekdays.

```
forecast = model.predict(future)
forecast.tail()
```

To create the forecast we call `predict` from our `model` and pass in the `future` dataframe we created earlier. We return the results in a new dataframe called `forecast`. When we inspect the `forecast` dataframe we see a bunch of new terms. The one we are most interested in is `yhat`, which is our forecasted value.

```
model.plot(forecast);
model.plot_components(forecast);
```

All the new fields appear a bit daunting, but fortunately Prophet comes with two handy visualization helpers, `plot` and `plot_components`. The `plot` function creates a graph of our actuals and forecast, and `plot_components` provides a graph of our trend and seasonality.
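The weekday filter above can be checked in isolation with plain pandas, no Prophet required, on a synthetic date range:

```python
# Keep Monday-Friday (weekday 0-4), drop Saturday/Sunday (5, 6).
import pandas as pd

future = pd.DataFrame({"ds": pd.date_range("2019-01-04", periods=7, freq="D")})
weekday_mask = future["ds"].map(lambda x: x.weekday() in range(0, 5))
weekdays_only = future[weekday_mask]
print(len(weekdays_only))  # 5 of the 7 calendar days are weekdays
```

Any 7 consecutive calendar days contain exactly one Saturday and one Sunday, so 5 rows survive regardless of the start date.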
```
stock_price_forecast = forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']]
df = pd.merge(stock_price, stock_price_forecast, on='ds', how='right')
df.set_index('ds').plot(figsize=(16,8), color=['royalblue', "#34495e", "#e74c3c", "#e74c3c"], grid=True);
```

The visualization helpers are just using the data in our `forecast` dataframe, so we can actually recreate the same graphs. Above I recreated the `plot` graph.

## Simulating Forecasts

While the 3 year forecast we created above is pretty cool, we don't want to make any trading decisions on it without backtesting the performance of a trading strategy. In this section we will simulate as if Prophet existed back in 1980 and we used it to create a monthly forecast through 2019. We will then use this data in the following section to simulate how various trading strategies performed versus simply buying and holding the stock.

```
stock_price['dayname'] = stock_price['ds'].dt.day_name()
stock_price['month'] = stock_price['ds'].dt.month
stock_price['year'] = stock_price['ds'].dt.year
stock_price['month/year'] = stock_price['month'].map(str) + '/' + stock_price['year'].map(str)

stock_price = pd.merge(stock_price,
                       stock_price['month/year'].drop_duplicates().reset_index(drop=True).reset_index(),
                       on='month/year', how='left')
stock_price = stock_price.rename(columns={'index':'month/year_index'})

stock_price.tail()
```

Before we simulate the monthly forecasts, we need to add some columns to the `stock_price` dataframe we created at the beginning of this project to make it a bit easier to work with. We add dayname, month, year, month/year, and month/year_index.
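The month/year indexing trick above is worth seeing on a tiny synthetic frame: `drop_duplicates` plus `reset_index` assigns each unique month/year an increasing integer, which is then merged back onto every daily row.

```python
# Minimal sketch of the month/year_index construction (synthetic dates).
import pandas as pd

stock_price = pd.DataFrame({
    "ds": pd.to_datetime(["1980-01-02", "1980-01-03", "1980-02-01", "1980-03-03"]),
})
stock_price["month/year"] = (stock_price["ds"].dt.month.map(str) + "/"
                             + stock_price["ds"].dt.year.map(str))

# One row per unique month/year, numbered 0, 1, 2, ...
lookup = stock_price["month/year"].drop_duplicates().reset_index(drop=True).reset_index()
stock_price = pd.merge(stock_price, lookup, on="month/year", how="left")
stock_price = stock_price.rename(columns={"index": "month/year_index"})
print(stock_price["month/year_index"].tolist())  # [0, 0, 1, 2]
```

The resulting integer index is what lets the simulation loop below say "forecast month `num + 1`" without date arithmetic.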
```
loop_list = stock_price['month/year'].unique().tolist()
max_num = len(loop_list) - 1
forecast_frames = []

for num, item in enumerate(loop_list):
    if num == max_num:
        pass
    else:
        df = stock_price.set_index('ds')[
            stock_price[stock_price['month/year'] == loop_list[0]]['ds'].min():\
            stock_price[stock_price['month/year'] == item]['ds'].max()]
        df = df.reset_index()[['ds', 'y']]

        model = Prophet()
        model.fit(df)

        future = stock_price[stock_price['month/year_index'] == (num + 1)][['ds']]

        forecast = model.predict(future)
        forecast_frames.append(forecast)

stock_price_forecast = reduce(lambda top, bottom: pd.concat([top, bottom], sort=False), forecast_frames)
stock_price_forecast = stock_price_forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']]
stock_price_forecast.to_csv('stock_price_forecast.csv', index=False)
```

Above is a lot, but essentially we are looping through each unique `month/year` in the stock price data, fitting the `Prophet` model with the stock data available up to that period, and then forecasting out one month ahead. We continue to do this until we hit the last unique `month/year`. Finally, we combine these forecasts into a single dataframe called `stock_price_forecast`. I save the results since it takes a while to run; if I need to reset, I can pull the csv file instead of running the model again.

```
stock_price_forecast = pd.read_csv('stock_price_forecast.csv', parse_dates=['ds'])

df = pd.merge(stock_price[['ds','y', 'month/year_index']], stock_price_forecast, on='ds')
df['Percent Change'] = df['y'].pct_change()
df.set_index('ds')[['y', 'yhat', 'yhat_lower', 'yhat_upper']].plot(figsize=(16,8), color=['royalblue', "#34495e", "#e74c3c", "#e74c3c"], grid=True)
df.head()
```

Finally, we combine our forecast with the actual prices and create a `Percent Change` column, which will be used in our trading algorithms below. Lastly, I plot the forecasts with the actuals to see how well it did. As you can see, there is a bit of a delay.
It kind of behaves a lot like a moving average would.

## Trading Algorithms

```
df['Hold'] = (df['Percent Change'] + 1).cumprod()
df['Prophet'] = ((df['yhat'].shift(-1) > df['yhat']).shift(1) * (df['Percent Change']) + 1).cumprod()
df['Prophet Thresh'] = ((df['y'] > df['yhat_lower']).shift(1) * (df['Percent Change']) + 1).cumprod()
df['Seasonality'] = ((~df['ds'].dt.month.isin([8,9])).shift(1) * (df['Percent Change']) + 1).cumprod()
```

Above we create four initial trading algorithms:

* **Hold**: Our benchmark. This is a buy and hold strategy, meaning we buy the stock and hold on to it until the end of the time period.
* **Prophet**: This strategy is to sell when our forecast indicates a downtrend and buy back in when it indicates an upward trend.
* **Prophet Thresh**: This strategy is to only sell when the stock price falls below our yhat_lower boundary.
* **Seasonality**: This strategy is to exit the market in August and re-enter in October. This was based on the seasonality chart from above.

```
(df.dropna().set_index('ds')[['Hold', 'Prophet', 'Prophet Thresh','Seasonality']] * 1000).plot(figsize=(16,8), grid=True)

print(f"Hold = {df['Hold'].iloc[-1]*1000:,.0f}")
print(f"Prophet = {df['Prophet'].iloc[-1]*1000:,.0f}")
print(f"Prophet Thresh = {df['Prophet Thresh'].iloc[-1]*1000:,.0f}")
print(f"Seasonality = {df['Seasonality'].iloc[-1]*1000:,.0f}")
```

Above we plot the results, simulating an initial investment of $1,000. As you can see, our Seasonality strategy did best and our benchmark Hold strategy did second best. Both Prophet-based strategies didn't do so well. Let's see if we can improve `Prophet Thresh` by optimizing the threshold.
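The return arithmetic behind every strategy above is the same: an equity curve is the cumulative product of (1 + daily percent change), gated by whether the strategy was in the market the previous day. A tiny numeric check of the Hold case:

```python
# Three synthetic closing prices: up 10%, then down 10%.
import pandas as pd

prices = pd.Series([100.0, 110.0, 99.0])
pct = prices.pct_change()       # NaN, 0.10, -0.10
hold = (pct + 1).cumprod()      # NaN, 1.10, 0.99
print(hold.iloc[-1])            # about 0.99: $1,000 becomes about $990
```

Note the final multiplier is 0.99, not 1.0: a 10% gain followed by a 10% loss does not return you to break-even.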
```
performance = {}

for x in np.linspace(.9,.99,10):
    y = ((df['y'] > df['yhat_lower']*x).shift(1)* (df['Percent Change']) + 1).cumprod()
    performance[x] = y

best_yhat = pd.DataFrame(performance).max().idxmax()
pd.DataFrame(performance).plot(figsize=(16,8), grid=True);

f'Best Yhat = {best_yhat:,.2f}'
```

Above we loop through various percents of the thresh to find the optimal one. It appears the best threshold is 92% of our current yhat_lower.

```
df['Optimized Prophet Thresh'] = ((df['y'] > df['yhat_lower'] * best_yhat).shift(1) * (df['Percent Change']) + 1).cumprod()

(df.dropna().set_index('ds')[['Hold', 'Prophet', 'Prophet Thresh', 'Seasonality', 'Optimized Prophet Thresh']] * 1000).plot(figsize=(16,8), grid=True)

print(f"Hold = {df['Hold'].iloc[-1]*1000:,.0f}")
print(f"Prophet = {df['Prophet'].iloc[-1]*1000:,.0f}")
print(f"Prophet Thresh = {df['Prophet Thresh'].iloc[-1]*1000:,.0f}")
print(f"Seasonality = {df['Seasonality'].iloc[-1]*1000:,.0f}")
print(f"Optimized Prophet Thresh = {df['Optimized Prophet Thresh'].iloc[-1]*1000:,.0f}")
```

Above we see our new `Optimized Prophet Thresh` is the best trading strategy. **Unfortunately**, both `Seasonality` and `Optimized Prophet Thresh` are cheating, since they use data from the future that wouldn't be available at the time of our trade. We are going to need to create an optimized thresh for each current point in time of our forecast.
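The grid search above can be reproduced on synthetic data without Prophet. This sketch stands in a rolling mean of price for the forecast's lower band (an assumption made purely so the example is self-contained):

```python
# Try several multipliers of a stand-in "lower band" and keep the one
# whose strategy ends with the highest cumulative return.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"y": 100 + rng.normal(0, 1, 200).cumsum()})
df["yhat_lower"] = df["y"].rolling(5, min_periods=1).mean()  # stand-in band
df["Percent Change"] = df["y"].pct_change()

performance = {}
for x in np.linspace(0.9, 0.99, 10):
    in_market = (df["y"] > df["yhat_lower"] * x).astype(float).shift(1)
    equity = (in_market * df["Percent Change"] + 1).cumprod()
    performance[x] = equity.iloc[-1]

best_thresh = max(performance, key=performance.get)
print(round(best_thresh, 2))
```

The 92% figure in the post comes from running this same search against the real forecast bands; on random data the winner will differ.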
```
fcst_thresh = {}

for num, index in enumerate(df['month/year_index'].unique()):
    temp_df = df.set_index('ds')[
        df[df['month/year_index'] == df['month/year_index'].unique()[0]]['ds'].min():\
        df[df['month/year_index'] == index]['ds'].max()]

    performance = {}
    for thresh in np.linspace(0, .99, 100):
        percent = ((temp_df['y'] > temp_df['yhat_lower'] * thresh).shift(1)* (temp_df['Percent Change']) + 1).cumprod()
        performance[thresh] = percent

    best_thresh = pd.DataFrame(performance).max().idxmax()

    if num == len(df['month/year_index'].unique())-1:
        pass
    else:
        fcst_thresh[df['month/year_index'].unique()[num+1]] = best_thresh

fcst_thresh = pd.DataFrame([fcst_thresh]).T.reset_index().rename(columns={'index':'month/year_index', 0:'Fcst Thresh'})
fcst_thresh['Fcst Thresh'].plot(figsize=(16,8), grid=True);
```

Above, just as we did when creating our monthly forecasts, we loop through the data and find the optimal threshold percent, period to date, for each point in time. As you can see, the threshold percent jumps around as we get further into the periods (1/1/1980 - 3/18/2019).

```
df['yhat_optimized'] = pd.merge(df, fcst_thresh, on='month/year_index', how='left')['Fcst Thresh'].shift(1) * df['yhat_lower']
df['Prophet Fcst Thresh'] = ((df['y'] > df['yhat_optimized']).shift(1)* (df['Percent Change']) + 1).cumprod()

(df.dropna().set_index('ds')[['Hold', 'Prophet', 'Prophet Thresh', 'Prophet Fcst Thresh']] * 1000).plot(figsize=(16,8), grid=True)

print(f"Hold = {df['Hold'].iloc[-1]*1000:,.0f}")
print(f"Prophet = {df['Prophet'].iloc[-1]*1000:,.0f}")
print(f"Prophet Thresh = {df['Prophet Thresh'].iloc[-1]*1000:,.0f}")
# print(f"Seasonality = {df['Seasonality'].iloc[-1]*1000:,.0f}")
print(f"Prophet Fcst Thresh = {df['Prophet Fcst Thresh'].iloc[-1]*1000:,.0f}")
```

As we did before, we create the new trading strategy and graph it. Unfortunately our results have gotten worse, but we did do better than our initial `Prophet Thresh`.
Instead of calculating the thresh using the full period to date, let's try various rolling windows of time, as you would see with a moving average (30, 60, 90, etc.).

```
rolling_thresh = {}

for num, index in enumerate(df['month/year_index'].unique()):
    rolling_performance = {}

    for roll in range(10, 400, 10):
        temp_df = df.set_index('ds')[
            df[df['month/year_index'] == index]['ds'].min() - pd.DateOffset(months=roll):\
            df[df['month/year_index'] == index]['ds'].max()]

        performance = {}
        for thresh in np.linspace(.0,.99, 100):
            percent = ((temp_df['y'] > temp_df['yhat_lower'] * thresh).shift(1)* (temp_df['Percent Change']) + 1).cumprod()
            performance[thresh] = percent

        per_df = pd.DataFrame(performance)
        best_thresh = per_df.iloc[[-1]].max().idxmax()
        percents = per_df[best_thresh]
        rolling_performance[best_thresh] = percents

    per_df = pd.DataFrame(rolling_performance)
    best_rolling_thresh = per_df.iloc[[-1]].max().idxmax()

    if num == len(df['month/year_index'].unique())-1:
        pass
    else:
        rolling_thresh[df['month/year_index'].unique()[num+1]] = best_rolling_thresh

rolling_thresh = pd.DataFrame([rolling_thresh]).T.reset_index().rename(columns={'index':'month/year_index', 0:'Fcst Thresh'})
rolling_thresh['Fcst Thresh'].plot(figsize=(16,8), grid=True);
```

Above is very similar to before, but now we are trying out various moving windows along with various threshold percents. This is getting quite complex; no wonder quants make so much money. As you can see from above, the threshold percents change over time. Now let's see how we did.
```
df['yhat_optimized'] = pd.merge(df, rolling_thresh, on='month/year_index', how='left')['Fcst Thresh'].fillna(1).shift(1) * df['yhat_lower']
df['Prophet Rolling Thresh'] = ((df['y'] > df['yhat_optimized']).shift(1)* (df['Percent Change']) + 1).cumprod()

(df.dropna().set_index('ds')[['Hold', 'Prophet', 'Prophet Thresh', 'Prophet Fcst Thresh', 'Prophet Rolling Thresh']] * 1000).plot(figsize=(16,8), grid=True)

print(f"Hold = {df['Hold'].iloc[-1]*1000:,.0f}")
print(f"Prophet = {df['Prophet'].iloc[-1]*1000:,.0f}")
print(f"Prophet Thresh = {df['Prophet Thresh'].iloc[-1]*1000:,.0f}")
# print(f"Seasonality = {df['Seasonality'].iloc[-1]*1000:,.0f}")
print(f"Prophet Fcst Thresh = {df['Prophet Fcst Thresh'].iloc[-1]*1000:,.0f}")
print(f"Prophet Rolling Thresh = {df['Prophet Rolling Thresh'].iloc[-1]*1000:,.0f}")
```

As you can see, our new `Prophet Rolling Thresh` did pretty well but still didn't beat out the simplest `Hold` strategy. Perhaps the saying "Time in the market is better than timing the market" has some truth to it.

```
df['Time Traveler'] = ((df['y'].shift(-1) > df['yhat']).shift(1) * (df['Percent Change']) + 1).cumprod()

(df.dropna().set_index('ds')[['Hold', 'Prophet', 'Prophet Thresh', 'Prophet Fcst Thresh', 'Prophet Rolling Thresh', 'Time Traveler']] * 1000).plot(figsize=(16,8), grid=True)

print(f"Hold = {df['Hold'].iloc[-1]*1000:,.0f}")
print(f"Prophet = {df['Prophet'].iloc[-1]*1000:,.0f}")
print(f"Prophet Thresh = {df['Prophet Thresh'].iloc[-1]*1000:,.0f}")
# print(f"Seasonality = {df['Seasonality'].iloc[-1]*1000:,.0f}")
print(f"Prophet Fcst Thresh = {df['Prophet Fcst Thresh'].iloc[-1]*1000:,.0f}")
print(f"Prophet Rolling Thresh = {df['Prophet Rolling Thresh'].iloc[-1]*1000:,.0f}")
print(f"Time Traveler = {df['Time Traveler'].iloc[-1]*1000:,.0f}")
```

Above I implemented my `Time Traveler` strategy. This of course would be a perfect trading strategy, as I know in advance when the market moves up or down.
As you can see, the most you could make is $288,513 from $1,000.

## Summary

Time series forecasting can be quite complex; however, Prophet makes it very easy to create robust forecasts with little effort. While it didn't make us rich with its stock market predictions, it is still very useful and can be implemented quickly to solve many use cases in various areas.
### CNN Model with features

```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

from utils import split_sequence, get_apple_close_price, plot_series
from utils import plot_residual_forecast_error, print_performance_metrics
from utils import get_range, difference, inverse_difference
from utils import train_test_split, NN_walk_forward_validation_v2
from features import get_apple_stock_with_features

important_cols = ['Open', 'High', 'Low',
                  'Close',  # <-- our target (pos=3)
                  # 'Close_ndx',
                  'kc_10',
                  'ema_26',
                  # 'ema_12',
                  # 'macd',
                  # 'macd_diff',
                  # 'macd_sig',
                  'mavg_200',
                  'mavg_50',
                  # 'mavg_20',
                  # 'mavg_10',
                  # 'b_hband_20',
                  # 'b_lband_20',
                  # 'ao',
                  # 'ichimoku_a',
                  # 'kc_lband_10',
                  # 'dc_lband_20',
                  # 'dc_hband_20',
                  # 'tsi',
                  # 'nvi',
                  # 'mi',
                  # 'atr_14'
                  ]

apple_stock = get_apple_stock_with_features(important_cols)
short_series = get_range(apple_stock, '2003-01-01')

# Model parameters
look_back = 5    # days window look back
n_features = len(short_series.columns.values)
n_outputs = 5    # days forecast
batch_size = 32  # for NN, batch size before updating weights
n_epochs = 100   # for NN, number of training epochs
```

We first need to split into train and test sets, then transform and scale our data:

```
train, test = train_test_split(short_series, '2018-05-31')

from sklearn.preprocessing import PowerTransformer

# we need to use the yeo-johnson transformation since we have negative values
# pt = PowerTransformer(method='box-cox')
pt = PowerTransformer(method='yeo-johnson')
transformed_train = pt.fit_transform(train.values)
transformed_test = pt.transform(test.values)
# transformed_train = train.values
# transformed_test = test.values

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
scaled_train = scaler.fit_transform(transformed_train)
scaled_test = scaler.transform(transformed_test)

X_train, y_train = split_sequence(scaled_train, look_back, n_outputs)
# we're only interested in the Close price (pos=3)
y_train = y_train[:, :, 3]

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Dense, Flatten
from keras.layers import LeakyReLU, BatchNormalization, Dropout, Activation
from keras.optimizers import Adam

import warnings
warnings.simplefilter('ignore')

def build_CNN(look_back, n_features, n_outputs, optimizer='adam'):
    model = Sequential()
    model.add(Conv1D(64, kernel_size=2, activation='relu', padding='same',
                     input_shape=(look_back, n_features)))
    model.add(Flatten())
    model.add(Dense(n_outputs))
    model.compile(optimizer=optimizer, loss='mean_squared_error')
    return model

model = build_CNN(look_back, n_features, n_outputs)
model.summary()

history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size, shuffle=False)

plot_series(history.history['loss'], title='CNN model with features - Loss over time')

model.save_weights('cnn-model_weights.h5')

size = 252  # approx. one year
predictions = NN_walk_forward_validation_v2(model, scaled_train, scaled_test[:252],
                                            size=size, look_back=look_back,
                                            n_features=n_features, n_outputs=n_outputs)

from utils import plot_walk_forward_validation, descale_with_features
from utils import plot_residual_forecast_error, print_performance_metrics
```

We need to revert the scaling and transformation:

```
descaled_preds, descaled_test = descale_with_features(predictions, scaled_test, n_features,
                                                      scaler=scaler, transformer=pt)

fig, ax = plt.subplots(figsize=(15, 6))
plt.plot(descaled_test[:size][:, 3])
plt.plot(descaled_preds)
ax.set_title('Walk forward validation - 5 days prediction')
ax.legend(['Expected', 'Predicted'])

plot_residual_forecast_error(descaled_preds, descaled_test[:size][:, 3])

print_performance_metrics(descaled_preds, descaled_test[:size][:, 3],
                          model_name='CNN with features', total_days=size, steps=n_outputs)

# model.load_weights('cnn-model_weights.h5')
```
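The `split_sequence` helper imported above is project-specific and its source isn't shown here. A minimal sketch of the sliding-window framing it presumably performs (the name and shapes are assumptions based on how it is called):

```python
# Hypothetical re-implementation: turn a (timesteps, features) array into
# supervised pairs of look_back input rows and n_outputs future rows.
import numpy as np

def split_sequence(series, look_back, n_outputs):
    X, y = [], []
    for i in range(len(series) - look_back - n_outputs + 1):
        X.append(series[i:i + look_back])                     # input window
        y.append(series[i + look_back:i + look_back + n_outputs])  # targets
    return np.array(X), np.array(y)

series = np.arange(20, dtype=float).reshape(10, 2)  # 10 timesteps, 2 features
X, y = split_sequence(series, look_back=5, n_outputs=5)
print(X.shape, y.shape)  # (1, 5, 2) (1, 5, 2)
```

The notebook's `y_train = y_train[:, :, 3]` then slices out only the Close-price column of each target window, matching the `Dense(n_outputs)` head of the CNN.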
<a href="https://colab.research.google.com/github/adubowski/redi-xai/blob/main/inpainting/inpainting_gmcnn_test.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Inpaint Various Datasets using a Trained Inpainted Model Most code here is taken directly from https://github.com/shepnerd/inpainting_gmcnn/tree/master/pytorch with minor adjustments and refactoring into a Jupyter notebook, including a convenient way of providing arguments for test_options. The code cell under "Create elliptical masks" is original code, and significant adjustments have been made to the original code from "test.py" onwards. Otherwise, the cell titles refer to the module at the above Github link that the code was originally taken from. ### Load libraries ``` from google.colab import drive ## model.basemodel import os import torch import torch.nn as nn ## model.basenet # import os # import torch # import torch.nn as nn ## model.layer # import torch # import torch.nn as nn # import torch.nn.functional as F # from util.utils import gauss_kernel import torchvision.models as models import numpy as np ## model.loss # import torch # import torch.nn as nn import torch.autograd as autograd import torch.nn.functional as F # from model.layer import VGG19FeatLayer from functools import reduce ## model.net # import torch # import torch.nn as nn # import torch.nn.functional as F # from model.basemodel import BaseModel # from model.basenet import BaseNet # from model.loss import WGANLoss, IDMRFLoss # from model.layer import init_weights, PureUpsampling, ConfidenceDrivenMaskLayer, SpectralNorm # import numpy as np ## options.test_options import argparse # import os import time ## original code for ellipse masks import cv2 # import numpy as np from numpy import random from numpy.random import randint # from matplotlib import pyplot as plt import math ## utils.utils # import numpy as np import scipy.stats as st # import cv2 # import time # 
import os import glob ## Dependencies from test.py # import numpy as np # import cv2 # import os import subprocess # import glob # from options.test_options import TestOptions # from model.net import InpaintingModel_GMCNN # from util.utils import generate_rect_mask, generate_stroke_mask, getLatest drive.mount("/content/drive") dir_path = "/content/drive/MyDrive/redi-detecting-cheating" ``` ### model.basemodel ``` # a complex model consisting of several nets, where each net is explicitly defined in other py classes class BaseModel(nn.Module): def __init__(self): super(BaseModel,self).__init__() def init(self, opt): self.opt = opt self.gpu_ids = opt.gpu_ids self.save_dir = opt.model_folder self.device = torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') self.model_names = [] def setInput(self, inputData): self.input = inputData def forward(self): pass def optimize_parameters(self): pass def get_current_visuals(self): pass def get_current_losses(self): pass def update_learning_rate(self): pass def test(self): with torch.no_grad(): self.forward() # save models to the disk def save_networks(self, which_epoch): for name in self.model_names: if isinstance(name, str): save_filename = '%s_net_%s.pth' % (which_epoch, name) save_path = os.path.join(self.save_dir, save_filename) net = getattr(self, 'net' + name) if len(self.gpu_ids) > 0 and torch.cuda.is_available(): torch.save(net.state_dict(), save_path) # net.cuda(self.gpu_ids[0]) else: torch.save(net.state_dict(), save_path) def __patch_instance_norm_state_dict(self, state_dict, module, keys, i=0): key = keys[i] if i + 1 == len(keys): # at the end, pointing to a parameter/buffer if module.__class__.__name__.startswith('InstanceNorm') and \ (key == 'running_mean' or key == 'running_var'): if getattr(module, key) is None: state_dict.pop('.'.join(keys)) else: self.__patch_instance_norm_state_dict(state_dict, getattr(module, key), keys, i + 1) # load models from the disk def
load_networks(self, load_path): for name in self.model_names: if isinstance(name, str): net = getattr(self, 'net' + name) if isinstance(net, torch.nn.DataParallel): net = net.module print('loading the model from %s' % load_path) # if you are using PyTorch newer than 0.4 (e.g., built from # GitHub source), you can remove str() on self.device state_dict = torch.load(load_path) # patch InstanceNorm checkpoints prior to 0.4 for key in list(state_dict.keys()): # need to copy keys here because we mutate in loop self.__patch_instance_norm_state_dict(state_dict, net, key.split('.')) net.load_state_dict(state_dict) # print network information def print_networks(self, verbose=True): print('---------- Networks initialized -------------') for name in self.model_names: if isinstance(name, str): net = getattr(self, 'net' + name) num_params = 0 for param in net.parameters(): num_params += param.numel() if verbose: print(net) print('[Network %s] Total number of parameters : %.3f M' % (name, num_params / 1e6)) print('-----------------------------------------------') # set requires_grad=False to avoid computation def set_requires_grad(self, nets, requires_grad=False): if not isinstance(nets, list): nets = [nets] for net in nets: if net is not None: for param in net.parameters(): param.requires_grad = requires_grad ``` ### model.basenet ``` class BaseNet(nn.Module): def __init__(self): super(BaseNet, self).__init__() def init(self, opt): self.opt = opt self.gpu_ids = opt.gpu_ids self.save_dir = opt.checkpoint_dir self.device = torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') def forward(self, *input): return super(BaseNet, self).forward(*input) def test(self, *input): with torch.no_grad(): self.forward(*input) def save_network(self, network_label, epoch_label): save_filename = '%s_net_%s.pth' % (epoch_label, network_label) save_path = os.path.join(self.save_dir, save_filename) torch.save(self.cpu().state_dict(), save_path) def load_network(self,
network_label, epoch_label): save_filename = '%s_net_%s.pth' % (epoch_label, network_label) save_path = os.path.join(self.save_dir, save_filename) if not os.path.isfile(save_path): print('%s not exists yet!' % save_path) else: try: self.load_state_dict(torch.load(save_path)) except: pretrained_dict = torch.load(save_path) model_dict = self.state_dict() try: pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict} self.load_state_dict(pretrained_dict) print('Pretrained network %s has excessive layers; Only loading layers that are used' % network_label) except: print('Pretrained network %s has fewer layers; The following are not initialized: ' % network_label) for k, v in pretrained_dict.items(): if v.size() == model_dict[k].size(): model_dict[k] = v for k, v in model_dict.items(): if k not in pretrained_dict or v.size() != pretrained_dict[k].size(): print(k.split('.')[0]) self.load_state_dict(model_dict) ``` ### model.layer ``` class Conv2d_BN(nn.Module): def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True): super(Conv2d_BN, self).__init__() self.model = nn.Sequential([ nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding, dilation, groups, bias), nn.BatchNorm2d(out_channels) ]) def forward(self, *input): return self.model(*input) class upsampling(nn.Module): def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, scale=2): super(upsampling, self).__init__() assert isinstance(scale, int) self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, groups=groups, bias=bias) self.scale = scale def forward(self, x): h, w = x.size(2) * self.scale, x.size(3) * self.scale xout = self.conv(F.interpolate(input=x, size=(h, w), mode='nearest', align_corners=True)) return xout class PureUpsampling(nn.Module): def __init__(self, scale=2, mode='bilinear'): 
        super(PureUpsampling, self).__init__()
        assert isinstance(scale, int)
        self.scale = scale
        self.mode = mode

    def forward(self, x):
        h, w = x.size(2) * self.scale, x.size(3) * self.scale
        if self.mode == 'nearest':
            xout = F.interpolate(input=x, size=(h, w), mode=self.mode)
        else:
            xout = F.interpolate(input=x, size=(h, w), mode=self.mode, align_corners=True)
        return xout


class GaussianBlurLayer(nn.Module):
    def __init__(self, size, sigma, in_channels=1, stride=1, pad=1):
        super(GaussianBlurLayer, self).__init__()
        self.size = size
        self.sigma = sigma
        self.ch = in_channels
        self.stride = stride
        self.pad = nn.ReflectionPad2d(pad)

    def forward(self, x):
        kernel = gauss_kernel(self.size, self.sigma, self.ch, self.ch)
        kernel_tensor = torch.from_numpy(kernel)
        kernel_tensor = kernel_tensor.cuda()
        x = self.pad(x)
        blurred = F.conv2d(x, kernel_tensor, stride=self.stride)
        return blurred


class ConfidenceDrivenMaskLayer(nn.Module):
    def __init__(self, size=65, sigma=1.0 / 40, iters=7):
        super(ConfidenceDrivenMaskLayer, self).__init__()
        self.size = size
        self.sigma = sigma
        self.iters = iters
        self.propagationLayer = GaussianBlurLayer(size, sigma, pad=32)

    def forward(self, mask):
        # here mask 1 indicates missing pixels and 0 indicates the valid pixels
        init = 1 - mask
        mask_confidence = None
        for i in range(self.iters):
            mask_confidence = self.propagationLayer(init)
            mask_confidence = mask_confidence * mask
            init = mask_confidence + (1 - mask)
        return mask_confidence


class VGG19(nn.Module):
    def __init__(self, pool='max'):
        super(VGG19, self).__init__()
        self.conv1_1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        self.conv1_2 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.conv2_1 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        self.conv2_2 = nn.Conv2d(128, 128, kernel_size=3, padding=1)
        self.conv3_1 = nn.Conv2d(128, 256, kernel_size=3, padding=1)
        self.conv3_2 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
        self.conv3_3 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
        self.conv3_4 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
        self.conv4_1 = nn.Conv2d(256, 512, kernel_size=3, padding=1)
        self.conv4_2 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.conv4_3 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.conv4_4 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.conv5_1 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.conv5_2 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.conv5_3 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.conv5_4 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        if pool == 'max':
            self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
            self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
            self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
            self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2)
            self.pool5 = nn.MaxPool2d(kernel_size=2, stride=2)
        elif pool == 'avg':
            self.pool1 = nn.AvgPool2d(kernel_size=2, stride=2)
            self.pool2 = nn.AvgPool2d(kernel_size=2, stride=2)
            self.pool3 = nn.AvgPool2d(kernel_size=2, stride=2)
            self.pool4 = nn.AvgPool2d(kernel_size=2, stride=2)
            self.pool5 = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        out = {}
        out['r11'] = F.relu(self.conv1_1(x))
        out['r12'] = F.relu(self.conv1_2(out['r11']))
        out['p1'] = self.pool1(out['r12'])
        out['r21'] = F.relu(self.conv2_1(out['p1']))
        out['r22'] = F.relu(self.conv2_2(out['r21']))
        out['p2'] = self.pool2(out['r22'])
        out['r31'] = F.relu(self.conv3_1(out['p2']))
        out['r32'] = F.relu(self.conv3_2(out['r31']))
        out['r33'] = F.relu(self.conv3_3(out['r32']))
        out['r34'] = F.relu(self.conv3_4(out['r33']))
        out['p3'] = self.pool3(out['r34'])
        out['r41'] = F.relu(self.conv4_1(out['p3']))
        out['r42'] = F.relu(self.conv4_2(out['r41']))
        out['r43'] = F.relu(self.conv4_3(out['r42']))
        out['r44'] = F.relu(self.conv4_4(out['r43']))
        out['p4'] = self.pool4(out['r44'])
        out['r51'] = F.relu(self.conv5_1(out['p4']))
        out['r52'] = F.relu(self.conv5_2(out['r51']))
        out['r53'] = F.relu(self.conv5_3(out['r52']))
        out['r54'] = F.relu(self.conv5_4(out['r53']))
        out['p5'] = self.pool5(out['r54'])
        return out


class VGG19FeatLayer(nn.Module):
    def __init__(self):
        super(VGG19FeatLayer, self).__init__()
        self.vgg19 = models.vgg19(pretrained=True).features.eval().cuda()
        self.mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1).cuda()

    def forward(self, x):
        out = {}
        x = x - self.mean
        ci = 1
        ri = 0
        for layer in self.vgg19.children():
            if isinstance(layer, nn.Conv2d):
                ri += 1
                name = 'conv{}_{}'.format(ci, ri)
            elif isinstance(layer, nn.ReLU):
                ri += 1
                name = 'relu{}_{}'.format(ci, ri)
                layer = nn.ReLU(inplace=False)
            elif isinstance(layer, nn.MaxPool2d):
                ri = 0
                name = 'pool_{}'.format(ci)
                ci += 1
            elif isinstance(layer, nn.BatchNorm2d):
                name = 'bn_{}'.format(ci)
            else:
                raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__))
            x = layer(x)
            out[name] = x
        # print([x for x in out])
        return out


def init_weights(net, init_type='normal', gain=0.02):
    def init_func(m):
        classname = m.__class__.__name__
        if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
            if init_type == 'normal':
                nn.init.normal_(m.weight.data, 0.0, gain)
            elif init_type == 'xavier':
                nn.init.xavier_normal_(m.weight.data, gain=gain)
            elif init_type == 'kaiming':
                nn.init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
            elif init_type == 'orthogonal':
                nn.init.orthogonal_(m.weight.data, gain=gain)
            else:
                raise NotImplementedError('initialization method [%s] is not implemented' % init_type)
            if hasattr(m, 'bias') and m.bias is not None:
                nn.init.constant_(m.bias.data, 0.0)
        elif classname.find('BatchNorm2d') != -1:
            nn.init.normal_(m.weight.data, 1.0, gain)
            nn.init.constant_(m.bias.data, 0.0)

    print('initialize network with %s' % init_type)
    net.apply(init_func)


def init_net(net, init_type='normal', gpu_ids=[]):
    if len(gpu_ids) > 0:
        assert (torch.cuda.is_available())
        net.to(gpu_ids[0])
        net = torch.nn.DataParallel(net, gpu_ids)
    init_weights(net, init_type)
    return net


def l2normalize(v, eps=1e-12):
    return v / (v.norm() + eps)


class SpectralNorm(nn.Module):
    def __init__(self, module, name='weight', power_iteration=1):
        super(SpectralNorm, self).__init__()
        self.module = module
        self.name = name
        self.power_iteration = power_iteration
        if not self._made_params():
            self._make_params()

    def _update_u_v(self):
        u = getattr(self.module, self.name + '_u')
        v = getattr(self.module, self.name + '_v')
        w = getattr(self.module, self.name + '_bar')
        height = w.data.shape[0]
        for _ in range(self.power_iteration):
            v.data = l2normalize(torch.mv(torch.t(w.view(height, -1).data), u.data))
            u.data = l2normalize(torch.mv(w.view(height, -1).data, v.data))
        sigma = u.dot(w.view(height, -1).mv(v))
        setattr(self.module, self.name, w / sigma.expand_as(w))

    def _made_params(self):
        try:
            u = getattr(self.module, self.name + '_u')
            v = getattr(self.module, self.name + '_v')
            w = getattr(self.module, self.name + '_bar')
            return True
        except AttributeError:
            return False

    def _make_params(self):
        w = getattr(self.module, self.name)
        height = w.data.shape[0]
        width = w.view(height, -1).data.shape[1]
        u = nn.Parameter(w.data.new(height).normal_(0, 1), requires_grad=False)
        v = nn.Parameter(w.data.new(width).normal_(0, 1), requires_grad=False)
        u.data = l2normalize(u.data)
        v.data = l2normalize(v.data)
        w_bar = nn.Parameter(w.data)
        del self.module._parameters[self.name]
        self.module.register_parameter(self.name + '_u', u)
        self.module.register_parameter(self.name + '_v', v)
        self.module.register_parameter(self.name + '_bar', w_bar)

    def forward(self, *input):
        self._update_u_v()
        return self.module.forward(*input)


class PartialConv(nn.Module):
    def __init__(self, in_channels=3, out_channels=32, ksize=3, stride=1):
        super(PartialConv, self).__init__()
        self.ksize = ksize
        self.stride = stride
        self.fnum = 32
        self.padSize = self.ksize // 2
        self.pad = nn.ReflectionPad2d(self.padSize)
        self.eplison = 1e-5
        self.conv = nn.Conv2d(in_channels, out_channels, stride=stride, kernel_size=ksize)

    def forward(self, x, mask):
        mask_ch = mask.size(1)
        sum_kernel_np = np.ones((mask_ch, mask_ch, self.ksize, self.ksize), dtype=np.float32)
        sum_kernel = torch.from_numpy(sum_kernel_np).cuda()
        x = x * mask / (F.conv2d(mask, sum_kernel, stride=1, padding=self.padSize) + self.eplison)
        x = self.pad(x)
        x = self.conv(x)
        mask = F.max_pool2d(mask, self.ksize, stride=self.stride, padding=self.padSize)
        return x, mask


class GatedConv(nn.Module):
    def __init__(self, in_channels=3, out_channels=32, ksize=3, stride=1, act=F.elu):
        super(GatedConv, self).__init__()
        self.ksize = ksize
        self.stride = stride
        self.act = act
        self.padSize = self.ksize // 2
        self.pad = nn.ReflectionPad2d(self.padSize)
        self.convf = nn.Conv2d(in_channels, out_channels, stride=stride, kernel_size=ksize)
        self.convm = nn.Conv2d(in_channels, out_channels, stride=stride, kernel_size=ksize, padding=self.padSize)

    def forward(self, x):
        # Compute the sigmoid gate from the original input: convm expects
        # in_channels, so it cannot be applied to the convf feature output.
        m = torch.sigmoid(self.convm(x))
        x = self.pad(x)
        x = self.convf(x)
        x = self.act(x)
        x = x * m
        return x


class GatedDilatedConv(nn.Module):
    def __init__(self, in_channels, out_channels, ksize=3, stride=1, pad=1, dilation=2, act=F.elu):
        super(GatedDilatedConv, self).__init__()
        self.ksize = ksize
        self.stride = stride
        self.act = act
        self.padSize = pad
        self.pad = nn.ReflectionPad2d(self.padSize)
        self.convf = nn.Conv2d(in_channels, out_channels, stride=stride, kernel_size=ksize, dilation=dilation)
        self.convm = nn.Conv2d(in_channels, out_channels, stride=stride, kernel_size=ksize, dilation=dilation,
                               padding=self.padSize)

    def forward(self, x):
        # Same gating fix as in GatedConv: the gate is computed from the input.
        m = torch.sigmoid(self.convm(x))
        x = self.pad(x)
        x = self.convf(x)
        x = self.act(x)
        x = x * m
        return x
```

### model.loss

```
class WGANLoss(nn.Module):
    def __init__(self):
        super(WGANLoss, self).__init__()

    def __call__(self, input, target):
        d_loss = (input - target).mean()
        g_loss = -input.mean()
        return {'g_loss': g_loss, 'd_loss': d_loss}


def gradient_penalty(xin, yout, mask=None):
    gradients = autograd.grad(yout, xin, create_graph=True,
                              grad_outputs=torch.ones(yout.size()).cuda(),
                              retain_graph=True, only_inputs=True)[0]
    if mask is not None:
        gradients = gradients * mask
    gradients = gradients.view(gradients.size(0), -1)
    gp = ((gradients.norm(2, dim=1) - 1) ** 2).mean()
    return gp


def random_interpolate(gt, pred):
    batch_size = gt.size(0)
    alpha = torch.rand(batch_size, 1, 1, 1).cuda()
    # alpha = alpha.expand(gt.size()).cuda()
    interpolated = gt * alpha + pred * (1 - alpha)
    return interpolated


class IDMRFLoss(nn.Module):
    def __init__(self, featlayer=VGG19FeatLayer):
        super(IDMRFLoss, self).__init__()
        self.featlayer = featlayer()
        self.feat_style_layers = {'relu3_2': 1.0, 'relu4_2': 1.0}
        self.feat_content_layers = {'relu4_2': 1.0}
        self.bias = 1.0
        self.nn_stretch_sigma = 0.5
        self.lambda_style = 1.0
        self.lambda_content = 1.0

    def sum_normalize(self, featmaps):
        reduce_sum = torch.sum(featmaps, dim=1, keepdim=True)
        return featmaps / reduce_sum

    def patch_extraction(self, featmaps):
        patch_size = 1
        patch_stride = 1
        patches_as_depth_vectors = featmaps.unfold(2, patch_size, patch_stride).unfold(3, patch_size, patch_stride)
        self.patches_OIHW = patches_as_depth_vectors.permute(0, 2, 3, 1, 4, 5)
        dims = self.patches_OIHW.size()
        self.patches_OIHW = self.patches_OIHW.view(-1, dims[3], dims[4], dims[5])
        return self.patches_OIHW

    def compute_relative_distances(self, cdist):
        epsilon = 1e-5
        div = torch.min(cdist, dim=1, keepdim=True)[0]
        relative_dist = cdist / (div + epsilon)
        return relative_dist

    def exp_norm_relative_dist(self, relative_dist):
        scaled_dist = relative_dist
        dist_before_norm = torch.exp((self.bias - scaled_dist) / self.nn_stretch_sigma)
        self.cs_NCHW = self.sum_normalize(dist_before_norm)
        return self.cs_NCHW

    def mrf_loss(self, gen, tar):
        meanT = torch.mean(tar, 1, keepdim=True)
        gen_feats, tar_feats = gen - meanT, tar - meanT
        gen_feats_norm = torch.norm(gen_feats, p=2, dim=1, keepdim=True)
        tar_feats_norm = torch.norm(tar_feats, p=2, dim=1, keepdim=True)
        gen_normalized = gen_feats / gen_feats_norm
        tar_normalized = tar_feats / tar_feats_norm
        cosine_dist_l = []
        BatchSize = tar.size(0)
        for i in range(BatchSize):
            tar_feat_i = tar_normalized[i:i + 1, :, :, :]
            gen_feat_i = gen_normalized[i:i + 1, :, :, :]
            patches_OIHW = self.patch_extraction(tar_feat_i)
            cosine_dist_i = F.conv2d(gen_feat_i, patches_OIHW)
            cosine_dist_l.append(cosine_dist_i)
        cosine_dist = torch.cat(cosine_dist_l, dim=0)
        cosine_dist_zero_2_one = -(cosine_dist - 1) / 2
        relative_dist = self.compute_relative_distances(cosine_dist_zero_2_one)
        rela_dist = self.exp_norm_relative_dist(relative_dist)
        dims_div_mrf = rela_dist.size()
        k_max_nc = torch.max(rela_dist.view(dims_div_mrf[0], dims_div_mrf[1], -1), dim=2)[0]
        div_mrf = torch.mean(k_max_nc, dim=1)
        div_mrf_sum = -torch.log(div_mrf)
        div_mrf_sum = torch.sum(div_mrf_sum)
        return div_mrf_sum

    def forward(self, gen, tar):
        gen_vgg_feats = self.featlayer(gen)
        tar_vgg_feats = self.featlayer(tar)
        style_loss_list = [self.feat_style_layers[layer] * self.mrf_loss(gen_vgg_feats[layer], tar_vgg_feats[layer])
                           for layer in self.feat_style_layers]
        self.style_loss = reduce(lambda x, y: x + y, style_loss_list) * self.lambda_style
        content_loss_list = [self.feat_content_layers[layer] * self.mrf_loss(gen_vgg_feats[layer], tar_vgg_feats[layer])
                             for layer in self.feat_content_layers]
        self.content_loss = reduce(lambda x, y: x + y, content_loss_list) * self.lambda_content
        return self.style_loss + self.content_loss


class StyleLoss(nn.Module):
    def __init__(self, featlayer=VGG19FeatLayer, style_layers=None):
        super(StyleLoss, self).__init__()
        self.featlayer = featlayer()
        if style_layers is not None:
            self.feat_style_layers = style_layers
        else:
            self.feat_style_layers = {'relu2_2': 1.0, 'relu3_2': 1.0, 'relu4_2': 1.0}

    def gram_matrix(self, x):
        b, c, h, w = x.size()
        feats = x.view(b * c, h * w)
        g = torch.mm(feats, feats.t())
        return g.div(b * c * h * w)

    def _l1loss(self, gen, tar):
        return torch.abs(gen - tar).mean()

    def forward(self, gen, tar):
        gen_vgg_feats = self.featlayer(gen)
        tar_vgg_feats = self.featlayer(tar)
        style_loss_list = [self.feat_style_layers[layer] * self._l1loss(self.gram_matrix(gen_vgg_feats[layer]),
                                                                        self.gram_matrix(tar_vgg_feats[layer]))
                           for layer in self.feat_style_layers]
        style_loss = reduce(lambda x, y: x + y, style_loss_list)
        return style_loss


class ContentLoss(nn.Module):
    def __init__(self, featlayer=VGG19FeatLayer, content_layers=None):
        super(ContentLoss, self).__init__()
        self.featlayer = featlayer()
        if content_layers is not None:
            self.feat_content_layers = content_layers
        else:
            self.feat_content_layers = {'relu4_2': 1.0}

    def _l1loss(self, gen, tar):
        return torch.abs(gen - tar).mean()

    def forward(self, gen, tar):
        gen_vgg_feats = self.featlayer(gen)
        tar_vgg_feats = self.featlayer(tar)
        content_loss_list = [self.feat_content_layers[layer] * self._l1loss(gen_vgg_feats[layer], tar_vgg_feats[layer])
                             for layer in self.feat_content_layers]
        content_loss = reduce(lambda x, y: x + y, content_loss_list)
        return content_loss


class TVLoss(nn.Module):
    def __init__(self):
        super(TVLoss, self).__init__()

    def forward(self, x):
        h_x, w_x = x.size()[2:]
        h_tv = torch.abs(x[:, :, 1:, :] - x[:, :, :h_x - 1, :])
        w_tv = torch.abs(x[:, :, :, 1:] - x[:, :, :, :w_x - 1])
        loss = torch.sum(h_tv) + torch.sum(w_tv)
        return loss
```

### options.test_options

```
class TestOptions:
    def __init__(self):
        self.parser = argparse.ArgumentParser()
        self.initialized = False

    def initialize(self):
        self.parser.add_argument('--dataset', type=str, default='skin_test',
                                 help='The dataset of the experiment.')
        self.parser.add_argument('--data_file', type=str,
                                 default=os.path.join(dir_path, 'data', 'processed', 'cancer'),
                                 help='the file storing testing file paths')
        self.parser.add_argument('--mask_dir', type=str,
                                 default=os.path.join(dir_path, 'data', 'masks', 'dilated-masks-224'),
                                 help='directory with saved masks, if applicable')
        self.parser.add_argument('--test_dir', type=str,
                                 default=os.path.join(dir_path, 'data', 'results_gmcnn'),
                                 help='models are saved here')
        self.parser.add_argument('--load_model_dir', type=str,
                                 default=os.path.join(dir_path, 'models', 'inpainting_gmcnn',
                                                      '20210601-112529_GMCNN_isic_b8_s224x224_gc32_dc64_randmask-ellipse'),
                                 help='pretrained models are given here')
        self.parser.add_argument('--seed', type=int, default=1, help='random seed')
        self.parser.add_argument('--gpu_ids', type=str, default='0')
        self.parser.add_argument('--model', type=str, default='gmcnn')
        self.parser.add_argument('--random_mask', type=int, default=1,
                                 help='using random mask')
        self.parser.add_argument('--img_shapes', type=str, default='224,224,3',
                                 help='given shape parameters: h,w,c or h,w')
        self.parser.add_argument('--mask_shapes', type=str, default='40',
                                 help='given mask parameters: h,w or, if mask_type==ellipse, a single number giving the ellipse width.')
        self.parser.add_argument('--mask_type', type=str, default='ellipse')
        self.parser.add_argument('--test_num', type=int, default=-1)
        self.parser.add_argument('--mode', type=str, default='save')
        self.parser.add_argument('--phase', type=str, default='test')

        # for generator
        self.parser.add_argument('--g_cnum', type=int, default=32,
                                 help='# of generator filters in first conv layer')
        self.parser.add_argument('--d_cnum', type=int, default=32,
                                 help='# of discriminator filters in first conv layer')
        self.initialized = True  # Mark as initialized so arguments are only added once.

    def parse(self, args=[]):
        if not self.initialized:
            self.initialize()
        if isinstance(args, dict):
            # If args is supplied as a dict, flatten to a list.
            args = [item for pair in args.items() for item in pair]
        elif not isinstance(args, list):
            # Otherwise, it should be a list. (Fixed: raising a plain string is invalid.)
            raise TypeError('args should be a dict or a list.')
        self.opt = self.parser.parse_args(args=args)  # Added args=[] to make it work in a notebook.

        if self.opt.data_file != '':
            self.opt.dataset_path = self.opt.data_file
        if os.path.exists(self.opt.test_dir) is False:
            os.mkdir(self.opt.test_dir)

        assert self.opt.random_mask in [0, 1]
        self.opt.random_mask = True if self.opt.random_mask == 1 else False
        assert self.opt.mask_type in ['rect', 'stroke', 'ellipse', 'saved']  # Added ellipse mask_type option

        str_img_shapes = self.opt.img_shapes.split(',')
        self.opt.img_shapes = [int(x) for x in str_img_shapes]
        if self.opt.mask_type == 'ellipse':
            # If ellipse type then the mask size is just one number.
            self.opt.mask_shapes = int(self.opt.mask_shapes)
        elif self.opt.mask_type == 'saved':
            pass
        else:
            str_mask_shapes = self.opt.mask_shapes.split(',')
            self.opt.mask_shapes = [int(x) for x in str_mask_shapes]

        # model name and date
        self.opt.date_str = 'test_' + time.strftime('%Y%m%d-%H%M%S')
        self.opt.model_folder = self.opt.date_str + '_' + self.opt.dataset + '_' + self.opt.model
        self.opt.model_folder += '_s' + str(self.opt.img_shapes[0]) + 'x' + str(self.opt.img_shapes[1])
        self.opt.model_folder += '_gc' + str(self.opt.g_cnum)
        self.opt.model_folder += '_randmask-' + self.opt.mask_type if self.opt.random_mask else ''
        if self.opt.random_mask:
            self.opt.model_folder += '_seed-' + str(self.opt.seed)

        self.opt.saving_path = os.path.join(self.opt.test_dir, self.opt.model_folder)
        if os.path.exists(self.opt.saving_path) is False and self.opt.mode == 'save':
            os.mkdir(self.opt.saving_path)
        if os.path.exists(os.path.join(self.opt.saving_path, "combined")) is False and self.opt.mode == 'save':
            os.mkdir(os.path.join(self.opt.saving_path, "combined"))
        if os.path.exists(os.path.join(self.opt.saving_path, "inpainted")) is False and self.opt.mode == 'save':
            os.mkdir(os.path.join(self.opt.saving_path, "inpainted"))

        args = vars(self.opt)
        print('------------ Options -------------')
        for k, v in sorted(args.items()):
            print('%s: %s' % (str(k), str(v)))
        print('-------------- End ----------------')
        return self.opt
```

### Create elliptical masks

Original code:

```
def find_angle(pos1, pos2, ret_type='deg'):
    # Find the angle between two pixel points, pos1 and pos2.
    angle_rads = math.atan2(pos2[1] - pos1[1], pos2[0] - pos1[0])  # Fixed: the x term was pos2[0] - pos1[1].
    if ret_type == 'rads':
        return angle_rads
    elif ret_type == 'deg':
        return math.degrees(angle_rads)  # Convert from radians to degrees.


def sample_centre_pts(n, imsize, xlimits=(50, 250), ylimits=(50, 250)):
    # Generate a random sample of points for the centres of the elliptical masks.
    # NB: `randint` and `random` here are numpy.random functions.
    pts = np.empty((n, 2))  # Empty array to hold the final points
    count = 0
    while count < n:
        sample = randint(0, imsize[0], (n, 2))[0]  # Assumes im_size is symmetric
        # Keep the point only if it falls outside the central region delimited by
        # the limits, i.e. near the image border.
        is_valid = (sample[0] < xlimits[0]) | (sample[0] > xlimits[1]) | \
                   (sample[1] < ylimits[0]) | (sample[1] > ylimits[1])
        if is_valid:
            pts[count] = sample
            count += 1
    return pts


def generate_ellipse_mask(imsize, mask_size, seed=None):
    im_centre = (int(imsize[0] / 2), int(imsize[1] / 2))
    # Bounds delimiting the central region; mask centres are sampled outside it.
    x_bounds = (int(0.1 * imsize[0]), int(imsize[0] - 0.1 * imsize[0]))
    y_bounds = (int(0.1 * imsize[1]), int(imsize[1] - 0.1 * imsize[1]))
    if seed is not None:
        random.seed(seed)  # Set seed for repeatability
    # The number of masks per image is either 1 (70% of the time) or 2 (30% of the time).
    n = 1 + random.binomial(1, 0.3)
    centre_pts = sample_centre_pts(n, imsize, x_bounds, y_bounds)  # Random sample for the mask centres.
    startAngle = 0.0
    endAngle = 360.0  # Draw full ellipses (although part may fall outside the image)
    mask = np.zeros((imsize[0], imsize[1], 1), np.float32)  # Blank canvas for the mask.
    for pt in centre_pts:
        size = abs(int(random.normal(mask_size, mask_size / 5.0)))  # Randomness introduced in the mask size.
        ratio = 2 * random.random(1) + 1  # Ratio between length and width, sampled from Unif(1, 3).
        centrex = int(pt[0])
        centrey = int(pt[1])
        # Angle between the centre of the image and the mask centre.
        angle = find_angle(im_centre, (centrex, centrey))
        angle = int(angle + random.normal(0.0, 5.0))  # Base the angle of rotation on the above angle.
        # Insert an ellipse with the parameters defined above.
        mask = cv2.ellipse(mask, (centrex, centrey), (size, int(size * ratio)), angle,
                           startAngle, endAngle, color=1, thickness=-1)
    mask = np.minimum(mask, 1.0)  # This may be redundant.
    mask = np.transpose(mask, [2, 0, 1])  # Bring the 'channel' axis to the first axis.
    mask = np.expand_dims(mask, 0)  # Add an extra axis at axis=0, giving shape (1, 1, imsize[0], imsize[1]).
    return mask

# test_mask = generate_ellipse_mask((224, 224))
# from matplotlib import pyplot as plt
# plt.imshow(test_mask[0][0], cmap='Greys_r')
# plt.show()
```

### utils.utils

```
def gauss_kernel(size=21, sigma=3, inchannels=3, outchannels=3):
    interval = (2 * sigma + 1.0) / size
    x = np.linspace(-sigma - interval / 2, sigma + interval / 2, size + 1)
    ker1d = np.diff(st.norm.cdf(x))
    kernel_raw = np.sqrt(np.outer(ker1d, ker1d))
    kernel = kernel_raw / kernel_raw.sum()
    out_filter = np.array(kernel, dtype=np.float32)
    out_filter = out_filter.reshape((1, 1, size, size))
    out_filter = np.tile(out_filter, [outchannels, inchannels, 1, 1])
    return out_filter


def np_free_form_mask(maxVertex, maxLength, maxBrushWidth, maxAngle, h, w):
    mask = np.zeros((h, w, 1), np.float32)
    numVertex = np.random.randint(maxVertex + 1)
    startY = np.random.randint(h)
    startX = np.random.randint(w)
    brushWidth = 0
    for i in range(numVertex):
        angle = np.random.randint(maxAngle + 1)
        angle = angle / 360.0 * 2 * np.pi
        if i % 2 == 0:
            angle = 2 * np.pi - angle
        length = np.random.randint(maxLength + 1)
        brushWidth = np.random.randint(10, maxBrushWidth + 1) // 2 * 2
        nextY = startY + length * np.cos(angle)
        nextX = startX + length * np.sin(angle)
        nextY = np.maximum(np.minimum(nextY, h - 1), 0).astype(int)  # np.int is deprecated; use int.
        nextX = np.maximum(np.minimum(nextX, w - 1), 0).astype(int)
        cv2.line(mask, (startY, startX), (nextY, nextX), 1, brushWidth)
        cv2.circle(mask, (startY, startX), brushWidth // 2, 2)
        startY, startX = nextY, nextX
    cv2.circle(mask, (startY, startX), brushWidth // 2, 2)
    return mask


def generate_rect_mask(im_size, mask_size, margin=8, rand_mask=True):
    mask = np.zeros((im_size[0], im_size[1])).astype(np.float32)
    if rand_mask:
        sz0, sz1 = mask_size[0], mask_size[1]
        of0 = np.random.randint(margin, im_size[0] - sz0 - margin)
        of1 = np.random.randint(margin, im_size[1] - sz1 - margin)
    else:
        sz0, sz1 = mask_size[0], mask_size[1]
        of0 = (im_size[0] - sz0) // 2
        of1 = (im_size[1] - sz1) // 2
    mask[of0:of0 + sz0, of1:of1 + sz1] = 1
    mask = np.expand_dims(mask, axis=0)
    mask = np.expand_dims(mask, axis=0)
    rect = np.array([[of0, sz0, of1, sz1]], dtype=int)
    return mask, rect


def generate_stroke_mask(im_size, parts=10, maxVertex=20, maxLength=100, maxBrushWidth=24, maxAngle=360):
    mask = np.zeros((im_size[0], im_size[1], 1), dtype=np.float32)
    for i in range(parts):
        mask = mask + np_free_form_mask(maxVertex, maxLength, maxBrushWidth, maxAngle, im_size[0], im_size[1])
    mask = np.minimum(mask, 1.0)
    mask = np.transpose(mask, [2, 0, 1])
    mask = np.expand_dims(mask, 0)
    return mask


def generate_mask(type, im_size, mask_size):
    if type == 'rect':
        return generate_rect_mask(im_size, mask_size)
    elif type == 'ellipse':
        return generate_ellipse_mask(im_size, mask_size), None
    else:
        return generate_stroke_mask(im_size), None


def getLatest(folder_path):
    files = glob.glob(folder_path)
    # Sort by the numeric creation time; the ctime *string* would sort lexicographically.
    file_times = list(map(lambda x: os.path.getctime(x), files))
    return files[sorted(range(len(file_times)), key=lambda x: file_times[x])[-1]]
```

### model.net

```
# generative multi-column convolutional neural net
class GMCNN(BaseNet):
    def __init__(self, in_channels, out_channels, cnum=32, act=F.elu, norm=F.instance_norm, using_norm=False):
        super(GMCNN, self).__init__()
        self.act = act
        self.using_norm = using_norm
        if using_norm is True:
            self.norm = norm
        else:
            self.norm = None
        ch = cnum

        # network structure
        self.EB1 = []
        self.EB2 = []
        self.EB3 = []
        self.decoding_layers = []
        self.EB1_pad_rec = []
        self.EB2_pad_rec = []
        self.EB3_pad_rec = []

        self.EB1.append(nn.Conv2d(in_channels, ch, kernel_size=7, stride=1))
        self.EB1.append(nn.Conv2d(ch, ch * 2, kernel_size=7, stride=2))
        self.EB1.append(nn.Conv2d(ch * 2, ch * 2, kernel_size=7, stride=1))
        self.EB1.append(nn.Conv2d(ch * 2, ch * 4, kernel_size=7, stride=2))
        self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1))
        self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1))
        self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1, dilation=2))
        self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1, dilation=4))
        self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1, dilation=8))
        self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1, dilation=16))
        self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1))
        self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1))
        self.EB1.append(PureUpsampling(scale=4))
        self.EB1_pad_rec = [3, 3, 3, 3, 3, 3, 6, 12, 24, 48, 3, 3, 0]

        self.EB2.append(nn.Conv2d(in_channels, ch, kernel_size=5, stride=1))
        self.EB2.append(nn.Conv2d(ch, ch * 2, kernel_size=5, stride=2))
        self.EB2.append(nn.Conv2d(ch * 2, ch * 2, kernel_size=5, stride=1))
        self.EB2.append(nn.Conv2d(ch * 2, ch * 4, kernel_size=5, stride=2))
        self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1))
        self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1))
        self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1, dilation=2))
        self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1, dilation=4))
        self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1, dilation=8))
        self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1, dilation=16))
        self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1))
        self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1))
        self.EB2.append(PureUpsampling(scale=2, mode='nearest'))
        self.EB2.append(nn.Conv2d(ch * 4, ch * 2, kernel_size=5, stride=1))
        self.EB2.append(nn.Conv2d(ch * 2, ch * 2, kernel_size=5, stride=1))
        self.EB2.append(PureUpsampling(scale=2))
        self.EB2_pad_rec = [2, 2, 2, 2, 2, 2, 4, 8, 16, 32, 2, 2, 0, 2, 2, 0]

        self.EB3.append(nn.Conv2d(in_channels, ch, kernel_size=3, stride=1))
        self.EB3.append(nn.Conv2d(ch, ch * 2, kernel_size=3, stride=2))
        self.EB3.append(nn.Conv2d(ch * 2, ch * 2, kernel_size=3, stride=1))
        self.EB3.append(nn.Conv2d(ch * 2, ch * 4, kernel_size=3, stride=2))
        self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1))
        self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1))
        self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1, dilation=2))
        self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1, dilation=4))
        self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1, dilation=8))
        self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1, dilation=16))
        self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1))
        self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1))
        self.EB3.append(PureUpsampling(scale=2, mode='nearest'))
        self.EB3.append(nn.Conv2d(ch * 4, ch * 2, kernel_size=3, stride=1))
        self.EB3.append(nn.Conv2d(ch * 2, ch * 2, kernel_size=3, stride=1))
        self.EB3.append(PureUpsampling(scale=2, mode='nearest'))
        self.EB3.append(nn.Conv2d(ch * 2, ch, kernel_size=3, stride=1))
        self.EB3.append(nn.Conv2d(ch, ch, kernel_size=3, stride=1))
        self.EB3_pad_rec = [1, 1, 1, 1, 1, 1, 2, 4, 8, 16, 1, 1, 0, 1, 1, 0, 1, 1]

        self.decoding_layers.append(nn.Conv2d(ch * 7, ch // 2, kernel_size=3, stride=1))
        self.decoding_layers.append(nn.Conv2d(ch // 2, out_channels, kernel_size=3, stride=1))
        self.decoding_pad_rec = [1, 1]

        self.EB1 = nn.ModuleList(self.EB1)
        self.EB2 = nn.ModuleList(self.EB2)
        self.EB3 = nn.ModuleList(self.EB3)
        self.decoding_layers = nn.ModuleList(self.decoding_layers)

        # padding operations
        padlen = 49
        self.pads = [0] * padlen
        for i in range(padlen):
            self.pads[i] = nn.ReflectionPad2d(i)
        self.pads = nn.ModuleList(self.pads)

    def forward(self, x):
        x1, x2, x3 = x, x, x
        for i, layer in enumerate(self.EB1):
            pad_idx = self.EB1_pad_rec[i]
            x1 = layer(self.pads[pad_idx](x1))
            if self.using_norm:
                x1 = self.norm(x1)
            if pad_idx != 0:
                x1 = self.act(x1)
        for i, layer in enumerate(self.EB2):
            pad_idx = self.EB2_pad_rec[i]
            x2 = layer(self.pads[pad_idx](x2))
            if self.using_norm:
                x2 = self.norm(x2)
            if pad_idx != 0:
                x2 = self.act(x2)
        for i, layer in enumerate(self.EB3):
            pad_idx = self.EB3_pad_rec[i]
            x3 = layer(self.pads[pad_idx](x3))
            if self.using_norm:
                x3 = self.norm(x3)
            if pad_idx != 0:
                x3 = self.act(x3)
        x_d = torch.cat((x1, x2, x3), 1)
        x_d = self.act(self.decoding_layers[0](self.pads[self.decoding_pad_rec[0]](x_d)))
        x_d = self.decoding_layers[1](self.pads[self.decoding_pad_rec[1]](x_d))
        x_out = torch.clamp(x_d, -1, 1)
        return x_out


# return one dimensional output indicating the probability of realness or fakeness
class Discriminator(BaseNet):
    def __init__(self, in_channels, cnum=32, fc_channels=8 * 8 * 32 * 4, act=F.elu, norm=None, spectral_norm=True):
        super(Discriminator, self).__init__()
        self.act = act
        self.norm = norm
        self.embedding = None
        self.logit = None
        ch = cnum
        self.layers = []
        if spectral_norm:
            self.layers.append(SpectralNorm(nn.Conv2d(in_channels, ch, kernel_size=5, padding=2, stride=2)))
            self.layers.append(SpectralNorm(nn.Conv2d(ch, ch * 2, kernel_size=5, padding=2, stride=2)))
            self.layers.append(SpectralNorm(nn.Conv2d(ch * 2, ch * 4, kernel_size=5, padding=2, stride=2)))
            self.layers.append(SpectralNorm(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, padding=2, stride=2)))
            self.layers.append(SpectralNorm(nn.Linear(fc_channels, 1)))
        else:
            self.layers.append(nn.Conv2d(in_channels, ch, kernel_size=5, padding=2, stride=2))
            self.layers.append(nn.Conv2d(ch, ch * 2, kernel_size=5, padding=2, stride=2))
            self.layers.append(nn.Conv2d(ch * 2, ch * 4, kernel_size=5, padding=2, stride=2))
            self.layers.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, padding=2, stride=2))
            self.layers.append(nn.Linear(fc_channels, 1))
        self.layers = nn.ModuleList(self.layers)

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = layer(x)
            if self.norm is not None:
                x = self.norm(x)
            x = self.act(x)
        self.embedding = x.view(x.size(0), -1)
        self.logit = self.layers[-1](self.embedding)
        return self.logit


class GlobalLocalDiscriminator(BaseNet):
    def __init__(self, in_channels, cnum=32, g_fc_channels=16 * 16 * 32 * 4, l_fc_channels=8 * 8 * 32 * 4,
                 act=F.elu, norm=None, spectral_norm=True):
        super(GlobalLocalDiscriminator, self).__init__()
        self.act = act
        self.norm = norm
        self.global_discriminator = Discriminator(in_channels=in_channels, fc_channels=g_fc_channels, cnum=cnum,
                                                  act=act, norm=norm, spectral_norm=spectral_norm)
        self.local_discriminator = Discriminator(in_channels=in_channels, fc_channels=l_fc_channels, cnum=cnum,
                                                 act=act, norm=norm, spectral_norm=spectral_norm)

    def forward(self, x_g, x_l):
        x_global = self.global_discriminator(x_g)
        x_local = self.local_discriminator(x_l)
        return x_global, x_local


# from util.utils import generate_mask
class InpaintingModel_GMCNN(BaseModel):
    def __init__(self, in_channels, act=F.elu, norm=None, opt=None):
        super(InpaintingModel_GMCNN, self).__init__()
        self.opt = opt
        self.init(opt)
        self.confidence_mask_layer = ConfidenceDrivenMaskLayer()
        self.netGM = GMCNN(in_channels, out_channels=3, cnum=opt.g_cnum, act=act, norm=norm).cuda()
        # self.netGM = GMCNN(in_channels, out_channels=3, cnum=opt.g_cnum, act=act, norm=norm).cpu()
        init_weights(self.netGM)
        self.model_names = ['GM']
        if self.opt.phase == 'test':
            return

        self.netD = None
        self.optimizer_G = torch.optim.Adam(self.netGM.parameters(), lr=opt.lr, betas=(0.5, 0.9))
        self.optimizer_D = None
        self.wganloss = None
        self.recloss = nn.L1Loss()
        self.aeloss = nn.L1Loss()
        self.mrfloss = None
        self.lambda_adv = opt.lambda_adv
        self.lambda_rec = opt.lambda_rec
        self.lambda_ae = opt.lambda_ae
        self.lambda_gp = opt.lambda_gp
        self.lambda_mrf = opt.lambda_mrf
        self.G_loss = None
        self.G_loss_reconstruction = None
        self.G_loss_mrf = None
        self.G_loss_adv, self.G_loss_adv_local = None, None
        self.G_loss_ae = None
        self.D_loss, self.D_loss_local = None, None
        self.GAN_loss = None
        self.gt, self.gt_local = None, None
        self.mask, self.mask_01 = None, None
        self.rect = None
        self.im_in, self.gin = None, None
        self.completed, self.completed_local = None, None
        self.completed_logit, self.completed_local_logit = None, None
        self.gt_logit, self.gt_local_logit = None, None
        self.pred = None

        if self.opt.pretrain_network is False:
            if self.opt.mask_type == 'rect':
                self.netD = GlobalLocalDiscriminator(
                    3, cnum=opt.d_cnum, act=act,
                    g_fc_channels=opt.img_shapes[0] // 16 * opt.img_shapes[1] // 16 * opt.d_cnum * 4,
                    l_fc_channels=opt.mask_shapes[0] // 16 * opt.mask_shapes[1] // 16 * opt.d_cnum * 4,
                    spectral_norm=self.opt.spectral_norm).cuda()
            else:
                self.netD = GlobalLocalDiscriminator(
                    3, cnum=opt.d_cnum, act=act, spectral_norm=self.opt.spectral_norm,
                    g_fc_channels=opt.img_shapes[0] // 16 * opt.img_shapes[1] // 16 * opt.d_cnum * 4,
                    l_fc_channels=opt.img_shapes[0] // 16 * opt.img_shapes[1] // 16 * opt.d_cnum * 4).cuda()
            init_weights(self.netD)
            self.optimizer_D = torch.optim.Adam(filter(lambda x: x.requires_grad, self.netD.parameters()),
                                                lr=opt.lr, betas=(0.5, 0.9))
            self.wganloss = WGANLoss()
            self.mrfloss = IDMRFLoss()

    def initVariables(self):
        self.gt = self.input['gt']
        mask, rect = generate_mask(self.opt.mask_type, self.opt.img_shapes, self.opt.mask_shapes)
        self.mask_01 = torch.from_numpy(mask).cuda().repeat([self.opt.batch_size, 1, 1, 1])
        self.mask = self.confidence_mask_layer(self.mask_01)
        if self.opt.mask_type == 'rect':
            self.rect = [rect[0, 0], rect[0, 1], rect[0, 2], rect[0, 3]]
            self.gt_local = self.gt[:, :, self.rect[0]:self.rect[0] + self.rect[1],
                                    self.rect[2]:self.rect[2] + self.rect[3]]
        else:
            self.gt_local = self.gt
        self.im_in = self.gt * (1 - self.mask_01)
        self.gin = torch.cat((self.im_in, self.mask_01), 1)

    def forward_G(self):
        self.G_loss_reconstruction = self.recloss(self.completed * self.mask, self.gt.detach() * self.mask)
        self.G_loss_reconstruction = self.G_loss_reconstruction / torch.mean(self.mask_01)
        self.G_loss_ae = self.aeloss(self.pred * (1 - self.mask_01), self.gt.detach() * (1 - self.mask_01))
        self.G_loss_ae = self.G_loss_ae / torch.mean(1 - self.mask_01)
        self.G_loss = self.lambda_rec * self.G_loss_reconstruction + self.lambda_ae * self.G_loss_ae
        if self.opt.pretrain_network is False:
            # discriminator
            self.completed_logit, self.completed_local_logit = self.netD(self.completed, self.completed_local)
            self.G_loss_mrf = self.mrfloss((self.completed_local + 1) / 2.0, (self.gt_local.detach() + 1) / 2.0)
            self.G_loss = self.G_loss + self.lambda_mrf * self.G_loss_mrf
            self.G_loss_adv = -self.completed_logit.mean()
            self.G_loss_adv_local = -self.completed_local_logit.mean()
            self.G_loss = self.G_loss + self.lambda_adv * (self.G_loss_adv + self.G_loss_adv_local)

    def forward_D(self):
        self.completed_logit, self.completed_local_logit = self.netD(self.completed.detach(),
                                                                     self.completed_local.detach())
        self.gt_logit, self.gt_local_logit = self.netD(self.gt, self.gt_local)
        # hinge loss
        self.D_loss_local = nn.ReLU()(1.0 - self.gt_local_logit).mean() + \
                            nn.ReLU()(1.0 + self.completed_local_logit).mean()
        self.D_loss = nn.ReLU()(1.0 - self.gt_logit).mean() + nn.ReLU()(1.0 + self.completed_logit).mean()
        self.D_loss = self.D_loss + self.D_loss_local

    def backward_G(self):
        self.G_loss.backward()

    def backward_D(self):
        self.D_loss.backward(retain_graph=True)

    def optimize_parameters(self):
        self.initVariables()
        self.pred = self.netGM(self.gin)
        self.completed = self.pred * self.mask_01 + self.gt * (1 - self.mask_01)
        if self.opt.mask_type == 'rect':
            self.completed_local = self.completed[:, :, self.rect[0]:self.rect[0] + self.rect[1],
                                                  self.rect[2]:self.rect[2] + self.rect[3]]
        else:
            self.completed_local = self.completed

        if self.opt.pretrain_network is False:
            for i in range(self.opt.D_max_iters):
                self.optimizer_D.zero_grad()
                self.optimizer_G.zero_grad()
                self.forward_D()
                self.backward_D()
                self.optimizer_D.step()
        self.optimizer_G.zero_grad()
        self.forward_G()
        self.backward_G()
        self.optimizer_G.step()

    def get_current_losses(self):
        l = {'G_loss': self.G_loss.item(), 'G_loss_rec': self.G_loss_reconstruction.item(),
             'G_loss_ae': self.G_loss_ae.item()}
        if self.opt.pretrain_network is False:
            l.update({'G_loss_adv': self.G_loss_adv.item(), 'G_loss_adv_local': self.G_loss_adv_local.item(),
                      'D_loss': self.D_loss.item(), 'G_loss_mrf': self.G_loss_mrf.item()})
        return l

    def get_current_visuals(self):
        return {'input': self.im_in.cpu().detach().numpy(), 'gt': self.gt.cpu().detach().numpy(),
                'completed': self.completed.cpu().detach().numpy()}

    def get_current_visuals_tensor(self):
        return {'input': self.im_in.cpu().detach(), 'gt': self.gt.cpu().detach(),
                'completed': self.completed.cpu().detach()}

    def evaluate(self, im_in, mask):
        im_in = torch.from_numpy(im_in).type(torch.FloatTensor).cuda() / 127.5 - 1
        mask = torch.from_numpy(mask).type(torch.FloatTensor).cuda()
        im_in = im_in * (1 - mask)
        xin = torch.cat((im_in, mask), 1)
        ret = self.netGM(xin) * mask + im_in * (1 - mask)
        ret = (ret.cpu().detach().numpy() + 1) * 127.5
        return ret.astype(np.uint8)
```

### test.py

Based on code from inpainting_gmcnn but adjusted significantly.

##### Set up inpainting parameters

First, inpaint the coloured patches using saved masks.
```
args_patches = {'--dataset': 'inpaint_coloured_patches',
                '--test_num': '-1',
                '--data_file': '{}'.format(os.path.join(dir_path, 'models', 'test_files.txt')),
                '--mask_type': 'saved',
                '--random_mask': '0',
                '--load_model_dir': '{}'.format(os.path.join(
                    dir_path, 'models', 'inpainting_gmcnn',
                    '20210607-165607_GMCNN_expanded_isic_no_patch_fifth_run_b8_s224x224_gc32_dc64_randmask-ellipse'))
                }

args_no_patches = {'--dataset': 'inpaint_no_patches',
                   '--test_num': '-1',
                   '--data_file': '{}'.format(os.path.join(dir_path, 'models', 'test_files.txt')),
                   '--mask_type': 'ellipse',
                   '--random_mask': '1',
                   '--load_model_dir': '{}'.format(os.path.join(
                       dir_path, 'models', 'inpainting_gmcnn',
                       '20210607-165607_GMCNN_expanded_isic_no_patch_fifth_run_b8_s224x224_gc32_dc64_randmask-ellipse'))
                   }

# Arguments for inpainting the patches in the training set.
args_train_patches = {'--dataset': 'inpaint_train_patches',
                      '--test_num': '-1',
                      '--data_file': '{}'.format(os.path.join(dir_path, 'models', 'train_files.txt')),
                      '--mask_type': 'saved',
                      '--random_mask': '0',
                      '--load_model_dir': '{}'.format(os.path.join(
                          dir_path, 'models', 'inpainting_gmcnn',
                          '20210607-165607_GMCNN_expanded_isic_no_patch_fifth_run_b8_s224x224_gc32_dc64_randmask-ellipse'))
                      }

# Arguments for inpainting the patches in the malignant set.
args_malignant = {'--dataset': 'inpaint_malignant',
                  '--test_num': '-1',
                  '--data_file': '{}'.format(os.path.join(dir_path, 'data', 'malignant-patches', 'manually-adjusted')),
                  '--mask_type': 'saved',
                  '--mask_dir': '{}'.format(os.path.join(dir_path, 'data', 'masks', 'malignant-patches')),
                  '--random_mask': '0',
                  '--load_model_dir': '{}'.format(os.path.join(
                      dir_path, 'models', 'inpainting_gmcnn',
                      '20210607-165607_GMCNN_expanded_isic_no_patch_fifth_run_b8_s224x224_gc32_dc64_randmask-ellipse'))
                  }

config_patches = TestOptions().parse(args=args_patches)
config_no_patches = TestOptions().parse(args=args_no_patches)
config_train_patches = TestOptions().parse(args=args_train_patches)
config_malignant = TestOptions().parse(args=args_malignant)
```

##### Set up the trained inpainting model.

```
# Select the GPU with the most free memory.
os.environ['CUDA_VISIBLE_DEVICES'] = str(np.argmax(
    [int(x.split()[2]) for x in subprocess.Popen(
        "nvidia-smi -q -d Memory | grep -A4 GPU | grep Free",
        shell=True, stdout=subprocess.PIPE).stdout.readlines()]))

# Set up a model and load the trained weights.
print('configuring model..')
ourModel = InpaintingModel_GMCNN(in_channels=4, opt=config_patches)
ourModel.print_networks()
if config_patches.load_model_dir != '':
    print('Loading trained model from {}'.format(config_patches.load_model_dir))
    ourModel.load_networks(getLatest(os.path.join(config_patches.load_model_dir, '*.pth')))
    print('Loading done.')
```

##### Extract the relevant file paths.

```
if os.path.isfile(config_patches.dataset_path):
    pathfile = open(config_patches.dataset_path, 'rt').read().splitlines()
elif os.path.isdir(config_patches.dataset_path):
    pathfile = glob.glob(os.path.join(config_patches.dataset_path, '*.jpg'))  # Changed from png.
else:
    print('Invalid testing data file/folder path.')
    exit(1)

mask_files = os.listdir(config_patches.mask_dir)  # Get list of all the mask image names.

# Separate images without patches and with patches (i.e. with corresponding mask).
patch_ind = [os.path.basename(file) in mask_files for file in pathfile]
path_ims_patches = [file for i, file in enumerate(pathfile) if patch_ind[i]]
path_ims_no_patches = [file for i, file in enumerate(pathfile) if not patch_ind[i]]

# Extract the paths for the training images with patches.
pathfile_train = open(config_train_patches.dataset_path, 'rt').read().splitlines()
path_train_patches = [file for file in pathfile_train if os.path.basename(file) in mask_files]

# Extract the paths for the relevant malignant images.
pathfile_malignant = glob.glob(os.path.join(config_malignant.dataset_path, '*.jpg'))
```

##### Function for looping through the images, inpainting & saving the results.

```
def inpaint_ims(config, pathfile):
    print("-" * 30, "\n Inpainting on {}".format(config.dataset))
    if config.random_mask:
        np.random.seed(config.seed)
    total_number = len(pathfile)
    test_num = total_number if config.test_num == -1 else min(total_number, config.test_num)
    print('The total number of testing images is {}, and we take {} for test.'.format(total_number, test_num))
    for i in range(test_num):
        filename = os.path.basename(pathfile[i])  # Extract the filename from the full path.
        if config.mask_type == 'saved':
            # Use a saved mask for this project, rather than randomly generating one.
            mask_img = cv2.imread(os.path.join(config.mask_dir, filename), 0)  # Read the mask in grayscale.
            mask = (mask_img > 100)  # Threshold the mask for intensities from 100-255.
            # Add two extra axes at the start of the array to match the expected shape for the model: (1, 1, 224, 224)
            mask = np.expand_dims(mask, axis=(0, 1))
        else:
            mask, _ = generate_mask(config.mask_type, config.img_shapes, config.mask_shapes)
        image = cv2.imread(pathfile[i])
        if image is None:  # Added because some of the images in our directory may be empty.
            continue
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        h, w = image.shape[:2]
        if h >= config.img_shapes[0] and w >= config.img_shapes[1]:
            h_start = (h - config.img_shapes[0]) // 2
            w_start = (w - config.img_shapes[1]) // 2
            image = image[h_start: h_start + config.img_shapes[0], w_start: w_start + config.img_shapes[1], :]
        else:
            t = min(h, w)
            image = image[(h - t) // 2:(h - t) // 2 + t, (w - t) // 2:(w - t) // 2 + t, :]
            image = cv2.resize(image, (config.img_shapes[1], config.img_shapes[0]))
        image = np.transpose(image, [2, 0, 1])
        image = np.expand_dims(image, axis=0)
        image_vis = image * (1 - mask) + 255 * mask
        image_vis = np.transpose(image_vis[0][::-1, :, :], [1, 2, 0])
        # cv2.imwrite(os.path.join(config.saving_path, 'input_{}'.format(filename)), image_vis.astype(np.uint8))
        h, w = image.shape[2:]
        grid = 4
        image = image[:, :, :h // grid * grid, :w // grid * grid]
        mask = mask[:, :, :h // grid * grid, :w // grid * grid]
        result = ourModel.evaluate(image, mask)
        result = np.transpose(result[0][::-1, :, :], [1, 2, 0])
        # The extension '.jpg' is already included in the filename variable.
        cv2.imwrite(os.path.join(config.saving_path, "inpainted", filename), result)
        image = np.transpose(image[0][::-1, :, :], [1, 2, 0])
        if (image.shape == result.shape) & (image.shape == image_vis.shape):
            # Combine the original, masked & output images and write to file.
            im_combined = np.concatenate((image, image_vis, result), axis=1)
            cv2.imwrite(os.path.join(config.saving_path, "combined", filename), im_combined)
        else:
            print('Mismatched shapes, images not combined. \n\toriginal: {}, input: {}, result: {}'.format(
                image.shape, image_vis.shape, result.shape))
        print(' > {} / {}'.format(i + 1, test_num))
    print('done.')
```

##### Run the inpainting for both sets.

```
inpaint_ims(config_patches, path_ims_patches)
inpaint_ims(config_no_patches, path_ims_no_patches)

# Inpaint the patches in the training set.
inpaint_ims(config_train_patches, path_train_patches)

# Inpaint the relevant patch sections for the malignant experiment.
inpaint_ims(config_malignant, pathfile_malignant)
```
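The preprocessing inside `inpaint_ims` — centre-cropping to the configured shape and truncating both sides to a multiple of the network's stride — can be checked in isolation. Below is a minimal numpy-only sketch of that logic; the target shape of 224×224 and the function name are illustrative (the real loop reads `config.img_shapes`):

```python
import numpy as np

def center_crop_and_align(image, target_h=224, target_w=224, grid=4):
    """Centre-crop an HxWxC image to (target_h, target_w), then truncate
    height and width to a multiple of `grid`, mirroring inpaint_ims."""
    h, w = image.shape[:2]
    if h >= target_h and w >= target_w:
        h_start = (h - target_h) // 2
        w_start = (w - target_w) // 2
        image = image[h_start:h_start + target_h, w_start:w_start + target_w, :]
    else:
        # Fall back to the largest centred square, as in the original loop.
        t = min(h, w)
        image = image[(h - t) // 2:(h - t) // 2 + t, (w - t) // 2:(w - t) // 2 + t, :]
    h, w = image.shape[:2]
    return image[:h // grid * grid, :w // grid * grid, :]

print(center_crop_and_align(np.zeros((300, 450, 3))).shape)  # (224, 224, 3)
```

Because 224 is already a multiple of 4, the grid truncation is a no-op on the common path; it only matters for the smaller-than-target fallback.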
# Classification: Bayesian Classifier

Francisco Aparecido Rodrigues, francisco@icmc.usp.br.<br>
Universidade de São Paulo, São Carlos, Brasil.<br>
https://sites.icmc.usp.br/francisco<br>

Let's start by reading the data:

```
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

random.seed(42)  # define the seed (important to reproduce the results)

#data = pd.read_csv('data/vertebralcolumn-3C.csv', header=(0))
#data = pd.read_csv('data/BreastCancer.csv', header=(0))
#data = pd.read_csv('data/Iris.csv', header=(0))
data = pd.read_csv('data/Vehicle.csv', header=(0))
data = data.dropna(axis='rows')  # remove NaN

# store the class names
classes = np.array(pd.unique(data[data.columns[-1]]), dtype=str)
nrow, ncol = data.shape
print("Attribute matrix: number of rows:", nrow, " columns: ", ncol)
attributes = list(data.columns)
data.head(10)
```

Let's build the variables $X$ and $y$. The classification task amounts to estimating the function $f$ in the relation $y = f(X) + \epsilon$, where $\epsilon$ is the error, normally distributed with mean zero and variance $\sigma^2$. We convert the data to NumPy format to make it easier to manipulate.

```
data = data.to_numpy()
nrow, ncol = data.shape
y = data[:,-1]
X = data[:,0:ncol-1]
```

Let's standardize the data to avoid the effect of differing attribute scales.

```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X)
X = scaler.transform(X)
print('Transformed data:')
print('Mean: ', np.mean(X, axis = 0))
print('Standard deviation:', np.std(X, axis = 0))
```

To train the classifier, we need to define the training and test sets.

```
from sklearn.model_selection import train_test_split
p = 0.8  # fraction of elements in the training set
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size = p, random_state = 42)
```

From this dataset, we can perform the classification.
## Bayesian Classifier

Let's consider the parametric case, assuming that each variable follows a Normal distribution (other distributions could also be used). We already selected the training and test sets above. On the training set, we compute the mean and standard deviation of each attribute for each class. We then classify the data using Bayesian decision theory, that is: $X \in C_i$ if, and only if, $P(C_i|X) = \max_j P(C_j|X)$.

```
from scipy.stats import multivariate_normal

# matrix to store the probabilities
P = pd.DataFrame(data=np.zeros((x_test.shape[0], len(classes))), columns = classes)
Pc = np.zeros(len(classes))  # fraction of elements in each class
for i in np.arange(0, len(classes)):
    elements = np.where(y_train == classes[i])[0]  # indices of the elements in class i
    Pc[i] = len(elements)/len(y_train)  # prior probability of class i
    Z = x_train[elements,:]
    m = np.mean(Z, axis = 0)
    cv = np.cov(np.transpose(Z))
    for j in np.arange(0, x_test.shape[0]):
        x = x_test[j,:]
        pj = multivariate_normal.pdf(x, mean=m, cov=cv, allow_singular=True)
        P[classes[i]][j] = pj*Pc[i]
print(P)

y_pred = []
for i in np.arange(0, x_test.shape[0]):
    c = np.argmax(np.array(P.iloc[[i]]))
    y_pred.append(classes[c])
y_pred = np.array(y_pred, dtype=str)
print(y_pred)

from sklearn.metrics import accuracy_score
score = accuracy_score(y_pred, y_test)
print('Accuracy:', score)
```

The complete code:
```
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from scipy.stats import multivariate_normal
from sklearn.metrics import accuracy_score

random.seed(42)

data = pd.read_csv('data/Iris.csv', header=(0))
classes = np.array(pd.unique(data[data.columns[-1]]), dtype=str)

# Convert to a numpy matrix and vector
data = data.to_numpy()
nrow, ncol = data.shape
y = data[:,-1]
X = data[:,0:ncol-1]

# Transform the data to zero mean and unit variance
scaler = StandardScaler().fit(X)
X = scaler.transform(X)

# Select the training and test sets
p = 0.8  # fraction of elements in the training set
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size = p, random_state = 42)

#### Perform the classification ####
# Matrix that stores the probabilities for each class
P = pd.DataFrame(data=np.zeros((x_test.shape[0], len(classes))), columns = classes)
Pc = np.zeros(len(classes))  # stores the fraction of elements in each class
for i in np.arange(0, len(classes)):  # for each class
    elements = np.where(y_train == classes[i])[0]  # indices of the elements in class i
    Pc[i] = len(elements)/len(y_train)  # prior probability of class i
    Z = x_train[elements,:]  # elements in the training set
    m = np.mean(Z, axis = 0)  # mean vector
    cv = np.cov(np.transpose(Z))  # covariance matrix
    for j in np.arange(0, x_test.shape[0]):  # for each observation in the test set
        x = x_test[j,:]
        # compute the probability of belonging to each class
        pj = multivariate_normal.pdf(x, mean=m, cov=cv, allow_singular=True)
        P[classes[i]][j] = pj*Pc[i]

y_pred = []  # vector of predicted classes
for i in np.arange(0, x_test.shape[0]):
    c = np.argmax(np.array(P.iloc[[i]]))
    y_pred.append(classes[c])
y_pred = np.array(y_pred, dtype=str)

# compute the accuracy
score = accuracy_score(y_pred, y_test)
print('Accuracy:', score)
```

## Non-Parametric Case

For the one-dimensional case, let $(X_1, X_2, \ldots, X_n)$ be a one-dimensional random sample, identically distributed according to some unknown distribution function $f$. To estimate the shape of $f$, we use a kernel density estimator:

\begin{equation}
\widehat{f}_{h}(x)={\frac {1}{n}}\sum _{i=1}^{n}K_{h}(x-x_{i})={\frac {1}{nh}}\sum _{i=1}^{n}K{\Big (}{\frac {x-x_{i}}{h}}{\Big )},
\end{equation}

where $K$ is the kernel function. The estimate depends on the bandwidth $h$, a free parameter that controls the width of the kernel.

```
import numpy as np
import matplotlib.pyplot as plt

N = 20
# generate the data
X = np.array([1, 2, 3, 4, 12, 20, 21, 22, 23, 24, 40, 41, 50])
X = X.reshape((len(X), 1))

# show the data
plt.figure(figsize=(12,4))
plt.plot(X[:, 0], 0.001*np.ones(X.shape[0]), 'ok')

# x values used to evaluate the densities
X_plot = np.linspace(np.min(X)-5, np.max(X)+5, 1000)[:, np.newaxis]
h = 2
fhat = 0  # resulting estimate
for x in X:
    # normal distribution centered at x
    f = (1/np.sqrt(2*np.pi*h))*np.exp(-((X_plot - x)**2)/(2*h**2))
    fhat = fhat + f  # accumulate the distributions
    plt.plot(X_plot, f, '--', color = 'blue', linewidth=1)

# show the estimated distribution
plt.plot(X_plot, fhat/(len(X)*np.sqrt(h)), color = 'green', linewidth=2)
plt.xlabel('x', fontsize = 20)
plt.ylabel('P(x)', fontsize = 20)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.ylim((0, 0.3))
plt.savefig('kernel-ex.eps')
plt.show(True)
```

The same result can be obtained with scikit-learn's KernelDensity function.
```
import numpy as np
from matplotlib.pyplot import cm
from sklearn.neighbors import KernelDensity

color = ['red', 'blue', 'magenta', 'gray', 'green']
N = 20
X = np.array([1, 2, 3, 4, 12, 20, 21, 22, 23, 24, 40, 41, 50])
X = X.reshape((len(X), 1))

plt.figure(figsize=(12,4))
plt.plot(X[:, 0], 0.001*np.ones(X.shape[0]), 'ok')
X_plot = np.linspace(np.min(X)-5, np.max(X)+5, 1000)[:, np.newaxis]
h = 2
fhat = 0
for x in X:
    f = (1/np.sqrt(2*np.pi*h))*np.exp(-((X_plot - x)**2)/(2*h**2))
    fhat = fhat + f
    plt.plot(X_plot, f, '--', color = 'blue', linewidth=1)

kde = KernelDensity(kernel='gaussian', bandwidth=h).fit(X)
log_dens = np.exp(kde.score_samples(X_plot))  # score_samples() returns the log density.
plt.plot(X_plot, log_dens, color = 'red', linewidth=2, label = 'h='+str(h))
plt.xlabel('x', fontsize = 20)
plt.ylabel('P(x)', fontsize = 20)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.ylim((0, 0.3))
plt.show(True)
```

Note that the shape of the estimate depends on the free parameter $h$.

```
import numpy as np
from matplotlib.pyplot import cm

color = ['red', 'blue', 'gray', 'black', 'green', 'lightblue']
N = 20
X = np.array([1, 2, 3, 4, 12, 20, 21, 22, 23, 24, 40, 41, 50])
X = X.reshape((len(X), 1))
X_plot = np.linspace(np.min(X)-5, np.max(X)+5, 1000)[:, np.newaxis]

plt.figure(figsize=(12,4))
plt.plot(X[:, 0], 0.001*np.ones(X.shape[0]), 'ok')
c = 0
vh = [0.1, 0.5, 1, 2, 5, 10]
for h in vh:
    kde = KernelDensity(kernel='gaussian', bandwidth=h).fit(X)
    log_dens = np.exp(kde.score_samples(X_plot))  # score_samples() returns the log density.
    plt.plot(X_plot, log_dens, color = color[c], linewidth=2, label = 'h='+str(h))
    c = c + 1
plt.ylabel('P(x)', fontsize = 20)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
#plt.ylim((0, 0.2))
plt.legend(fontsize = 10)
#plt.savefig('kernel.eps')
plt.show(True)
```

Note that this estimate is related to histogram-based estimation.
```
import numpy as np

N = 20
X = np.array([1, 2, 3, 4, 12, 20, 21, 22, 23, 24, 40, 41, 50])
X = X.reshape((len(X), 1))

plt.figure(figsize=(10,5))

# Histogram
nbins = 10
plt.hist(X, bins = nbins, density = True, color='gray', alpha=0.7, rwidth=0.95)

# Kernel density estimation
X_plot = np.linspace(np.min(X)-5, np.max(X)+5, 1000)[:, np.newaxis]
kde = KernelDensity(kernel='gaussian', bandwidth=2).fit(X)
log_dens = np.exp(kde.score_samples(X_plot))  # score_samples() returns the log density.
plt.plot(X_plot, log_dens, color = 'blue', linewidth=2)
plt.plot(X[:, 0], 0.001*np.ones(X.shape[0]), 'ok')
plt.xlabel('x', fontsize = 15)
plt.ylabel('P(x)', fontsize = 15)
plt.show(True)
```

Using the *kernel density estimation* method, we can perform the classification.

```
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KernelDensity
from sklearn.metrics import accuracy_score

random.seed(42)

#data = pd.read_csv('data/Iris.csv', header=(0))
data = pd.read_csv('data/Vehicle.csv', header=(0))
classes = np.array(pd.unique(data[data.columns[-1]]), dtype=str)

# Convert to a numpy matrix and vector
data = data.to_numpy()
nrow, ncol = data.shape
y = data[:,-1]
X = data[:,0:ncol-1]

# Transform the data to zero mean and unit variance
scaler = StandardScaler().fit(X)
X = scaler.transform(X)

# Select the training and test sets
p = 0.8  # fraction of elements in the training set
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size = p, random_state = 42)

# Matrix that stores the probabilities for each class
P = pd.DataFrame(data=np.zeros((x_test.shape[0], len(classes))), columns = classes)
Pc = np.zeros(len(classes))  # stores the fraction of elements in each class
h = 2
for i in np.arange(0, len(classes)):  # for each class
    elements = np.where(y_train == classes[i])[0]  # indices of the elements in class i
    Pc[i] = len(elements)/len(y_train)  # prior probability of class i
    Z = x_train[elements,:]  # elements in the training set
    kde = KernelDensity(kernel='gaussian', bandwidth=h).fit(Z)
    for j in np.arange(0, x_test.shape[0]):  # for each observation in the test set
        x = x_test[j,:]
        x = x.reshape((1, len(x)))
        # compute the probability of belonging to each class
        pj = np.exp(kde.score_samples(x))
        P[classes[i]][j] = pj*Pc[i]

y_pred = []  # vector of predicted classes
for i in np.arange(0, x_test.shape[0]):
    c = np.argmax(np.array(P.iloc[[i]]))
    y_pred.append(classes[c])
y_pred = np.array(y_pred, dtype=str)

# compute the accuracy
score = accuracy_score(y_pred, y_test)
print('Accuracy:', score)
```

### Exercises

1 - Consider other distributions for classifying the data, besides the Normal.<br>
2 - Check how the parameter $h$ influences the classification.
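As a starting point for the second exercise, the sketch below classifies with a hand-rolled Gaussian kernel density estimate for several values of $h$. It uses synthetic one-dimensional data (two Normal classes) rather than the CSV files above, so it runs standalone; the class means and sample sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
# two synthetic classes: N(0, 1) and N(3, 1)
x0 = rng.normal(0.0, 1.0, 100)
x1 = rng.normal(3.0, 1.0, 100)
x_test = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(3.0, 1.0, 50)])
y_test = np.array([0]*50 + [1]*50)

def kde_loglik(x, sample, h):
    # Gaussian kernel density estimate evaluated at each point of x
    k = np.exp(-((x[:, None] - sample[None, :])**2) / (2*h**2))
    # small constant avoids log(0) for points far from every sample
    return np.log(k.sum(axis=1) / (len(sample)*h*np.sqrt(2*np.pi)) + 1e-300)

for h in [0.1, 0.5, 1, 2, 5]:
    scores = np.stack([kde_loglik(x_test, x0, h), kde_loglik(x_test, x1, h)])
    y_pred = np.argmax(scores, axis=0)
    print('h = {:>4}: accuracy = {:.2f}'.format(h, (y_pred == y_test).mean()))
```

Very small $h$ overfits to individual samples, while very large $h$ oversmooths both class densities; intermediate bandwidths give the best accuracy on this toy problem.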
<a href="https://colab.research.google.com/github/Tessellate-Imaging/Monk_Object_Detection/blob/master/application_model_zoo/Example%20-%20NOAA%20Mous%20Underwater%20Fish%20Detection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Table of contents

## 1. Installation Instructions

## 2. How to train using MMdetection wrapper

# Installation

- Run these commands
    - git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
    - cd Monk_Object_Detection/16_mmdet/installation
    - Select the right file and run
    - chmod +x install.sh && ./install.sh

```
! git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
! cd Monk_Object_Detection/16_mmdet/installation && chmod +x install.sh && ./install.sh
```

# Training your own detector

## Dataset

- Credits: https://www.viametoolkit.org/cvpr-2018-workshop-data-challenge/challenge-data-description/

```
! wget https://challenge.kitware.com/api/v1/file/5adecdee56357d4ff85705f8/download -O data-challenge-training-imagery.tar.gz
! wget https://challenge.kitware.com/api/v1/item/5ada39f756357d4ff856f550/download -O data-challenge-training-annotations.tar.gz

! tar -xzf data-challenge-training-imagery.tar.gz
! tar -xzf data-challenge-training-annotations.tar.gz

! mkdir moussseq1
! mkdir moussseq1/annotations
! cp annotations/mouss_seq0_training.mscoco.json moussseq1/annotations/instances_images.json
! mv imagery/mouss_seq0 moussseq1/images

import json

with open('moussseq1/annotations/instances_images.json') as f:
    data = json.load(f)

g = open("moussseq1/annotations/classes.txt", 'w')
for i in range(len(data["categories"])):
    g.write(data["categories"][i]["name"] + "\n");  # index each category, not always the first
g.close();
```

# Training

```
import os
import sys
sys.path.append("Monk_Object_Detection/16_mmdet/lib")

from train_engine import Detector
gtf = Detector();

img_dir = "moussseq1/images";
annofile = "moussseq1/annotations/instances_images.json"
class_file = "moussseq1/annotations/classes.txt"

gtf.Train_Dataset(img_dir, annofile, class_file);
gtf.Val_Dataset(img_dir, annofile);
gtf.Dataset_Params(batch_size=8, num_workers=4)

gtf.List_Models();
gtf.Model_Params(model_name="retinanet_ghm_r101_fpn");

gtf.Hyper_Params(lr=0.02, momentum=0.9, weight_decay=0.0001);
gtf.Training_Params(num_epochs=100, val_interval=50);

gtf.Train();
```

# Run inference on images

```
import os
import sys
sys.path.append("Monk_Object_Detection/16_mmdet/lib")

from infer_engine import Infer
gtf = Infer();
gtf.Model_Params("work_dirs/config_updated/config_updated.py", "work_dirs/config_updated/latest.pth")

img_list = os.listdir("moussseq1/images");
result = gtf.Predict(img_path="moussseq1/images/" + img_list[0], out_img_path="result.jpg", thresh=0.8);

from IPython.display import Image
Image(filename='result.jpg', width=490, height=640)
```
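The classes-file step above can be factored into a small helper and checked without downloading the dataset. The sketch below writes one category name per line from a COCO-style annotation dict; the stub categories (`fish`, `scallop`) are illustrative, not taken from the real annotation file:

```python
import json
import os
import tempfile

def write_classes_file(coco_json_path, classes_path):
    """Write one category name per line, in the order they appear
    in a COCO-style annotation file."""
    with open(coco_json_path) as f:
        data = json.load(f)
    with open(classes_path, 'w') as g:
        for cat in data["categories"]:
            g.write(cat["name"] + "\n")

# Illustrative COCO-style stub (the real file is instances_images.json).
stub = {"categories": [{"id": 1, "name": "fish"}, {"id": 2, "name": "scallop"}]}
tmp = tempfile.mkdtemp()
json_path = os.path.join(tmp, "instances.json")
with open(json_path, "w") as f:
    json.dump(stub, f)

write_classes_file(json_path, os.path.join(tmp, "classes.txt"))
print(open(os.path.join(tmp, "classes.txt")).read())  # one category name per line
```

Iterating over `data["categories"]` directly avoids the easy-to-make mistake of writing the same (first) category name on every line.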
# Implementation

This section shows how the linear regression extensions discussed in this chapter are typically fit in Python. First let's import the {doc}`Boston housing</content/appendix/data>` dataset.

```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import datasets

boston = datasets.load_boston()
X_train = boston['data']
y_train = boston['target']
```

## Regularized Regression

Both Ridge and Lasso regression can be easily fit using `scikit-learn`. A bare-bones implementation is provided below. Note that the regularization parameter `alpha` (which we called $\lambda$) is chosen arbitrarily.

```
from sklearn.linear_model import Ridge, Lasso
alpha = 1

# Ridge
ridge_model = Ridge(alpha = alpha)
ridge_model.fit(X_train, y_train)

# Lasso
lasso_model = Lasso(alpha = alpha)
lasso_model.fit(X_train, y_train);
```

In practice, however, we want to choose `alpha` through cross validation. This is easily implemented in `scikit-learn` by designating a set of `alpha` values to try and fitting the model with `RidgeCV` or `LassoCV`.

```
from sklearn.linear_model import RidgeCV, LassoCV
alphas = [0.01, 1, 100]

# Ridge
ridgeCV_model = RidgeCV(alphas = alphas)
ridgeCV_model.fit(X_train, y_train)

# Lasso
lassoCV_model = LassoCV(alphas = alphas)
lassoCV_model.fit(X_train, y_train);
```

We can then see which values of `alpha` performed best with the following.

```
print('Ridge alpha:', ridgeCV_model.alpha_)
print('Lasso alpha:', lassoCV_model.alpha_)
```

## Bayesian Regression

We can also fit Bayesian regression using `scikit-learn` (though another popular package is `pymc3`). A very straightforward implementation is provided below.

```
from sklearn.linear_model import BayesianRidge
bayes_model = BayesianRidge()
bayes_model.fit(X_train, y_train);
```

This is not, however, identical to our construction in the previous section since it infers the $\sigma^2$ and $\tau$ parameters, rather than taking those as fixed inputs.
More information can be found [here](https://scikit-learn.org/stable/modules/linear_model.html#bayesian-regression). The hidden chunk below demonstrates a hacky solution for running Bayesian regression in `scikit-learn` using known values for $\sigma^2$ and $\tau$, though it is hard to imagine a practical reason to do so.

````{toggle}
By default, Bayesian regression in `scikit-learn` treats $\alpha = \frac{1}{\sigma^2}$ and $\lambda = \frac{1}{\tau}$ as random variables and assigns them the following prior distributions

$$
\begin{aligned}
\alpha &\sim \text{Gamma}(\alpha_1, \alpha_2) \\
\lambda &\sim \text{Gamma}(\lambda_1, \lambda_2).
\end{aligned}
$$

Note that $E(\alpha) = \frac{\alpha_1}{\alpha_2}$ and $E(\lambda) = \frac{\lambda_1}{\lambda_2}$. To *fix* $\sigma^2$ and $\tau$, we can provide an extremely strong prior on $\alpha$ and $\lambda$, guaranteeing that their estimates will be approximately equal to their expected value.

Suppose we want to use $\sigma^2 = 11.8$ and $\tau = 10$, or equivalently $\alpha = \frac{1}{11.8}$, $\lambda = \frac{1}{10}$. Then let

$$
\begin{aligned}
\alpha_1 &= 10^5 \cdot \frac{1}{11.8}, \\
\alpha_2 &= 10^5, \\
\lambda_1 &= 10^5 \cdot \frac{1}{10}, \\
\lambda_2 &= 10^5.
\end{aligned}
$$

This guarantees that $\sigma^2$ and $\tau$ will be approximately equal to their pre-determined values. This can be implemented in `scikit-learn` as follows

```{code}
big_number = 10**5

# alpha
alpha = 1/11.8
alpha_1 = big_number*alpha
alpha_2 = big_number

# lambda
lam = 1/10
lambda_1 = big_number*lam
lambda_2 = big_number

# fit
bayes_model = BayesianRidge(alpha_1 = alpha_1, alpha_2 = alpha_2, alpha_init = alpha,
                            lambda_1 = lambda_1, lambda_2 = lambda_2, lambda_init = lam)
bayes_model.fit(X_train, y_train);
```
````

## Poisson Regression

GLMs are most commonly fit in Python through the `GLM` class from `statsmodels`. A simple Poisson regression example is given below.
As we saw in the GLM concept section, a GLM is comprised of a random distribution and a link function. We identify the random distribution through the `family` argument to `GLM` (e.g. below, we specify the `Poisson` family). The default link function depends on the random distribution. By default, the Poisson model uses the link function

$$
\eta_n = g(\mu_n) = \log(\lambda_n),
$$

which is what we use below. For more information on the possible distributions and link functions, check out the `statsmodels` GLM [docs](https://www.statsmodels.org/stable/glm.html).

```
import statsmodels.api as sm

X_train_with_constant = sm.add_constant(X_train)

# fit on the design matrix that includes the intercept column
poisson_model = sm.GLM(y_train, X_train_with_constant, family=sm.families.Poisson())
poisson_fit = poisson_model.fit()
```
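The log link can be sanity-checked by hand: given coefficients $\beta$, the model's predicted mean is $\lambda_n = \exp(x_n^\top \beta)$, so each unit increase in a predictor multiplies the mean by $e^{\beta}$. A small numpy sketch (the design matrix and coefficients here are made up for illustration, not fitted values):

```python
import numpy as np

# hypothetical design matrix with an intercept column, plus illustrative coefficients
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
beta = np.array([0.5, 0.3])  # made-up values, not estimates

eta = X @ beta     # linear predictor: eta_n = x_n' beta
lam = np.exp(eta)  # inverse log link gives the Poisson mean

print(np.round(lam, 3))  # each row's mean grows by a factor of exp(0.3)
```

This multiplicative interpretation of coefficients is the main practical consequence of the log link.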